Geometrical Seminar
Welcome to the XVIII Geometrical Seminar, dedicated to the 85th anniversary of Professor Mileva Prvanovic. This is the 18th meeting of the Geometrical Seminar, which started its activities in the eighties of the last century under the name Yugoslav Geometrical Seminar. The 17th Geometrical Seminar, which took place at Zlatibor in 2012, had more than 120 participants from about 30 countries. The aim of these meetings is to bring together mathematicians, physicists and engineers interested in geometry and its applications, to give lectures on new results, and to exchange ideas, problems and conjectures. XVIII Geometrical Seminar is organized by in collaboration with: XVIII Geometrical Seminar is supported by
{"url":"https://tesla.pmf.ni.ac.rs/people/geometrijskiseminarxviii/index.php","timestamp":"2024-11-03T00:36:37Z","content_type":"application/xhtml+xml","content_length":"8043","record_id":"<urn:uuid:afc9590d-6b67-46e6-b522-b443aa1ed277>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00715.warc.gz"}
Chemical Equilibrium
In my previous couple of blog posts, I talked about a thermodynamic state function called enthalpy, and how it is used by scientists and engineers. This included covering a principle called Hess's Law, which has led to the tabulation of enthalpy values for certain reactions under a set of standardized conditions, such that the idea could be generalized to make thermodynamic predictions about a huge variety of processes. Those posts laid the groundwork for the following topic.
Enthalpy and Hess's Law
Applying Hess's Law, and Generalizing it to Different Physical Situations
In a recent post, I introduced a thermodynamic state function called enthalpy and introduced something called Hess's Law. That post was following up on two previous posts, the first of which covered the 1st and 2nd Laws of Thermodynamics, entropy, and the distinction between state functions vs path functions, and the second of which covered the concepts of work, heat transfer, reversibility, and internal energy in thermodynamic systems. Just to recap, enthalpy H is a state function that scientists and engineers use to analyze the thermodynamic properties of certain physical processes (particularly chemical reactions). More accurately, it's the change in this function that we're most interested in, particularly at constant pressure, which is the case for most biological processes as well as many experimental situations. You can check out the previous article for the gory details, but the take-home message was that the change in enthalpy at constant pressure is equal to the heat transferred to or from the system, and that its status as a state function led to an important general result called Hess's Law. The tl;dr version of Hess's Law is that the change in enthalpy for a reaction is the sum of the enthalpies of formation of the products, each multiplied by its corresponding coefficient (n) from the balanced chemical equation, minus the enthalpies of formation of the reactants, each (again) multiplied by its corresponding coefficient. Hess's Law can also be concisely summarized by the following equation, which uses sigma (summation) notation: ΔH°rxn = Σ n(products) ΔH°f(products) − Σ n(reactants) ΔH°f(reactants). This is important to scientists and engineers because it has permitted the tabulation of many experimentally-derived ΔH values under a set of standardized conditions. Adding and subtracting various combinations of these can facilitate convenient thermodynamic predictions for a wide variety of reactions. The remainder of this article deals with how Hess's Law is applied and generalized to a variety of physical situations.
Enthalpy and Hess's Law
Enthalpy: Exothermic vs Endothermic Processes
In a recent article, I talked about the 1st and 2nd Laws of Thermodynamics, entropy, and the distinction between state functions vs path functions. More recently, I wrote another one in which I talked about heat, work, reversibility, and internal energy in thermodynamic systems. At the end of the latter post, I mentioned in passing that the path-dependence of heat and work done by non-conservative forces makes it desirable to work with state functions whenever feasible, because it's not always easy to know the precise path by which a system arrived at its current state from a prior one. That's where a function called enthalpy comes into play.
Work, Heat, and Internal Energy
Heat, and internal energy
In a recent article, I mentioned in passing that the internal energy of a system is a state function.
Just to quickly recap, state functions are properties of a physical system whose values do not depend on how they were arrived at from a prior state of the system. They depend only on the starting and ending states of the system. I then contrasted state functions with path-dependent functions, which can take on very different values depending on the path by which the system arrived at its current state from its previous state (the history of the system matters). Perhaps counter-intuitively, while it's true that internal energy is a state function, the change in a system's internal energy is the sum of two path-dependent functions.
State Functions, Entropy, Path Dependence, and Energy Conservation in Thermodynamic Systems
State Functions vs Path Dependent Functions
In thermodynamics, scientists distinguish between what are called state functions vs path functions. State functions are properties of a system whose values do not depend on how they were arrived at from a prior state of the system. They depend only on the starting and ending states of the system. On the other hand, path functions can take on very different values depending on the path by which the system arrived at a state from its previous state.
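To make the Hess's Law bookkeeping concrete, here is a short worked example that is not part of the original posts; it uses approximate, standard tabulated enthalpies of formation for the combustion of methane.

For CH4(g) + 2 O2(g) → CO2(g) + 2 H2O(l), with ΔH°f(CH4, g) ≈ −74.8 kJ/mol, ΔH°f(CO2, g) ≈ −393.5 kJ/mol, ΔH°f(H2O, l) ≈ −285.8 kJ/mol, and ΔH°f(O2, g) = 0:
ΔH°rxn = [(−393.5) + 2(−285.8)] − [(−74.8) + 2(0)] ≈ −890.3 kJ/mol,
i.e. the reaction is strongly exothermic, as expected for combustion.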
{"url":"https://www.crediblehulk.org/index.php/tag/thermodynamics/","timestamp":"2024-11-08T12:28:56Z","content_type":"text/html","content_length":"80068","record_id":"<urn:uuid:ec7e5a73-3cf9-4d21-9182-3f0590f4a875>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00238.warc.gz"}
Research developed within the Algebra group connects with a variety of (sub)areas of Mathematics and Computer Science. Graph-theoretic, geometric or topological arguments have widespread use. Automata theory: Descriptional complexity in the average case through the analytic combinatorics of conversion methods between regular expressions and finite automata. Invertibility studies to develop a public-key cryptography system based on linear transducers. Use of pre-grammars for type inhabitation. Combinatorics: Fundamental groups, homotopy and homology within boolean representable simplicial complexes, combinatorics of hyperplane arrangements through the Pak-Stanley labeling of its regions, algebraic and geometric properties of combinatorial decision and optimization problems. Dynamical systems: Use of algebraic tools to characterize flow-invariant spaces of dynamical systems given by network (graph) structures. These results are intended to prove generic dynamical properties such as bifurcations and heteroclinic behavior. Group theory: Generation of finite groups and some graph-theoretical properties of the generating graph of 2-generated groups. Polyhyperbolic geodesic metric spaces and endomorphisms of hyperbolic groups. Submonoids of free groups. Representation and ring theory: Structure of non-commutative rings and Hopf algebras, with some emphasis on affine cellular algebras, generalized Weyl algebras, Ore extensions and quantum groups. Topics of interest are factorization in non-commutative rings, PI theory, automorphism groups, injective hulls of simple modules, Hochschild (co)homology and deformation theory. Semigroup theory: Relatively profinite semigroups versus symbolic dynamics and classification of pseudovarieties. Profinite approach to decision problems for pseudovarieties. Profinite topologies. Construction of a model for the bifree locally inverse semigroups by making use of a special type of graphs.
{"url":"https://cmup.fc.up.pt/main/content/algebra","timestamp":"2024-11-03T09:44:28Z","content_type":"text/html","content_length":"29676","record_id":"<urn:uuid:ef3a4591-58b3-4c8b-b3a9-1666b7337079>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00824.warc.gz"}
All the King's philosophers: part II (See part I.)
Al-Khwārizmī spent the night at the royal palace. Very early the next morning he looked for a cool and quiet place on a veranda by the palace garden, asked for a few sheets of the famous Samarkand paper and some ink and put himself to work on the quest for the Philosopher Rule. The following is a mildly adapted version of the annotations made by al-Khwārizmī during that day.
"To simplify things, let us assume that questions posed to the Council are of the form that can be answered simply by saying yes or no, as in 'Must we go to war against the Bactrians?' or 'Should taxes be raised?'. To every such question, the Council will be divided into those philosophers answering affirmatively and those whose reply is negative. The Philosopher Rule must then decide whether the final answer is yes or no based on the distribution of individual replies across the Council. The Rule will be complete if it can take care of every possible answer distribution."
"Now, as philosophers pride themselves on having an answer to everything, we know that each of the Council members will reply either yes or no to every conceivable query they might be confronted with; that is, they never remain silent at a question. So, for a given question the group of negative philosophers is exactly the complement of the group of affirmative philosophers, and we can in a sense concentrate only on the affirmative party. The Philosopher Rule can then be regarded as playing the following role: for each group of affirmative philosophers the Rule must decree whether the group is authoritative, that is, whether the final judgment on a question must be taken to be affirmative if it is this precise group of philosophers that reply in the positive."
"We cannot choose at random which groups are authoritative, because such a disposition would likely yield inconsistent results when the Council was consulted over time. Clearly, some constraints must apply to the way in which we choose the list of authoritative groups. I think the art of Logic will help us here; fortunately, I brought with me some books on the subject."
That was as far as the mathematician got the first day. The second day, al-Khwārizmī ordered a fresh load of paper and sat on the veranda from dawn till dusk furiously scribbling symbols and consulting a heap of books by the school of the Stoics. When the last light of day vanished beyond the palace walls and the first crickets began chirping in the garden, the mathematician arose from his seat with a triumphal smile, as he thought he had the foundations of the problem basically laid down, and only some routine calculations were necessary to reach a solution. This is a summary of his conclusions that day:
"We explore the constraints imposed on eligible authoritative groups by studying their connection with the basic laws of Logic by which every wise man must abide. For it is our goal that the decisions taken by the Rule be free of contradiction, much as those of any philosopher are bound to be."
• "If the entire Council supports a decision (though according to the King this event has not ever happened), it is only sensible for the Rule to approve the decision as well. So, the group formed by all philosophers in the Council is an authoritative group."
• "If a group A is deemed authoritative, then the complementary group, i.e. that comprising the philosophers outside A, cannot be authoritative. Otherwise, if we asked a question which is supported by A and then submitted exactly the opposite question, which would be supported by the complementary of A, the Rule would end up holding both a position and its negation."
• "If a certain group A is considered authoritative, so will be the case with any other group including A: having more philosophers supporting the decision can only make us more confident in it."
• "Let us suppose we pose a question, like 'Should we raise taxes?', to which an authoritative group A responds affirmatively, and then some other question, like 'Should we offer a sacrifice to the gods?', which is supported by another authoritative group B. If we had submitted the combined question 'Should we raise taxes and offer a sacrifice?', the laws of Logic teach us that the philosophers answering positively would be exactly those belonging to both A and B. Hence the Rule must have the intersection of authoritative groups as authoritative."
"The only work left to do is finding a list of authoritative groups that satisfies the restrictions. This I will do tomorrow through some fairly easy if tedious calculations." Al-Khwārizmī had a light dinner and went to sleep with the peace of mind enjoyed by those who feel success within reach of their hand.
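In modern set-theoretic terms, these four constraints say that the authoritative groups form a (proper) filter over the Council. The sketch below is our own illustration, not from the original post, with hypothetical names; it checks whether a proposed list of authoritative groups over a small council satisfies al-Khwārizmī's conditions.

```python
from itertools import combinations

def satisfies_constraints(council, authoritative):
    """Check the four conditions on a family of 'authoritative' groups.
    council: set of philosophers; authoritative: collection of sets."""
    auth = {frozenset(g) for g in authoritative}
    everyone = frozenset(council)
    # 1. The whole Council is authoritative.
    if everyone not in auth:
        return False
    for g in auth:
        # 2. The complement of an authoritative group is never authoritative.
        if everyone - g in auth:
            return False
        # 3. Every superset of an authoritative group is authoritative.
        rest = everyone - g
        for k in range(len(rest) + 1):
            for extra in combinations(rest, k):
                if g | frozenset(extra) not in auth:
                    return False
    # 4. The intersection of two authoritative groups is authoritative.
    return all(a & b in auth for a in auth for b in auth)

# A three-philosopher council where "authoritative" means "contains philosopher 1"
council = {1, 2, 3}
groups = [{1}, {1, 2}, {1, 3}, {1, 2, 3}]
print(satisfies_constraints(council, groups))  # True
```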
{"url":"http://bannalia.blogspot.com/2007/11/all-kings-philosophers-part-ii.html","timestamp":"2024-11-01T20:08:25Z","content_type":"application/xhtml+xml","content_length":"68915","record_id":"<urn:uuid:1e607759-881b-4ab1-a7ea-659e1e58778c>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00583.warc.gz"}
Get to Know about Financial Ratio!
Do you know what a financial ratio means? In general, a financial ratio is a quantitative analysis of the information contained in a company's financial statements, used to evaluate aspects of the company's operations and financial performance. A financial ratio is an appropriate way to assess the health of a company before looking at the company's financial statements in detail. Now, you can use the help of Financial ERP Software to make your company's economic activities produce more accurate results. This article is a comprehensive discussion of financial ratios and their types. Check it out to find out more!
Functions of Financial Ratio
There are several functions of financial ratios. A financial ratio can quickly detect a company's potential situation, whether related to improvement or the opposite. A company can also analyze the strength of its finances using financial ratios; the level of the company's assets in use can be seen from this. In addition, financial ratios can help analyze the company's growth and development in the future.
Types of Financial Ratio and Formula
Speaking about the types of financial ratio, Darmawan (2020), in his book entitled 'Dasar-Dasar Memahami Rasio dan Laporan Keuangan' ('Basics of Understanding Financial Ratios and Statements'), explained that there are several types of financial ratios. Here is an explanation of the types of financial ratio and their formulas:
Profitability ratio
The profitability ratio is a way to figure out how well a company can make money in a certain period by looking at things like sales transactions, cash, and capital, as well as how many people work for the company. You can use Sales Software if you want to make sales smarter, faster, and better; sales software can help your sales team be more productive in many different ways.
There are several types of profitability ratios. To begin with, there is Return on Investment (ROI). This ratio provides information about a company's ability to generate profits based on the company's asset amount. Here is the formula for calculating Return on Investment (ROI): Return on Investment Formula
Furthermore, there is Return on Equity. This ratio calculates the company's net income after tax relative to the company's capital. Here is the formula to calculate return on equity: Return on Equity Formula
Another one is Net Profit Margin (NPM). This ratio calculates the net income that the company earns compared to its sales, and so reflects the company's efficiency level. Here is the formula to calculate Net Profit Margin (NPM): Net Profit Margin Formula
Then, the last type of profitability ratio is the gross profit margin. This ratio measures the efficiency of a company's production cost control and indicates the ability of a company to operate efficiently. In other words, the gross profit margin is the percentage of the company's gross profit relative to its sales in a period. Here is the formula to calculate the gross profit margin: Gross Profit Margin Formula
Liquidity ratio
This ratio can be defined as a parameter that describes a company's ability to pay all its short-term financial obligations by the due date using the available assets. Thus, a company can be said to be liquid if it can pay its obligations. In contrast, a company is said to be illiquid if it cannot pay its obligations.
Also read: Financial Statements: Definition, Functions, and Examples
There are several types of liquidity ratios. To begin with, there is the current ratio. This ratio is a comparison between current assets and current liabilities, and it is the most common measure of a company's ability to fulfill its short-term obligations. In other words, this ratio indicates how far current assets can cover a company's current liabilities. Here is the formula for calculating the current ratio: Current Ratio Formula
Furthermore, there is the quick ratio, often called the acid test ratio. One of the experts in Indonesia, Raharjaputra, expressed his opinion about the definition of a quick ratio. According to him, a quick ratio is a ratio used to measure the company's ability to pay its obligations using current assets reduced by inventory, which he considers less liquid. Here is the formula for calculating the quick ratio: Quick Ratio Formula
Another one is the cash ratio. This ratio shows the company's cash position, which can cover the current debt. Here is the formula to calculate the cash ratio: Cash Ratio Formula
The last type of liquidity ratio is the cash turnover ratio. This ratio indicates the relative value of net sales to net working capital. Here is the formula to calculate the cash turnover ratio: Cash Turnover Ratio
Solvability ratio
The solvability ratio provides information about how the company could settle its obligations if it were liquidated. This ratio relates to the funding decisions the company makes when choosing to finance with debt rather than its own capital. In other words, the solvability ratio is a ratio used to measure how much of the company's assets are funded through debt.
There are several types of solvability ratios. To begin with, there is the Debt to Asset Ratio (DAR). This ratio compares total liabilities to total assets. Here is the formula to calculate the Debt to Asset Ratio (DAR): Debt to Asset Ratio Formula
Furthermore, there is the Debt to Equity Ratio (DER). This ratio is the proportion of a company's debt financing relative to its equity. Here is the formula to calculate the Debt to Equity Ratio (DER): Debt to Equity Ratio Formula
Last but not least, there is the Interest Coverage Ratio (IC). This ratio provides information about how well the company can pay the interest on its outstanding debts. Here is the formula to calculate the Interest Coverage Ratio (IC): Interest Coverage Ratio
Activity ratio
This ratio evaluates a company's ability to use its assets and liabilities to generate sales and increase profits. In addition, this ratio provides a basis of comparison across reporting periods to analyze the changes that occur over time.
There are several types of activity ratios. To begin with, there is the accounts receivable turnover ratio. The function of this ratio is to determine the entity's ability to collect the money owed by its customers. Here is the formula for calculating the accounts receivable turnover ratio: Account Receivable Turnover
Furthermore, there is the merchandise inventory turnover ratio. This ratio shows how intensively a company sells and replaces its inventory in a certain period. Here is the formula to calculate the merchandise inventory turnover ratio: Merchandise Inventory Turnover Ratio Formula
Last, there is the total assets turnover ratio. This ratio is used as a measure of how efficiently a company uses its assets to maximize revenue. Here is the formula for calculating the total assets turnover ratio: Total Asset Turnover Ratio Formula
Also read: Debit and Credit: Explanation and Use in Accounting
Financial ratios play an essential role in a company. Therefore, when making a decision or policy, a company will consider the financial ratios carefully. A company can find out about its future growth and development using financial ratios. HashMicro, as a leading ERP Software vendor in Singapore, provides solutions for your company to operate automatically, from an Accounting System to a Competency Management System. Feel free to contact us to get the best offer and free demos.
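As a small illustration of a few of the standard definitions discussed above (current ratio, quick ratio, net profit margin, debt-to-equity), here is a hedged Python sketch; the function names and the sample figures are our own, not HashMicro's.

```python
def current_ratio(current_assets, current_liabilities):
    # Liquidity: how far current assets cover current liabilities
    return current_assets / current_liabilities

def quick_ratio(current_assets, inventory, current_liabilities):
    # Acid test: exclude inventory, which is considered less liquid
    return (current_assets - inventory) / current_liabilities

def net_profit_margin(net_income, sales):
    # Profitability: net income earned per unit of sales
    return net_income / sales

def debt_to_equity(total_liabilities, total_equity):
    # Solvability: proportion of debt financing relative to equity
    return total_liabilities / total_equity

# Hypothetical figures, for illustration only
print(current_ratio(500_000, 250_000))         # 2.0
print(quick_ratio(500_000, 100_000, 250_000))  # 1.6
print(net_profit_margin(80_000, 400_000))      # 0.2
print(debt_to_equity(300_000, 600_000))        # 0.5
```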
{"url":"https://www.hashmicro.com/blog/financial-ratio/","timestamp":"2024-11-09T15:26:29Z","content_type":"text/html","content_length":"467058","record_id":"<urn:uuid:84c81450-0b3d-4444-b71e-3b67122e4d32>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00712.warc.gz"}
Library iris.proofmode.class_instances_make
IMPORTANT: Read the comment in classes_make about the "constant time" requirements of these instances.
From iris.proofmode Require Export classes_make.
From iris.prelude Require Import options.
Import bi.
Section class_instances_make.
Implicit Types P Q R
• make_affinely_affine adds no modality, but only if the argument is affine.
• make_affinely_True turns True into emp. For an affine BI this instance overlaps with make_affinely_affine, since True is affine. Since we prefer to avoid emp in goals involving affine BIs, we give make_affinely_affine a lower cost than make_affinely_True.
• make_affinely_default adds the modality. This is the default instance since it can always be used, and thus has the highest cost. (For this last point, the cost of the KnownMakeAffinely instances does not actually matter, since this is a MakeAffinely instance, i.e. an instance of a different class. What really matters is that the known_make_affinely instance has a lower cost than make_affinely_default.)
• make_absorbingly_absorbing adds no modality, but only if the argument is absorbing.
• make_absorbingly_emp turns emp into True. For an affine BI this instance overlaps with make_absorbingly_absorbing, since emp is absorbing. For consistency, we give this instance the same cost as make_affinely_True, but it does not really matter since goals in affine BIs typically do not contain occurrences of emp to start with.
• make_absorbingly_default adds the modality. This is the default instance since it can always be used, and thus has the highest cost. (For this last point, the cost of the KnownMakeAbsorbingly instances does not actually matter, since this is a MakeAbsorbingly instance, i.e. an instance of a different class. What really matters is that the known_make_absorbingly instance has a lower cost than make_absorbingly_default.)
For affine BIs, we would prefer □ True to become True rather than emp, so we have this instance with lower cost than the next.
{"url":"https://plv.mpi-sws.org/coqdoc/iris/iris.proofmode.class_instances_make.html","timestamp":"2024-11-08T02:00:08Z","content_type":"application/xhtml+xml","content_length":"77226","record_id":"<urn:uuid:eff80070-56ca-4d1c-a7f4-e5afabe75147>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00126.warc.gz"}
3 Circle Venn. Venn Diagram Example | Venn diagrams - Vector stencils library | 3-Set Venn diagram - Template | 3 Part Venn Diagram
This example shows a 3 Circle Venn Diagram. Venn Diagrams visualize all possible logical intersections between several sets; in this example you can see the intersections of 3 sets. Venn Diagrams are widely used in mathematics, logic, statistics, marketing, sociology, etc.
The vector stencils library "Venn diagrams" contains 12 templates of Venn and Euler diagrams. Use these shapes to draw your Venn and Euler diagrams in the ConceptDraw PRO diagramming and vector drawing software extended with the Venn Diagrams solution from the area "What is a Diagram" of ConceptDraw Solution Park.
Use this template to design your three-set Venn diagrams. "Definition of VENN DIAGRAM: a graph that employs closed curves and especially circles to represent logical relations between and operations on sets and the terms of propositions by the inclusion, exclusion, or intersection of the curves" [merriam-webster.com/dictionary/venn%20diagram]. The template "3-set Venn diagram" is included in the Venn diagrams solution from the area "What is a Diagram" of ConceptDraw Solution Park.
Do you need to design a Cylinder Venn Diagram? Nothing could be easier with ConceptDraw DIAGRAM diagramming and vector drawing software extended with the Venn Diagrams Solution from the "Diagrams" area. ConceptDraw DIAGRAM allows you to design various Venn Diagrams, including Cylinder Venn Diagrams.
This template shows a Venn Diagram. It was created in ConceptDraw DIAGRAM diagramming and vector drawing software using the ready-to-use objects from the Venn Diagrams Solution from the "Diagrams" area of ConceptDraw Solution Park. Venn Diagrams visualize all possible logical intersections between several sets and are widely used in mathematics, logic, statistics, marketing, sociology, etc.
Venn diagrams are illustrations used in the branch of mathematics known as set theory. They show the mathematical or logical relationship between different groups of things (sets), and a Venn diagram shows all the possible logical relations between the sets. To visualize the relationships between subsets of the universal set you can use Venn diagrams. To construct one, you divide the plane into a number of cells using n figures, where each figure represents a single set and n is the number of represented sets. The splitting is done in such a way that for any collection of these figures there is one and only one cell whose points belong to all the figures in that collection and to no others. The plane on which the figures are drawn is the universal set U; thus, a point which does not belong to any of the figures belongs only to U.
It's impossible to overestimate the usefulness and convenience of using ready-made templates when you create your own diagrams and charts. And Venn Diagrams are no exception. ConceptDraw DIAGRAM diagramming and vector drawing software presents the Venn Diagrams solution from the "Diagrams" area, which offers a set of Venn Diagram templates and samples. Use the suitable Venn Diagram Template to create your own Venn Diagram of any complexity.
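For readers who prefer a scriptable alternative to a drawing tool, here is a small Python sketch of a 3-set Venn diagram; it is our own example and assumes the third-party matplotlib-venn package is installed, which is not something the ConceptDraw pages describe.

```python
import matplotlib.pyplot as plt
from matplotlib_venn import venn3  # pip install matplotlib-venn

# Three example sets; each closed curve represents one set and
# each region corresponds to one of the possible logical intersections.
a = {1, 2, 3, 4}
b = {3, 4, 5, 6}
c = {4, 6, 7, 8}

venn3([a, b, c], set_labels=("A", "B", "C"))
plt.title("3-set Venn diagram")
plt.savefig("venn3.png")
```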
{"url":"https://www.conceptdraw.com/examples/3-part-venn-diagram","timestamp":"2024-11-09T00:23:09Z","content_type":"text/html","content_length":"50736","record_id":"<urn:uuid:ec506f64-434b-4f5b-9a52-c97670b99456>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00390.warc.gz"}
The Distance and Midpoint Formulas
Recall from the Pythagorean Theorem that, in a right triangle, the hypotenuse c and sides a and b are related by a^2 + b^2 = c^2. Conversely, if a^2 + b^2 = c^2, the triangle is a right triangle (see the figure below).
Suppose you want to determine the distance d between the two points (x_1, y_1) and (x_2, y_2) in the plane. If the points lie on a horizontal line, then y_1 = y_2 and the distance between the points is |x_2 - x_1|. If the points lie on a vertical line, then x_1 = x_2 and the distance between the points is |y_2 - y_1|. If the two points do not lie on a horizontal or vertical line, they can be used to form a right triangle, as shown in the figure below. The length of the vertical side of the triangle is |y_2 - y_1| and the length of the horizontal side is |x_2 - x_1|. By the Pythagorean Theorem, it follows that d^2 = |x_2 - x_1|^2 + |y_2 - y_1|^2. Replacing |x_2 - x_1|^2 and |y_2 - y_1|^2 by the equivalent expressions (x_2 - x_1)^2 and (y_2 - y_1)^2 produces the following result.
Distance Formula: The distance d between the points (x_1, y_1) and (x_2, y_2) in the plane is given by d = sqrt((x_2 - x_1)^2 + (y_2 - y_1)^2).
Example 1: Finding the Distance Between Two Points
Find the distance between the points (-2, 1) and (3, 4). Here d = sqrt((3 - (-2))^2 + (4 - 1)^2) = sqrt(25 + 9) = sqrt(34).
Example 2: Verifying a Right Triangle
Verify that the points (2, 1), (4, 0), and (5, 7) form the vertices of a right triangle. The figure below shows the triangle formed by the three points. The lengths of the three sides are d_1 = sqrt(45), d_2 = sqrt(5), and d_3 = sqrt(50). Because
d_1^2 + d_2^2 = 45 + 5 = 50 (sum of squares of sides)
d_3^2 = 50 (square of hypotenuse)
you can apply the Pythagorean Theorem to conclude that the triangle is a right triangle.
Example 3: Using the Distance Formula
Find x such that the distance between (x, 3) and (2, -1) is 5. Using the Distance Formula, you can write the following.
5 = sqrt((x - 2)^2 + (3 - (-1))^2) (Distance Formula)
25 = (x^2 - 4x + 4) + 16 (Square both sides)
0 = x^2 - 4x - 5 (Write in standard form)
0 = (x - 5)(x + 1) (Factor)
Therefore, x = 5 or x = -1, and you can conclude that there are two solutions. That is, each of the points (5, 3) and (-1, 3) lies five units from the point (2, -1), as shown in the following figure.
The Midpoint Formula: The coordinates of the midpoint of the line segment joining two points can be found by "averaging" the x-coordinates of the two points and "averaging" the y-coordinates of the two points. That is, the midpoint of the line segment joining the points (x_1, y_1) and (x_2, y_2) in the plane is ((x_1 + x_2)/2, (y_1 + y_2)/2). For instance, the midpoint of the line segment joining the points (-5, -3) and (9, 3) is ((-5 + 9)/2, (-3 + 3)/2) = (2, 0), as shown in the figure below.
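As a quick check of the two formulas above, here is a small Python sketch (our own, not part of the original lesson) that reproduces the distance and midpoint computations from the examples.

```python
import math

def distance(p1, p2):
    """Distance between (x1, y1) and (x2, y2) via the Distance Formula."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def midpoint(p1, p2):
    """Midpoint of the segment joining (x1, y1) and (x2, y2)."""
    (x1, y1), (x2, y2) = p1, p2
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(distance((-2, 1), (3, 4)))   # sqrt(34) ~ 5.83 (Example 1)
print(midpoint((-5, -3), (9, 3)))  # (2.0, 0.0)
```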
{"url":"https://polymathlove.com/the-distance-and-midpoint-formulas.html","timestamp":"2024-11-06T08:43:56Z","content_type":"text/html","content_length":"107823","record_id":"<urn:uuid:525cb9f0-fd72-4162-bf8b-ffaf71105875>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00363.warc.gz"}
Tangent - (AP Physics 1) - Vocab, Definition, Explanations | Fiveable
from class: AP Physics 1
In physics, tangent refers to a line that touches but does not cross or intersect with another curve at one specific point.
{"url":"https://library.fiveable.me/key-terms/ap-physics-1/tangent","timestamp":"2024-11-05T22:11:19Z","content_type":"text/html","content_length":"218194","record_id":"<urn:uuid:8c285393-65a5-4773-865c-d13f8ed72fbc>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00549.warc.gz"}
Buying guide - mobile air conditioner
What do I need to consider when buying? How do I calculate the required cooling capacity?
The following points must be well considered:
• Room size (length x width x height) - Calculate the room size in cubic metres (m³).
• Location of the room - Rooms in the attic, rooms with high ceilings or rooms that are exposed to strong sunlight need a stronger output. When used in offices, equipment such as computers and printers may need to be included, as they also give off heat. The number of people in the room must also be considered, as each person emits an average of 80 watts of heat.
• Capacity - Generally, an air conditioner requires about 20-25 BTU per square metre of living space. BTU/h is the cooling capacity per hour; 1000 BTU/h is equivalent to about 293 watts.
How do I calculate the required cooling capacity?
You can determine the watts and BTU required for an air conditioner by calculating 40 watts of power per cubic metre of room volume as a general rule of thumb. An additional cooling capacity of approx. 100 watts per person is added on top. Therefore, it is better not to calculate too tightly.
Calculation example, 20 m² room:
Assuming a room height of 2.5 m: 20 m² x 2.5 m = 50 m³.
Required power: 50 m³ x 40 W = 2000 W.
2000 W / 293 W ≈ 6.83, and 6.83 x 1000 BTU/h ≈ 6825.9 BTU/h -> Thus a unit with 8000 BTU/h would be sufficient here. (A small code sketch of this calculation follows below.)
• Energy efficiency - Units of energy efficiency class A or higher are to be preferred because of their low power consumption. Depending on the type, units of class A consume 11-15% less energy than a unit of class C, for example.
• Installation - With monobloc air conditioners, you don't need any installation; you just need a socket for the power connection and an opening for the heat to escape (e.g. a window). Split air conditioners, on the other hand, consist of two units that are connected to each other by a pipe: one unit outdoors and one unit indoors. The compressor of a split air conditioner is located outside. This is also where the waste heat and operating noise are generated.
• Price - The prices of mobile monobloc air conditioners depend not only on their performance and size, but also on the range of features, the available functions such as timer and sleep mode, and the design. Good mobile monobloc air conditioners can already be bought relatively cheaply.
More information on the topic: Still not sure which unit is enough for your needs? No problem! Our consultants will be glad to help you make the right choice: contact ecofort.
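Here is a minimal Python sketch of the rule of thumb above (40 W per m³ of room volume plus roughly 100 W per person, converted via 1000 BTU/h ≈ 293 W); the function name and signature are our own illustration, not an ecofort tool.

```python
def required_cooling_btu(length_m: float, width_m: float, height_m: float,
                         persons: int = 0) -> float:
    """Rule-of-thumb cooling capacity in BTU/h for a room."""
    volume_m3 = length_m * width_m * height_m
    watts = volume_m3 * 40 + persons * 100   # 40 W per m^3, ~100 W per person
    return watts / 293 * 1000                # 1000 BTU/h ~= 293 W

# Example from the guide: 20 m^2 floor area, 2.5 m ceiling, empty room
print(round(required_cooling_btu(5, 4, 2.5)))  # ~6826 BTU/h -> choose an 8000 BTU/h unit
```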
{"url":"https://support.ecofort.ch/en/support/solutions/articles/6000257700-buying-guide-mobile-air-conditioner","timestamp":"2024-11-13T04:48:35Z","content_type":"text/html","content_length":"30035","record_id":"<urn:uuid:58b37bbe-af17-4c87-aaaa-1fdd5f384daa>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00156.warc.gz"}
Are we the only mathematically intelligent species on this earth? - Think Different Nation
Are we the only mathematically intelligent species on this earth? No, not at all; many other species on this planet possess at least basic mathematical abilities, far beyond what we usually imagine. Much recent research has demonstrated animals' ability to deal with basic mathematical operations like counting objects, summing up things or differentiating between quantities. Research on the mathematical abilities of animals not only helps us get a deep understanding of the roots and evolution of our own mathematical skills but can also improve our logical-mathematical capabilities.
Most non-human animals have the ability to count objects (also called number sense), though they don't have a symbolic numeral system, because numbers are abstract concepts for them. Indeed, they don't hinge on true numbers for handling their mathematical operations; this is why their mathematics is referred to as rough math, non-verbal number sense or almost-math. And the most interesting part is that they can judge quantities without counting. We know about people who are famous for estimating quantities with the help of just a peek; similarly, babies not yet able to speak are still able to differentiate between more and less. The same holds for animals: they are able to count without the existence of a symbolic number system.
Obviously, accuracy does not matter to animals as much as it matters to us. They engage in mathematical operations to carry out their life activities rather than to ensure correct answers. Unlike animals, our mathematical abilities strongly depend upon psychological factors, so we focus on the accuracy of our answers; we know how to approach the problem but still work until we reach the correct answer. This is why people think they don't have good logical-mathematical skills just because they are not able to earn a good score on math tests; they are wrong, because if they are presented with similar problems in the form of real-life activities, they perform well. So, if we forget about accuracy for a moment, we may better appreciate the mathematical world of animals.
How do the mathematical abilities of non-human animals actually help them?
Bees, fish, frogs, lions, hyenas, chimpanzees, monkeys, chicks, dogs, etc. are among the animals that have been tested for their mathematical abilities. In a few studies, animals were taught mathematical skills, but in others animals were just tested and observed for their mathematical abilities. These researchers explained the reasons why animals evolved basic mathematical skills:
Survival and safety
Animals apply their mathematical abilities for many reasons, one of which is survival: survival in terms of food security, survival in terms of defense against predators, and so on. Lions are famous for applying their number sense in their defense against attackers. Lions live in prides; whenever another pride attacks, they judge their defensive power with their counting sense, and they judge the number of lions in the attacking pride by listening to the number of roars produced. They then perceive the strength of their own pride, regarding themselves as weak if the members of the attacking pride outnumber the total members of their own pride, and vice versa. Karen McComb, a researcher at the University of Sussex in Brighton, UK, ran playback experiments with Tanzanian lions to test this ability. Lionesses judged the number of sounds and observed the directions from which the sounds were coming; after ensuring that they outnumbered the attacking pride, they started preparing for defense against it. The fact that matters is that the lionesses were observing the number of members of their own pride with their eyes and judging the number of members of the attacking pride with the help of the number of sounds coming from different directions, so they were integrating two senses (seeing and hearing) for counting the number of attackers. Later, similar experiments were performed with hyenas, monkeys and chimpanzees to identify their counting abilities, and interestingly the outcomes of all these experiments resembled each other.
Similarly, guppies, a famous fish species, prefer to be part of larger shoals to have a stronger defense against predators. Brian Butterworth of University College London conducted an experiment to test their counting abilities. In the test, guppies were placed in an open tank and one guppy was observed for its counting ability: it preferred to join the larger shoal, and not only did it estimate the size of the shoal by observing the whole shoal, but it also counted each individual fish passing it on either side. The experiment demonstrated the guppies' counting as well as remembering abilities.
Tracing the right species
In many cases animals of different species resemble each other, and it is not easy even for the animals themselves to tell one another apart. Butterworth says that frogs count the number of pulses in their croak to identify whether a fellow frog belongs to their species or not, since the number of pulses in a croak of different species is not the same. Frogs simply listen to a croak, judge the number of pulses and trace the species of the fellow frog.
The applications of mathematical abilities differ between animals. Some use them for survival, some for navigation: honey bees use their counting abilities to navigate, and this ability is the key to their navigation. Whenever they go out of their hives in search of a food source, they estimate the distance they have traveled so far by counting the number of landmarks between their hive and the food source. An experiment performed at Stony Brook, the State University of New York, in the last decade of the twentieth century tested this ability of honey bees. A special experimental setting consisting of tents was arranged for conducting this experiment, and the number of tents between the food source and the hive was altered by removing a tent or adding an extra tent. The honey bees were totally confused when the number of landmarks on their way was altered, so they faced trouble in returning to their hives; as a result they either stopped for a short time or traveled too far to find their hives.
Other famous studies proving mathematical abilities of animals
In one recent study, newborn chicks of only three to four days were trained to count numbers. They were allowed to imprint on objects of different colors; some of them were placed with pairs of objects, some with two to three individual objects of different colors. The chicks that were imprinted on three plastic objects preferred to hang out with three new ones instead of a pair, and the chicks imprinted on pairs of objects preferred to hang out with a pair. The counting abilities identified in young little chicks suggest that counting is an innate or inborn ability.
Dogs have been the subject of many different studies related to the mathematical skills of animals. In one recent study, dogs were presented with different numbers of treats, and the dogs responded eagerly when the number of treats was increased; they just estimated the number of treats at a glance. In another study, a dog named Sedona was the subject of an extensive study in which he was given almost seven hundred trials to test his counting ability; the dog responded correctly in more than 50% of the instances. Still, there are questions about the mathematical abilities of dogs; in fact, many other animals outperform dogs in terms of mathematical skills.
Ai, a famous chimp, is able to match numerical values with the number of dots appearing on a screen; these findings help us trace our closest neighbors on the evolutionary tree in terms of our mathematical skills. Alex, a famous bird, proved his remarkable counting abilities in the animal world: after extensive training, he was able to identify the number of objects in small sets.
These are only a few instances of animals showing mathematical capabilities; animals are much more mathematically intelligent than we imagine.
Understanding the mathematical capabilities of animals has many positive implications for human lives. Just as it is an innate ability of many animals, it is also an inborn ability of humans; if we start thinking of it this way, we may start introducing mathematical skills to our kids at a very young age. When they grow up, these learned skills will help them perform better in all fields of life. As counting is a basic life skill, it is now obvious that all human beings are somehow mathematically intelligent. It is the only intelligence type that is possessed by all of us. If we accept this fact and start to think differently, we may recognize the other intelligence types that we possess. Once we recognize the intelligence types that we actually possess, we may recognize our instinctive advantage, and that recognition will help us do what we were born to do, which in turn will produce leapfrog movements in our lives.
Follow us to learn more about multiple intelligences to improve your mathematical abilities:
Facebook: Think Different Nation
Instagram: Think Different Nation
Twitter: @TDN_Podcast
{"url":"https://thinkdifferentnation.com/2020/08/are-we-the-only-mathematically-intelligent-specie-on-this-earth/","timestamp":"2024-11-09T01:20:26Z","content_type":"text/html","content_length":"165540","record_id":"<urn:uuid:7be2efa0-33e1-4f2c-86ce-9db72228a578>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00251.warc.gz"}
Multi-Querying: A Subsequence Matching Approach to Support Multiple Queries | Informatica | Vilnius University Institute of Data Science and Digital Technologies
1 Introduction
In recent years, the plummet in the cost of sensors and storage devices has resulted in massive time series data being captured and has substantially driven the need for the analysis of time series data. Among the various analyses and applications, the problem of subsequence matching is of primordial importance, as it serves as the foundation for many other data mining techniques, such as anomaly detection (Boniol and Palpanas, ; Boniol et al. ; Wang et al. ), and classification (Wang et al. ; Abanda et al. ; Iwana and Uchida, ; Boniol et al. ). Specifically, given a long time series, for any query series, the subsequence matching problem finds the subsequences most similar to the query (top-K query), or finds the subsequences whose distance to the query falls within a threshold (range query). In the last two decades, plenty of works have been proposed for this problem. Most existing works find results based on a strict distance, like Euclidean distance or Dynamic Time Warping. Among them are scanning-based approaches (Li et al. ; Rakthanmanon et al. ) and index-based ones (Linardi and Palpanas, ; Wu et al. ). Another type of work defines the query in a more flexible way. Query-by-Sketch (Muthumanickam et al. ) and SpADe (Chen et al. ) approximate the query with a sequence of line segments, and find subsequences that can be approximated in a similar way. Nevertheless, in many real applications, users are unable to accurately and clearly elaborate the query intuition with a single query sequence. Specifically, users may have different strictness requirements for different parts of the query. We illustrate this with the following two examples.
Case study 1. In the field of wind power generation, Extreme Operating Gust (EOG) (Hu et al. ) is a typical gust pattern, a phenomenon of dramatic changes of wind speed in a short period. Early detection of EOG can prevent damage to the turbine. A typical pattern of EOG, as in Fig. , has three physical phases, where its corresponding shape contains a slight decrease (1–100), followed by a steep rise and a steep drop (101–200), and a rise back to the original value (200–300). Users usually emphasize the steep increase/decrease in the second part, which means is more preferred compared to in Fig. . However, if the analyst submits query is more similar to under either ED or DTW distance.
Case study 2. During the high-speed train's work time, the sensor will continuously collect vibration data for monitoring. When the train passes by some source of interference, the value of the sensors will increase sharply, and return to a normal value after some time, as in Fig. . However, if is issued as a query, subsequence is more similar to it, which is an unexpected result.
Combining the two cases, we can learn that the pattern occurrences may have variable durations and distinct amplitudes. The strictest constraint is that the pattern should include an almost upright rise and an almost upright fall. The above examples clearly demonstrate the limitation of the single-query mechanism. Although a single query can express the shape the user is interested in, it is not enough to express the extent of time shifting and amplitude scaling, as well as the error range. To solve this problem, in this paper, we propose a multiple query approach. Compared to a single query, submitting a small number of queries together can express the query intuition more accurately. Consider the example in Fig. : if users take as the query set, it indicates that the user can show more tolerance in relation to the subsequence length and the value range, but less in relation to the slope of the increasing and decreasing parts. Moreover, submitting multiple queries is also a natural solution in real-world applications. For example, in the above case of train monitoring, analysts hope to find out all interfered subsequences, and then correct them. The analyst will go through a small part of the long sequence. Once coming across a few interfered subsequences, he/she can submit them together in order to find more instances. We first propose a probability-based representation of the multiple queries. Then, we design a novel distance function to measure the similarity of one subsequence to the multiple queries. In the end, a breadth-first search algorithm is proposed to find out the desired subsequences. To the best of our knowledge, this is the first work to study how to express the query intuition. Our contributions can be summarized as follows:
The rest of the paper is organized as follows. The related works are reviewed in Section . Section introduces definitions and notations. In Section , we introduce our approach in detail. Section presents an experimental study of our approach using synthetic and real datasets, and we offer conclusions in Section .
2 Related Work
In the last two decades, the problem of subsequence matching has been extensively studied. Existing approaches can be classified into two groups:
Fixed Length Queries: Traditionally, to find similar subsequences, two representative distance measures are adopted: Euclidean distance (ED) and Dynamic Time Warping (DTW). ED computes the similarity by a one-to-one mapping, while DTW allows disalignment and thus supports time shifting. UCR Suite (Rakthanmanon et al. ) is a well-known approach that supports both ED and DTW for subsequence matching and proposes cascading lower bounds for DTW to accelerate search speed. FAST (Li et al. ) is based on UCR Suite, and further proposes some lower bounds for the sake of efficiency. Both UCR Suite and FAST have to scan the whole time series to conduct distance computation. EBMS et al. ), however, reduces the subsequence matching problem to the vector matching problem, and identifies the candidate matches by the search of nearest neighbours in the vector space. Also, some index-based approaches have been proposed for similarity search. Most of them build indexes based on summarizations of the data series (e.g. Piecewise Aggregate Approximation (PAA) (Keogh et al. ), or Symbolic Aggregate approXimation (SAX) (Shieh and Keogh, )). Coconut (Kondylakis et al. ) overcomes the limitation that existing summarizations cannot be sorted while keeping similar data series close to each other and proposes to organize data series based on a -order curve. To further reduce the index creation time, adaptive indexing techniques have been proposed to iteratively refine the initial coarse index, such as ADS (Zoumpatianos et al. ).
Variable Length Queries: For variable length queries, SpADe (Chen et al. ) proposes a continuous distance calculation approach, which is not sensitive to shifting and scaling in both the temporal and amplitude dimensions.
It scans data series to get local patterns, and dynamically finds the shortest path among all local patterns to be the distance between two sequences. Query-by-Sketch (Muthumanickam et al. ) proposes an interactive approach to explore user-sketched patterns. It extracts a shape grammar, a combination of basic elementary shapes, from the sketched series, and then applies a symbolic approximation based on regular expressions. To better satisfy the user, Eravci and Ferhatosmanoglu ( ) attempt to improve the search results by incorporating diversity in the results for relevance feedback. Relatively speaking, indexing for variable length queries is more intractable. et al. ) utilizes multiple varied-length indexes to support normalized subsequence matching under either ED or DTW distance. ULISSE (Linardi and Palpanas, ), by comparison, uses a single index to answer similarity search queries of variable length. It organizes the series and their summaries in a hierarchical tree structure.
In summary, up to now, no existing work has attempted to express the query intuition via the multi-query mechanism.
3 Preliminaries
In this section, we begin by introducing all the necessary definitions and notations, followed by a formal problem statement.
3.1 Definition
In this work, we are dealing with time series. A time series $X=({x_{1}},{x_{2}},\dots ,{x_{N}})$ is an ordered sequence of real-valued numbers, where $N=|X|$ is the length of X. A subsequence S, ${S^{\prime }}$ or $X[i,j]=({x_{i}},{x_{i+1}},\dots ,{x_{j}})$ $(1\leqslant i\leqslant j\leqslant n)$ denotes the continuous sequence of length $j-i+1$ starting from the i-th position in X. Note that a subsequence is itself a time series. Given a time series X, a query sequence Q, and a distance function D, the problem of subsequence matching is to find out the top-K subsequences from X, denoted as $\mathbb{R}=\{{S_{1}},{S_{2}},\dots ,{S_{K}}\}$, which are most similar to Q. The two representative distance measures are Euclidean Distance (ED) and Dynamic Time Warping (DTW). Formally, given two length-L sequences, S and ${S^{\prime }}$, their ED and DTW distances can be computed as follows:
Definition 1. Euclidean Distance: $\textit{ED}(S,{S^{\prime }})=\sqrt{{\textstyle\sum _{i=1}^{L}}{({s_{i}}-{s^{\prime }_{i}})^{2}}}$, where ${s_{i}}$ and ${s^{\prime }_{i}}$ are the values at the i-th $(1\leqslant i\leqslant L)$ time stamp of S and ${S^{\prime }}$, respectively.
Definition 2. Dynamic Time Warping:
\[\begin{aligned}{}& \textit{DTW}\big(\langle \rangle ,\langle \rangle \big)=0;\hspace{2em}\textit{DTW}\big(S,\langle \rangle \big)=\textit{DTW}\big(\langle \rangle ,{S^{\prime }}\big)=\infty ;\\ {} & \textit{DTW}\big(S,{S^{\prime }}\big)=\sqrt{{\big({s_{1}}-{s^{\prime }_{1}}\big)^{2}}+\min \left\{\begin{aligned}{}& \textit{DTW}\big(\textit{suf}(S),\textit{suf}\big({S^{\prime }}\big)\big),\\ {} & \textit{DTW}\big(S,\textit{suf}\big({S^{\prime }}\big)\big),\\ {} & \textit{DTW}\big(\textit{suf}(S),{S^{\prime }}\big),\end{aligned}\right.}\end{aligned}\]
where $\langle \rangle $ indicates the empty series and $\textit{suf}(S)=({s_{2}},\dots ,{s_{L}})$ is a suffix subsequence of S.
In this paper, instead of processing one single query, we attempt to find out subsequences similar to multiple queries. That is, given a set of queries, $\mathbb{Q}=\{{Q_{1}},{Q_{2}},\dots ,{Q_{N}}\}$, our objective is to find out the top-K subsequences similar to the queries in $\mathbb{Q}$, denoted as $\mathbb{R}$. Since each query sequence varies in length, we do not impose a constraint on the length of subsequences in $\mathbb{R}$. In this way, we find out variable-length subsequences answering multiple queries, which is worthy of wide use in real time series applications.
4 Query Representation and Distance Definition
In this section, we first present the probability-based representation of the query set, and then propose a distance definition based on the representation.
4.1 Query Representation
In this paper, instead of processing the queries in $\mathbb{Q}$ independently, we first represent them by a unified formulation, which is a multi-dimensional probability distribution. Then we find target subsequences from X based on the representation. In many real world applications, the meaningful query sequence can be approximately represented as a sequence of line segments. Recall the EOG pattern in Fig. . The query sequence can be approximated with 4 line segments. These line segments capture the most representative characteristics of the query. In response, we propose a two-step approach to represent the bundle of queries together. In the first step, we represent each single query ${Q_{i}}$ in $\mathbb{Q}$ individually by a traditional segmentation in which each segment is described by some features. Then, in the second step, we represent each feature as a Gaussian distribution over the values from the multiple queries.
4.1.1 Step One: Represent Each Single Query
In step one, we perform a traditional segmentation. We use a bottom-up approach to convert the query ${Q_{i}}=({q_{1}},{q_{2}},\dots ,{q_{f}})$ into a piecewise linear representation, where ${q_{f}}$ is the f-th value of the single query ${Q_{i}}$ $(1\leqslant f\leqslant |{Q_{i}}|)$. Initially, we approximate ${Q_{i}}$ with $\big\lfloor \frac{|{Q_{i}}|}{2}\big\rfloor $ line segments. The j-th line, ${H_{j}}$, connects ${q_{2j-1}}$ and ${q_{2j}}$. Next, we iteratively merge the neighbouring lines. In each iteration, we merge the two neighbouring segments into one new line segment that has the minimal approximation error. The merging process repeats until we have m (a pre-set parameter) line segments. For each ${Q_{i}}$, we obtain its segmentation, denoted as $({Q_{i}^{1}},{Q_{i}^{2}},\dots ,{Q_{i}^{m}})$, and its linear representation, denoted as $({H_{i}^{1}},{H_{i}^{2}},\dots ,{H_{i}^{m}})$. For each line segment ${H_{i}^{j}}$ ($1\leqslant i\leqslant N$ and $1\leqslant j\leqslant m$), we represent it as a 4-dimension vector, ${f_{i}^{j}}=({l_{i}^{j}},{\theta _{i}^{j}},{v_{i}^{j}},{\varepsilon _{i}^{j}})$, which corresponds to the length, slope, value of the starting point and MSE error of ${H_{i}^{j}}$, respectively. As a result, the query sequence ${Q_{i}}$ is represented by a $4m$-dimension vector, ${F_{i}}=({f_{i}^{1}},{f_{i}^{2}},\dots ,{f_{i}^{m}})$.
4.1.2 Step Two: Represent Multiple Queries
After obtaining the ${F_{i}}$'s ($1\leqslant i\leqslant N$), we can generate the uniform representation of the query set $\mathbb{Q}$, which is a multi-dimensional probability distribution. We first present the formal distribution, and then give our approach to generate the specific distribution for $\mathbb{Q}$. Specifically, given query set $\mathbb{Q}$, its representation, denoted as ${P_{\mathbb{Q}}}$, consists of $4m$ individual Gaussian distributions, each of which corresponds to a feature in ${f_{i}^{j}}$.
For each feature, we produce a Gaussian distribution to capture the latent semantics, which is determined by two parameters: the mean value and the standard deviation. The former encodes the ideal value of the feature, while the latter provides an elastic range.

Formally, we denote the representation as ${P_{\mathbb{Q}}}=({P^{1}},{P^{2}},\dots ,{P^{m}})$, where ${P^{j}}=({p_{l}^{j}},{p_{\theta }^{j}},{p_{v}^{j}},{p_{\varepsilon }^{j}})$ corresponds to the j-th line segments $({H_{1}^{j}},{H_{2}^{j}},\dots ,{H_{N}^{j}})$. Take the slope feature as an example: ${p_{\theta }^{j}}$ is the Gaussian density function of the slope of the j-th segment, denoted as
\[ {p_{\theta }^{j}}(x)=\frac{1}{\sqrt{2\pi }{\sigma _{\theta }^{j}}}\exp \bigg(-\frac{{(x-{\mu _{\theta }^{j}})^{2}}}{2{({\sigma _{\theta }^{j}})^{2}}}\bigg),\]
where ${\mu _{\theta }^{j}}$ (or ${\sigma _{\theta }^{j}}$) is the mean value (or standard deviation) of the slope values of $({H_{1}^{j}},{H_{2}^{j}},\dots ,{H_{N}^{j}})$. Specifically, ${\mu _{\theta }^{j}}=\frac{{\textstyle\sum _{i=1}^{N}}{\theta _{i}^{j}}}{N}$ and ${\sigma _{\theta }^{j}}=\sqrt{\frac{1}{N}{\textstyle\sum _{i=1}^{N}}{({\theta _{i}^{j}}-{\mu _{\theta }^{j}})^{2}}}$. The mean value ${\mu _{\theta }^{j}}$ describes the slope the user prefers, and the standard deviation ${\sigma _{\theta }^{j}}$ represents how strictly the user stresses this feature. Apparently, the smaller the value of ${\sigma _{\theta }^{j}}$, the stricter the user's requirement.

Now we introduce our approach to generate ${P_{\mathbb{Q}}}$ for the query set $\mathbb{Q}$. We first get the representations $({l_{i}^{j}},{\theta _{i}^{j}},{v_{i}^{j}},{\varepsilon _{i}^{j}})$ that correspond to the features of the j-th line segment in ${Q_{i}}$. To get the specific Gaussian distribution ${P^{j}}=({p_{l}^{j}},{p_{\theta }^{j}},{p_{v}^{j}},{p_{\varepsilon }^{j}})$, we directly compute the mean value and the standard deviation of the feature values of $({H_{1}^{j}},{H_{2}^{j}},\dots ,{H_{N}^{j}})$. Then, ${P_{\mathbb{Q}}}=({P^{1}},{P^{2}},\dots ,{P^{m}})$.

4.2 Distance Definition

Given the fact that the query representation consists of $4m$ Gaussian distributions rather than a sequence, the existing distance measures, like ED and DTW, are inapplicable. In this paper, we propose a novel distance function $D(S,\mathbb{Q})$ based on the probability distribution. Formally, to define the distance between a subsequence S and the query representation ${P_{\mathbb{Q}}}$, we first approximate S with m line segments. We denote the segmentation as $\textit{seg}=({S^{1}},{S^{2}},\dots ,{S^{m}})$, where ${S^{j}}$ ($1\leqslant j\leqslant m$) denotes the j-th segment of S. Note that the segment here refers to a subsequence of S rather than a line segment. We extract four features, the length ${l^{j}}$, the slope ${\theta ^{j}}$, the value of the starting point ${v^{j}}$, and the MSE error ${\varepsilon ^{j}}$, from the linear representation of ${S^{j}}$. For ease of presentation, we collect all the j-th segments in $\mathbb{Q}$ as ${\mathbb{Q}^{j}}=({Q_{1}^{j}},{Q_{2}^{j}},\dots ,{Q_{N}^{j}})$. Then the distance between ${S^{j}}$ and ${\mathbb{Q}^{j}}$ is defined as
\[ \textit{dist}\big({S^{j}},{\mathbb{Q}^{j}}\big)=-\log \big({p_{l}^{j}}\big({l^{j}}\big)\,{p_{\theta }^{j}}\big({\theta ^{j}}\big)\,{p_{v}^{j}}\big({v^{j}}\big)\,{p_{\varepsilon }^{j}}\big({\varepsilon ^{j}}\big)\big),\]
which is the negative logarithm of the probability. The smaller the value, the more similar ${S^{j}}$ and ${\mathbb{Q}^{j}}$ are.
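Assuming the four features of every query segment have already been extracted, the representation ${P_{\mathbb{Q}}}$ and the per-segment distance can be sketched as follows (again our own minimal Python, not the authors' implementation):

```python
import numpy as np

def build_representation(features):
    """features[i][j] = (l, theta, v, eps) for the j-th segment of query Q_i.
    Returns per-segment, per-feature means and standard deviations."""
    features = np.asarray(features, dtype=float)   # shape (N, m, 4)
    mu = features.mean(axis=0)                     # shape (m, 4)
    sigma = features.std(axis=0)                   # population std, as in the text
    return mu, sigma

def segment_distance(s_features, mu_j, sigma_j, floor=1e-12):
    """dist(S^j, Q^j): negative log of the product of Gaussian densities."""
    s = np.asarray(s_features, dtype=float)        # the 4 features of S^j
    sigma_j = np.maximum(sigma_j, floor)           # guard against zero spread
    log_density = (-0.5 * ((s - mu_j) / sigma_j) ** 2
                   - np.log(np.sqrt(2.0 * np.pi) * sigma_j))
    return float(-np.sum(log_density))
```

The small floor on σ is only a numerical guard for the degenerate case in which all queries agree exactly on a feature.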
Accordingly, under a segmentation seg, the distance between S and the query set $\mathbb{Q}$ can be computed as
\[ D(S,\mathbb{Q},\textit{seg})={\sum \limits_{j=1}^{m}}\textit{dist}\big({S^{j}},{\mathbb{Q}^{j}}\big).\]
Since S can be segmented by different segmentations, the value of $D(S,\mathbb{Q},\textit{seg})$ may differ. In this paper, we define the distance between S and $\mathbb{Q}$ as the minimal one among all possible segmentations, that is, $D(S,\mathbb{Q})=\min _{\textit{seg}}D(S,\mathbb{Q},\textit{seg})$.

5 Query Processing Approach

In this section, we introduce the search process. Obviously, it is prohibitively expensive to exhaustively find the best segmentation of every subsequence in the time series X. In response, we divide the search process into two phases:

• 1. Candidate generation. Given the submitted query set $\mathbb{Q}$, we utilize a Breadth-First Search (BFS) strategy to find at most ${n_{c}}$ candidates from X, denoted as $\textit{CS}$.
• 2. Post-processing. We verify the candidates in $\textit{CS}$ under their optimal segmentations, dismiss trivial matches, and re-order them to obtain the final top-K results.

5.1 BFS-Based Search Process

We first introduce our search strategy to generate the candidate set $\textit{CS}$ with size not exceeding the parameter ${n_{c}}$. We utilize an iterative approach, and generate the candidates segment-by-segment. In the first round, we generate at most ${n_{c}}$ candidates with only one segment. The candidate set is denoted as ${\textit{CS}_{1}}=\{c{s_{1}},c{s_{2}},\dots ,c{s_{{n_{c}}}}\}$. Each candidate, $c{s_{i}}$, is a triple $\langle {s_{i}},{e_{i}},{d_{i}}\rangle $, in which ${s_{i}}$ is its starting point and ${e_{i}}$ is its ending point. So $c{s_{i}}$ corresponds to the subsequence $X[{s_{i}},{e_{i}}]$. The third element ${d_{i}}$ is the distance $\textit{dist}(c{s_{i}},{\mathbb{Q}^{1}})$. All candidates in ${\textit{CS}_{1}}$ are ordered in ascending order of ${d_{i}}$. In other words, ${\textit{CS}_{1}}$ contains the ${n_{c}}$ subsequences with the smallest distance to ${\mathbb{Q}^{1}}$. We discuss how to select the top-${n_{c}}$ candidates in the next section.

In the second round, we obtain the candidate set ${\textit{CS}_{2}}$ by extending the candidates in ${\textit{CS}_{1}}$ with the second segment. Specifically, given any candidate subsequence $cs=\langle s,e,d\rangle $ in ${\textit{CS}_{1}}$, if we want to extend $cs$ to $c{s^{\prime }}$ with a length-L segment, the new candidate $c{s^{\prime }}=\langle s,{e^{\prime }},{d^{\prime }}\rangle $ contains two segments: one corresponds to $X[s,e]$ and the other to $X[e+1,e+L]$. Note that the starting point s remains unchanged, and the new ending point ${e^{\prime }}$ changes to $e+L$. Also, the new distance ${d^{\prime }}$ is updated to $d+\textit{dist}(X[e+1,{e^{\prime }}],{\mathbb{Q}^{2}})$. Since each candidate $cs$ in ${\textit{CS}_{1}}$ can be extended to multiple candidates by concatenating $X[s,e]$ with variable-length segments, we generate all possible candidates, compute their distances, and add the top-${n_{c}}$ candidates into ${\textit{CS}_{2}}$. After m rounds, we obtain the candidate set ${\textit{CS}_{m}}$, which is the final candidate set $\textit{CS}$. Now each candidate in $\textit{CS}$ consists of m segments.

5.2 Candidate Generation

Now we introduce our approach to generate possible candidates in each round. In the first round, we enumerate subsequences from all possible starting points, that is, we try every starting point s within X. To avoid low-quality candidates, we only select subsequences whose length satisfies the $3\sigma $ standard. Formally, given any starting point s, the ending point e must satisfy $e-s+1\in [{\mu _{l}^{1}}-3{\sigma _{l}^{1}},{\mu _{l}^{1}}+3{\sigma _{l}^{1}}]$.
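The round-by-round growth of Sections 5.1–5.2 is essentially a beam search over segment boundaries. A schematic Python version is given below (our own simplification: `dist_fn(points, j)` stands in for $\textit{dist}(\cdot ,{\mathbb{Q}^{j}})$, `length_windows[j-1]` for the $3\sigma $ length range of the j-th segment, and slices use 0-based, end-exclusive indexing):

```python
import heapq

def beam_search_candidates(X, m, length_windows, dist_fn, n_c):
    """Grow candidates segment-by-segment, keeping at most n_c per round.

    Candidates are triples (d, s, e) with X[s:e] the covered subsequence.
    """
    # round 1: all starting points, all allowed first-segment lengths
    candidates = []
    for s in range(len(X)):
        for L in length_windows[0]:
            e = s + L
            if e <= len(X):
                candidates.append((dist_fn(X[s:e], 1), s, e))
    candidates = heapq.nsmallest(n_c, candidates)

    # rounds 2..m: extend each survivor with one more segment
    for j in range(2, m + 1):
        extended = []
        for d, s, e in candidates:
            for L in length_windows[j - 1]:
                e2 = e + L
                if e2 <= len(X):
                    extended.append((d + dist_fn(X[e:e2], j), s, e2))
        candidates = heapq.nsmallest(n_c, extended)
    return candidates
```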
For each candidate subsequence, we compute the optimal linear approximation, $y=\theta \cdot x+b$, as well as the MSE error, as follows:
\[ \left\{\begin{array}{l}\theta =\displaystyle \frac{12\textstyle\sum ix-6(l+1)\textstyle\sum x}{l(l+1)(l-1)},\\[6pt] b=\displaystyle \frac{6\textstyle\sum ix-2(2l+1)\textstyle\sum x}{l(1-l)},\\[6pt] \varepsilon =\displaystyle \sum {x^{2}}+{\theta ^{2}}\sum {i^{2}}+l{b^{2}}-2\theta \sum ix-2b\sum x+2\theta b\sum i,\end{array}\right.\]
where l is the length of the candidate subsequence. After that, we compute $\textit{dist}(X[s,e],{\mathbb{Q}^{1}})$. Also, if any other feature of a candidate (its slope, starting value or MSE error) violates the $3\sigma $ standard, we ignore this candidate. While enumerating different candidates, we maintain ${\textit{CS}_{1}}$ as a priority queue to keep the top-${n_{c}}$ candidates.

In the j-th round ($2\leqslant j\leqslant m$), we generate candidates by extending previous ones in ${\textit{CS}_{j-1}}$. For each candidate $cs=\langle s,e,d\rangle $ in ${\textit{CS}_{j-1}}$, we try all possible segments next to $X[s,e]$ whose length falls within $[{\mu _{l}^{j}}-3{\sigma _{l}^{j}},{\mu _{l}^{j}}+3{\sigma _{l}^{j}}]$. Similar to the first round, we also dismiss the candidates violating the $3\sigma $ standard. When extending candidate $cs=\langle s,e,d\rangle $ to $c{s^{\prime }}=\langle s,{e^{\prime }},{d^{\prime }}\rangle $, besides the new ending point ${e^{\prime }}$, we also update the distance as ${d^{\prime }}=d+\textit{dist}(X[e+1,{e^{\prime }}],{\mathbb{Q}^{j}})$.

5.3 Post-Processing

Note that the subsequences in $\textit{CS}$ are not approximated optimally. Here, an additional refinement step has to be performed, in which subsequences are verified and re-ordered using the optimal segmentation. Specifically, we fetch the subsequences in $\textit{CS}$, approximate each subsequence $cs$ with m line segments via a dynamic programming algorithm, and thus get the actual distance $D(cs,\mathbb{Q})$ under the new segmentation. The objective of the segmentation is to minimize the distance between $cs$ and $\mathbb{Q}$. We search for the optimal segmentation from left to right sequentially on $cs$. We define $E(i,j)$ ($1\leqslant i\leqslant m$, $1\leqslant j\leqslant |cs|$) to be the minimal distance between the prefix $cs[1,j]$ of $cs$ and the prefix of the first i query segments (i.e. $[{\mathbb{Q}^{1}},{\mathbb{Q}^{2}},\dots ,{\mathbb{Q}^{i}}]$). We begin by initializing $E(1,j)$ to be $\textit{dist}(cs[1,j],{\mathbb{Q}^{1}})$. When computing $E(i,j)$ $(2\leqslant i\leqslant m)$, we consider all the possible positions k $(i\leqslant k\leqslant j)$ at which the first $i-1$ segments can end, compare the sums $E(i-1,k)+\textit{dist}(cs[k,j],{\mathbb{Q}^{i}})$, and define $E(i,j)$ to be the minimal one. Formally, the dynamic programming equation is presented as the following:
\[ E(i,j)=\left\{\begin{array}{l@{\hskip4.0pt}l}\textit{dist}(cs[1,j],{\mathbb{Q}^{1}}),\hspace{1em}& i=1,\\ +\infty ,\hspace{1em}& i\gt j,\\ \underset{i\leqslant k\leqslant j}{\min }\big(E(i-1,k)+\textit{dist}\big(cs[k,j],{\mathbb{Q}^{i}}\big)\big),\hspace{1em}& \text{otherwise}.\end{array}\right.\]

We re-compute the distance between each subsequence in $\textit{CS}$ and $\mathbb{Q}$. Before re-ordering, we have to dismiss the trivial-match subsequences, since candidates can overlap with each other. Formally, given two candidates $cs=X[s,e]$ and $c{s^{\prime }}=X[{s^{\prime }},{e^{\prime }}]$, if their overlapping ratio, $\frac{|cs\cap c{s^{\prime }}|}{|cs\cup c{s^{\prime }}|}$, exceeds $\min (0.8,\frac{\min (|Q|)}{\max (|Q|)})$, where the latter term is the ratio between the minimum and maximum lengths of the queries in $\mathbb{Q}$, we dismiss the subsequence with the larger distance.
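The refinement step above is a small interval dynamic program. A minimal Python sketch (our own naming; `dist_fn(points, j)` again stands in for $\textit{dist}(\cdot ,{\mathbb{Q}^{j}})$) could read:

```python
def optimal_segmentation_distance(cs, m, dist_fn):
    """D(cs, Q): minimal distance over all splits of cs into m segments."""
    n = len(cs)
    INF = float("inf")
    # E[i][j]: best distance of cs[1..j] matched against the first i query segments
    E = [[INF] * (n + 1) for _ in range(m + 1)]
    for j in range(1, n + 1):
        E[1][j] = dist_fn(cs[0:j], 1)
    for i in range(2, m + 1):
        for j in range(i, n + 1):
            # previous segments end at point k; the i-th segment covers cs[k..j]
            E[i][j] = min(E[i - 1][k] + dist_fn(cs[k - 1:j], i)
                          for k in range(i, j + 1))
    return E[m][n]
```

Recording the minimizing k values would additionally recover the optimal segment boundaries for each candidate.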
Afterwards, we simply sort the remaining subsequences in $\textit{CS}$ according to $D(cs,\mathbb{Q})$ and select the K smallest as the final results $\mathbb{R}$.

5.4 Optimization

In this section, we propose two optimization strategies to further accelerate the search process.

5.4.1 Basic Aggregates Based Linear Representation Computation

In the search process, for each candidate, we adopt linear regression to find the best line segment $y=\theta \cdot x+b$ in a least-squares sense using Eq. ( ). It is noteworthy that θ, b and ε can be computed from a combination of three basic aggregates: $\textstyle\sum x$, $\textstyle\sum {x^{2}}$ and $\textstyle\sum ix$, with cost $O(1)$, as proposed in Wasay et al. ( ). As long as we maintain three in-memory arrays to store these basic aggregates of the time series X, linear regression can be conducted in $O(1)$ time for any subsequence in X. However, when processing extremely long sequences, the storage overheads are unaffordable. As a consequence, we propose to split the time series into several blocks, conduct subsequence matching within each block sequentially, and summarize the results in the end. Obviously, this approach reduces memory consumption while remaining accurate and efficient.

5.4.2 Adjusting the Searching Order

Up to now, we have generated the candidates segment-by-segment sequentially. However, different standard deviations of the feature values of different segments in $\mathbb{Q}$ result in different search spaces. For example, in $\mathbb{Q}$, one segment may have a standard deviation of the length feature ${\sigma _{l}}=5$, while another segment has ${\sigma _{l}}=10$. According to the $3\sigma $ standard, the former has $5\cdot 3\cdot 2+1=31$ candidates, while the latter has 61 candidates. Obviously, we should start the search from the former. Specifically, we perform the search process in an optimized order, where we first consider the segment whose standard deviation of the length feature is the smallest, and then consider its neighbouring segments. Suppose there are 3 sets of line segments in $\mathbb{Q}$, i.e. $m=3$, and their standard deviations of the length feature are $2,3,1$ respectively, that is, ${\sigma _{l}^{1}}=2$, ${\sigma _{l}^{2}}=3$, ${\sigma _{l}^{3}}=1$. Then, in the search process, we generate candidates from right to left sequentially.

5.5 Complexity Analysis

The overall process of our approach can be divided into two phases: query representation and query processing. Before the two phases, we have to scan the whole time series X once to store the basic aggregates. Its time cost is $O(n)$, where n is the length of X. Then, in the first phase, we scan all the queries and perform traditional piecewise linear approximations. Thanks to the basic aggregates, the time cost of conducting linear regression is negligible. This phase is then $O(N|Q{|^{2}})$ in time complexity, where N is the number of queries. For ease of presentation, although multiple queries can vary in length, we denote by $|Q|$ the length of the longest query.

In the second phase, we first assume ${w_{i}}=6{\sigma _{l}^{i}}+1$, where ${\sigma _{l}^{i}}$ is the standard deviation of the length values of $({H_{1}^{i}},{H_{2}^{i}},\dots ,{H_{N}^{i}})$. Suppose we generate candidates from left to right sequentially; when considering the first segment, it requires $O({w_{1}}n\log ({n_{c}}))$ time to maintain the candidate set. For each following i-th segment, the time complexity is $O({w_{i}}{n_{c}}\log ({n_{c}}))$.
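The constant-time regression of Section 5.4.1 amounts to keeping prefix sums of the three aggregates. A small Python sketch of the idea (ours, not the authors' code; it assumes the fitted segment has length at least 2) is:

```python
import numpy as np

class LineFitAggregates:
    """Prefix sums of x, x^2 and i*x for O(1) least-squares line fits."""

    def __init__(self, x):
        x = np.asarray(x, dtype=float)
        i = np.arange(1, len(x) + 1, dtype=float)
        self.sx  = np.concatenate(([0.0], np.cumsum(x)))      # prefix sums of x
        self.sx2 = np.concatenate(([0.0], np.cumsum(x * x)))  # of x^2 (for epsilon)
        self.six = np.concatenate(([0.0], np.cumsum(i * x)))  # of i*x

    def fit(self, s, e):
        """Slope and intercept of the least-squares line over X[s:e] (0-based,
        end-exclusive), with the segment re-indexed so its first point is i = 1."""
        l = e - s                                    # segment length, must be >= 2
        sum_x = self.sx[e] - self.sx[s]
        sum_ix = (self.six[e] - self.six[s]) - s * sum_x   # shift global indices
        theta = (12 * sum_ix - 6 * (l + 1) * sum_x) / (l * (l + 1) * (l - 1))
        b = (6 * sum_ix - 2 * (2 * l + 1) * sum_x) / (l * (1 - l))
        return theta, b
```

The same prefix sums, together with the closed forms of $\sum i$ and $\sum i^{2}$, give the MSE term ε in constant time as well.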
In the post-processing phase, we have to verify and compute the actual distance between $\mathbb{Q}$ and every subsequence $cs$ in the candidate set. It takes $O({n_{c}}m|cs{|^{2}})$ time to conduct the dynamic programming algorithm for the ${n_{c}}$ candidates, where m is the number of line segments. Moreover, the re-ordering process is at most $O({n_{c}}\log {n_{c}})$ in time complexity. Since ${w_{i}}$ is a constant, ${n_{c}}$ is proportional to n, and $|cs|$ and $|Q|$ are much smaller than n, we can infer that the time complexity of our approach is $O(n\log ({n_{c}}))$.

6 Experiments

In this section, we conduct extensive experiments to verify the effectiveness and efficiency of our approach. All experiments are run on a PC with Intel Core i7-7700K CPU (8 cores @ 4.2 GHz) and 16 GB RAM.

6.1 Datasets

6.1.1 Synthetic Datasets

We generate the synthetic sequence as follows. First, we generate a long random walk sequence T. Then we embed in T some meaningful pattern instances, some of which are taken as queries, and the others as target results. We generate two types of patterns: the W-wave from the Adiac dataset in the UCR archive (Dau et al. ), and the backward wave, common in stock series. For each pattern, we first generate a seed instance and add some noise following a Gaussian distribution, as shown in Fig. . Then we modify them to generate more instances. Specifically, we adopt three types of variations, length, amplitude, and shape, as follows.

• Length: It is obvious that the W-wave in Fig. (a) can be split into four segments. For the middle two segments, we change the segment length with a scaling factor λ by inserting or deleting some data points. In other words, for a length-l segment, the length of the new segment is $l\cdot \lambda $.
• Amplitude: Similar to the case of length scaling, we increase or decrease the amplitude of the middle two segments in the W-wave. We still use a factor λ to control the extent. Specifically, every value in the new segment changes to $v\cdot \lambda $, where v is the original value.
• Shape: For the backward wave shown in Fig. (b), we change the global shape of the pattern by modifying the last two segments in both length and amplitude. To be more specific, we change the segment length from l to $l\cdot \lambda $, and the data values from v to $v\cdot \lambda $.

Note that the three types of variations influence different features of the line segments. We list them respectively in Table . For each type of variation, we test our approach under different extents. Take the length variation as an example. To generate a dataset, we first set a parameter r, which determines the length variation range. Specifically, given r, we can only set the length scaling factor λ within the range $[1/(1+r),1+r]$. Obviously, the larger the value of r, the larger the length scaling extent. For each variation, given the fixed r, we pick out 50 values from the range as the values of λ. We then generate 50 corresponding pattern instances, 40 of which are randomly planted into the long random walk time series. The remaining 10 instances form the query set $\mathbb{Q}$. A summary of the synthetic datasets is presented in Table .

6.1.2 Real Datasets

The real dataset is the train monitoring dataset, a time series collected by a vibration sensor. Its total length is 15 million. There exist more than 100 interference subsequences that vary in length and amplitude. The length of these subsequences is within the range of 200 to 2500.
Consequently, we maintain a query set of size 15 whose lengths are distributed almost uniformly between 200 and 2500. The remaining ones are left as the target results.

6.2 Counterpart Approaches

Note that it is difficult to find reasonable baselines to compare our approach against, because the existing methods are only devised for the case of a single query. Since no approach can deal with multiple queries as a whole, we choose two representative subsequence matching algorithms, UCR Suite (Rakthanmanon et al. ) and SpADe (Chen et al. ), and then enable them to handle the problem of multiple queries. UCR Suite finds the best normalized subsequences and supports both Euclidean Distance and Dynamic Time Warping (UCR-ED and UCR-DTW for short). SpADe finds the shortest path within several local patterns, and is able to handle shifting and scaling in both the temporal and amplitude dimensions. To make UCR Suite (or SpADe) support multiple queries, we first find the top-K similar subsequences for each query based on UCR Suite (or SpADe). We then sort these $K\cdot N$ subsequences in ascending order of their distances (normalized by the subsequence length) and pick out the top-K ones, excluding any trivial results, as the final answers. For UCR Suite, we utilize both ED and DTW as the distance, denoted as UCR-ED and UCR-DTW, respectively.

Fig. 4 Accuracy comparisons under different variations.

Letting N be the number of queries in $\mathbb{Q}$, we denote our approach and the other three competitors as MQ-N, UCR-ED-N, UCR-DTW-N and SpADe-N, respectively. Note that the three rivals have to scan the time series N times. For fairness, we do not count I/O time when comparing efficiency.

6.3 Results on Synthetic Datasets

Fig. 5 Efficiency comparisons under different variations.

In the first experiment, we compare our approach MQ with UCR-ED, UCR-DTW, and SpADe on synthetic datasets. Both accuracy and efficiency are tested. To compare these approaches extensively, we vary the parameter r for all three variations: length, amplitude and shape. Specifically, for the length variation, we set the maximal scaling factor r from 0.25 to 0.7 with step 0.05. For the amplitude variation, we vary r in a range of 0.2 to 2 with step 0.2. For the shape variation, we change r in a range of 0.05 to 0.5 with step 0.05. We set the number of queries, N, to 5 and 10, respectively. The experimental results are then indicated by MQ-5, MQ-10, UCR-DTW-5, UCR-DTW-10, SpADe-5, and SpADe-10. For MQ, we set the only parameter, the size of the candidate set ${n_{c}}$, to $0.05\cdot n$, where n is the length of the time series X.

In each set of experiments, we attempt to find the top-40 subsequences in the time series. The accuracy is the ratio between the number of correct subsequences and 40. A subsequence S is correct if the overlapping ratio between S and a certain planted instance exceeds the tolerance parameter $\epsilon =\min \big(0.8,\frac{\min (|Q|)}{\max (|Q|)}\big)$. The results are shown in Fig. and Fig. , respectively. It can be seen that MQ outperforms UCR-DTW and SpADe in both accuracy and efficiency under all variations. The reason is that MQ summarizes common characteristics across multiple queries, while the other approaches are only able to find the subsequences that are similar to a certain given query. As a consequence, when the number of query sequences increases, UCR-DTW and SpADe yield better results.
Nevertheless, UCR-DTW-10 (or SpADe-10) is twice as slow as UCR-DTW-5 (or SpADe-5) on average, which means that with more query sequences provided, UCR-DTW and SpADe can find more satisfying subsequences, but at the cost of efficiency. Instead, MQ captures the latent semantics and thus demonstrates its superiority in both accuracy and efficiency under different variations. The only exception is in Fig. (c), where the accuracy of MQ sharply decreases when r becomes large, which is mainly because undue scaling in shape results in ambiguity about what users really want.

6.4 Results on Real Datasets

In this experiment, we compare our approach with the other ones as the number of query sequences varies on the real dataset. We pick out N queries from the query set of size 15 so that their lengths are distributed uniformly. Note that regardless of the value of N, we find the top-100 subsequences from the real time series. The results are shown in Fig. . It can be seen that in Fig. (a), the accuracy of MQ, UCR-ED, and UCR-DTW increases as N increases, while the accuracy of SpADe is consistently low. The reason is that SpADe extracts local patterns by using a fixed-size sliding window, and consequently, it fails to capture the query intuition. Moreover, it is noteworthy that even for a small N, the accuracy of MQ already exceeds 0.9, which means that MQ is able to find what users really want with only a small query set. Fig. (b) compares MQ and the other approaches on efficiency. Obviously, MQ outperforms all other approaches, and its running time is insensitive to N, since MQ summarizes all the queries and searches the time series only once. Due to the specific pattern shape and the lack of abundant pruning strategies, UCR-ED is inferior to UCR-DTW in terms of efficiency on this dataset.

6.5 Influence of Parameter ${n_{c}}$

In this experiment, we investigate the influence of the parameter ${n_{c}}$, the size of the candidate set. On the real dataset, we vary ${n_{c}}$ from $0.01\cdot n$ to $0.1\cdot n$, and the number of queries N is set to 3, 6, and 9, respectively. Results are shown in Fig. . It can be seen that as ${n_{c}}$ gets larger, the accuracy of MQ increases while the efficiency decreases. This is because, once the query set is fixed, we can maintain more candidates by enlarging the candidate set, i.e. increasing the value of ${n_{c}}$, at the cost of more running time. Also, we can find that MQ already achieves satisfying results when ${n_{c}}=0.05\cdot n$, so we set the default value of ${n_{c}}$ to $0.05\cdot n$ in the previous experiments. Generally, as shown in Fig. (a), the accuracy of MQ increases as the number of queries increases. The only exception occurs when the candidate set is small. The reason is that the standard deviation of the query feature values increases as N increases, resulting in a larger search space. The small candidate set then fails to maintain enough candidates, and thus achieves poor results. Meanwhile, the changes in the search space cause the running time to increase proportionally, as shown in Fig. .

7 Conclusions

In this paper, we have proposed a novel subsequence matching approach, Multi-querying, to reflect the query intuition. Given multiple queries, we use a multi-dimensional probability distribution to represent them. A breadth-first search algorithm is then applied to find the top-K most similar subsequences. Extensive experiments have demonstrated that Multi-querying outperforms the state-of-the-art algorithms in terms of accuracy and performance.
To the best of our knowledge, this is the first study to introduce the concept of multiple queries to express the query intuition.
3D Modeling of the SARS-CoV-2 Virus in the Wolfram Language

Late 2019 and 2020 saw a historic pandemic that caused worldwide lockdowns, economic difficulties and, of course, sickness and death resulting from the onset of COVID-19. Because the virus, known as SARS-CoV-2 or severe acute respiratory syndrome coronavirus 2, is novel, a massive amount of research is going into trying to understand it. Although the virus is tiny, modeling it from its constituent molecules is a challenging task; with a home computer and the Wolfram Language, along with reasonable hardware resources, this task can be tackled.

The original data and PDB files provided by the authors have been imported, cleaned and made computable. I want to thank Ondrej Strnad for his guidance in using their data. Due to the large size of the data and graphics, this can be taxing to systems with low system resources. For the record, I evaluated this using a Windows 10 system with an Intel Core i9 processor, 16 GB of system memory and an Nvidia GeForce 1660 graphics card.

Get a dataset that contains the 3D graphic, molecule representation and VertexCount for each element of the SARS-CoV-2 model: The resulting dataset is approximately 30 MB total: The "S" element has been simplified by removing spheres and cylinders to reduce the memory footprint; its byte count is about 22 MB, and so it makes up the bulk of the total size of the model: We can view the 3D models of the individual elements: We also need to know how to assemble the full model from the individual elements. This is done using a dataset containing structural information for all of the elements: As mentioned before, the "S" element is the largest structure, and there are 100 instances of it: The structure data includes nearly 250,000 instances of the individual elements: We can use the VertexCoordinates for all of the elements to estimate the number of atoms besides hydrogen that are used in the full model: So the full model has nearly 25,000,000 atoms in it.

Analyze the Distance between Structures

Analyze the mean radial distance between structures:

Quaternion Rotation Approach

The structure data contains not just the 3D positions of the elements, but also the rotation for each element, represented as a quaternion. Each quaternion needs to be converted into a rotation matrix: Now we need to provide functions that will rotate and translate the specified element based on the structure data: The "primary" elements seem to be the submodels called "M" and "S". The position data is straightforward, and can just be plotted as points. The blue points are the positions of the "M" structures, and the orange ones are the positions of the "S" structures: From a structural point of view, the "M" and "S" elements are the most important. The other elements have less impact on the visual appearance, and there are huge numbers of them: Now we can actually construct the full 3D model from the elements. Because the spheres and cylinders that make up some of the models are so small, there is no need to render them in full detail, so we can use the Method option to reduce their complexity during rendering: We can "slice" the model in half to let us see the internal detail: Another approach to visualizing the SARS-CoV-2 virus is based on volumetric data. This can be useful for getting a view that looks more like a fuzzy microscope image.
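Outside the Wolfram Language, the quaternion-to-rotation-matrix conversion used in the quaternion rotation approach above is a standard computation; a small Python/NumPy sketch of it (our own helper, assuming a (w, x, y, z) component ordering, and not code from the original notebook) looks like this:

```python
import numpy as np

def quaternion_to_rotation_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = np.asarray(q, dtype=float) / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (y * z + w * x),     2 * (x * z - w * y),     1 - 2 * (x * x + y * y)],
    ])

def place_instance(vertices, q, t):
    """Rotate element vertices (n x 3 array) by quaternion q, then translate by t."""
    R = quaternion_to_rotation_matrix(q)
    return np.asarray(vertices) @ R.T + np.asarray(t)
```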
The original structural data file provides a scale factor that will be needed: Next we use the vertex coordinates of the elements and scale them: We can view the results of the above on the "S" element: Next we rotate and translate all instances of the models: In order to represent the model as a volumetric region, we need to discretize all of the vertex coordinates into voxels of size pp×pp×pp: We can test the above by looking only at the instances of the "S" element, which make up the "spikes" of the virus: We can visualize the volumetric results by looking at slices through the model: Another variation also involves volumetric visualization, but is based on vertex density. To start, we make a gray-level version of atom density: Since the "S" and "M" proteins make up the primary components of the structure, we use only those, and use a coarse grid initially: We can also apply different color schemes: We can include all of the structures, even the small ones, for a more complete and accurate picture. (Even on a 512×512×512 grid, you can rotate interactively with fairly good responsiveness.) Media-modeled images of the SARS-CoV-2 virus often come color-coded. We can incorporate some of those concepts as well. We can use colors for the "E", "M" and "S" elements from here. Some of the small elements are not obvious in media images, so we just assign random colors to those: Viewing the volumetric data in slices makes it easier to see the internal structure:

Nguyen, N., Strnad, O., Klein, T., Luo, D., Alharbi, R., Wonka, P., Maritan, M., Mindek, P., Autin, L., Goodsell, D. and Viola, I. 2020. "Modeling in the Time of COVID-19: Statistical and Rule-based Mesoscale Models." https://arxiv.org/abs/2005.01804.
Ponds to Newtons Converter ⇅ Switch toNewtons to Ponds Converter How to use this Ponds to Newtons Converter 🤔 Follow these steps to convert given force from the units of Ponds to the units of Newtons. 1. Enter the input Ponds value in the text field. 2. The calculator converts the given Ponds into Newtons in realtime ⌚ using the conversion formula, and displays under the Newtons label. You do not need to click any button. If the input changes, Newtons value is re-calculated, just like that. 3. You may copy the resulting Newtons value using the Copy button. 4. To view a detailed step by step calculation of the conversion, click on the View Calculation button. 5. You can also reset the input by clicking on button present below the input field. What is the Formula to convert Ponds to Newtons? The formula to convert given force from Ponds to Newtons is: Force[(Newtons)] = Force[(Ponds)] × 0.009806650000000272 Substitute the given value of force in ponds, i.e., Force[(Ponds)] in the above formula and simplify the right-hand side value. The resulting value is the force in newtons, i.e., Force[(Newtons)]. Calculation will be done after you enter a valid input. Consider a small object that exerts a force of 600 pond on a surface. Convert this force from pond to Newtons. The force of object in ponds is: Force[(Ponds)] = 600 The formula to convert force from ponds to newtons is: Force[(Newtons)] = Force[(Ponds)] × 0.009806650000000272 Substitute given weight of object, Force[(Ponds)] = 600 in the above formula. Force[(Newtons)] = 600 × 0.009806650000000272 Force[(Newtons)] = 5.884 Final Answer: Therefore, 600 p is equal to 5.884 N. The force of object is 5.884 N, in newtons. Consider a mechanical press applying 1,000 pond of force to compress material. Convert this force from pond to Newtons. The force of mechanical press in ponds is: Force[(Ponds)] = 1000 The formula to convert force from ponds to newtons is: Force[(Newtons)] = Force[(Ponds)] × 0.009806650000000272 Substitute given weight of mechanical press, Force[(Ponds)] = 1000 in the above formula. Force[(Newtons)] = 1000 × 0.009806650000000272 Force[(Newtons)] = 9.8067 Final Answer: Therefore, 1000 p is equal to 9.8067 N. The force of mechanical press is 9.8067 N, in newtons. A pond (p) is an older unit of force equal to gram-force. It is largely obsolete but was once used to measure small forces, similar to those exerted by small masses in everyday situations. A newton is the standard unit of force in the International System of Units (SI). It is named after Sir Isaac Newton in honor of his work in physics, particularly his second law of motion. One newton is the amount of force needed to accelerate a one-kilogram mass by one meter per second squared. Newtons are widely used to measure forces in engineering, mechanics, and daily life, such as the force you exert when pushing a door. Frequently Asked Questions (FAQs) 1. What is the formula for converting Ponds to Newtons in Force? The formula to convert Ponds to Newtons in Force is: Ponds * 0.009806650000000272 2. Is this tool free or paid? This Force conversion tool, which converts Ponds to Newtons, is completely free to use. 3. How do I convert Force from Ponds to Newtons? To convert Force from Ponds to Newtons, you can use the following formula: Ponds * 0.009806650000000272 For example, if you have a value in Ponds, you substitute that value in place of Ponds in the above formula, and solve the mathematical expression to get the equivalent value in Newtons.
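For anyone scripting this conversion instead of using the calculator above, a minimal Python helper (ours, not part of this tool) reproduces the same formula and examples:

```python
POND_TO_NEWTON = 0.00980665  # 1 pond = 1 gram-force = 0.00980665 N

def ponds_to_newtons(ponds: float) -> float:
    """Convert a force from ponds (gram-force) to newtons."""
    return ponds * POND_TO_NEWTON

print(ponds_to_newtons(600))    # approximately 5.884 N
print(ponds_to_newtons(1000))   # approximately 9.8067 N
```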
There will be two internal tests in a semester and the average of these two will be taken into account for the award of marks for internal tests. The end-semester examination of the project work will be conducted by the same committee appointed for the industry-oriented mini-project. The evaluation of the project work takes place at the end of the fourth year. Table 1: Compulsory Subjects Serial Number Subject Particulars The goal is to find the relationship between the variables x and y from the given data (x,y). Solution of Algebraic and Transcendental Equations and Linear System of Equations: Introduction – Graphical interpretation of the solution of equations. The bisection method – The false position method – The iteration method – Newton-Raphson method. One will be able to find the expansion of a given function by Fourier series and Fourier transform of the function. In the future, global problems and issues will require an in-depth understanding of chemistry to have a global solution. This syllabus aims to bridge concepts and theory of chemistry with examples from practical areas of application, thus strengthening the connection between natural sciences and engineering. It deals with the basic principles of various branches of chemistry which are basic tools necessary for a skilled engineer. Computer Assisted Language Learning (CALL) Lab Interactive Communication Skills (ICS) Lab A computer-based language laboratory for 40 students with 40 systems, one main console, LAN capability and English language software for students to learn independently. Spoken English (CIEFL) in 3 volumes with 6 cassettes, OUP English Pronouncing Dictionary Daniel Jones Current Edition with CD Prescribed Lab Manual: A Manual entitled “English Language Communication Skills (ELCS) Lab Manual- cum- Work Book”, The Internet & World Wide Web module introduces the various ways of connecting the PC to the Internet from home and work and using the Internet effectively. Importance of LaTeX and MS office 2007/ equivalent (FOSS) tool Word as a word processor Details of the three tasks and functions that will be covered in each, using LaTeX and Word – Access, overview of toolbars, saving files , Use of help and resources, rulers, format templates. Week 15 - Excel Orientation: The mentor must tell the importance of MS office 2007/equivalent (FOSS) tool Excel as a spreadsheet tool, give the details of the two tasks and functions that will be covered in each. Complex functions – Differentiation and Integration: Complex functions and its representation in the Argand plane, Concepts of limit continuity, differentiability, analyticity, Cauchy-Riemann conditions, Harmonic functions – Milne – Thompson method. After passing this course the student will be able to analyze complex functions with reference to their analyticity, Integration using Cauchy's integral theorem,. Multiple Random Variables: Vector Random Variables, Joint Distribution Function, Properties of Joint Distribution, Marginal Distribution Functions, Conditional Distribution and Density – Point Conditional, Conditional Distribution and Density – Interval Conditional, Statistical Value of independence of independence S, Variables, Central Limit Theorem (Proof not expected), Unequal Distribution, Uniform Distributions. 
Operations with several random variables: expected value of a function of random variables: joint moments about the origin, joint central moments, joint characteristic functions, joint Gaussian random variables: two random variables, case of N random variables, properties , Transformations of several random variables, Linear transformations of Gaussian random variables variables. This course provides an in-depth knowledge of switching theory and digital circuit design techniques, which are fundamental to the design of any digital circuit. To learn basic digital circuit design techniques and fundamental concepts used in digital system design. Design and Analysis of Sequential Circuits: Introduction, State Diagram, Analysis of Synchronous Sequential Circuits, Design Approaches to Synchronous Sequential State Machines, Design Aspects, State Reduction, Design Steps, Realization Using Flip-Flops. Counters - Designing single mode counter, ripple counter, ring counter, shift register, sequence shift register, ring counter using shift register. Kirchhoff's laws, network reduction techniques - series, parallel, series parallel, star-to-delta or delta-star transformations, nodal analysis, mesh analysis, super node and super mesh for DC excitations. To acquaint the student with the principle of operation, analysis and design of Junction diode, BJT and FET transistors and amplifier circuits. N Junction Diode: Qualitative Theory of N Junction, N Junction as a Diode, Diode Equation, Volt-Ampere Characteristics, Temperature To get an in-depth knowledge about signals, systems and their analysis using various transformations. Calculation of Unit Sample, Unit Step and Sinusoidal responses of the given LTI system and verify its physical realizability and stability properties. This course introduces the basic concepts of transient analysis of the circuits, the basic two-port network parameters and the design analysis of filters and attenuators and their use in circuit theory. The emphasis of this course is on the basic operation of DC machines and transformers, including DC generators and motors, single phase transformers. To familiarize the student with the analysis and design of basic transistor amplifier circuits and their frequency response characteristics, feedback amplifiers, oscillators, large signal amplifiers and tuned amplifiers. Single-stage amplifiers: classification of amplifiers - distortion in amplifiers, analysis of CE, CC and CB configurations using simplified hybrid model, analysis of CE amplifier with emitter resistor and emitter follower, Miller's theorem and his dual design of single-stage RC coupled amplifier using from BJT. This course aims to provide students with the understanding of the various technologies associated with HDLs, to construct, compile and run Verilog HDL programs using software tools. 
After completing this course, the student gets a thorough knowledge about open-loop and closed-loop control systems, concept of feedback in control systems, mathematical modeling and transfer function derivations of Synchros, AC and DC servo motors, Transfer function representation through block diagram algebra and signal flow graphs, time response analysis of different ordered systems through their characteristic equation and time domain specifications, stability analysis of control systems in S domain by R-H criteria and root locus techniques, frequency response analysis by bode diagrams, Nyquist, polar plots and the basics of state space analysis, design of PID controllers, delay, lead, delay -lead compensators, with which he/she can be able to apply the above conceptual things to real electrical and electronic problems and applications. Microprogrammed control: control memory, address sequence, microprogram examples, control unit design, hardwired control, microprogrammed The organization of the control unit, arithmetic and logic unit, the memory unit and the I/O unit. Analyze the electric and magnetic field emission from various basic antennas and mathematical formulation of the analysis. The basic concept of units, measurement error and accuracy, construction and design of measuring devices and circuits, measuring instruments and their correct Unit II To enable the student to understand and appreciate, with a practical overview, the importance of several fundamental issues that govern business operations, namely: demand and supply, the production function, cost analysis, markets, forms of business organizations, capital budgeting and financial and financial accounting. analysis. Unit III Introduction to Financial Accounting & Financial Analysis: Accounting concepts and conventions - Introduction IFRS - Double Entry Bookkeeping, Journal, Ledger, Trial Balance - Final Accounts (Trading Account, Profit and Loss Account and Balance Sheet with simple adjustments). Financial Analysis: Analysis and interpretation of liquidity ratios, activity ratios and capital structure ratios and profitability ratios. Understand the market dynamics namely supply and demand, demand forecasting, elasticity of supply and demand, pricing methods and pricing in different market structures. Gain insight into how the production function is executed to achieve the least cost combination of inputs and cost analysis. To help students appreciate the essential complementarity between 'VALUES' and 'SKILLS' to ensure sustained happiness and prosperity which are the core aspirations of all human beings. To facilitate the development of a holistic perspective among students to congregational life, profession and app i ess, based on a correct understanding of human reality and the rest of existence. Course Introduction - Need, Basic Guidelines, Content and Process of Value Education: Understanding the need, basic guidelines, content and process of Value Education. The right understanding, conditions and physical facilities - the basic requirements for the fulfillment of every human being's aspirations with their proper priority. Method to fulfill the above human aspirations: to understand and live in harmony on different levels. Understand the harmony of Self with the Body: Sanyam and Swasthya; correct assessment of Physical needs, meaning of Prosperity in detail. Unit III Type Tubes Internet Overview: Protocol, Layer Scenario, TCP/IP Protocol Suite: OSI Model, Internet History Standards and Administration;. 
To provide the student with an understanding of the cell concept, frequency reuse, handover strategies. At the end of the course, the student will be able to analyze and design wireless and cellular cellular systems. Have an appreciation of the fundamentals of digital image processing, including the topics of filtering, transformations and morphology, and image analysis and compression. By the end of the course the student should have a clear impression of the breadth and practical scope of digital image processing and have reached a level of understanding that is the foundation for much of the work currently being done in this area. Presentation of the Advanced Communication Skills Laboratory is considered essential at the 3rd year level. Out of 25 points, 15 points will be awarded for the daily work and 10 points will be awarded for performing the internal laboratory test(s). In case of unavailability of the External Examiner, the other teacher of the same department can act as External Examiner. Mechanical function, the electrical conduction system of the heart, the cardiac cycle, the relationship between the electrical and mechanical activities of the heart. Explain the function of Back-prop, Hopfield and SOM type artificial neural networks. Describe the assumptions behind and the derivations of the ANN algorithms covered in the course. To provide an analytical perspective on the design and analysis of the traditional and new wireless networks, and to discuss the nature of and solution methods to the basic problems in wireless networking.
seminars - Decompositions of 3-manifolds and hyperbolic geometry (second talk) A highly useful technique for studying a 3-manifold is to decompose it into simpler pieces, such as tetrahedra, and to examine normal surfaces within the pieces. If the pieces admit additional data, e.g. an angle structure, then there are concrete geometric consequences for the manifold and the surfaces it contains. For example, we may determine conditions that guarantee the manifold is hyperbolic, estimate its volume, and identify quasifuchsian surfaces embedded within it. In the first talk, I will briefly describe some history of these decompositions, including work of Thurston, Menasco, and Lackenby, and then describe how to generalise their work to extend results to broader families of 3-manifolds. For example, we may allow pieces that are not simply connected, glued along faces that are not disks. We give examples of manifolds with these structures, particularly families of knot and link complements. In the second talk, using a generalisation of normal surfaces, angle structures, and combinatorial area, I will describe geometric consequences of these decompositions and further applications. This is joint work with Josh Howie.
Study of CFD-DEM on the Impact of the Rolling Friction Coefficient on Deposition of Lignin Particles in a Single Ceramic Membrane Pore College of Electromechanical Engineering, Qingdao University of Science and Technology, Qingdao 266061, China Dongyue Group, Zibo 256401, China Authors to whom correspondence should be addressed. Submission received: 6 January 2023 / Revised: 19 February 2023 / Accepted: 25 March 2023 / Published: 27 March 2023 The discrete element method coupled with the computational fluid dynamic (CFD-DEM) method is effective for studying the micro-flow process of lignin particles in ceramic membranes. Lignin particles may exhibit various shapes in industry, so it is difficult to model their real shapes in CFD-DEM coupled solutions. Meanwhile, the solution of non-spherical particles requires a very small time-step, which significantly lowers the computational efficiency. Based on this, we proposed a method to simplify the shape of lignin particles into spheres. However, the rolling friction coefficient during the replacement was hard to be obtained. Therefore, the CFD-DEM method was employed to simulate the deposition of lignin particles on a ceramic membrane. Impacts of the rolling friction coefficient on the deposition morphology of the lignin particles were analyzed. The coordination number and porosity of the lignin particles after deposition were calculated, based on which the rolling friction coefficient was calibrated. The results indicated that the deposition morphology, coordination number, and porosity of the lignin particles can be significantly affected by the rolling friction coefficient and slightly influenced by that between the lignin particles and membranes. When the rolling friction coefficient among different particles increased from 0.1 to 3.0, the average coordination number decreased from 3.96 to 2.73, and the porosity increased from 0.65 to 0.73. Besides, when the rolling friction coefficient among the lignin particles was set to 0.6–2.4, the spherical lignin particles could replace the non-spherical particles. 1. Introduction With its rapid development, the papermaking industry has become a pillar in various countries [ ]. However, it discharges a large amount of pulping waste liquor, resulting in environmental pollution and waste of resources. Pulping waste liquor is produced in the pulping and cooking stage and is the main pollution source of the papermaking industry. Due to its dark brown color, high viscosity, and stench, it is called black liquor [ ]. As a representative organic matter in black liquor, lignin is not only a biomass energy with great utilization value, but also the only non-petroleum resource in nature that can provide renewable aryl compounds [ ]. In the current context of a forest-based circular bioeconomy, lignin is of considerable commercial value, accounting for approximately 24–47% of the current pulp production revenue [ ]. Therefore, an efficient and low-loss separation of lignin from black liquor is of great significance to alleviate environmental pollution and improve waste recycling. The commonly used lignin separation methods include acid precipitation and membrane separation methods [ ]. The acid precipitation method is characterized by a simple process and a low cost, but it requires a quantity of acid, which easily causes secondary pollution. Currently, the membrane separation technologies have made a figure in lignin separation due to its low pollution and high efficiency. 
Ceramic membranes present excellent thermal, chemical, and mechanical stabilities, long service life, high separation efficiency, and high mechanical strength [ ]. Therefore, they have been well accepted as one of the most effective materials for lignin separation [ ]. However, fouling in the filtration has become a key factor hindering the continuous industrial production of ceramic membranes, greatly reducing the lignin separation from black liquor. Therefore, the micro filtration mechanisms have attracted increasing attention, such as the movement of lignin particles, sedimentation of lignin particles on the membrane surface, and formation of filter cake. Although the visualization of micropore blockage has been rapidly developed [ ], dynamic tracking of the movement and capture of lignin particles can’t be achieved due to the limitations of current experimental conditions and observation techniques. Meanwhile, the size of lignin particles is generally micron-level, and the experimental results cannot provide microscopic information on the interaction among lignin particles and that between lignin particles and fluid. Therefore, the discrete element method coupled with the computational fluid dynamic (CFD-DEM) has been proposed to investigate the dynamic characteristics of lignin particles and the formation mechanism of filter cake during filtration of ceramic membranes [ The CFD-DEM solution is a relatively new method that can dynamically characterize the particle trajectory, deposition morphology, interaction between particles and fluid, and collision among particles during filtration [ ]. Therefore, it is extensively applied in the filtration and separation field. Deshpande et al. [ ] adopted the DEM and CFD-DEM methods to investigate the precipitation and consolidation of packed bed/filter cake formed by monodisperse and bidisperse spherical particles under different flow conditions. Li and Marshall [ ] simulated the precipitation of particles on a single fiber by the CFD-DEM method, and then discussed the impacts of adhesion on particle deposition. Hund et al. [ ] employed the CFD-DEM method to simulate the formation of particle bridges in solid-liquid separation, studied the impacts of particle concentration and feed flow rate on the formation of particle bridges, preliminarily revealed the formation mechanism of particle bridges, and verified the accuracy of the simulation results by comparing them with the experimental results. Additionally, Qian et al. [ ] and Cao et al. [ ] established a multi-layer three-dimensional fiber medium filtration model and simulated the flow and precipitation of particles in the model by using the CFD-DEM method. The results revealed that it was convenient and feasible to study the flow and precipitation of small particles in filter media by the CFD-DEM method, which were consistent with the experimental results. However, in these studies, the shape of the particles was specially processed and simplified to be spherical. Most of the particles applied in industrial production are non-spherical, and the shape greatly influences the dynamic behavior of the particles [ ]. The ellipsoid models [ ], hyper ellipsoid models [ ], bonded sphere models [ ], multi-sphere models [ ], and polyhedron models [ ] have been developed in DEM to describe the shape of particles more accurately to achieve a more realistic simulation. 
The most important difference between spherical particles and non-spherical particles is that the formers are prone to rolling under the same conditions when the CFD-DEM method is adopted [ ]. However, with more accurate descriptions of the shape of the particles, the amount of calculation in simulation is increasing, especially in solving the movement of micron-sized particles. Rolling motion of the spherical particles is controlled by the rolling friction coefficient among particles and that between the particles and the membrane, so the coefficient can be introduced into the DEM to generate the rolling resistance. Therefore, spherical particles can replace the non-spherical particles by controlling the numerical range of the rolling friction coefficient. In this context, people began to replace non-spherical particles with spherical particles, and the rolling friction coefficient has been artificially increased to show some properties of non-spherical particles [ ], aiming to satisfy the simulation requirements and reduce the amount of calculation. There are many studies on the range of the rolling friction coefficient for spherical particles to replace non-spherical particles. Wensrich and Katterfeld [ ] revealed that the rolling friction coefficient could indeed bring some properties of non-spherical particles to spherical particles. Xie et al. [ ] proposed a method of replacing non-spherical particles with spherical particles based on the rolling friction and then studied the applicability of this method under different processes and operating conditions. The results indicated that the non-spherical particles can be replaced, to some extent, to reduce the amount of calculation and speed up the solution if a reasonable rolling friction coefficient can be applied. Xiong et al. [ ] simulated particle deposition on a single fiber, studied the impacts of the rolling friction coefficient on the deposition morphology, coordination number, and porosity of particles, and then calibrated the rolling friction coefficient among particles and that between particles and fibers in fiber filtration, making the simulation results closer to the real observations. In general, the rolling friction coefficient significantly affects the movement of spherical particles. Therefore, correctly defining the rolling friction coefficient is the key to ensure that the simulation process and results are in line with reality. Many researchers have noticed that spherical particles can be substituted for non-spherical particles by setting a proper rolling friction coefficient. However, calibration of the rolling friction coefficient is always the key- and difficult-point during the simplification. In this study, non-spherical lignin particles were simplified to spherical particles by applying the rolling friction coefficient to the latter. Effects of the rolling friction coefficient on the deposition of lignin particles in ceramic membranes were studied by the CFD-DEM coupling method at the single pore level. Firstly, the capture of lignin particles was demonstrated dynamically. Then, effects of the rolling friction coefficient on the morphology, coordination number distribution, and porosity of the lignin particles were investigated and analyzed. Finally, the rolling friction coefficient was calibrated. 
Introducing a correct rolling friction coefficient can simplify the DEM model and improve its accuracy, and it provides a more reasonable rolling friction coefficient for subsequent research on the more complex deposition of lignin particles in porous ceramic membranes.

2. Governing Equations of Fluid–Solid Two-Phase Flow

The coupled CFD-DEM solution was carried out under the Euler-Lagrange framework, in which the Navier-Stokes (N-S) equations are used to solve the fluid motion based on the continuum assumption, while Newton's Second Law is used to solve the motion of each particle; the coupling between the two phases is realized through Newton's Third Law. The flowchart of the coupling is shown in Figure 1.

2.1. Fluid Phase Control Equation

In the coupled solution, the particle phase strongly affects the motion of the fluid. Therefore, the void fraction $\varphi$ is incorporated into the traditional N-S equations to characterize the volume occupied by the fluid phase in a given computational cell. The governing equations of the fluid phase can be expressed as follows [ ]:

$\frac{\partial (\rho \varphi)}{\partial t} + \nabla \cdot (\rho \varphi u) = 0$

$\frac{\partial}{\partial t}(\rho \varphi u) + \nabla \cdot (\rho \varphi u u) = -\nabla p - S + \nabla \cdot (\mu \varphi \nabla u) + \rho \varphi g$

where $\mu$, $u$, and $\rho$ refer to the viscosity, velocity, and density of the fluid, with units of Pa·s, m/s, and kg/m³, respectively; $S$ is the momentum source term given by the sum of the fluid drag forces $F_D$ acting on the particles within a grid cell divided by the cell volume; and $\Delta V$ is the volume of the mesh cell, m³.

2.2. Discrete Model

During the movement of the particles, Newton's Second Law is employed to solve the velocity and position of each particle at every time step, based on the external forces acting on it. In this study, the external forces on a particle include gravity, the interaction force between the particle and the fluid, the collision forces among particles and between particles and the membrane, and the Van der Waals adhesion forces among particles and between particles and the membrane. The governing equations of the particle phase can therefore be expressed as follows [ ]:

$m_p \frac{d u_p}{d t} = m_p g + F_{JKR} + F_{fp} + F_c$

$I_p \frac{d \omega_p}{d t} = \sum_{i=1}^{k} \left( M_i + M_{JKR,i} \right)$

where $m_p$, $u_p$, $I_p$, $\omega_p$, and $M_i$ are the mass, velocity, moment of inertia, angular velocity, and collision torque of the particle, with units of kg, m/s, kg·m², rad/s, and N·m, respectively. $M_{JKR,i}$ is the adhesive torque in collisions with other particles, N·m; $F_{JKR}$ refers to the Van der Waals adhesion force, N; $F_{fp}$ is the particle-fluid force, N; and $F_c$ denotes the particle–particle contact force, N.

$F_{JKR}$ is the Van der Waals adhesion force, which can be calculated with Equation (6) [ ]:

$F_{JKR} = -4\sqrt{\pi \gamma E^*}\,\alpha^{3/2} + \frac{4 E^*}{3 R^*}\alpha^{3}$

where $\gamma$ is the surface energy, J/m²; $E^*$ refers to the equivalent Young's modulus, Pa; $\alpha$ is the normal overlap, m; and $R^*$ represents the relative radius, m. The equivalent Young's modulus $E^*$ and relative radius $R^*$ are defined as follows:

$\frac{1}{E^*} = \frac{1-\nu_i^2}{E_i} + \frac{1-\nu_j^2}{E_j}, \qquad \frac{1}{R^*} = \frac{1}{R_i} + \frac{1}{R_j}$

where $E_i$ and $E_j$ are the Young's moduli of particle-$i$ and particle-$j$, respectively, Pa; $\nu_i$ and $\nu_j$ are the Poisson's ratios of particle-$i$ and particle-$j$, respectively; and $R_i$ and $R_j$ are the radii of particle-$i$ and particle-$j$, respectively, m.

$F_{fp}$ represents the force between the particle and the fluid. In this study, only the drag force $F_{drag}$ and the pressure gradient force $F_p$ are considered in the liquid-solid interaction, so $F_{fp}$ can be expressed as Equation (9).
$F_{fp} = F_{drag} + F_p$

where $F_{drag}$ represents the fluid drag force on the particle, as shown in Equation (10):

$F_{drag} = 0.5\, C_D\, \rho A \left| u - u_p \right| (u - u_p)$

where $F_{drag}$ is the drag force on the particle, N; $A$ is the projected area of the particle, m²; and $C_D$ refers to the drag coefficient, which depends on the Reynolds number $Re$ and can be calculated with Equation (11) [ ]:

$C_D = \begin{cases} \dfrac{24}{Re}, & Re \le 0.5 \\ \dfrac{24\left(1.0 + 0.25\,Re^{0.687}\right)}{Re}, & 0.5 < Re \le 1000 \\ 0.44, & Re > 1000 \end{cases} \qquad Re = \frac{\varphi \rho d_p \left| u - u_p \right|}{\mu}$

$F_p$ is the pressure gradient force caused by the pressure gradient experienced by the particle moving in the flow field, as shown in Equation (12) [ ]:

$F_p = \frac{d p}{d x} = -\rho g - \rho u \frac{d u}{d x}$

$F_c$ is the particle–particle contact force, as shown in Equation (13) [ ]:

$F_c = F_{cn} + F_{ct}$

where $F_{cn}$ is the normal contact force acting on the particles after a collision, N, expressed as Equation (14), and $F_{ct}$ is the tangential contact force acting on the particles after a collision, N, written as Equation (15):

$F_{cn,ij} = -k_n \alpha^{3/2} - c_n \left( u_{ij} \cdot n \right) n$

$F_{ct,ij} = -k_t \delta - c_t v_{ct}$

where $u_{ij} = u_i - u_j$ refers to the velocity of particle-$i$ relative to particle-$j$, m/s; $n$ is the unit vector pointing from the centroid of particle-$i$ to the centroid of particle-$j$; $\delta$ represents the tangential displacement of the contact point, m; and $v_{ct}$ denotes the sliding velocity vector.

2.3. The Directional Constant Torque Model

As mentioned in the first section, the most important difference between spherical and non-spherical particles during their movement is that the former roll easily under the same conditions. The principle of simplifying non-spherical particles into spherical particles can therefore be described as follows: a rolling resistance is applied to the simplified spherical particles so that they exhibit the movement characteristics of non-spherical particles, in which case they can replace the non-spherical particles. However, the rolling resistance is difficult to determine for the various applications encountered in industry. In the DEM, the directional constant torque model is the most widely accepted rolling resistance model; it can be calculated with Equation (16):

$M_r = -\mu\, F_{cn}\, R_c\, \omega_c$

where $M_r$ is the rolling friction torque, N·m; $\mu$ represents the rolling friction coefficient; $R_c$ refers to the distance between the contact point and the center of the sphere, m; and $\omega_c$ is the unit vector of the angular velocity at the contact point. Therefore, the determination of the rolling resistance can be converted into the determination of the rolling friction coefficient. Changing the rolling friction coefficient alters the magnitude of the rolling resistance, and the rolling friction coefficient can be calibrated by studying a series of physical quantities. In this way, non-spherical particles can be replaced with spherical particles.

3. Computational Set-Up

3.1. Geometry and Computational Domain

According to microscopic characterizations of its structure and morphology, the filter channel in the ceramic membrane is formed by the accumulation of ceramic particles with a nearly spherical shape. Such a porous structure is difficult to model in full, and the presence of many membrane pores would hamper the characterization of the particle deposition morphology in each individual pore, making it difficult to calibrate the rolling friction coefficient. Therefore, the porous structure of the ceramic membrane was simplified in this study by analogy with the method introduced by Tao et al. [ ] and Xiong et al.
[ ]. Figure 2 illustrates the simplified pore structure of a single ceramic membrane and the size of the computational domain. The computational domain size in the X and Y directions was set to twice the diameter of a ceramic particle, which gives results similar to those obtained by Xiong et al. [ ] with a single-fiber filtration model. The computational domain size along the direction of fluid flow was set to six times the diameter of a ceramic particle, following Xiong et al. [ ] and Qian et al. [ ], to ensure uniform flow of the black liquor and lignin particles at the inlet and outlet. Moreover, the shape of the ceramic particles was simplified accordingly: the nearly spherical ceramic particles were modeled as spheres, and the rolling friction coefficient between the lignin particles and the ceramic membrane was introduced in the solver settings. Calibrating the rolling friction coefficient between the lignin particles and ceramic membranes makes the physical model closer to reality and greatly accelerates the calculation.

3.2. Boundary Conditions and Parameter Settings

In this study, EDEM 2020 and Fluent 2020R2 were coupled to calculate the particle motion and the flow field through a User-Defined Function (UDF). The coupling between CFD and DEM was implemented with the Eulerian coupling method, which considers the momentum exchange between the liquid and particle phases and the influence of the particles on the liquid phase. In the DEM, particle translation and rotation are integrated with an explicit time-integration method, while the fluid governing equations in the CFD are solved with the SIMPLE algorithm under the pressure-based solver. A second-order scheme is employed to discretize the pressure term, and first-order schemes discretize the other terms. The Ergun and Wen & Yu drag model was selected. In addition, a velocity inlet and a pressure outlet were adopted for the black liquor, and the lignin particles entered the flow field driven by the black liquor. To ensure that the lignin particles entered the flow field smoothly, the particle generation surface was placed behind the velocity inlet. In consideration of the structural characteristics of the ceramic membrane, the surfaces of the ceramic particles were set as no-slip boundaries, and the other boundaries of the computational domain were set as symmetric boundaries. The total simulated time was 2.2 ms, and lignin particles were generated only within 0–2 ms, which ensured that by the end of the simulation all lignin particles had either deposited or flowed out of the computational domain. The main parameters used in the simulations are listed in Table 1 and Table 2.

A semi-analytical CFD-DEM coupling interface was employed in this study. Instead of resolving the flow around each particle, the semi-analytical interface redistributes the particle-phase volume spatially by introducing kernel functions and captures the physical information of the background flow field in an expanded region. The physical information of the background flow field can thus be obtained reasonably and accurately regardless of how many grid cells a particle covers. Therefore, the semi-analytical interface can simulate the movement of particles in fluid cells that are comparable to, or even slightly smaller than, the particle size.
It thus avoids both the problem that the unresolved CFD-DEM coupling interface cannot be applied to ceramic membrane filtration and the problem that the resolved CFD-DEM coupling interface requires a very large amount of computation. Therefore, the fluid grid size in this study was set within a range equal to or slightly smaller than the diameter of a lignin particle. On this basis, grid independence was verified for the clean filtration stage, as displayed in Table 3. The semi-analytical CFD-DEM coupling interface exhibited good stability, and the simulation results were adequately represented when the number of grid cells was 46,400. Figure 3 shows the fluid-domain grid model with 46,400 cells. As illustrated, the fluid domain was meshed with ICEM CFD, and the hexahedral mesh was generated with the block topology function of ICEM CFD. The minimum and maximum mesh sizes were 0.2 μm and 0.7 μm, respectively, and the maximum mesh quality was 1. It should be noted that the grid quality here was measured by the determinant, with a value range of 0–1; the closer the value is to 1, the better the grid.

4. Results and Discussion

4.1. Particle Deposition Process

Figure 4 illustrates the deposition process of the lignin particles with a rolling friction coefficient among the lignin particles of $\mu_{p-p} = 1.2$ and between the lignin particles and membranes of $\mu_{p-m} = 1.0$. The deposition process of the lignin particles can be roughly divided into two stages: capture by the ceramic membrane (A–B) and capture by the already deposited lignin particles (B–F). In the initial stage of filtration (A–B), the lignin particles were mainly captured on, and approximately evenly distributed over, the upstream surface of the ceramic membrane. At this stage, the number of captured lignin particles was not large, and the pressure drop increased only slightly. As indicated by the pressure drop curve in the A–B stage (Figure 5), the pressure drop increased linearly but at a relatively slow rate. As the filtration process proceeded (B–F), lignin particles began to be captured by the lignin particles already deposited, leading to a significant change in the deposition morphology. The deposited lignin particles began to grow outward, forming a dendritic structure, which became more pronounced and more independent as filtration progressed. Finally, the deposition morphology of the lignin particles took on the shape of a "forest." At this stage, most of the lignin particles were captured by the dendritic structure, greatly increasing the number of captured particles and the pressure drop. As demonstrated by the pressure drop curve in the B–F stage in Figure 5, the slope of the pressure drop curve with time gradually increased, indicating that the efficiency in capturing lignin particles was continuously increasing. It should be noted, however, that lignin particles were generated only within 0–2 ms, so the pressure drop gradually decreased when the filtration time exceeded 2 ms (E–F stage in Figure 5).

The cumulative number of penetrating particles and the efficiency in capturing lignin particles during filtration were counted and calculated to further investigate the impact of dendritic structure growth on the capture efficiency. As demonstrated in Figure 6, the cumulative number of penetrating particles and the efficiency in capturing lignin particles both increased with filtration time.
However, the cumulative number of penetrating particles grew slowly, whereas the efficiency in capturing lignin particles grew faster. This is because the deposited lignin particles formed dendritic structures that exerted a secondary trapping effect on the incoming lignin particles, so that an increasing number of lignin particles were captured and the retention rate increased. In other words, the formation of the dendritic structure significantly affected the efficiency in capturing lignin particles.

4.2. Impacts of the Rolling Friction Coefficient on the Deposition Morphology of Lignin Particles

Actual lignin particles experience greater rolling friction resistance than ideal spherical particles because of their shape, which limits the relative movement among lignin particles and between lignin particles and membranes [ ]. Previous studies have shown that the deposition morphology of lignin particles is correlated with the external forces: when the rolling friction resistance is not sufficient to resist the external forces, the lignin particles move until they reach a new equilibrium state [ ], which significantly affects the deposition morphology. Therefore, the effects of the rolling friction coefficient on the deposition morphology of lignin particles were quantitatively characterized for rolling friction coefficients among the lignin particles of $\mu_{p-p} = 0–3$ and between the lignin particles and membranes of $\mu_{p-m} = 0–3$.

Figure 7 shows the deposition morphology of lignin particles under different rolling friction coefficients. It reveals that, regardless of $\mu_{p-m}$, the deposition morphology of lignin particles shows a strong regularity with increasing $\mu_{p-p}$. When $\mu_{p-p} \le 0.6$, the particles deposited on the surface of the ceramic membrane in the form of agglomerates, and no dendritic structure was observed, which is inconsistent with the experimental results of Huang et al. [ ]. The reason is that the lignin particles rolled too easily because of the too-small rolling resistance, and the dendritic structures formed at the early stage collapsed onto the ceramic membrane surface, so that the deposition morphology of the lignin particles appeared as agglomerates. As the rolling friction coefficient among the lignin particles increased, the corresponding rolling resistance increased, and it became difficult for the lignin particles to rotate around the contact point; at this point an obvious dendritic structure was observed. With a further increase in the rolling friction coefficient among the lignin particles, the dendritic structure became increasingly pronounced and independent, making the deposition morphology of the lignin particles look like a "forest." This finding is consistent with the deposition morphology of particles on a single fiber observed by Huang et al. [ ].

Figure 7 also shows that the rolling friction coefficient between the lignin particles and membranes had no obvious effect on the deposition morphology of the lignin particles. This might be because the number of contacts among the lignin particles was much larger than that between the lignin particles and membranes [ ]. Table 4 shows the proportion of the number of contacts between the lignin particles and membranes to the total number of contacts under different rolling friction coefficients.
The number of contacts between the lignin particles and membranes accounted for at most 18.4%, weakening the effect of the rolling friction coefficient between them on the deposition morphology of lignin particles.

4.3. Impacts of the Rolling Friction Coefficient on the Deposition Structure of Lignin Particles

Generally, the coordination number and the porosity are two important parameters describing the structural properties of particle packing [ ]. The former is the number of particles in contact with a central particle, and its magnitude reflects the agglomeration properties of the packing structure; the latter reflects the compactness of the packing structure. Therefore, this section discusses the effects of the rolling friction coefficient on the average coordination number, the coordination number distribution, and the porosity of the lignin particles.

Figure 8 displays the average coordination number of the lignin particles as a function of the rolling friction coefficient. The average coordination number decreased with increasing rolling friction coefficient among the lignin particles, a trend consistent with the results obtained by Xiong et al. [ ]. When the rolling friction coefficient among the lignin particles increased from 0.1 to 3.0, the average coordination number decreased from 3.96 to 2.73, indicating that the number of contacts among the lignin particles gradually decreased and the dendritic structure became more independent as the rolling friction coefficient increased. This conclusion is consistent with the trend in the deposition morphology of lignin particles described in Section 4.2. In addition, the rolling friction coefficient between the lignin particles and membranes had only a slight effect on the average coordination number, which might be because the number of contacts between the lignin particles and membranes was too small (as described in Section 4.2). Yang et al. analyzed the packing structure of particles settling only under the action of gravity and found that the average coordination number of 1 μm particles was 2.13 [ ]. However, the fluid force increases the average coordination number, so the reasonable and acceptable minimum average coordination number was taken as 2.13 in this study; that is, all rolling friction coefficients in the simulated range satisfy the coordination number requirement.

Figure 9 describes the impact of the rolling friction coefficient among the lignin particles on the coordination number distribution. The increase in the rolling friction coefficient made the dendritic structures formed by the deposited lignin particles more independent, so the distribution range of the coordination number decreased with increasing rolling friction coefficient among the lignin particles. The research of Yang et al. suggested that, for 1 μm particles subject only to gravity, the coordination number distribution lies mainly in the range 1–4 [ ]. However, under the action of the fluid force, the contact among particles becomes closer, which expands the distribution range of the coordination number. Figure 10 exhibits the impact of the rolling friction coefficient between the lignin particles and membranes on the coordination number distribution. It illustrates that the rolling friction coefficient between the lignin particles and membranes did not affect the coordination number distribution.
Therefore, it can be concluded that the rolling friction coefficient among the lignin particles was the primary factor affecting the coordination number distribution.

The porosity of the particle packing structure can be calculated with Equation (17) [ ]:

$\varepsilon = 1 - \left( \dfrac{\bar{Q}/\bar{Q}_0 - 1}{m - n} \right)^{1/4}$

where $\bar{Q}$ is the mean coordination number; $\bar{Q}_0 = 2.02$; $m = 87.38$; and $n = 25.81$.

Figure 11 displays the variation of porosity with the rolling friction coefficient. Since the rolling friction coefficient between the lignin particles and membranes did not affect the average coordination number, it also did not affect the porosity [ ]. We observed that the porosity increased from 0.65 to 0.73 when the rolling friction coefficient among the lignin particles increased from 0.1 to 3.0. This is because a larger rolling friction coefficient among the lignin particles increases the rolling resistance among them, which strengthens the interaction among the lignin particles and thus increases the porosity. According to the experimental results of Dingwell et al., the porosity of the filter cake formed by lignin particles is 0.61–0.71 [ ]. When the rolling friction coefficient among the lignin particles was 0.1–2.4, the simulated porosity was 0.65–0.72. Therefore, according to the experimental results of Dingwell et al., setting the rolling friction coefficient among the lignin particles to 0.1–2.4 in the simulation is more in line with the actual porosity.

5. Conclusions

In this study, a method of simplifying the shape of lignin particles into spheres was proposed to address the difficulty and low computational efficiency of CFD-DEM solutions for non-spherical lignin particles. The CFD-DEM method was then adopted to simulate the deposition of spherical lignin particles in the pores of a single ceramic membrane. The impacts of the rolling friction coefficient on the deposition morphology, average coordination number, coordination number distribution, and porosity of the lignin particles were investigated and analyzed, and the rolling friction coefficient was calibrated. The following conclusions were drawn:

The deposition of lignin particles on ceramic membranes is a dynamic process that mainly comprises capture by the ceramic membrane in the initial filtration stage and capture by the already deposited lignin particles afterwards. The formation of a dendritic structure not only made the deposition morphology of the lignin particles look like a "forest," but also greatly improved the efficiency in capturing the lignin particles.

The rolling friction coefficient among the lignin particles crucially affected the deposition morphology, average coordination number, coordination number distribution, and porosity of the particles; when it increased from 0.1 to 3.0, the average coordination number decreased from 3.96 to 2.73 and the porosity increased from 0.65 to 0.73.

Providing a reasonable rolling friction coefficient among the lignin particles allows spherical particles to replace non-spherical lignin particles. Based on the impacts of the rolling friction coefficient on the deposition morphology, coordination number, and porosity of the lignin particles, setting the rolling friction coefficient among the lignin particles to 0.6–2.4 brings the simulation closer to real lignin filtration.

Author Contributions

Conceptualization, methodology, software, writing—original draft, H.W. and J.W.; data curation, investigation, resources, J.W. and P.F.; writing—review & editing, X.W. and Y.L.; writing—original draft, S.W. and Y.W.
All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.

Data Availability Statement: Not applicable.

Acknowledgments: The authors give sincere gratitude to Wu Junfei from Qingdao University of Science and Technology for his continued support of research activities.

Conflicts of Interest: The authors declare that there is no conflict of interest.

Figure 2. Pore structure of the ceramic membrane and the size of the computational domain. (a) Isometric view; (b) YZ plan view.

Figure 4. Change in the deposition morphology of the lignin particles with time. (a) A (T = 0.1 ms); (b) B (T = 0.5 ms); (c) C (T = 1.0 ms); (d) D (T = 1.5 ms); (e) E (T = 2.0 ms); (f) F (T = 2.2 ms).

Figure 6. Curves of the cumulative number of penetrated particles and efficiency in capturing lignin particles with time.

Figure 7. Impacts of the rolling friction coefficient on the deposition morphology of lignin particles. (a) $\mu_{p-m} = 0.1$; (b) $\mu_{p-m} = 1.0$; (c) $\mu_{p-m} = 2.0$; (d) $\mu_{p-m} = 3.0$.

Figure 9. Impacts of the rolling friction coefficient among the lignin particles on the coordination number distribution. (a) $\mu_{p-m} = 0.1$; (b) $\mu_{p-m} = 1.0$; (c) $\mu_{p-m} = 2.0$; (d) $\mu_{p-m} = 3.0$.

Figure 10. Impacts of the rolling friction coefficient between the lignin particles and membranes on the coordination number distribution. (a) $\mu_{p-p} = 0.6$; (b) $\mu_{p-p} = 1.8$; (c) $\mu_{p-p} = 3.0$.

Table 1. Material and fluid parameters.

Property | Particle | Membrane | Black liquor
Diameter (μm) | 1 | 10 | –
Density (kg/m³) | 1451 | 3100 | 1004
Shear modulus (Pa) | 2 × 10^7 | 7 × 10^10 | –
Poisson's ratio | 0.25 | 0.2 | –
Viscosity (Pa·s) | – | – | 1.467
Velocity (m/s) | 0.5 | – | 0.5

Table 2. Collision and simulation parameters [ ].

Collision parameters | Coefficient of restitution | Coefficient of static friction | Coefficient of rolling friction | Surface energy (J/m²)
Particle–particle | 0.1 | 2.0 | 0.1–3 | 0.6
Particle–membrane | 0.1 | 2.0 | 0.1–3 | 1

Simulation parameters | Particle generation rate (/s) | Total number of particles | Time step of DEM (s) | Time step of CFD (s)
Value | 5 × 10^5 | 1000 | 1 × 10^−10 | 1 × 10^−8

Table 3. Grid independence check (clean filtration stage).

Group | Mesh quantity | Pressure drop (Pa)
1 | 28,900 | 527.34924
2 | 37,544 | 527.9762
3 | 46,400 | 528.85345
4 | 50,270 | 529.08069
5 | 59,048 | 529.62501

Table 4. Proportion of the number of contacts between the lignin particles and membranes to the total number of contacts.

$\mu_{p-p}$ \ $\mu_{p-m}$ | 0.1 | 1.0 | 2.0 | 3.0
0.1 | 13.8% | 17.3% | 13.7% | 17.3%
0.6 | 12.1% | 11.1% | 10.4% | 11.3%
1.2 | 16.7% | 11.2% | 14.6% | 15.5%
1.8 | 18.4% | 12.8% | 13.0% | 14.2%
2.4 | 17.5% | 15.1% | 14.4% | 16.3%
3.0 | 17.2% | 14.2% | 16.4% | 13.1%
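For illustration, substituting the mean coordination numbers reported in Section 4.3 into Equation (17), with $\bar{Q}_0 = 2.02$, $m = 87.38$ and $n = 25.81$, reproduces the porosity range quoted above (this worked check is added here for clarity and is not part of the original calculation):

$\bar{Q} = 3.96: \quad \varepsilon = 1 - \left( \dfrac{3.96/2.02 - 1}{87.38 - 25.81} \right)^{1/4} = 1 - (0.0156)^{1/4} \approx 0.65$

$\bar{Q} = 2.73: \quad \varepsilon = 1 - \left( \dfrac{2.73/2.02 - 1}{87.38 - 25.81} \right)^{1/4} = 1 - (0.0057)^{1/4} \approx 0.73$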
{"url":"https://www.mdpi.com/2077-0375/13/4/382","timestamp":"2024-11-10T22:41:49Z","content_type":"text/html","content_length":"508711","record_id":"<urn:uuid:badcb7bd-6f61-4b95-bbdd-aa991d8bbaa7>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00161.warc.gz"}
ISI-B.Math: Online Competitive Encyclopedia - EdNite

ISI-B.Math is a three-year degree programme that offers comprehensive instruction in basic mathematics along with basic courses in probability, statistics, computing and physics. It is designed so that, on successful completion, students are able to pursue a career in applied mathematics. Visit http://www.isical.ac.in for more information.

Syllabus for ISI-B.Math

1. Algebra and Number Theory
• Sets, operations on sets
• Prime numbers, factorization of integers and divisibility
• Rational and irrational numbers
• Permutations and combinations
• Binomial Theorem
• Logarithms
• Polynomials: relations between roots and coefficients
• Remainder theorem, theory of quadratic equations and expressions
• Arithmetic and geometric progressions
• Inequalities involving arithmetic, geometric and harmonic means
• Complex numbers

2. Geometry
• Class 10 level plane geometry
• Geometry of 2 dimensions with cartesian and polar coordinates, concept of a locus, equation of a line
• Angle between two lines
• Distance from a point to a line, area of a triangle, equations of a circle, parabola, ellipse and hyperbola and equations of their tangents and normals, mensuration

3. Trigonometry
• Measures of angles, trigonometric and inverse trigonometric functions, trigonometric identities including addition formulae, solutions of trigonometric equations
• Properties of triangles, heights and distances

4. Calculus
• Sequences – bounded sequences, monotone sequences, limit of a sequence
• Functions
• Limit, continuity and differentiability of functions of a single real variable
• Derivatives and methods of differentiation
• The slope of a curve, tangents and normals
• Maxima and minima
• Use of calculus in sketching graphs of functions
• Methods of integration – definite and indefinite integrals
• Evaluation of areas using integrals

Preparation Tips for ISI-B.Math

Start with the basics – When you start your preparation, make sure to get a good hold on the basics, because that is the only way you can solve advanced-level questions without much difficulty. Make sure to build a very strong foundation.

Build your mathematical aptitude – You need analytical thinking to handle the questions easily, so work on improving your logical reasoning abilities.

Do not overlook the NCERTs – The NCERT books will help you comprehend topics with much greater ease. Solve all the problems in the NCERTs and try to derive the proofs on your own; this will help you approach difficult questions more easily.

A timetable is a must – You must have a timetable while preparing for any exam. It will help you plan your study and complete the syllabus on time.

Choose the right study material – Be careful while choosing the books you study from. The market is full of books, but only a limited number of them will help you sail through. You can also take help from resources available online, such as Google or YouTube.

Make notes – While studying, keep making notes alongside. It is easier to comprehend a topic when you write things down, and the notes will also help you in the final revision.

Practice makes perfect. In order to reach your goal of clearing this exam, you have to practice many questions.
{"url":"https://www.ednite.com/isi-b-math/","timestamp":"2024-11-06T19:05:39Z","content_type":"text/html","content_length":"138601","record_id":"<urn:uuid:b036a100-6931-4a5c-822c-521bfa42a0db>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00612.warc.gz"}
RANSAC and 2D point clouds

I have 2 point clouds in 2D and I want to use RANSAC to determine the transformation matrix between them. I have no pairs of points, just 2 sets of points. How can I do this in OpenCV? I tried to write a RANSAC-like scheme for my purposes:
1. Get 4 random points from the 1st set and 4 from the 2nd set.
2. Compute the transform matrix H using getPerspectiveTransform.
3. Warp the 1st set of points using H and test how well they align to the 2nd set using some metric (I don't know which metric I should use; I tried the sum of min distances over all points in the 1st set, but it seems it does not work well if points group together after the transform).
4. Repeat 1–3 N times and choose the best transform according to the metric.
Maybe I should use a different metric? I also have the idea that I could match points using the shape context algorithm and only then use RANSAC (or some other algorithm) to determine the transformation.

2 answers

Just call cv::Mat transform = cv::findHomography(firstPtVector, secndPtVector, CV_RANSAC); The last parameter is optional, and it defaults to RANSAC. In C++, the point clouds must be either std::vector<cv::Point> or cv::Mat with one point per line.

Comments:
but we must have paired points in the vectors firstPtVector and secndPtVector, mustn't we? I have just 2 point sets without pairs. mrgloom (2012-08-28 06:12:38 -0600)
If you do not have pairing, you cannot define anything. There are infinitely many possible transforms between two point sets. sammy (2012-08-28 07:03:47 -0600)
I don't think so; for example http://www.mrpt.org/Iterative_Closest_Point_(ICP)_and_other_matching_algorithms . I just don't know how to generalize the ICP algorithm to a perspective transform. mrgloom (2012-08-28 08:15:01 -0600)
@sammy, under some constraints (e.g., you know the relation is an affine transform), in most real-world cases there is only one transform. Mehdi (2016-10-14 08:36:27 -0600)

I don't think RANSAC is a good idea in your case. RANSAC can be used when you have a number of measurements (e.g. pairs of corresponding points from 2 sets) containing some outliers (e.g. incorrectly matched points). The more outliers you have, the more RANSAC iterations are needed to estimate the parameters with a given confidence. In your scenario, choosing matching points randomly, you'll need a really HUGE number of iterations to ensure a proper matching is found. The computational cost may be prohibitive. Can you give more details about your problem? Why are you trying to estimate a perspective transform? Are these 2D points projections of some 3D points?

Comments:
I need to align raster and vector data; the raster data is distorted. I represent them as point clouds in 2D. Look here, there is no need for paired points: http://www.mrpt.org/Iterative_Closest_Point_(ICP)_and_other_matching_algorithms but I need non-rigid ICP. mrgloom (2012-08-29 01:11:40 -0600)
If the distortion is not too big you may still use ICP. ICP doesn't require that the 2 point clouds be exactly identical. One point cloud may contain points corrupted with noise and ICP will still work. The catch is that ICP requires the point clouds to be roughly aligned.
So if corresponding points in your point clouds are quite close, you may use ICP. Otherwise you'll need some method to roughly align your point clouds first (e.g. compute some kind of feature descriptor for each point, use these descriptors to create matchings between points in both clouds, and roughly align the point clouds using these matchings), and then use ICP for the final registration. Jacek (2012-08-29 03:40:58 -0600)
The problem is that the ICP implementations I found only handle translation and rotation, but I need to register images with a perspective transform. mrgloom (2012-08-29 08:25:01 -0600)
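For what it's worth, here is a rough, untested sketch of the brute-force scheme described in the question (OpenCV 2.x style; the function and variable names are made up for illustration, degenerate 4-point samples are not rejected, and the score is simply the sum of nearest-neighbour distances proposed above):

```cpp
// Sketch only: brute-force "RANSAC-like" search over unpaired 2D point sets.
// Samples 4 points from each set, fits a homography with getPerspectiveTransform,
// warps the whole first set and scores the alignment against the second set.
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <algorithm>
#include <cfloat>
#include <cmath>
#include <vector>

// Sum of distances from each warped point to its nearest neighbour in the target set.
static double alignmentScore(const std::vector<cv::Point2f>& warped,
                             const std::vector<cv::Point2f>& target)
{
    double score = 0.0;
    for (size_t i = 0; i < warped.size(); ++i) {
        double best = DBL_MAX;
        for (size_t j = 0; j < target.size(); ++j) {
            double dx = warped[i].x - target[j].x;
            double dy = warped[i].y - target[j].y;
            best = std::min(best, std::sqrt(dx * dx + dy * dy));
        }
        score += best;
    }
    return score;
}

// Returns the 3x3 homography with the lowest score found in `iterations` random trials.
cv::Mat bruteForcePerspectiveSearch(const std::vector<cv::Point2f>& setA,
                                    const std::vector<cv::Point2f>& setB,
                                    int iterations)
{
    cv::RNG rng;
    cv::Mat bestH;
    double bestScore = DBL_MAX;
    for (int it = 0; it < iterations; ++it) {
        cv::Point2f src[4], dst[4];
        for (int k = 0; k < 4; ++k) {                   // step 1: 4 random points from each set
            src[k] = setA[rng.uniform(0, (int)setA.size())];
            dst[k] = setB[rng.uniform(0, (int)setB.size())];
        }
        cv::Mat H = cv::getPerspectiveTransform(src, dst);   // step 2: fit a homography
        std::vector<cv::Point2f> warped;
        cv::perspectiveTransform(setA, warped, H);            // step 3: warp the 1st set
        double s = alignmentScore(warped, setB);
        if (s < bestScore) { bestScore = s; bestH = H.clone(); }  // step 4: keep the best
    }
    return bestH;
}
```

As Jacek points out, without correspondences the number of iterations needed for a reasonable hit probability is enormous, so in practice a rough pre-alignment (shape contexts or other descriptors) followed by a local refinement is usually a better route than this blind search.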
{"url":"https://answers.opencv.org/question/1834/ransac-and-2d-point-clouds/","timestamp":"2024-11-15T01:39:14Z","content_type":"application/xhtml+xml","content_length":"73280","record_id":"<urn:uuid:7cb9cd7a-4579-4e8c-b46f-577b2c962d2c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00221.warc.gz"}
The Development of Intuitionistic Logic First published Thu Jul 10, 2008; substantive revision Wed May 4, 2022 “Intuitionistic logic” is a term that unfortunately gains ever greater currency; it conveys a wholly false view on intuitionistic mathematics. —Freudenthal 1937 Intuitionistic logic is an offshoot of L.E.J. Brouwer’s intuitionistic mathematics. A widespread misconception has it that intuitionistic logic is the logic underlying Brouwer’s intuitionism; instead, the intuitionism underlies the logic, which is construed as an application of intuitionistic mathematics to language. Intuitionistic mathematics consists in the act of effecting mental constructions of a certain kind. These are themselves not linguistic in nature, but when acts of construction and their results are described in a language, the descriptions may come to exhibit linguistic patterns. Intuitionistic logic is the mathematical study of these patterns, and in particular of those that characterize valid inferences. An inference rule is valid if, whenever the statements in the premises describe truths of intuitionistic mathematics, a construction can be found that makes true the statement that is obtained by applying the rule. What the principles of logic need to preserve is therefore not, as in classical logic, mind-independent truth, but mental constructibility. Various principles of classical logic, most notably the Principle of the Excluded Middle, then become insufficiently grounded, and certain classical theorems even contradictory. The theorems in intuitionistic logic that formally contradict classical theorems depend on elements of intuitionistic mathematics that are incompatible with classical mathematics; this illustrates how in intuitionism logic is based on mathematics and not the other way around. The systematic explanation and formalization of intuitionistic logic was begun by Brouwer’s student Arend Heyting in 1928. An “explanation” here is an account of what one knows when one understands and correctly uses the logical connectives. Since the 1970s Heyting’s explanation and its variants are known as “the Proof Interpretation”, as the role played by mind-independent truth in explanations of classical logic is here played by proof. For Heyting, a proof was primarily a mathematical construction in Brouwer’s sense, and, secondarily, a linguistic description of it. But it turned out that a Proof Interpretation can also be based on other notions of proof. The Proof Interpretation understood in a more general sense has found many applications outside its historical origin, notably in other constructive but non-intuitionistic forms of mathematics, philosophy, computer science, and linguistics. This widening of the range of application was possible because the original Proof Interpretation depends mostly on the fact that the intuitionistic notion of mathematical truth is of a verificationist nature, so that broadly verificationist theories in other domains allow for an analogous explanation of their logic. In this article, the principal concern is with the development of the Proof Interpretation within its original context of intuitionistic mathematics. Section 1 comments on terminology. Section 2 examines the basis of Brouwer’s conception of logic in his early writings. Section 3 presents his later refinements and also his views on Hilbert’s Program, Gödel’s incompleteness theorems, and the debate in the foundations of mathematics in the 1920s. 
Section 4 discusses formalizations of intuitionistic logic, which had begun even before the Proof Interpretation had been made explicit, and briefly looks at mathematical interpretations of the formalisms obtained. The explanation of the Proof Interpretation in Heyting’s writings from 1930 to 1956 is treated in section 5. The sensitivity of intuitionistic logic to the exact conception of mathematical construction was at the root of strong objections to parts of the Proof Interpretation that arose from within intuitionism. These are discussed in section 6. At the end, a short list is given of topics that will be added in future updates. 1. Introduction 1.1 The Proof Interpretation The standard explanation of intuitionistic logic today is the BHK-Interpretation (for “Brouwer, Heyting, Kolmogorov”) or Proof Interpretation as given by Troelstra and van Dalen in Constructivism in Mathematics (Troelstra & van Dalen 1988: 9): • (H1)A proof of \(A \wedge B\) is given by presenting a proof of \(A\) and a proof of \(B\). • (H2)A proof of \(A \vee B\) is given by presenting either a proof of \(A\) or a proof of \(B\) (plus the stipulation that we want to regard the proof presented as evidence for \(A \vee B\)). • (H3)A proof of \(A \rightarrow B\) is a construction which permits us to transform any proof of \(A\) into a proof of \(B\). • (H4)Absurdity \(\bot\) (contradiction) has no proof; a proof of \(\neg A\) is a construction which transforms any hypothetical proof of \(A\) into a proof of a contradiction. • (H5)A proof of \(\forall xA(x)\) is a construction which transforms a proof of \(d \in D\) (\(D\) the intended range of the variable \(x)\) into a proof of \(A(d)\). • (H6)A proof of \(\exists xA(x)\) is given by providing \(d \in D\), and a proof of \(A(d)\). Notions such as “construction”, “presenting” and “transformation” can be understood in different ways, and indeed they have been. Similarly, there have been different ideas as to how one may justify that concrete instances of clauses H3 and H4 indeed work for any (possibly hypothetical) proof of the antecedent. Logical principles that are valid on one understanding of these notions may not be valid on another. As Troelstra and van Dalen indicate, it is even possible to understand these clauses in such a way that they validate the principles of classical logic (Troelstra & van Dalen 1988: 9, 32–33; see also Sundholm 2004 and Sato 1997). In the context of the foundational programs of intuitionism and constructivism, all notions are of course understood to be effective; but even then there is room for differences of understanding. Such differences can have mathematical consequences. On some understandings, intuitionistic logic turns out formally to be a subsystem of classical logic (namely, classical logic without the Principle of the Excluded Middle). But that is not the understanding of intuitionistic mathematicians, who, in analysis, have constructed intuitionistically valid instances of the schema \(\neg \forall x(Px \vee \neg Px)\), while classically there can be none (see the section on Strong counterexamples and the Creating Subject (3.3), below). Troelstra and van Dalen specify that the clauses H1–H6 go back to Heyting’s explanation from 1934 (hence “H”). 
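To see how these clauses combine, consider as a simple illustration the proposition \(A \rightarrow (B \rightarrow A)\). By H3, a proof of it is a construction that transforms any proof \(p\) of \(A\) into a proof of \(B \rightarrow A\); by H3 again, the latter must transform any proof \(q\) of \(B\) into a proof of \(A\). The construction

\[ p \mapsto (q \mapsto p) \]

does precisely this: given \(p\), it yields the construction that, given any \(q\), returns \(p\) again. Note that nothing in this construction depends on what the propositions \(A\) and \(B\) are, nor on whether proofs of them are actually available.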
Heyting’s aim had been to clarify the conception of logic in Brouwer’s foundational program in mathematics, which would motivate adding the following clause:
• (H0) A proof of an atomic proposition \(A\) is given by presenting a mathematical construction in Brouwer’s sense that makes \(A\) true.
Indeed, as we will see, a version of the Proof Interpretation is implicit already in Brouwer’s early writings from 1907 and 1908, and was notably used by him in his proofs of the bar theorem from 1924 and 1927, which predate Heyting’s papers on logic. We will therefore begin our account of the historical development of intuitionistic logic with Brouwer’s ideas, and then show how, via Heyting and others, the modern Proof Interpretation was arrived at.

1.2 Interpretation, Explanation, and Names

As Sundholm (1983: 159) points out, in the terms “BHK-Interpretation” and “Proof Interpretation” it would be appropriate to replace “Interpretation” by “Explanation”. For in a logical-mathematical context, “interpretation” has come to refer to the interpretation of one formal theory in another.^[1] An interpretation of a formal system \(U\) in a formal system \(V\) is given by a translation \('\) of formulas of \(U\) to formulas of \(V\) that preserves provability:^[2] \[ \textrm{If } U \vdash A \textrm{ then } V \vdash A' \] For the moment, we note that the BHK-Interpretation or Proof Interpretation is not an interpretation in this mathematical sense, but is rather a meaning explanation; we will come back to such interpretations and their difference from explanations in Section 2 of the supplementary document The Turn to Heyting’s Formalized Logic and Arithmetic. While accepting Sundholm’s point, we keep the terms themselves, considering that they have perhaps become too common to change. Section 5.3 below is the appropriate place to explain our preference for “Proof Interpretation” over “BHK-Interpretation”. The name “Proof Interpretation” for the explanation that Heyting published in the 1930s and later seems to have made its first appearance in print only in 1973, in papers by van Dalen and Kleene, presented at the same conference (van Dalen 1973a; Kleene 1973). Heyting himself spoke simply of the “interpretation” (1958A: 107; 1974: 87) or “the intuitionistic interpretation” (1958A: 110) of logic. The name “BHK-Interpretation” was coined by Troelstra (1977: 977), where “K” initially stood for “Kreisel” (because of Kreisel 1962), later for “Kolmogorov”, e.g., in Troelstra 1990: 6; this replacement is, in keeping with Sundholm’s point, a correction.

2. Brouwer’s Views on Logic in 1907 and 1908

2.1 Mathematics, Language, and Logic

In his dissertation (1907), Brouwer presents his conception of the relations between mathematics, language, and logic. Both the intuitionistic view of logic as essentially sterile, and the existence of results in intuitionistic logic that are incompatible with classical logic, depend essentially on that conception. For Brouwer, pure mathematics consists primarily in the act of making certain mental constructions (Brouwer 1907: 99n.1 [1975: 61n.1]).^[3] The point of departure for these constructions is the intuition of the flow of time.^[4] This intuition, when divested from all sensuous content, allows us to perceive the form “one thing and again a thing, and a continuum in between”. Brouwer calls this form, which unites the discrete and the continuous, “the empty two-ity”.
It is the basic intuition of mathematics; the discrete cannot be reduced to the continuous, nor the continuous to the discrete (Brouwer 1907: 8 [1975: 17]). As time flows on, an empty two-ity can be taken as one part of a new two-ity, and so on. The development of intuitionistic mathematics consists in the exploration which specific constructions the empty two-ity and its self-unfolding or iteration allows and which not: The only possible foundation of mathematics must be sought in this construction under the obligation carefully to watch which constructions intuition allows and which not. (Brouwer 1907: 77 [1975: 52]) or, in Heyting’s words, [Brouwer’s] construction of intuitionist mathematics is nothing more nor less than an investigation of the utmost limits which the intellect can attain in its self-unfolding. (Heyting 1968A: 314) Brouwer and other intuitionists have shown how on this basis arithmetic, real analysis, and topology can be constructed. Moreover, Brouwer considers any exact thought that is not itself mathematics an application of mathematics. For whenever we consciously think of two things in an exact manner, that is, think them together while keeping them separate, we do so, according to Brouwer, by projecting the discrete parts of an empty two-ity onto them (Brouwer 1907: 179n.1 [1975: 97n.1]). Brouwer takes the intuition of time to belong to pre-linguistic consciousness. Mathematics, therefore, is essentially languageless. It is the activity of effecting non-linguistic constructions out of something that is not of a linguistic nature. Using language we can describe our mathematical activities, but these activities themselves do not depend on linguistic elements, and nothing that is true about mathematical constructional activities owes its truth to some linguistic fact. Linguistic objects such as axioms may serve to describe a mental construction, but they cannot bring it into being. For this reason, certain axioms from classical mathematics are rejected by intuitionists, such as the completeness axiom for real numbers, which says that if a non-empty set of real numbers has an upper bound, then it has a least upper bound: we know of no general method that would allow us to construct mentally the least upper bound whose existence the axiom claims. As Brouwer later put it, “Formal language accompanies mathematics as a score accompanies a symphony by Bach or an oratorio by Handel”. (Brouwer et al. 1937: 262; translation mine).^[5] Correspondingly, establishing properties of formal systems may have many uses, but ultimately has no foundational significance for mathematics. In a lecture from 1923, Brouwer expresses optimism about Hilbert’s proof theory, but denies that it would have significance for mathematics: We need by no means despair of reaching this goal [of a consistency proof for formalized mathematics], but nothing of mathematical value will thus be gained: an incorrect theory, even if it cannot be inhibited by any contradiction that would refute it, is none the less incorrect, just as a criminal policy is none the less criminal even if it cannot be inhibited by any court that would curb it. (Brouwer 1924N: 3 [van Heijenoort 1967: 336]) At the same time, Brouwer was well aware of the practical need for language, both in order to communicate mathematical results to others and to help ourselves in remembering and reconstructing our previous results (Brouwer 1907: 169 [1975: 92]). 
Only an ideal mathematician with perfect and unlimited memory would be able to practice pure mathematics without recourse to language (Brouwer 1933A2: 58 [van Stigt 1990: 427]). Clearly, given these two practical functions of language, the more precise the language is, the better. Logic, in this framework, seeks and systematizes certain patterns in the linguistic recordings of our activities of mathematical construction. It is an application of mathematics to the language of mathematics. Specifically, logic studies the patterns that characterize valid inference. The aim is to establish general rules operating on statements about mathematical constructions such that, if the original statements (the premises) convey a mathematical truth, so will the statement obtained by applying the rule (the conclusion; Brouwer 1949C: 1243). What is preserved in an inference from given premises to a conclusion is therefore not, as in classical logic, a kind of possibly evidence-transcendent truth, but constructibility. This view is quite explicit already in Brouwer’s dissertation (Brouwer 1907: 125–132, 159–160 [1975: 72–75, 88]), but a more memorable passage is in the paper from 1908: Can one, in the case of purely mathematical constructions and transformations, temporarily neglect the presentation of the mathematical system that has been erected, and move in the accompanying linguistic building, guided by the principles of the syllogism, of contradiction, and of tertium exclusum, always confident that, by momentary evocation of the presentation of the mathematical constructions suggested by this reasoning, each part of the discourse could be justified? (Brouwer 1908C: 4 [van Atten & Sundholm 2017: 40]) (He then goes on to argue that the answer is “yes” for the principles of the syllogism and of contradiction, but, in general, “no” for the Principle of the Excluded Middle (PEM); more on this below, section 2.4.) But if a certain mathematical construction can be constructed out of another one, this is a purely mathematical fact, and as such independent of logic. Logic therefore is descriptive but not creative: by the use of logic, one will never obtain mathematical truths that are not obtainable by a direct mathematical construction (Brouwer 1949C: 1243). Hence, in the development of intuitionistic mathematics, logic can never play an essential role. It follows from Brouwer’s view that logic is subordinate to mathematics. The classical view that mathematics is subordinate to logic is closely related to the view that pure logic has no particular subject matter or domain, and is prior to all. From that perspective, Brouwer’s conception of logic as dependent on mathematics will seem too restrictive. But for Brouwer logic always presupposes mathematics, because in his view it is, like any exact thought, an application of mathematics. The resulting linguistic system of logic may in turn be studied mathematically, even independently of the mathematical activities and their recordings that it was originally abstracted from. Iterating the process, an infinite hierarchy arises of mathematical activities, their linguistic recordings, and the mathematical study of these recordings as linguistic objects independently of their original meaning. Brouwer describes this hierarchy (in more detail than we have done here) at the end of his dissertation (Brouwer 1907: 173ff), and criticizes Hilbert for not respecting it. 
Of particular interest is the distinction Brouwer makes between mathematics and “mathematics of the second order” (Brouwer 1907: 99n.1, 173 [1975: 61n.1, 94]), where one instance of the latter is the mathematical study of the language of the former in abstraction from its original meaning; this way, Brouwer made fully explicit the distinction between mathematics and (what became known as) metamathematics (e.g., Hilbert 1923: 153). Later, Brouwer claimed priority for this distinction, adding in a footnote that he had explained it to Hilbert in a series of conversations in 1909 (Brouwer 1928A2: 375 [Mancosu 1998: 44n.1]). 2.2 The Hypothetical Judgement Brouwer realized (1907: 125–128 [1975: 72–73]) that the hypothetical judgement seems to pose a problem for his view on logic as described above. For what is peculiar to the hypothetical judgement, Brouwer says, is that there the priority of mathematics over logic seems to be reversed. Among the examples he refers to are the proofs found in elementary geometry of the problems of Apollonius. Here is one of them: Given three circles, defined by their centers and their radii, construct a fourth circle that is tangent to each of the given three. The way this is usually solved is first to assume that such a fourth circle exists, then to set up equations that express how it is related to the three given circles, and then, via algebraic manipulations and logic, arrive at explicit definitions of the center and radius of the required circle, and, from there, at corresponding mathematical constructions. So it seems that here one first has to assume the existence of the required circle, then use logic to make various judgements about it, and only thereby arrives at a mathematical construction for it. However, Brouwer argues, this is not what really happens. His general interpretation of such cases is as follows. Having first remarked that logical reasoning accompanies or mirrors mathematical activity which is at least conceptually prior to that reasoning, Brouwer then says: There is a special case […] which really seems to presuppose the hypothetical judgment from logic. This occurs where a structure in a structure is defined by some relation, without it being immediately clear how to effect its construction. Here one seems to assume to have effected the required construction, and to deduce from this hypothesis a chain of hypothetical judgments. But this is no more than apparent; what one is really doing in this case is the following: one starts by constructing a system that fulfills part of the required relations, and tries to deduce from these relations, by means of tautologies, other relations, in such a way that in the end the deduced relations, combined with those that have not yet been used, yield a system of conditions, suitable as a starting-point for the construction of the required system. Only by this construction will it then have been proved that the original conditions can indeed be satisfied. (Brouwer 1907: 126–127 [1975: 72; translation modified]) Different readings of this concise passage have been proposed. According to one, Brouwer’s passage bears on \(A \rightarrow B\) in the following way: • \((\alpha)\) Brouwer points out in the above lines that if the conditions and specifications for \(A\) are given, then we try to add more information in such a way that, after a certain amount of constructional activity, we can really carry out a construction of \(A\) which respects the specifications. 
Once this is accomplished, we can turn to the “implication” construction for \(B\), which yields the construction for \(B\) and to the required embedding of the structure for \(A\) into the structure for \(B\). (van Dalen 2004: 250–251) According to interpretation \(\alpha\), \(A \rightarrow B\) just means \(A \wedge B\) with the extra information that the construction for \(B\) was obtained from that for \(A\). On this reading \(A \rightarrow B\) can be asserted only after a construction for \(A\) has been found. The idea is clear: namely, to avoid hypothetical constructions, and the use of logic they require, by insisting that a construction be supplied that proves the antecedent. (As will be explained in Section 2.1 of the supplementary document Objections to the Proof Interpretation, Freudenthal (1937b) too has suggested this strategy, albeit with a different motivation than this passage in Brouwer’s dissertation.) But, as van Dalen also notices, it is also in effect a rejection of the hypothetical judgement in the general case where one does not know whether there is a construction for \(A\). An alternative reading is \(\beta\): • \((\beta)\) In order to establish \(A \rightarrow B\), one has to conceive of \(A\) and \(B\) as conditions on constructions, and to show that from the conditions specified by \(A\) one obtains the conditions specified by \(B\), according to transformations whose composition preserves mathematical constructibility: if, by hypothesis, a construction for \(A\) has been made, then we can make a construction for \(B\) (van Atten 2009: 128). On this reading, Brouwer’s explanation of the hypothetical judgement avoids hypothetical constructions and the concomitant use of logic by considering conditions on constructions instead of constructions themselves. Instead of a “chain of hypothetical judgements” that one seems to make, one is really making a chain of transformations in which from required relations (i.e., given conditions) further relations are derived. It is, of course, a requirement that these transformations preserve mathematical constructibility, and that this preservation is itself intuitively known. The role of conditions is explicit in a statement Brouwer made at the other end of his publishing career: [T]he wording of a mathematical theorem has no sense unless it indicates the construction either of an actual mathematical entity or of an incompatibility (e.g., the identity of the empty two-ity with an empty unity) out of some constructional condition imposed on a hypothetical mathematical system. (Brouwer 1954A: 3) Brouwer speaks not of an incompatibility constructed out of a hypothetical mathematical system, but out of some condition on its construction. Be that as it may, the preservation of constructibility from \(A\) to \(B\) is essential to both reading \(\alpha\) and reading \(\beta\), so on either it is clear that Brouwer had the Proof Interpretation of the implication in mind already in 1907. For further discussion of Brouwer’s passage on the hypothetical judgement and the two readings of it mentioned here, see Kuiper 2004, van Dalen 2004, van Dalen 2008, and van Atten 2009. 2.3 Negation Intuitionistically, to say that a proposition \(A\) is true is primarily to say that we have effected a construction that is correctly described by \(A\); the proposition \(A\) is made true by the construction. 
Idealizing to a certain extent, we say that \(A\) is true if we possess a construction method that, when effected, will yield a construction that is correctly described by \(A\). According to Brouwer, to say that a proposition \(A\) is false then must mean that it is impossible to effect an appropriate construction; notation \(\neg A\). Such an impossibility is recognized either immediately (e.g., the impossibility of identifying 1 unit and 2 units) or mediately. In the former case, one observes directly that an intended construction is blocked; it “does not go through” (Brouwer 1907: 127 [1975: 73]). In the latter case, one shows that a proposition \(A\) is contradictory by reducing \(A\) to a known falsehood, e.g., one shows that \(A \rightarrow 1=2\) (Brouwer 1954A: 3). In practice, one defines \(\neg A := A \rightarrow 1=2\) (and hence \(\neg 1=2\) is seen as a particular case of \(A \rightarrow A\)). The notion of “negation as impossibility” is known as “strong negation”. One speaks of the “weak negation” of \(A\) to express that so far no proof of \(A\) has been found. This excludes neither finding a proof of \(A\) nor finding a proof of \(\neg A\) later. Clearly, then, to assert the weak negation of \(A\) is not to assign a truth value besides true and false to it; Barzin and Errera's claim (see section 4.3 below) that this treatment of negation turns Brouwer's logic into a three-valued one is groundless. The distinction between weak and strong negation is important for the so-called “weak counterexamples”.

2.4 Weak Counterexamples (“unreliability”) and Excluded Middle

As the rules of logic operate on linguistic objects, and these linguistic objects may be considered separately from the precise mathematical context in which they described a truth, it is possible to apply the rules of logic and obtain new linguistic objects without providing a precise mathematical context for the latter. In other words, the logical principles, which can be stated without specifying the context in which they are applied, and thereby suggest context-independence, are for their correctness sensitive to the context. There is no general guarantee that logical principles which are valid in one context will be equally valid in a different one. This is what Brouwer means when he speaks of “the unreliability of the logical principles”, the title and theme of his seminal paper Brouwer 1908C; see also Brouwer 1949C: 1243. In the 1908 paper, Brouwer draws a consequence of his general view on logic that he had overlooked in his dissertation: PEM, \(A \vee \neg A\), is not valid. Its constructive validity would mean that we have a method that, for any \(A\), either gives us a construction for \(A\), or shows that such a construction is impossible. But we do not have such a general decision method, and there are many open problems in mathematics. Brouwer states “Every number is finite or infinite” as an example of a general proposition for which so far no constructive proof has been found. As a consequence, he says, it is at present uncertain whether problems such as the following are solvable: Is there in the decimal expansion of \(\pi\) a digit which occurs more often than any other one? Do there occur in the decimal expansion of \(\pi\) infinitely many pairs of equal consecutive digits?
(Brouwer 1908C: 7 [van Atten & Sundholm 2017: 44])

In effect, Brouwer is saying that we can assert the weak negations of the propositions expressed in these questions; hence, these propositions are so-called “Brouwerian counterexamples” or “weak counterexamples” to PEM. On the constructive reading of PEM, of course any as yet unsolved problem is a weak counterexample to PEM. Brouwer began to publish weak counterexamples to PEM in international journals only much later (1921A, 1924N, 1925E). Brouwer remarks in the 1908 paper that the fact that PEM is not valid does not mean that it is false: \(\neg(A \vee \neg A)\) implies \(\neg A \wedge \neg \neg A\), a contradiction. In other words, \(\neg \neg(A \vee \neg A)\) is correct. Brouwer concludes that it is always consistent to use PEM but that it does not always lead to truths. In the latter case, the argument that appeals to PEM establishes not the truth, but the consistency of its conclusion. Brouwer proposes to divide the theorems that are usually considered as proved into the true and the non-contradictory ones (Brouwer 1908C: 7n.2 [van Atten & Sundholm 2017: 44n.14]). That is not a suggestion that there are three truth values, true, non-contradictory, false; for a non-contradictory proposition might be proved one day and thereby become true. A mathematical context in which PEM is valid, Brouwer points out, is that of the question whether a given construction of finite character is possible in a given finite domain. In such a context there are only finitely many possible attempts at that construction, and each will succeed or fail in finitely many steps (for clarity, the phrasing here is not that of Brouwer 1908C but that of Brouwer 1955). So on these grounds \(A \vee \neg A\) holds, where \(A\) is the proposition stating that the construction exists. Brouwer ascribed the belief in the general validity of PEM to an unwarranted projection from such finite cases (in particular, those arising from the application of finite mathematics to everyday phenomena) to the infinite.^[6] In his dissertation of 1907, Brouwer still accepted PEM as a tautology, (mis)understanding \(A \vee \neg A\) as \(\neg A \rightarrow \neg A\) (Brouwer 1907: 131, 160 [1975: 75, 88]).^[7] Curiously, he did realize at the same time that there is no evidence for the principle that every mathematical proposition is either provable or refutable (Brouwer 1907: 142n.3 [1975: 101]); this principle is the constructively correct reading of PEM. In the paper from 1908, he corrected his earlier understanding of PEM:

Now the principium tertii exclusi: this demands that every supposition is either correct or incorrect, mathematically: that of every supposed fitting in a certain way of systems in one another, either the termination or the blockage by impossibility, can be constructed. (Brouwer 1908C: 5 [van Atten & Sundholm 2017: 42])

2.5 There Are No Absolutely Undecidable Propositions

Brouwer continues this last quotation as follows:

It follows that the question of the validity of the principium tertii exclusi is equivalent to the question whether unsolvable mathematical problems can exist. There is not a shred of proof for the conviction, which has sometimes been put forward [here Brouwer refers in a footnote to Hilbert 1900] that there exist no unsolvable mathematical problems.

Here he seems to overlook that, constructively, there is a difference between the claim that every mathematical problem is solvable and the weaker claim that there are no absolutely unsolvable problems.
The former is equivalent to \(A \vee \neg A\), the latter to \(\neg \neg(A \vee \neg A)\); and Brouwer had demonstrated the intuitionistic validity of the latter in the same paper. Indeed, in the Brouwer archive there is a note from about the same period 1907–1908 in which the point is made explicitly:

Can one ever demonstrate of a proposition, that it can never be decided? No, because one would have to do so by reductio ad absurdum. So one would have to say: assume that the proposition has been decided in sense \(a\), and from that deduce a contradiction. But then it would have been proved that not-\(a\) is true, and the proposition is decided after all. (van Dalen 2001b: 174n.a; translation mine)

Brouwer never published this note. Wavre in 1926 gave the argument for a particular case, clearly seeing the general point:

It suffices to give an example of a number of which one does not know whether it is algebraic or transcendent in order to give at the same time an example of a number that, until further information comes in, could be neither the one nor the other. But, on the other hand, it would be in vain, it seems to me, to want to define a number that indeed is neither algebraic nor transcendent, as the only way to show that it is not algebraic consists in showing that it is absurd that it would be, and then the number would be transcendent. (Wavre 1926: 66; translation mine)

The explicit observation that \(\neg \neg(A \vee \neg A)\) means that no absolutely unsolvable problem can be indicated was made in Heyting 1934: 16.

3. Brouwer's Later Refinements and Applications, 1921–1955

3.1 The Implicit Proof Interpretation

Three examples can be given that show that by the mid-1920s, Brouwer in practice worked with the hypothetical judgement and with the clause for implication in the Proof Interpretation (which was published later): an equivalence in propositional logic, the proof of the bar theorem, and his reading of ordering axioms.

3.1.1 An equivalence in propositional logic

In a lecture in 1923, Brouwer presented a proof of \(\neg \neg \neg A \leftrightarrow \neg A\) (Brouwer 1925E: 253 [Mancosu 1998: 291]).^[8] This equivalence is the one theorem in propositional logic that Brouwer ever published. The argument begins by pointing out that \(A \rightarrow B\) implies that \(\neg B \rightarrow \neg A\) (because \(\neg B\) is \(B \rightarrow \bot\), and the two implications can be composed because the consequent of the one is the antecedent of the other). It would not have been possible for Brouwer to make this inference if, at the time, having a proof of the antecedent had been among his proof conditions for an implication, as then a proof of \(A \rightarrow B\) would lead to a proof of \(B\) and thereby make it impossible to begin establishing the second implication by proving its antecedent \(\neg B\). Later, Brouwer pointed out the following consequence of the validity of \(\neg \neg \neg A \leftrightarrow \neg A\): the proof method of reductio ad absurdum can be used to establish negative propositions \(\neg A\) (Brouwer 1929A: 163 [Mancosu 1998: 52]). For if the assumption of \(\neg \neg A\) leads to a contradiction, that is, to \(\neg \neg \neg A\), the equivalence allows one to simplify that to \(\neg A\).
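Both results mentioned so far, \(\neg \neg(A \vee \neg A)\) and \(\neg \neg \neg A \leftrightarrow \neg A\), as well as the contraposition step and the use of reductio ad absurdum for negative propositions, are constructively valid and can be checked mechanically. The following sketch in Lean (the theorem names are mine, and the formalization is of course anachronistic) uses no classical axioms:

```lean
-- The double negation of PEM is constructively provable (cf. Brouwer 1908C).
theorem nn_em (A : Prop) : ¬¬(A ∨ ¬A) :=
  fun h => h (Or.inr (fun a => h (Or.inl a)))

-- The contraposition step with which Brouwer's 1923 argument begins.
theorem contrapose (A B : Prop) : (A → B) → (¬B → ¬A) :=
  fun f nb a => nb (f a)

-- Brouwer's equivalence ¬¬¬A ↔ ¬A (cf. Brouwer 1925E).
theorem triple_neg (A : Prop) : ¬¬¬A ↔ ¬A :=
  ⟨fun h a => h (fun na => na a), fun na nna => nna na⟩

-- Reductio ad absurdum for negative propositions: if assuming ¬¬A yields a
-- contradiction (that is, we have ¬¬¬A), the equivalence lets us conclude ¬A.
example (A : Prop) (h : ¬¬A → False) : ¬A := (triple_neg A).mp h
```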
On the other hand, reductio ad absurdum in general cannot be used to establish positive propositions \(A\); the derivation of a contradiction from the assumption \(\neg A \) only leads to \(\neg \neg A\), which intuitionistically is weaker than \(A.\) 3.1.2 The proof of the bar theorem Brouwer’s bar theorem is crucial to intuitionistic analysis; for a detailed explanation of the notions involved and of Brouwer’s proof, see Heyting 1956 (Ch. 3), Parsons 1967, and van Atten 2004b (Ch. 4). Here we will rather be concerned with the logical aspects. Brouwer’s proof of the bar theorem from 1924 (later versions of the proof appeared in 1927 and in 1954) proves a statement of the form “If \(A\) has been demonstrated, then \(B\) is demonstrable” (Brouwer 1924D1, 1927B, 1954A). This would evidently not be an implication \(A \rightarrow B\) if the latter were understood as a transformation of the proof conditions of \(A\) into those of \(B\), because in the former case there is the additional information that, by hypothesis, \(A\) has been demonstrated. In other words, we have, by hypothesis, a concrete proof of \(A\) at hand. (However, both are hypothetical judgements in the sense that neither requires that we actually have demonstrated \(A\).) It may be possible to exploit this extra information, and below it will be indicated how Brouwer did this. (Heyting in 1956 also chose to understand implication in this stronger sense, that is, in terms of assertion conditions; see section 5.4 below.) A simple but relevant version of the bar theorem (for the universal tree over the natural numbers, T) would be: If it has been demonstrated that every path through \(T\) intersects a given set of nodes \(B\), then it can be demonstrated that every path through \(T\) has a node in common with a set of nodes \(B′\) that can be well-ordered.^[9] Sets like \(B\) and \(B′\) are called bars. Brouwer first formulates a condition for any demonstration that may be found of the proposition “Tree \(T\) contains a bar”. This condition is that any demonstration of that proposition must be analyzable into a certain canonical form. Brouwer then gives a method to transform any such demonstration, when analyzed into that canonical form, into a mathematical construction that makes the proposition “\(T\) contains a well-ordered bar”, true, thereby establishing the consequent. This strategy clearly shows that Brouwer’s operative explanation of the meaning of \(A \rightarrow B\) was a version of clause (H3) of the Proof Interpretation as formulated in the Introduction, if we understand “proof” in that clause as “demonstration”. A demonstration or concrete proof of the antecedent, be it an actual or a hypothetical one, is required to obtain a canonical form. The reason is that the existence of a canonical proof of a proposition \(A\) cannot be logically derived from the mere proof conditions of \(A\), as the form such a canonical proof takes may well depend on specific non-logical details of mathematical constructions for \(A\). In Brouwer’s proof of the bar theorem, the applicability of the transformation method to any demonstration of the antecedent is guaranteed by the fact that the condition on such demonstrations that he formulates is a necessary condition. 
Brouwer obtained this necessary condition by exploiting the fact that on his conception, mathematical objects, so in particular trees and bars, are mental objects; this opens the possibility that reflection on the way these objects and their properties are constructed in mental acts provides information on them that can be put to mathematical use, in particular if this information consists in constraints on these acts of construction. This is how Brouwer arrived at his canonical form. In effect, Brouwer’s argument for the bar theorem is a transcendental argument. On other conceptions of mathematics such considerations need not be acceptable, and indeed no proofs of the (classically valid) bar theorem are known in other varieties of constructive mathematics (where bar induction is either accepted as an axiom, a possibility that Brouwer had also suggested (Brouwer 1927B: 63n.7 [van Heijenoort 1967: 460n.7]), or not accepted, as in the Markov School). For a more detailed discussion of this matter, see Sundholm and van Atten 2008. 3.1.3 Ordering axioms Around 1925, Brouwer introduced the notion of “virtual ordering”. A (partial) ordering \(\lt\) is virtual if it satisfies the following axioms (Brouwer 1926A: 453): 1. The relations \(r = s, r \lt s\) and \(r \gt s\) are mutually exclusive. 2. From \(r = u, s = v\) and \(r \lt s\) follows \(u \lt v\). 3. From the simultaneous failure of the relations \(r \gt s\) and \(r = s\) follows \(r \lt s\). 4. From the simultaneous failure of the relations \(r \gt s\) and \(r \lt s\) follows \(r = s\). 5. From \(r \lt s\) and \(s \lt t\) follows \(r \lt t\). In a lecture course on Order Types in 1925, of which David van Dantzig’s notes are preserved in the Brouwer Archive, Brouwer commented: The axioms II through V are to be understood in the constructive sense: if the premisses of the axiom are satisfied, the virtually ordered set should provide a construction for the order condition in the conclusion. (van Dalen 2008: 19) This is a clear instance of the clause for implication in the Proof Interpretation. Note that Brouwer did not include this elucidation in the published paper (1926A), nor in later presentations. 3.2 Widening the Scope of the Weak Counterexamples As we saw above, in the paper from 1908 Brouwer had given weak counterexamples to PEM. In the 1920s Brouwer developed a general technique for constructing weak counterexamples which also made it possible to widen their scope and include principles of analysis. The development began in 1921, when Brouwer gave a weak counterexample to the proposition that every real number has a decimal expansion (Brouwer 1921A). The argument proceeded by defining real numbers whose decimal developments are dependent on specific open problems concerning the decimal development of \(\pi\). Brouwer closed by observing that, should these open problems be solved, then other real numbers without decimal expansion can be defined (Brouwer 1921A: 210 [Mancosu 1998: 34]). The general technique was made explicit in a lecture from 1923 (Brouwer 1924N: 3 and footnote 4 [van Heijenoort 1967: 337 and footnote 5]) and reached its perfection with the “oscillatory number” method in the first Vienna lecture in 1928 (Brouwer 1929A: 161 [Mancosu 1998: 51]). 
The method involves the reduction of the validity of a mathematical principle to the solvability of an open problem of the following type: we have a decidable property \(P\) (defined on the natural numbers) for which we have as yet shown neither \(\exists xP(x)\) nor \(\forall x\neg P(x)\). This reduction is carried out in such a way that it only uses the fact that \(P\) induces an open problem of this type, and does not depend on the exact definition of \(P\); that is, if the open problem is solved, one can simply replace it by another one of the same type, and exactly the same reduction still works. This uniformity means that, as long as there are open problems of this type at all (and this is practically certain at any time), there is no intuitionistic proof of the general mathematical principle in question. In the 1920s, Brouwer constructed weak counterexamples to the following general mathematical propositions, among others (where \(\mathbf{R}\) stands for the set of intuitionistic real numbers, and \(\mathbf{Q}\) for the set of rationals): 1. The continuum is totally ordered (Brouwer 1924N) 2. Every set is either finite or infinite (Brouwer 1924N) 3. The Heine-Borel theorem (Brouwer 1924N) 4. \(\forall x \in \mathbf{R}(x \in \mathbf{Q} \vee x \not\in \mathbf{Q})\) (Brouwer 1925E) 5. Any two straight lines in the Euclidean plane are either parallel, or coincide, or intersect (Brouwer 1929A) 6. Every infinite sequence of positive numbers either converges or diverges (Brouwer 1929A) 3.3 Strong Counterexamples and the Creating Subject A weak counterexample shows that we cannot at present prove some proposition, but it does not actually refute it; in that sense, it is not a counterexample proper. From 1928 on, Brouwer devised a number of strong counterexamples to classically valid propositions, that is, he showed that these propositions were contradictory. This should be understood as follows: if one keeps to the letter of the classical principle but in its interpretation substitutes intuitionistic notions for their classical counterparts, one arrives intuitionistically at a contradiction. So Brouwer’s strong counterexamples are no more counterexamples in the strict sense of the word than his weak counterexamples are (but for a different reason). One way of looking at strong counterexamples is that they are non-interpretability results. That strong counterexamples to classical principles are possible at all is explained as follows. As mentioned, on the intuitionistic understanding, logic is subordinate to mathematics, whereas classically it is the other way around. Hence, if intuitionistic mathematics contains objects and principles that do not figure in classical mathematics, it may come about that intuitionistic logic, which then depends also on these non-classical elements, is no proper part of classical logic. Brouwer’s first strong counterexample was published in Brouwer 1928A2, where he showed: \[ \neg \forall x\in \mathbf{R}(x \in \mathbf{Q} \vee x \not\in \mathbf{Q}) \] This is a strengthening of the corresponding weak counterexample from 1923, but the argument is entirely different. The strong counterexample depends on the theorem in intuitionistic analysis, obtained in 1924 and improved in 1927, that all total functions \([0,1] \rightarrow \mathbf{R}\) are uniformly continuous. 
The non-classical elements in that theorem are the conception of the continuum as a spread of choice sequences, and the bar theorem based on it (for further explanations of this conception, see Heyting 1956 (Ch.3) and van Atten 2004b (Chs.3 and 4)).^[10] From 1948 on, Brouwer also published counterexamples that are based on so-called “creating subject methods”. (He mentions in Brouwer 1948A that he has been using this method in lectures since 1927.) Their characteristic property is that they make explicit reference to the subject who carries out the mathematical constructions, to the temporal structure of its activities, and the relation of this structure to the intuitionistic notion of truth. These methods can be used to generate weak as well as strong counterexamples. (In the earlier “oscillatory number” method for generating weak counterexamples, the creating subject is not explicitly referred to.) Using creating subject methods, Brouwer showed, for instance,

\[ \begin{align*} \neg \forall x & \in \mathbf{R}(\neg \neg x \gt 0 \rightarrow x \gt 0) & \textrm{(Brouwer 1949A)}\\ \neg \forall x & \in \mathbf{R}(x \ne 0 \rightarrow x \lt 0 \vee x \gt 0) & \textrm{(Brouwer 1949B)} \end{align*} \]

The actual arguments using these methods introduce no new logical phenomena as such—weak and strong counterexamples can also be given by other means. For the moment, we refer to the literature for further details: Brouwer 1949A, 1949B, 1954F; Heyting 1956: 117–120; Myhill 1966; Dummett 2000: 244–245; van Atten 2018. But we do note here one particular aspect of this method. It seems to introduce a further notion of negation, by accepting that, if it is known that the creating subject will never prove \(A\), then \(A\) is false. But this is actually no different from the notion of negation as impossibility. Heuristically, this can be seen as follows: given the freedom the creating subject has to construct whatever it can, the only way to show that there can be no moment at which the subject demonstrates \(A\) is to show that a demonstration of \(A\) itself is impossible. An actual justification of the principle is this: If the creating subject demonstrates a proposition \(A\), it does so at a particular moment \(n\); so, by contraposition, if it is contradictory that there exists a moment \(n\) at which the subject demonstrates \(A\), then \(A\) is false.

3.4 The Classification of Propositions

In Brouwer 1955, the four possible cases a proposition \(\alpha\) may be in at any particular moment are made explicit:

1. \(\alpha\) has been proved to be true;
2. \(\alpha\) has been proved to be false, i.e., absurd;
3. \(\alpha\) has neither been proved to be true nor to be absurd, but an algorithm is known leading to a decision either that \(\alpha\) is true or that \(\alpha\) is absurd;
4. \(\alpha\) has neither been proved to be true nor to be absurd, nor do we know an algorithm leading to the statement either that \(\alpha\) is true or that \(\alpha\) is absurd.

In a lecture from 1951, Brouwer lists only cases 1, 2, and 4 from the above list, adding that case 3 “obviously is reducible to the first and second cases” (Brouwer 1981A: 92). That remark emphasizes an important idealization permitted in intuitionistic mathematics: we may make the idealization that, once we have obtained a decision method for a specific proposition, we also know its outcome.
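The idealization can be illustrated with a concrete decidable proposition (the example is mine, not Brouwer's): once a decision method is known, carrying it out moves the proposition from case 3 into case 1 or case 2, whereas for a proposition in case 4 there is, at present, nothing of the kind to exhibit. A sketch in Lean:

```lean
-- Case 3 collapsing into case 1: a decision method is known, and carrying it out
-- (here, Lean's `decide`) settles the proposition one way or the other.
example : (2 : Nat) ^ 10 = 1024 ∨ (2 : Nat) ^ 10 ≠ 1024 := by decide

-- A proposition in case 4 (say, an open question about the decimal expansion of π)
-- cannot be treated this way: we possess neither a proof nor a refutation of it.
```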
Brouwer also adds that a proposition for which case 4 holds may at some point pass to another case, either because we have in the meantime found a decision method, or because the objects involved in proposition \(\alpha\) in the meantime have acquired further properties that permit to make the decision (as may happen for propositions about choice sequences). 3.5 Brouwer’s View on the Formalist Program and Gödel’s Incompleteness Theorems In 1908, Brouwer had shown that \(\neg \neg(A \vee \neg A)\); in 1923, when Hilbert’s program was in full swing, this result inspired Brouwer to say that “We need by no means despair of reaching this goal [of a consistency proof for formalized mathematics]”; see section 2.1 above for the full quotation. (At the time, Brouwer suspected that \(\neg \neg A \rightarrow A\) was weaker than PEM; Bernays quickly corrected this impression in a letter to Brouwer (Brouwer 1925E: 252n.4 [Mancosu 1998: 292n.4]).) In 1928, he added to this the consistency of finite conjunctions of instances of PEM, and considered these results to “offer some encouragement” for the formalist project of a consistency proof (Brouwer 1928A2: 377 [Mancosu 1998: 43]).^[12] The strongest statement based on these results he made at the end of the first Vienna lecture from 1928: An appropriate mechanization of the language of this intuitionistically non-contradictory mathematics should therefore deliver precisely what the formalist school has set as its goal. (Brouwer 1929A: 164; translation mine) But, for reasons explained above, such a consistency proof would have no mathematical value for Brouwer; and the best a classical mathematician can be said to be doing, according to the view Brouwer sketches, is to be giving relative consistency proofs. Gödel’s incompleteness theorems showed that Hilbert’s Program, in its most ambitious form, cannot succeed. Brouwer’s assistant Hurewicz discussed the incompleteness theorem in a seminar (van Dalen 2005: 674n.7). There is no comment from Brouwer on Gödel’s first theorem in print; on the other hand, he clearly had the second theorem in mind when he wrote, in 1952, that The hope originally fostered by the Old Formalists that mathematical science erected according to their principles would be crowned one day with a proof of non-contradictority, was never fulfilled, and, nowadays, in view of the results of certain investigations of the last few decades, has, I think, been relinquished. (Brouwer 1952B: 508) Hao Wang reports: In the spring of 1961 I visited Brouwer at his home. He discoursed widely on many subjects. Among other things he said that he did not think G’s incompleteness results are as important as Heyting’s formalization of intuitionistic reasoning, because to him G’s results are obvious (obviously true). (Wang 1987: 88)^[13] With respect to the first incompleteness theorem, Brouwer’s reaction is readily understandable. Already in his dissertation, he had noted that the totality of all possible mathematical constructions is “denumerably unfinished”; by this he meant that we can never construct in a well-defined way more than a denumerable subset of it, but when we have constructed such a subset, we can immediately deduce from it, following some previously defined mathematical process, new elements which are counted to the original set. 
(Brouwer 1907: 148 [1975: 82]) And in one of the notebooks leading up to his dissertation, he stated that “The totality of mathematical theorems is, among other things, also a set which is denumerable but never finished”.^[14] Indeed, according to Carnap, it had been an argument of Brouwer’s that had stimulated Gödel in finding the first theorem. In a diary note for December 12, 1929, Carnap states that Gödel talked to him that day about the inexhaustibility of mathematics (see separate sheet). He was stimulated to this idea by Brouwer’s Vienna lecture. Mathematics is not completely formalizable. He appears to be right. (Wang 1987: 84) On the “separate sheet”, Carnap wrote down what Gödel had told him: We admit as legitimate mathematics certain reflections on the grammar of a language that concerns the empirical. If one seeks to formalize such a mathematics, then with each formalization there are problems, which one can understand and express in ordinary language, but cannot express in the given formalized language. It follows (Brouwer) that mathematics is inexhaustible: one must always again draw afresh from the “fountain of intuition”. There is, therefore, no characteristica universalis for the whole mathematics, and no decision procedure for the whole mathematics. In each and every closed language there are only countably many expressions. The continuum appears only in “the whole of mathematics” … If we have only one language, and can only make “elucidations” about it, then these elucidations are inexhaustible, they always require some new intuition again. (As quoted, in translation, in Wang 1987: 50) This record contains in particular elements from the second of Brouwer’s two lectures in Vienna, in which one finds the argument that Gödel refers to: on the one hand, the full continuum is given in a priori intuition, while on the other hand, it cannot be exhausted by a language with countably many expressions (Brouwer 1930A: 3, 6 [Mancosu 1998: 56, 58]). The second incompleteness theorem, on the other hand, must have surprised Brouwer, given his optimism in the 1920s about the formalist school achieving its aim of proving the consistency of formalized classical mathematics (see the quotation at the beginning of this subsection). In his final original published paper (1955), Brouwer was, in his own way, quite positive about the study of classical logic. After showing that various principles from the algebraic tradition in logic (e.g., Boole, Schröder) are intuitionistically contradictory, he continues: Fortunately classical algebra of logic has its merits quite apart from the question of its applicability to mathematics. Not only as a formal image of the technique of common-sensical thinking has it reached a high degree of perfection, but also in itself, as an edifice of thought, it is a thing of exceptional harmony and beauty. Indeed, its successor, the sumptuous symbolic logic of the twentieth century which at present is continually raising the most captivating problems and making the most surprising and penetrating discoveries, likewise is for a great part cultivated for its own sake. (Brouwer 1955: 116) 3.6 Brouwer’s Logic and the Grundlagenstreit Brouwer’s logic has played a role in the Grundlagenstreit (the Foundational Debate) only to the extent that this logic could be seen as a fragment of classical logic. Constructive logic in that sense was a success, and (with a different meaning) it became fundamental to Hilbert’s Program as well. 
On the other hand, phenomena specific to Brouwer’s full conception of logic, in particular the strong counterexamples, played no role in the Foundational Debate whatsoever. The main reason for this may be that, in their dependence on choice sequences, they use objects that are not acceptable in classical mathematics. (A more subtle matter is whether they are acceptable in Hilbert’s finitary mathematics. According to Bernays, Hilbert never took a position on choice sequences (Gödel 2003a: 279), and more generally never read Brouwer’s papers (van Dalen 2005: 637).^[15]) In addition, Brouwer did not announce the existence of strong counterexamples in a loud or polemical way; and when in 1954 he finally did publish (in English) a paper with a polemical title—“An example of contradictority in classical theory of functions”—the Foundational Debate was, in the social sense, long over. Intuitionistic logic and mathematics had been widely accepted to the extent that they could be seen as the constructive part of classical mathematics, while the typically intuitionistic innovations were ignored. It is not surprising, then, that the presentation of the strong counterexamples in the 1950s did not at all lead to a reopening of the debate. For further discussion of this matter, see Hesseling 2003 and van Atten 2004a. 4. Early Partial Formalizations and Metamathematics As Brouwer was more interested in developing pure mathematics than in developing logic, which for him was a form of applied mathematics, he never made an extensive study of the latter. In particular, he never made a systematic comparison of intuitionistic logic and classical logic as formalized, for example, in Principia Mathematica (Whitehead & Russell 1910) or by the Hilbert school (Hilbert 1923; Hilbert & Ackermann 1928). What motivated others to make such comparisons was the publication by Brouwer in international journals of weak counterexamples that showed how these also affected very general mathematical principles such as trichotomy for real numbers (see above, section 3.2). Clearly, to make a systematic comparison possible, one needs a codification of intuitionistic logic in a formal system. On the intuitionistic view that cannot exist, as logic is as open-ended as the mathematics it depends on. But one may formalize fragments of intuitionistic logic. The relevant papers here are Kolmogorov 1925, Heyting 1928 (unpublished), Glivenko 1928, Glivenko 1929, and Heyting 1930.^[16] But perhaps the first to give systematic thought to the matter was Paul Bernays. In a letter to Heyting of November 5, 1930, he wrote: The lectures that Prof. Brouwer at the time held in Göttingen (for the first time) [1924 (van Dalen 2001: 305)], led me to the question how a Brouwerian propositional logic could be separated out, and I arrived at the result that this can be done by leaving out the single formula \(\neg \neg a \supset a\) (in your symbolism). I then also wrote to Prof. Brouwer [correcting Brouwer’s impression at the time that this formula is weaker than PEM]. (Troelstra 1990: 8)^[17] (Bernays’ correction was included, at the proof stage, in Brouwer’s paper (1925E: 252n.4 [Mancosu 1998: 292n.4]).) However, Bernays did not publish his idea for a Brouwerian logic. (Kolmogorov would publish the same idea in 1925; see below.) 
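Bernays' observation that dropping the single formula \(\neg\neg a \supset a\) separates out a Brouwerian propositional logic can be illustrated, again anachronistically, in a modern proof assistant: the direction \(a \rightarrow \neg\neg a\) is available constructively, while its converse requires an appeal to a classical axiom. A minimal sketch in Lean (Classical.byContradiction is just one convenient way of invoking the classical principle):

```lean
-- Constructively available: a → ¬¬a.
example (a : Prop) : a → ¬¬a := fun ha hna => hna ha

-- The formula Bernays proposed to leave out; in Lean it needs the classical axioms.
example (a : Prop) : ¬¬a → a := fun h => Classical.byContradiction h
```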
In setting up a formal system to capture, albeit necessarily only in part, logic as it figures in Brouwer's foundations, naturally some previously obtained meaning explanation is needed to serve as the criterion for intuitionistic validity. Yet none of the papers by Kolmogorov, Heyting, and Glivenko just mentioned made an explicit contribution to the meaning explanation of intuitionistic logic. As we will see, the explanations as given in the papers (which is not necessarily all their respective authors had in mind) were too vague for that. It is perhaps not surprising, then, that the systems were not equivalent; notably, Kolmogorov rejected Ex Falso, while Heyting and Glivenko accepted it. We will now discuss these papers in turn.

4.1 Kolmogorov 1925

In 1925, Andrei Kolmogorov, at the age of 22, published the first (partial) formalization of intuitionistic logic, and also made an extensive comparison with formalized classical logic, in a paper called “On the principle of the excluded middle”. As van Dalen has suggested (Hesseling 2003: 237), Kolmogorov probably had come into contact with intuitionism through Alexandrov or Urysohn, who were close friends of Brouwer's. Kolmogorov was in any case remarkably well-informed, citing even papers that had only appeared in the Dutch-language “Verhandelingen” of the Dutch Academy of Sciences (Brouwer 1918B, 1919A, 1921A). The task Kolmogorov set himself in the paper is to explain why “the illegitimate use of the principle of the excluded middle” as revealed in Brouwer's writings “has not yet led to contradictions and also why the very illegitimacy has often gone unnoticed” (van Heijenoort 1967: 416). In effect, as other passages make clear (van Heijenoort 1967: 429–430), the (unachieved) aim is to show that classical mathematics is translatable into intuitionistic mathematics, and thereby give a consistency proof of classical mathematics relative to intuitionistic mathematics. The technical result established in the paper is: Classical propositional logic is interpretable in an intuitionistically acceptable fragment of it. The intuitionistic fragment, called \(\mathbf{B}\) (presumably for “Brouwer”) is:

\[ \begin{align*} \tag{1} A & \rightarrow(B \rightarrow A)\\ \tag{2} (A \rightarrow(A \rightarrow B)) & \rightarrow (A \rightarrow B)\\ \tag{3} (A \rightarrow(B \rightarrow C)) & \rightarrow (B \rightarrow(A \rightarrow C))\\ \tag{4} (B \rightarrow C) & \rightarrow((A \rightarrow B) \rightarrow(A \rightarrow C))\\ \tag{5} (A \rightarrow B) & \rightarrow((A \rightarrow \neg B) \rightarrow \neg A) \end{align*} \]

The system \(\mathbf{H}\) (presumably for “Hilbert”) consists of \(\mathbf{B}\) and the additional axiom

\[ \tag{6} \neg \neg A \rightarrow A \]

In both systems, the rules are modus ponens and substitution. Kolmogorov then indicates and partially carries out a proof that \(\mathbf{H}\) is equivalent to the system for classical propositional logic presented by Hilbert in Hilbert 1923. Then Kolmogorov devises the following translation \(^*\):

\[ \begin{align*} A^* & = \neg \neg A & \textrm{for atomic } A\\ F(\phi_1, \phi_2 ,\ldots ,\phi_k)^* & = \neg \neg F(\phi_{1}^*, \phi_{2}^*,\ldots ,\phi_{k}^*) & \textrm{ for composed formulas} \end{align*} \]

and proves this interpretability result:

\[ \textrm{If } U \vdash_{\mathbf{H}} \phi \textrm{ then } U^* \vdash_{\mathbf{B}} \phi^* \]

where \(U\) is a set of axioms of \(\mathbf{H}\), and \(U^*\) the set of its translations (which Kolmogorov shows to be derivable in \(\mathbf{B}\)).
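The translation itself is a simple recursive function on formulas. The following sketch in Lean is mine (the datatype, the names, and the restriction to the connectives occurring in \(\mathbf{B}\) are not Kolmogorov's); it only illustrates the shape of the two clauses just displayed, not the interpretability proof:

```lean
-- Formulas over the connectives of Kolmogorov's systems B and H: atoms, →, ¬.
inductive Form where
  | atom : Nat → Form
  | impl : Form → Form → Form
  | neg  : Form → Form
deriving Repr

open Form

-- Kolmogorov's *-translation: prefix a double negation to atoms and to every
-- composed formula, translating the subformulas first.
def star : Form → Form
  | atom n   => neg (neg (atom n))
  | impl a b => neg (neg (impl (star a) (star b)))
  | neg a    => neg (neg (neg (star a)))

-- For example, (p₀ → p₁)* = ¬¬(¬¬p₀ → ¬¬p₁).
#eval star (impl (atom 0) (atom 1))
```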
Kolmogorov’s technical result anticipated Gödel’s and Gentzen’s “double negation translations” for arithmetic (see below), all the more so since he also made quite concrete suggestions how to treat predicate logic. As Hesseling (2003: 239) points out, however, to see Kolmogorov’s result as a translation into intuitionistic mathematics is slightly different from his own way of seeing it. Kolmogorov saw it as a translation into a domain of “pseudomathematics”; but although he did not explicitly identify that as part of intuitionistic mathematics, he could have done so. Kolmogorov’s strategy to obtain a (fragment) of formalized intuitionistic logic was to start with a classical system and isolate an intuitionistically acceptable system from it. (Note that, although Kolmogorov refers to Principia Mathematica, he did not take it as his point of departure.) This might (roughly) be described as the method of crossing out, which is also what Heyting would do in 1928 (see below). Given the task Kolmogorov set himself, it is a natural approach. Kolmogorov’s criterion whether to keep an axiom was whether a proposition has an “intuitive foundation” or “possesses intuitive obviousness” (van Heijenoort 1967: 421, 422); on implication he said, The meaning of the symbol \(A \rightarrow B\) is exhausted by the fact that, once convinced of the truth of \(A\), we have to accept the truth of \(B\) too. (van Heijenoort 1967: 420) No more precise indications are given, so in that sense the paper did little to explain the meaning of intuitionistic logic. Ex Falso was excluded from this fragment: Kolmogorov said that, just like PEM, Ex Falso “has no intuitive foundation” (van Heijenoort 1967: 419). In particular, he says that Ex Falso is unacceptable for the reason that it asserts something about the consequences of something impossible (van Heijenoort 1967: 421). Note that that is a very strong rejection: it not only rules out Ex Falso in its full generality, but also specific instances such as “If 3.15 is an initial segment of \(\pi\), then 3.1 is an initial segment of \(\pi\)”. It also indicates an incoherence in Kolmogorov’s position: one cannot at the same time accept \(A \rightarrow (B \rightarrow A)\) as an axiom and deny that anything can be asserted about the consequences of an impossible \(B\). As van Dalen notes, it is not known whether Kolmogorov sent his paper to Brouwer (van Dalen 2005: 555). The contents of the paper seem to have remained largely unknown outside the Soviet Union for years. Glivenko mentioned the paper in a letter to Heyting of October 13, 1928, as did Kolmogorov in an undated letter to Heyting of 1933 or later (Troelstra 1990: 16); but in Heyting 1934 it is, unlike Kolmogorov’s later paper from 1932, neither discussed nor included in the bibliography. The volume of the Jahrbuch über die Fortschritte der Mathematik covering 1925, which included a very short notice on Kolmogorov 1925 by V. Smirnov (Leningrad), wasn’t actually published until 1932 (Smirnov 1925). 4.2 Heyting 1928 While Kolmogorov’s work remained unknown in the West, an independent initiative towards the formalization of intuitionistic logic and mathematics was taken in 1927, when the Dutch Mathematical Society chose to pose the following problem for its annual contest: By its very nature, Brouwer’s set theory cannot be identified with the conclusions formally derivable in a certain pasigraphic system [i.e., universal notation system]. 
Nevertheless certain regularities may be observed in the language which Brouwer uses to give expression to his mathematical intuition; these regularities may be codified in a formal mathematical system. It is asked 1. construct such a system and to indicate its deviations from Brouwer's theories; 2. to investigate whether from the system to be constructed a dual system may be obtained by (formally) interchanging the principium tertii exclusi and the principium contradictionis. (Troelstra 1990: 4)

The question had been formulated by Brouwer's friend, colleague, and former teacher Gerrit Mannoury, who asked Brouwer's opinion on it beforehand in a letter (Brouwer was in Berlin);^[18] unfortunately, no reply from Brouwer has been found, but the final formulation was the same as in Mannoury's letter. Brouwer's former student Arend Heyting, who had graduated (cum laude) in 1925 with a dissertation on intuitionistic projective geometry, wrote what turned out to be the one submission (Hesseling 2003: 274). The original manuscript seems no longer to exist, but it is known that its telling motto was “Stones for bread”.^[19] In 1928, the jury crowned Heyting's work,^[20] stating that it was “a formalization carried out in a most knowledgeable way and with admirable perseverance” (Hesseling 2003: 274; translation modified). Heyting's essay covered propositional logic, predicate logic, arithmetic, and Brouwerian set theory or analysis. One would think that, to be able to achieve this, Heyting must have had quite precise ideas on how to explain the logical connectives intuitionistically. However, Heyting's correspondence with Freudenthal in 1930 shows that before 1930, Heyting had not yet arrived at the explicit requirement of a transformation procedure in the explanation of implication (see the quotation in section 5.1 below). Since the original manuscript seems not to have survived, a discussion of Heyting's work must take the revised and published version from 1930 as its point of departure; see below. Heyting sent his manuscript to Brouwer, who replied in a letter of July 17, 1928, that he had found it “extraordinarily interesting”, and continued:

By now I have already begun to appreciate your work so much, that I should like to request that you revise it in German for the Mathematische Annalen (preferably somewhat extended rather than shortened).

Interestingly, Brouwer also suggested (with an eye on the formalization of the theory of choice sequences):

And, with an eye on §13, perhaps also the notion of “law” can be formalized.

It seems, however, that Heyting made no effort in that direction. Heyting's paper would indeed be published soon after, in 1930; not in the Mathematische Annalen, as Brouwer by then was no longer on its editorial board, but in the proceedings of the Prussian Academy of Sciences. However, Heyting's work became known already before its publication. Heyting mentioned it in correspondence with Glivenko in 1928 (see below), Tarski and Łukasiewicz talked about it to Bernays at the Bologna conference in 1928, and Church mentioned it in a letter to Errera in 1929 (Hesseling 2003: 274).

4.3 Glivenko 1928 and 1929

In reaction to Barzin and Errera, who had argued that Brouwer's logic was three-valued and moreover that this led to it being inconsistent, Valerii Glivenko^[22] in 1928 set out to prove them wrong by formal means.
He gave the following list of axioms for intuitionistic propositional logic:

\[ \begin{align*} \tag{1} p &\rightarrow p\\ \tag{2} (p \rightarrow q) &\rightarrow((q \rightarrow r) \rightarrow(p \rightarrow r))\\ \tag{3} (p \wedge q) &\rightarrow p\\ \tag{4} (p \wedge q) &\rightarrow q\\ \tag{5} (r \rightarrow p) &\rightarrow((r \rightarrow q) \rightarrow(r \rightarrow(p \wedge q)))\\ \tag{6} p &\rightarrow(p \vee q)\\ \tag{7} q &\rightarrow(p \vee q)\\ \tag{8} \label{GlivAxiom8} (p \rightarrow r) &\rightarrow ((q \rightarrow r) \rightarrow((p \vee q) \rightarrow r))\\ \tag{9} (p \rightarrow q) &\rightarrow((p \rightarrow \neg q) \rightarrow \neg p) \end{align*} \]

From these axioms, he then proved

\[ \begin{align*} \tag{1} & \neg \neg(p \vee \neg p)\\ \tag{2} \neg \neg \neg p & \rightarrow \neg p\\ \tag{3} ((\neg p \vee p) \rightarrow \neg q) & \rightarrow \neg q \end{align*} \]

The first two had already informally been argued for by Brouwer (1908C, 1925E); the third was new. Now suppose that in intuitionistic logic, a proposition may be true \((p\) holds), false \((\neg p\) holds), or have a third truth value \((p'\) holds). Clearly, \(p \rightarrow \neg p'\) and \(\neg p \rightarrow \neg p'\), and therefore \((\neg p \vee p) \rightarrow \neg p'\); but then, by the third of the lemmata above and axiom \ref{GlivAxiom8} in the list, \(\neg p'\). As \(p\) is arbitrary, this means no proposition has the third truth value. (In 1932, Gödel would show that, more generally, intuitionistic propositional logic is no \(n\)-valued logic for any natural number \(n\); see Section 1 of the supplementary document The Turn to Heyting's Formalized Logic and Arithmetic.)

Like Kolmogorov in 1925 and, as we will see, Heyting in 1930, Glivenko provided no detailed explanation of the intuitionistic validity of these axioms. Glivenko immediately continued this line of work with a second short paper, Glivenko 1929, in which he showed, for a richer system of intuitionistic propositional logic:

1. If \(p\) is provable in classical propositional logic, then \(\neg \neg p\) is provable in intuitionistic propositional logic;
2. If \(\neg p\) is provable in classical propositional logic, then it is also provable in intuitionistic propositional logic.

The first theorem is not a translation in the usual sense (as Kolmogorov's is), as it does not translate subformulas of \(p\); but it is strong enough to show that the classical and intuitionistic systems in question are equiconsistent. The system of intuitionistic propositional logic is richer than in Glivenko's previous paper, because to its axioms have now been added the following four:

\[ \begin{align*} \tag{A} (p \rightarrow(q \rightarrow r)) & \rightarrow (q \rightarrow(p \rightarrow r))\\ \tag{B} (p \rightarrow(p \rightarrow r)) & \rightarrow (p \rightarrow r)\\ \tag{C} p & \rightarrow(q \rightarrow p)\\ \tag{D} \neg q & \rightarrow(q \rightarrow p) \end{align*} \]

Interestingly, Glivenko mentions in his paper that “It is Mr. A. Heyting who first made me see the appropriateness of the two axioms C and D in the Brouwerian logic” (Mancosu 1998: 304–305n.3). The two had come into correspondence when, upon the publication of Glivenko 1928, Heyting sent Glivenko a letter (Troelstra 1990: 11). Kolmogorov in 1925 had explicitly rejected Ex Falso for having no “intuitive foundation”. From Glivenko's letter to Heyting of October 13, 1928, we know that Glivenko was aware of this (Troelstra 1990: 12). In his paper, however, which he finished later, he does not mention Kolmogorov at all.
Instead, he makes the remark on Heyting just quoted and then justifies D by saying that it is a consequence of the principle \((p \vee \neg q) \rightarrow (q \rightarrow p)\), the admissibility of which he considers “quite evident”.^[23] From a Brouwerian point of view, however, the principle is as objectionable as Ex Falso.^[24] It is worth noting that, when Heyting gave his justification for Ex Falso in Heyting 1956: 102, he did not appeal to the principle Glivenko had used (nor did Kolmogorov in 1932). From Glivenko's letter of October 18, 1928, one gets the impression that this principle had not been the argument Heyting had actually given to convince him:

I am now convinced by your reasons that intuitionistic mathematics need not reject that axiom, so that all considerations against that axiom might lead beyond the limits of our present subject matter. (Troelstra 1990: 12; translation mine)

Heyting had informed Glivenko of the planned publication of his (revised and translated) prize essay from 1928 in the Mathematische Annalen. On October 30, 1928, Glivenko asked Heyting if he also was going to include the result that if \(p\) is provable in classical propositional logic, then \(\neg\neg p\) is provable in intuitionistic propositional logic; for if he did, then there would be no point for Glivenko in publishing his own manuscript. Two weeks later, Glivenko had changed his mind, writing to Heyting on November 13 that, even though this result “is but an almost trivial remark”, “its rigorous demonstration is a bit long” and he wanted to publish it independently of Heyting's paper. Indeed, Glivenko's paper was published first, and in it the publication of Heyting's formalization was announced; and when Heyting published his paper in 1930, he included a reference to Glivenko 1929, stating its two theorems, and he also acknowledged the use of results from Glivenko 1928. Heyting's note “On intuitionistic logic”, also from 1930, begins with a reference to Glivenko's “two excellent articles” from 1928 and 1929.

4.4 Heyting 1930

Heyting's (partial) formalization of intuitionistic logic and mathematics in Heyting 1930, Heyting 1930A, and Heyting 1930B, is perhaps, as far as the parts on logic are concerned, the most influential intuitionistic publication ever, together with his book Intuitionism: An Introduction from 1956. Heyting's formalization comprised intuitionistic propositional and predicate logic, arithmetic, and analysis, all together in one big system (with only variables for arbitrary objects). The part concerned with analysis was, not only in its intended interpretation (involving choice sequences) but also formally, no subsystem of its classical counterpart; this explains why it sparked no general interest at the time. (A consequence we noted above is that Brouwer's strong counterexamples never affected the debate.) Therefore this part of Heyting's formalization was left out of consideration by the other participants in the Foundational Debate.^[25] This was different for the parts concerned with logic and arithmetic. Formally speaking and disregarding their intended interpretations, from these one could distill subsystems of their classical counterparts, from which only PEM (or double negation elimination) is missing.
No doubt this encouraged many to accord to these systems a definitive character, which, as Heyting had remarked at the beginning of his paper, on the intuitionistic conception of logic they cannot have:

Intuitionistic mathematics is a mental activity [“Denktätigkeit”], and for it every language, including the formalistic one, is only a tool for communication. It is in principle impossible to set up a system of formulas that would be equivalent to intuitionistic mathematics, for the possibilities of thought cannot be reduced to a finite number of rules set up in advance. Because of this, the attempt to reproduce the most important parts of formal language is justified exclusively by the greater conciseness and determinateness of the latter vis-à-vis ordinary language; and these are properties that facilitate the penetration into the intuitionistic concepts and the use of these concepts in research. (Heyting 1930: 42 [Mancosu 1998: 311])

Heyting himself, however, wrote some five decades later:

I regret that my name is known to-day mainly in connection with these papers [Heyting 1930, 1930A, 1930B], which were very imperfect and contained many mistakes. They were of little help in the struggle to which I devoted my life, namely a better understanding and appreciation of Brouwer's ideas. They diverted the attention from the underlying ideas to the formal system itself. (Heyting 1978: 15)

The fear that the attention might be thus diverted had indeed been expressed in the first of the three papers themselves:

Section 4 [on negation] departs substantially from classical logic. Here I could not avoid giving the impression that the differences that come to the fore in this section constitute the most important point of conflict between intuitionists and formalists (a claim that is already refuted by the remark made at the beginning [quoted above]); this impression arises because the formalism is unfit to express the more fundamental points of conflict. (Heyting 1930: 44 [Mancosu 1998: 313])

For the full system, including predicate logic and analysis, the reader is referred to Heyting's original papers. Heyting's axioms for intuitionistic propositional logic were:

\[ \begin{align*} \tag{1} a &\rightarrow(a \wedge a)\\ \tag{2} (a \wedge b) &\rightarrow(b \wedge a)\\ \tag{3} (a \rightarrow b) &\rightarrow((a \wedge c)\rightarrow(b \wedge c))\\ \tag{4} ((a \rightarrow b) \wedge (b \rightarrow c)) &\rightarrow(a \rightarrow c)\\ \tag{5} \label{HeytingAxiom5} b &\rightarrow(a \rightarrow b)\\ \tag{6} (a \wedge(a \rightarrow b)) &\rightarrow b\\ \tag{7} a &\rightarrow(a \vee b)\\ \tag{8} (a \vee b) &\rightarrow(b \vee a)\\ \tag{9} ((a \rightarrow c) \wedge(b \rightarrow c)) &\rightarrow((a \vee b) \rightarrow c)\\ \tag{10} \label{HeytingAxiom10} \neg a &\rightarrow(a \rightarrow b)\\ \tag{11} ((a \rightarrow b) \wedge(a \rightarrow \neg b)) &\rightarrow \neg a \end{align*} \]

In a letter to Oskar Becker, Heyting described the approach used to obtain these axioms, as well as those for predicate logic, as follows:

I sifted the axioms and theorems of Principia Mathematica and, on the basis of those that were found to be admissible, looked for a system of independent axioms. Given the relative completeness of Principia, this to my mind ensures the completeness of my system in the best possible way. Indeed, as a matter of principle, it is impossible to be certain that one has captured all admissible modes of inference in one formal system.
(Heyting to Becker, September 23, 1933 (draft) [van Atten 2005: 129], original italics, translation mine)

As Heyting emphasizes here, the theorems of Principia Mathematica also had to be looked at, for a theorem might be intuitionistically acceptable even when a classical proof given for it is not.^[26] It is worth noting that Heyting used this method of crossing out, as also Kolmogorov had, instead of determining the logic directly from general considerations on mathematical constructions in Brouwer’s sense. (To some extent, Kreisel tried to do that systematically in the 1960s.) In his dissertation on the intuitionistic axiomatization of projective geometry, Heyting had already gained experience with developing an intuitionistic system by taking a classical axiomatic system as guideline and then adjusting it.^[27] In Mints 2006 (section 2) it has been observed that Russell 1903 (section 18) anticipated intuitionistic propositional logic by identifying Peirce’s law and using it to separate out the principles that imply PEM. It seems that Heyting did not realize this at the time; among the references given in Heyting 1930, Russell’s book does not appear.

Heyting shows the independence of his axioms using a method given by Bernays (1926); this use of a non-intended interpretation for metamathematical purposes Heyting accepted without hesitation, but he remarked that such metamathematics is “less important for us [intuitionists]” than the approach where all formulas are considered to be meaningful propositions (Heyting 1930: 43 [Mancosu 1998]). Heyting states Glivenko’s two theorems from 1929, without giving proofs. Unlike Kolmogorov, but like Glivenko (who had been convinced by Heyting), Heyting accepted Ex Falso (axiom \ref{HeytingAxiom10} above). He was somewhat more elaborate on this point than they had been:

The formula \(a \rightarrow b\) generally means: “If \(a\) is correct, then \(b\) is correct too”. This proposition is meaningful if \(a\) and \(b\) are constant propositions about whose correctness nothing need be known … The case is conceivable that after the statement \(a \rightarrow b\) has been proved in the sense specified, it turns out that \(b\) is always correct. Once accepted, the formula \(a \rightarrow b\) then has to remain correct; that is, we must attribute a meaning to the sign \(\rightarrow\) such that \(a \rightarrow b\) still holds. The same can be remarked in the case where it later turns out that \(a\) is always false. For these reasons, the formulas [\ref{HeytingAxiom5}] and [\ref{HeytingAxiom10}] are accepted. (Heyting 1930: 44 [Mancosu 1998: 313])

The argument is, however, incomplete. It is uncontroversial that, once \(a \rightarrow b\) has been proved, it should remain so when afterward \(\neg a\) is proved. But why should \(a \rightarrow b\), if it has not yet been proved, become provable just by establishing \(\neg a\)? (Johansson asked this in a letter to Heyting in 1935;^[28] see Section 1 of the supplementary document Objections to the Proof Interpretation). Clearly, then, further work needed to be done on the explanation of intuitionistic logic. After the publication of Heyting’s series of papers, three roads could be taken, and indeed were (cf. Posy 1992):

1. to explicate and develop the meaning explanation that had given rise to Heyting’s system;
2. to engage in metamathematical study of the formal systems distilled from Heyting’s system;
3. to find alternative motivations for (parts of) Heyting’s system that are independent from the intuitionists’, yet also in some sense constructive (e.g., Lorenzen’s dialogue semantics).

By and large, these three roads lead to very different areas, with correspondingly divergent histories, of which no unified account can be expected. (However, in the Dialectica Interpretation, as proposed and understood by Gödel [1958, 1970, 1972], they came close to one another.) Following the main theme of the present account, our topic will remain the intuitionistic meaning explanation of logic. But a number of early highlights of the formal turn are presented in this supplementary document: The Turn to Heyting’s Formalized Logic and Arithmetic

5. The Proof Interpretation Made Explicit

5.1 Heyting 1930, 1931

Heyting told van Dalen that he had the notion of (intuitionistic) construction in mind to guide him in devising his formalization of intuitionistic logic and mathematics in 1927. In the published version of his formalization, he did not elaborate much on the meaning of the connectives; all he explained there about the general meaning of \(a \rightarrow b\) was that “If \(a\) is correct, then \(b\) is correct too” (Heyting 1930: 44 [Mancosu 1998: 313]). Correspondence between Heyting and Freudenthal in 1930 shows that Heyting up till then did not have a more refined explanation at hand; we will come back to this later in this section.

Heyting began to expound on the meaning of the connectives in print in two papers written in 1930. The first, Heyting 1930C, was published that same year; the second, the text of his lecture at Königsberg in September 1930, was published the next year (Heyting 1931). The first paper was a reaction to Barzin and Errera’s claim that Brouwer’s logic is three-valued (Barzin & Errera 1927). The relevant points for the explanation of the meaning of the connectives in Heyting’s paper are the following. First, an explanation is given of assertion:

Here, then, is the Brouwerian assertion of \(p\): It is known how to prove \(p\). We will denote this by \(\vdash p\). The words “to prove” must be taken in the sense of “to prove by construction”. [original italics] (Heyting 1930C: 959 [Mancosu 1998: 307])

And then of intuitionistic negation:

\(\vdash \neg p\) will mean: “It is known how to reduce \(p\) to a contradiction”. (Heyting 1930C: 960 [Mancosu 1998: 307])

Heyting goes on to explain that, although on these explanations there is a third case beside \(\vdash p\) and \(\vdash \neg p\), namely the case where one knows neither how to prove \(p\) nor how to refute it, this does not mean there is a third truth value:

This case could be denoted by \(p'\), but it must be realized that \(p'\) will hardly ever be a definitive statement, since it is necessary to take into account the possibility that the proof of either \(p\) or \(\neg p\) might one day succeed. If one does not wish to risk having to retract what one has said, in the case \(p'\) one should not state anything at all. (Heyting 1930C: 960 [Mancosu 1998: 307])

This refutes the contention of Barzin and Errera. Note that these points are all in Brouwer’s writings, too. Indeed, Heyting (1932: 121) objects to Barzin and Errera’s term “Heyting’s logic”, saying that “all the fundamental ideas of that logic come from Brouwer” (translation mine). But Heyting’s papers will have found a wider audience than Brouwer’s.
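To connect these explanations with a modern formalism (an illustration of mine, not something in Heyting’s note), the reading of \(\vdash \neg p\) as the possession of a construction that reduces \(p\) to a contradiction can be displayed as an explicit proof term. A minimal Lean 4 sketch in the same spirit:

```lean
-- ⊢ ¬(a ∧ ¬a), read with Heyting 1930C: we possess a construction that,
-- given any witness of a ∧ ¬a, produces a contradiction, namely by applying
-- its second component (a proof of ¬a) to its first (a proof of a).
example (a : Prop) : ¬(a ∧ ¬a) :=
  fun h => h.2 h.1
```

The “third case” \(p'\) then simply corresponds to our not yet possessing a term of either type, which, as Heyting stresses, is not a further truth value.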
Brouwer, in turn, was very positive about the paper Heyting 1930C, and wrote to the editor of the journal in which it appeared (Brouwer to de Donder, October 9, 1930):

While preparing a note on intuitionism for the Bulletin de l’Académie Royale de Belgique,^[29] I was pleasantly surprised to see the publication of a note by my student Mr. Heyting, which elucidates in a magisterial manner the points that I wanted to shed light upon myself. I believe that after Heyting’s note little remains to be said. (van Dalen 2005: 676)

Heyting also proposes a provability operator \(+\), where \(+p\) means “\(p\) is provable”. The distinction between \(p\) and \(+p\) is relevant if one believes that (at least some) propositions are true or false independently of our mathematical activity. In that case one can go on and develop a provability logic, as for example Gödel did (see Section 1 of the supplementary document The Turn to Heyting’s Formalized Logic and Arithmetic). That is not the intuitionistic conception, and Heyting remarks that, if the fulfillment of \(p\) requires a construction, then there is no difference between \(p\) and \(+p\). He adds that, on the intuitionistic explanation of negation, there is indeed no difference between \(\neg p\) and \(+\neg p\), as a proof of \(\neg p\) is defined as a construction that reduces \(p\) to a contradiction. But Heyting does not generalize this remark to all of intuitionistic logic. The final section of the paper is a further discussion of the logic of the provability operator, in particular its interaction with negation (e.g., \(\vdash \neg +p\) is the assertion that \(p\) is unprovable). But Heyting ends by saying that, as the intuitionists’ task is the reconstruction of all mathematics, while at the same time no examples of propositions have been found so far for which this provability operator would be necessary to express their status (e.g., to express absolute undecidability), it cannot be asked of intuitionists that they develop this logic (Heyting 1930C: 963 [Mancosu 1998: 309–310]).

The Königsberg lecture, given in 1930 and published in 1931, specifies the meanings of \(p, \neg p\), and \(p \vee q\). This time Heyting makes an explicit connection to phenomenology:

We here distinguish between propositions [Aussagen] and assertions [Sätze]. An assertion is the affirmation of a proposition. A mathematical proposition expresses a certain expectation. For example, the proposition, “Euler’s constant \(C\) is rational”, expresses the expectation that we could find two integers \(a\) and \(b\) such that \(C = a / b\). Perhaps the word “intention”, coined by the phenomenologists, expresses even better what is meant here … The affirmation of a proposition means the fulfillment of an intention. (Heyting 1931: 113 [Benacerraf & Putnam 1983])

Compared to the earlier paper written in 1930, the point about the provability operator is amplified:

The distinction between \(p\) and \(+p\) vanishes as soon as a construction is intended in \(p\) itself, for the possibility of a construction can be proved only by its actual execution. If we limit ourselves to those propositions which require a construction, the logical function of provability does not arise at all. We can impose this restriction by treating only propositions of the form “\(p\) is provable”, or, to put it another way, by regarding every intention as having the intention of a construction for its fulfillment added to it.
It is in this sense that intuitionistic logic, insofar as it has been developed up to now without using the function +, must be understood. (Heyting 1931: 115 [Benacerraf & Putnam 1983: 60; translation modified])

The explanation of disjunction in the Königsberg lecture is:

“\(p \vee q\)” signifies that intention which is fulfilled if and only if at least one of the intentions \(p\) and \(q\) is fulfilled. (Heyting 1931: 114 [Benacerraf & Putnam 1983: 59])

And of negation:

Becker, following Husserl, has described its meaning very clearly. For him negation is something thoroughly positive, viz., the intention of a contradiction contained in the original intention. The proposition “\(C\) is not rational”, therefore, signifies the expectation that one can derive a contradiction from the assumption [Annahme] that \(C\) is rational. It is important to note that the negation of a proposition always refers to a proof procedure which leads to the contradiction, even if the original proposition mentions no proof procedure. (Heyting 1931: 113 [Benacerraf & Putnam 1983: 59])

Heyting pointed out that these explanations for disjunction and negation, taken together, are an immediate argument against the acceptability of PEM, for which a general method would be needed that, applied to any given proposition \(p\), produces either a proof of \(p\) or a proof of \(\neg p\). What Heyting did not do here was to generalize this explanation of negation to one for implication. Also, note that the procedure does not operate on proofs of \(p\), but starts from merely the assumption that \(p\), which in general gives less information. Both points were taken care of shortly afterward. In a letter to Freudenthal dated October 25, 1930, shortly after the Königsberg lecture, Heyting wrote:

From your remarks it has become clear to me that the simple explanation of \(a \rightarrow b\) by “When I think \(a\), I must think \(b\)” is untenable; this idea is in any case too indeterminate to be able to serve as the foundation for a logic. But also your formulation: “When \(a\) has been proved, so has \(b\)”, is not wholly satisfactory to me; when I ask myself what you may mean by that, I believe that also \(a \rightarrow b\), like the negation, should refer to a proof procedure: “I possess a construction that derives from every proof of \(a\) a proof of \(b\)”. In the following, I will keep to this interpretation. There is therefore no difference between \(a \rightarrow b\) and \(+a \rightarrow +b\). (Troelstra 1983: 206–207; translation mine)

This explanation of implication, which is the one that became standard, would be introduced in print only in Heyting 1934: 14; in his paper 1932C, Heyting used the explanation given in Kolmogorov 1932 instead (see below). Neither of these two papers by Heyting contained an argument for the validity of Ex Falso.

5.2 Influences on Heyting

A number of influences (or possible influences) on Heyting’s arriving at the Proof Interpretation can be suggested. The following are publications Heyting had all seen by 1927, for he refers to them in his dissertation (1925: 93–94):

• Brouwer 1907 (Ch. 3) and Brouwer 1908C, which forcefully made the point that intuitionistic logic is concerned with the preservation of constructibility. (See section 2.1 above.)
• Brouwer’s proof of the bar theorem from 1924, handed in on March 29, 1924 (Brouwer 1924D1: 189 [Mancosu 1998: 36]), and perhaps also the later version from 1927, of which the manuscript was handed in on April 28, 1926 (Brouwer 1927B: 75 [van Heijenoort 1967: 446]); both show how to operate mathematically on demonstrations as objects. (See section 3.1.2 above.)

• Weyl 1921, where universal and existential theorems are considered to be not genuine judgements at all, but “Urteilsanweisungen” (judgement instructions) and “Urteilsabstrakte” (judgement abstracts), thus emphasizing that such theorems for their justification need to be backed up by a construction method. Brouwer, in a note on Weyl’s paper, agreed, saying “This is only a matter of name and certainly does not reflect any lacking insight on my part” (Mancosu 1998: 122).

A further likely influence is Brouwer’s unpublished elucidation of the virtual ordering axioms (see section 3.1.3 above). Dirk van Dalen (personal communication) suspects that, although Heyting was probably not present at this lecture course, he heard Brouwer make a similar comment on another occasion (for example during the years 1922–1925, when Heyting was working on his dissertation under Brouwer, for that work also considers intuitionistic orderings).

5.3 Kolmogorov 1932 and Heyting 1934

In 1932, Kolmogorov presented a logic of problems and their solutions, and pointed out that the logic this explanation validates is formally equivalent to the intuitionistic propositional and predicate logic presented by Heyting in 1930. Moreover, he suggests that this provides a better interpretation than Heyting’s. Kolmogorov’s idea is this:

If \(a\) and \(b\) are two problems, then \(a \wedge b\) designates the problem “to solve both problems \(a\) and \(b\)”, while \(a \vee b\) designates the problem “to solve at least one of the problems \(a\) and \(b\)”. Furthermore, \(a \supset b\) is the problem “to solve \(b\) provided that the solution for \(a\) is given” or, equivalently, “to reduce the solution of \(b\) to the solution of \(a\)” \(\ldots \neg a\) designates the problem “to obtain a contradiction provided that the solution of \(a\) is given” \(\ldots(x)a(x)\) stands in general for the problem “to give a general method for the solution of \(a(x)\) for every single value of \(x\)”. (Kolmogorov 1932: 59 [Mancosu 1998: 329])

He then lists Heyting’s axioms for propositional logic (with Heyting’s numbering) and, by discussing an example, makes it clear that these all hold when interpreted as statements about problems and their solutions. He also points out that \(a \vee \neg a\) is the problem

to give a general method that allows, for every problem \(a\), either to find a solution for \(a\), or to infer a contradiction from the existence of a solution for \(a\)! In particular, if the problem \(a\) consists in the proof of a proposition, then one must possess a general method either to prove or to reduce to a contradiction any proposition. (Kolmogorov 1932: 63 [Mancosu 1998: 332]).

In the second part of his paper, Kolmogorov argues that, given the epistemological tenets of intuitionism, intuitionistic logic should be replaced by the calculus of problems, for its objects are in reality not theoretical propositions but rather problems.
(Kolmogorov 1932: 58 [Mancosu 1998: 328])

That he considers his interpretation an alternative to Heyting’s, and a preferable one, is again emphasized in a note added in proof:

This interpretation of intuitionistic logic is closely connected with the ideas Mr. Heyting has developed in the last volume of Erkenntnis 2, 1931: 106 [Heyting 1931]; yet in Heyting a clear distinction between propositions and problems is missing. (Kolmogorov 1932: 65 [Mancosu 1998: 334])

But it is not at all clear that Heyting would want to make that distinction. If the notion of proposition is understood in such a way that a proposition is true or false independently of our knowledge of this fact, then Heyting would readily agree with Kolmogorov that a proposition is different from a problem; but as soon as one adopts the view that propositions express intentions that are fulfilled (and the proposition made true) or disappointed (and the proposition made false) by our mathematical constructions, which is the view that Heyting actually held, then there would seem to be no essential difference between propositions and problems. Kolmogorov himself had already indicated that a problem may consist in finding the proof of a proposition; exploiting this, one can argue that the following two notions of proposition coincide:

1. propositions express intentions towards constructions
2. propositions pose problems which are solved by carrying out constructions

The basic idea is that a proposition in sense 1 gives rise to the problem of finding a construction that fulfills the expressed intention, and that a solution to a problem posed in a proposition in sense 2 also serves to fulfill the intention towards constructions that solve that problem; this is made fully explicit in a little argument due to Martin-Löf, given in detail in Sundholm 1983.

In a letter to Heyting of October 12, 1931, Kolmogorov in effect agrees that the difference between Heyting and him is mainly a terminological matter (Troelstra 1990: 15). Heyting later claimed that Kolmogorov’s meaning explanation and his own amounted to the same (Heyting 1958C: 107). By 1937, Kolmogorov seems to have come to believe the same, as in a review in the Zentralblatt of an exchange between Freudenthal and Heyting (discussed in Section 1 of the supplementary document Objections to the Proof Interpretation), he consistently speaks of “intention or problem” (Kolmogorov 1937). In that exchange itself, Freudenthal (1937: 114) had said that between Heyting’s and Kolmogorov’s explanations there was “no essential difference”. Finally, Oskar Becker, in a letter to Heyting of September 1934, had remarked that Heyting’s interpretation is a generalization of Kolmogorov’s, as a “problem” and its “solution” are special cases of an intention and its fulfillment. “Intuitionistic logic is therefore a ‘calculus of intentions’”.^[30]

However, a complication for the identification of Heyting’s and Kolmogorov’s explanations of logic is introduced by Kolmogorov’s also accepting, in a particular case, solutions that do not consist in carrying out a concrete construction. Kolmogorov said that “As soon as \(\neg a\) is solved, then the solution of \(a\) is impossible and the problem \(a \rightarrow b\) is without content” (Kolmogorov 1932: 62 [Mancosu 1998: 331]), and proposed that “The proof that a problem is without content [owing to an impossible assumption] will always be considered as its solution” (Kolmogorov 1932: 59 [Mancosu 1998: 329]).
Taken together, this yields a justification of Ex Falso, \(\neg a \rightarrow (a \rightarrow b)\). It seems not altogether unreasonable to extend the meaning of the term “solution” this way, for, just like a concrete solution, an impossibility proof also provides what might be called “epistemic closure”: like a concrete solution, it provides a completely convincing reason to stop working on a certain problem. (This kind of “higher-order” solution is also familiar from Hilbert’s Program, e.g., Hilbert 1900: 51.) Note that this justification of Ex Falso makes no attempt to describe a counterfactual mathematical construction process; thus, Kolmogorov’s justification in 1932 is not really incompatible with the ground for his rejection of Ex Falso in 1925, namely, that one cannot constructively assert consequences of something impossible. Rather, the solution from 1932 introduces a stipulation to achieve completion of the logical theory for its own sake. On the other hand, although Kolmogorov’s stipulation is neither unreasonable nor unmotivated, on Brouwer’s descriptive conception of logic there is of course no place for stipulation. For this reason, “Proof Interpretation” seems to be a more appropriate name for an explanation of Brouwerian logic than “BHK Interpretation”.

On Heyting’s explanation, however, a justification of Ex Falso parallel to Kolmogorov’s would seem to be impossible: while a problem may find a “higher-order” solution when it is shown that a solution is impossible, it makes no sense to say that an intention finds “higher-order” fulfillment when it is shown that it cannot be fulfilled. The notion of a solution seems to permit a reasonable extension that the notion of fulfillment does not.

In his book from 1934, Heyting explains Ex Falso in Kolmogorov’s terms, not his own. After stating the axiom \(\neg a \supset (a \supset b)\), he writes:

It is appropriate to interpret the notion of “reducing” in such a way, that the proof of the impossibility of solving \(a\) at the same time reduces the solution of any problem whatsoever to that of \(a\). (Heyting 1934: 15; translation mine)

Clearly there is a difference between Kolmogorov’s own explanation and Heyting’s explanation in Kolmogorov’s terms. Where Heyting says that a proof of \(\neg a\) establishes a reduction of the solution of any problem to that of \(a\), Kolmogorov had said that it established that the problem of reducing the solution of any problem to that of \(a\) has become without content. One has the impression that Heyting in his explanation of Ex Falso tries to approximate as closely as possible the explanation for ordinary implications in terms of a concrete constructive connection between antecedent and consequent; this is even clearer in the explanation he would give of Ex Falso in 1956 (see section 5.4 below). (Note that neither Heyting nor Kolmogorov ever justified Ex Falso by giving the traditional argument (based on the disjunctive syllogism) also stated in Glivenko’s paper from 1929; see section 4.3 above.) More generally, the explanation of logic in Heyting 1934 is for the most part given in Kolmogorov’s style, and not in Heyting’s own terms of intentions and their fulfillment. (The latter is only mentioned for its explanation of the implication (Heyting 1934: 14).)
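For comparison, and purely as a modern aside that is not in these authors, present-day type-theoretic proof assistants validate the formula \(\neg a \rightarrow (a \rightarrow b)\) through the elimination rule for the empty type, a rule adopted as a primitive of the calculus rather than obtained from a concrete reduction. A minimal Lean 4 sketch:

```lean
-- Ex Falso in Lean 4: from the proofs of ¬a and a we obtain a proof of False,
-- and False.elim then yields any proposition b. Since False has no
-- constructors, its elimination rule is adopted as a primitive of the calculus.
example (a b : Prop) : ¬a → (a → b) :=
  fun hna ha => (hna ha).elim
```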
Perhaps the reason why the explanation of logic in Heyting 1934 is for the most part given in Kolmogorov’s terms is that Heyting (1934: 14) agrees with Kolmogorov (1932: 58) that the interpretation in terms of problems and solutions provides a useful interpretation of Heyting’s formal system also for non-intuitionists (while for intuitionists they come to the same thing). In his short note 1932C, titled “The application of intuitionistic logic to the definition of completeness of a logical calculus”, Heyting uses Kolmogorov’s interpretation instead of his own. Given the subject matter, that is what one might expect.

5.4 Heyting 1956

In his influential book Intuitionism. An introduction from 1956, Heyting explains the logical connectives as follows (97–98, 102):

1. “A mathematical proposition \(p\) always demands a mathematical construction with certain given properties; it can be asserted as soon as such a construction has been carried out”.
2. “\(p \wedge q\) can be asserted if and only if both \(p\) and \(q\) can be asserted”.
3. “\(p \vee q\) can be asserted if and only if at least one of the propositions \(p\) and \(q\) can be asserted”.
4. “\(\neg p\) can be asserted if and only if we possess a construction which from the supposition that a construction \(p\) were carried out, leads to a contradiction”.
5. “The implication \(p \rightarrow q\) can be asserted, if and only if we possess a construction \(r\), which, joined to any construction proving \(p\) (supposing that the latter be effected), would automatically effect a construction proving \(q\)”.
6. “\(\vdash(\forall x)p(x)\) means that \(p(x)\) is true for every \(x\) in \(Q\) [over which \(x\) ranges]; in other words, we possess a general method of construction which, if any element \(a\) of \(Q\) is chosen, yields by specialization the construction \(p(a)\)”.
7. “\((\exists x)p(x)\) will be true if and only if an element \(a\) of \(Q\) for which \(p(a)\) is true has actually been constructed”.

Note that these explanations are not in terms of proof conditions, but of assertion conditions. This may make a difference in particular for the explanation of implication, where, instead of only the information under what condition something counts as a proof of \(p\), we now can also take into consideration that, by hypothesis, a concrete construction for \(p\) has been effected. As we saw in section 3.1.2, the possibility to do so is crucial for Brouwer’s proof of the Bar Theorem. In the same pages, Heyting also gave the following justification of Ex Falso:

Axiom X [\(\neg p \rightarrow(p \rightarrow q)\)] may not seem intuitively clear. As a matter of fact, it adds to the precision of the definition of implication. You remember that \(p \rightarrow q\) can be asserted if and only if we possess a construction which, joined to the construction \(p\), would prove \(q\). Now suppose that \(\vdash \neg p\), that is, we have deduced a contradiction from the supposition that \(p\) were carried out. Then, in a sense, this can be considered as a construction, which, joined to a proof of \(p\) (which cannot exist) leads to a proof of \(q\). (Heyting 1956: 102)

One easily recognizes Heyting’s effort to explain Ex Falso as much as possible along the same lines as other implications, namely, by providing a concrete construction that leads from the antecedent to the consequent. In its attempt to provide, “in a sense”, a construction, the explanation is clearly not of the same kind as Kolmogorov’s stipulation from 1932.
But it does not fit into Heyting’s original interpretation of logic in terms of intentions directed at constructions and the fulfillment of such intentions either. For to fulfill an intention directed toward a particular construction we will have to exhibit that construction; we will have to exhibit a construction that transforms any proof of \(p\) into one of \(q\). But how can a construction that from the assumption \(p\) arrives at a contradiction, and therefore generally speaking not at \(q\), lead to \(q\)? It will not do to say that such a construction exists “in a sense”. A construction that is a construction “in a sense”, which Heyting helps himself to here, is no construction. Even within the intuitionistic movement, not everyone agreed with Heyting’s explanation of logic. This is discussed in a supplementary document: Objections to the Proof Interpretation Finally, this entry has not, as yet, discussed precursors to Brouwer, more intricate aspects of Brouwer’s “strong counterexamples”, the objections to the Proof Interpretation of circularity and impredicativity, or the later developments around the Proof Interpretation. These will be subject of future updates. Brouwer’s writings are referred to according to the scheme in the bibliography van Dalen 1997a; Gödel’s, according to the bibliography in Gödel 1986, Gödel 1990, Gödel 1995 (except for Gödel 1970); Heyting’s, according to the bibliography Troelstra et al. 1981 (except for Heyting 1928). • Apostel, L., 1972, “Negation: The tension between ontological positivity (negationless positivity) and anthropological negativity (positively described)”, Logique et Analyse, 15(57–58): 209–317. [Apostel 1972 available online] • Artemov, Sergei N., 2001, “Explicit provability and constructive semantics”, Bulletin of Symbolic Logic, 7(1): 1–36. doi:10.2307/2687821 • Arruda, Ayda Ignez, 1978, “Some remarks on Griss’ logic of negationless intuitionistic mathematics”, in Mathematical Logic, Proceedings of the 1st Brazilian Conference on Mathematical Logic, Campinas 1977 (Lecture Notes in Pure and Applied Mathematics 39), A.I. Arruda, N.C.A. da Costa, R. Chuaqui (eds.), 9–29. • van Atten, Mark, 2004a, “Review of Dennis E. Hesseling, Gnomes in the Fog. The Reception of Brouwer’s Intuitionism in the 1920s”, Bulletin of Symbolic Logic, 10(3): 423–427. doi:10.2307/3185194 • –––, 2004b, On Brouwer, Belmont, CA: Wadsworth. • –––, 2005, “The correspondence between Oskar Becker and Arend Heyting”, in Oskar Becker und die Philosophie der Mathematik, V. Peckhaus (ed.), München: Wilhelm Fink, 119–142. • –––, 2009, “The hypothetical judgement in the history of intuitionistic logic”, in Logic, Methodology, and Philosophy of Science 13: Proceedings of the 2007 International Congress in Beijing, C. Glymour, W. Wang, and D. Westerståhl, eds., London: King’s College Publications, 122–136. • –––, 2018, “The Creating Subject, the Brouwer-Kripke Schema, and infinite proofs”, Indagationes Mathematicae, 29: 1565–1636. doi: 10.1016/j.indag.2018.06.005 • van Atten, Mark, Göran Sundholm, Michel Bourdeau, and Vanessa van Atten, 2014, “‘Que les principes de la logique ne sont pas fiables.’ Nouvelle traduction française annotée et commentée de l’article de 1908 de L.E.J. Brouwer”, Revue d’Histoire des Sciences, 67(2): 257–281. doi:10.3917/rhs.672.0257 • van Atten, Mark, Pascal Boldini, Michel Bourdeau, and Gerhard Heinzmann (eds.), 2008, One Hundred Years of Intuitionism (1907–2007). The Cerisy Conference, Basel: Birkhäuser. 
doi:10.1007/ • van Atten, Mark and Göran Sundholm, 2017, “L.E.J. Brouwer’s ‘Unreliability of the logical principles’: A new translation, with an introduction”, History and Philosophy of Logic, 38(1): 24–47. • Barzin, M. and A. Errera, 1927, “Sur la logique de M. Brouwer”, Académie Royale de Belgique, Bulletin de la classe des sciences, 13: 56–71. • Bazhanov, Valentin A., 2003, “The Scholar and the ‘Wolfhound Era’: The Fate of Ivan E. Orlov’s Ideas in Logic, Philosophy, and Science”, Science in Context, 16(4): 535–550. doi:10.1017/ • Becker, Oskar, 1927, “Mathematische Existenz. Untersuchungen zur Logik und Ontologie mathematischer Phänomene”, Jahrbuch für Philosophie und phänomenologische Forschung, 8: 439–809. • –––, 1930, “Zur Logik der Modalitäten”, Jahrbuch für Philosophie und phänomenologische Forschung, 11: 497–548. • Benacerraf, Paul and Hilary Putnam (eds.), 1983, Philosophy of Mathematics: Selected Readings, second edition, Cambridge: Cambridge University Press. doi:10.1017/CBO9781139171519 • Bergson, Henri, 1907, L’Évolution Créatrice, Paris: Félix Alcan. • Bernays, Paul, 1926, “Axiomatische Untersuchung des Aussagen-Kalküls der ‘Principia Mathematica’”, Mathematische Zeitschrift, 25: 305–320. doi:10.1007/BF01283841 • –––, 1967, “Hilbert, David”, in The Encyclopedia of Philosophy, vol. 3, Paul Edwards (ed.), New York: Macmillan. • Beth, Evert W., 1956, “Semantic Construction of Intuitionistic Logic”, Mededelingen der Koninklijke Nederlandse Akademie van Wetenschappen. Afdeling Letterkunde, 19(11): 357–388. • –––, 1966, The Foundations of Mathematics: A Study in the Philosophy of Science, second revised edition, New York: Harper & Row. • Blaschek, Günther, 1994, Object-Oriented Programming: with Prototypes, Berlin: Springer. doi:10.1007/978-3-642-78077-6 • Borwein, Jonathan M., 1998, “Brouwer-Heyting Sequences Converge”, Mathematical Intelligencer, 20(1): 14–15. doi:10.1007/BF03024393 • Brouwer, L.E.J., 1907, Over de Grondslagen der Wiskunde (On the Foundations of Mathematics), Ph.D. thesis, Universiteit van Amsterdam. English translation in Brouwer 1975: 11–101. • –––, 1908C, “De onbetrouwbaarheid der logische principes” (The Unreliability of the Logical Principles), Tijdschrift voor Wijsbegeerte, 2: 152–158. English translation in Van Atten and Sundholm 2017. An older English translation is in Brouwer 1975: 107–111. doi:10.1016/B978-0-7204-2076-0.50009-X • –––, 1918B, “Begründung der Mengenlehre unabhängig vom logischen Satz vom ausgeschlossenen Dritten. Erster Teil, Allgemeine Mengenlehre”, KNAW Verhandelingen, 5: 1–43. Also in Brouwer 1975: 150–190 (in German). doi:10.1016/B978-0-7204-2076-0.50015-5 • –––, 1919A, “Begründung der Mengenlehre unabhängig vom logischen Satz vom ausgeschlossenen Dritten. Zweiter Teil, Theorie der Punktmengen”, KNAW Verhandelingen, 7: 1–33. Also in Brouwer 1975: 191–221 (in German). doi:10.1016/B978-0-7204-2076-0.50016-7 • –––, 1919D, “Intuitionistische Mengenlehre”, Jahresbericht D.M.V., 28: 203–208. English translation in Mancosu 1998: 23–27. • –––, 1921A, “Besitzt jede reelle Zahl eine Dezimalbruchentwicklung?”, Mathematische Annalen, 83(3–4): 201–210. English translation in Mancosu 1998: 28–35. doi:10.1007/BF01458382 • –––, 1924D1, “Bewijs dat iedere volle functie gelijkmatig continu is”, KNAW verslagen, 33: 189–193. English translation in Mancosu 1998: 36–39. 
• –––, 1924N, “Über die Bedeutung des Satzes vom ausgeschlossenen Dritten in der Mathematik, insbesondere in der Funktionentheorie”, Journal für die reine und angewandte Mathematik, 154: 1–7. English translation in van Heijenoort 1967: 335–341. • –––, 1925E, “Intuitionistische Zerlegung mathematischer Grundbegriffe”, Jahresbericht D.M.V., 33: 251–256. English translation in Mancosu 1998: 287–289 (sections 2–4), 290–292 (section 1). • –––, 1926A, “Zur Begründung der intuitionistischen Mathematik, II”, Mathematische Annalen, 95: 453–472. doi:10.1007/BF01206621 • –––, 1926B2, “Intuitionistische Einführung des Dimensionsbegriffes”, Proceedings Koninklijke Akademie van Wetenschappen Amsterdam, 29: 855–873. • –––, 1927B, “Über Definitionsbereiche von Funktionen”, Mathematische Annalen, 97: 60–75. English translation of sections 1–3 in van Heijenoort 1967: 457–463. doi:10.1007/BF01447860 • –––, 1928A2, “Intuitionistische Betrachtungen über den Formalismus”, Proceedings Koninklijke Akademie van Wetenschappen Amsterdam, 31: 374–379. English translation in Mancosu 1998: 40–44. [ Brouwer 1928A2 available online] • –––, 1929A, “Mathematik, Wissenschaft und Sprache”, Monatshefte für Mathematik und Physik, 36: 153–164. English translation in Mancosu 1998: 45–53. doi:10.1007/BF02307611 • –––, 1930A, Die Struktur des Kontinuums, Wien: Komitee zur Veranstaltung von Gastvorträgen ausländischer Gelehrter der exakten Wissenschaften. English translation in Mancosu 1998: 54–63. • –––, 1933A2, “Willen, weten, spreken” (Volition, Knowledge, Language), in De Uitdrukkingswijze der Wetenschap, L.E.J. Brouwer et al., Groningen: Noordhoff, 45–63. English translation in van Stigt 1990: 418–431. Partial English translation in Brouwer 1975: 443–446. • –––, 1942A, “Zum freien Werden von Mengen und Funktionen”, Proceedings Nederlandse Akademie van Wetenschappen Amsterdam, 45: 322–323. Also Indagationes Mathematicae, 4 (1942): 107–108. • –––, 1942B, “Die repräsentierende Menge der stetigen Funktionen des Einheitskontinuums”, Proceedings Nederlandse Akademie van Wetenschappen Amsterdam, 45: 443. Also Indagationes Mathematicae, 4 (1942): 154. • –––, 1948A, “Essentieel negatieve eigenschappen” (Essentially Negative Properties), Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 51: 963–964. Also Indagationes Mathematicae, 10 (1948): 322–323. English translation in Brouwer 1975: 478–479. doi:10.1016/B978-0-7204-2076-0.50053-2 • –––, 1949A, “De non-aequivalentie van de constructieve en de negatieve orderelatie in het continuum” (The Non-Equivalence of the Constructive and the Negative Order Relation on the Continuum), Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 52:122–124. Also Indagationes Mathematicae, 11 (1949): 37–39. English translation in Brouwer 1975: 495–496. doi:10.1016/ • –––, 1949B, “Contradictoriteit der elementaire meetkunde” (Contradictority of Elementary Geometry), Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 52: 315–316. Also Indagationes Mathematicae, 11 (1949): 89–90. English translation in Brouwer 1975: 497–498. doi:10.1016/B978-0-7204-2076-0.50056-8 • –––, 1949C, “Consciousness, Philosophy and Mathematics”, Proceedings of the 10th International Congress of Philosophy, Amsterdam 1948, Amsterdam: North-Holland, 3: 1235–1249. • –––, 1952B, “Historical Background, Principles and Methods of Intuitionism”, South African Journal of Science, 49: 139–146. • –––, 1954A, “Points and Spaces”, Canadian Journal of Mathematics, 6: 1–17. 
doi:10.4153/CJM-1954-001-9 • –––, 1954F, “An Example of Contradictority in Classical Theory of Functions”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 57: 204–205. Also Indagationes Mathematicae , 16 (1954): 204–205. doi:10.1016/S1385-7258(54)50030-2 • –––, 1955, “The Effect of Intuitionism on Classical Algebra of Logic”, Proceedings of the Royal Irish Academy, 57: 113–116. • –––, 1975, Collected Works. I: Philosophy and Foundations of Mathematics, Arend Heyting (ed.), Amsterdam: North-Holland. • –––, 1977, Collected Works. II: Geometry, Analysis, Topology and Mechanics, H. Freudenthal (ed.), Amsterdam: North-Holland. • –––, 1981A, Brouwer’s Cambridge Lectures on Intuitionism, Cambridge: Cambridge University Press. • Brouwer, L.E.J., Fred. van Eeden, J. van Ginneken, and S.J.G. Mannoury, 1937, “Signifiese dialogen”, Synthese, 2: 168–174, 261–268, 316–324. doi:10.1007/BF00880415 doi:10.1007/BF00880431 • –––, 1939, Signifische dialogen, Utrecht: Erven J. Bijleveld. Partial English translation in Brouwer 1975: pp. 447–452. • Chronique générale, 1949. “Chronique générale”, Revue Philosophique de Louvain, 47(15): 432–436. [Chronique générale 1949 available online] • Colacito, Almudena, Dick de Jongh, and Ana Lucia Vargas, 2017, “Subminimal Negation”, Soft Computing, 2(1): 165–174. doi:10.1007/s00500-016-2391-8 • Colson, Loïc, and David Michel, 2007, “Pedagogical Natural Deduction Systems: the Propositional Case”, Journal of Universal Computer Science, 13(10): 1396–1410. [Colson & Michel 2007 available • –––, 2008, “Pedagogical Second-order Propositional Calculi”, Journal of Logic and Computation, 18(4): 669–695. doi:10.1093/logcom/exn001 • –––, 2009, “Pedagogical Second-order \(\lambda\)-calculus”, Theoretical Computer Science, 410(42): 4190–4203. doi:10.1016/j.tcs.2009.04.020 • van Dalen, Dirk, 1973, “Lectures on intuitionism”, in Mathias & Rodgers 1973: 1–94. doi:10.1007/BFb0066771 • –––, 1997, “A Bibliography of L.E.J. Brouwer”, Utrecht Logic Group Preprint Series, no. 175 [van Dalen 1997 updated preprint available from Universiteit Utrecht]. Updated version in van Atten et al. 2008: 343–390. doi:10.1007/978-3-7643-8653-5_22 • –––, 1999, Mystic, Geometer, and Intuitionist. The Life of L.E.J. Brouwer. 1: The Dawning Revolution, Oxford: Clarendon Press. • –––, 2001a, L.E.J. Brouwer 1881–1966. Een Biografie. Het Heldere Licht van de Wiskunde, Amsterdam: Bert Bakker. • –––, 2001b, L.E.J. Brouwer en de Grondslagen van de Wiskunde, Utrecht: Epsilon. • –––, 2004, “Kolmogorov and Brouwer on constructive implication and the Ex Falso rule”, Russian Mathematical Surveys, 59(2): 247–257. doi:10.1070/RM2004v059n02ABEH000717 • –––, 2005, Mystic, Geometer, and Intuitionist. The Life of L.E.J. Brouwer. 2: Hope and Disillusion, Oxford: Clarendon Press. • –––, 2008, “Another look at Brouwer’s dissertation”, in van Atten et al. 2008: 3–20. doi:10.1007/978-3-7643-8653-5_1 • ––– (ed.), 2011, The Selected Correspondence of L.E.J. Brouwer, London: Springer. An online supplement (link and password on the copyright page of the book) presents most of the extant correspondence, but without English translations. doi:10.1007/978-0-85729-537-8 • van Dalen, Dirk and Volker R. Remmert, 2007, “Ce périodique foncièrement international: the birth and youth of Compositio Mathematica”, Nieuw Archief voor Wiskunde 5th series, 8(3): 178–189. [van Dalen & Remmert 2007 available online] • van Dantzig, D., 1947a, “On the principles of intuitionistic and affirmative mathematics. 
I”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 50: 918–929. Also Indagationes Mathematicae, 9: 429–440. • –––, 1947b, “On the principles of intuitionistic and affirmative mathematics. II”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 50:1092–1103. Also Indagationes Mathematicae, 9: 506–517. • –––, 1949, “Comments on Brouwer’s Theorem on Essentially-negative predicates”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 52: 949–957. Also Indagationes Mathematicae, 11: 347–355. • –––, 1951, “Mathématique stable et mathématique affirmative”, Congrès International de Philosophie des Sciences, 1949, II Logique (Actualités scientifiques et industrielles 1134), Paris: Hermann & Cie, pp. 123–135. • Demange, Vincent, 2015, “Pedagogical lambda-cube: the λ² case”, Journal of Logic and Computation, 25(3): 743–779. doi: 10.1093/logcom/exu049 • Dequoy, Nicolle, 1952 [1955], Axiomatique intuitionniste sans négation de la géométrie projective, PhD thesis, Université de Paris. Published in 1955, Paris:Gauthier-Villars (Collection de logique mathématique ; sér. A, 6). • Destouches-Février, Paulette, 1945a, “Rapports entre le calcul des problèmes et le calcul des propositions”, Comptes Rendus de l’Académie des sciences, 220: 484–486. [Destouches-Février 1945a available online] • –––, 1945b, “Logique adaptee aux théories quantiques”, Comptes Rendus de l’Académie des sciences, 221: 287–288. [Destouches-Février 1945b available online] • –––, 1947a, “Sur la notion d’adequation et le calcul minimal de Johannsson”, Comptes Rendus de l’Académie des sciences, 224: 545–547. [Destouches-Février 1947a available online] • –––, 1947b, “Esquisse d’une mathématique intuitioniste positive”, Comptes Rendus de l’Académie des sciences, 225: 1241–1243. [Destouches-Février 1947b available online] • –––, 1948, “Logique de l’intuitionisme sans négation et logique de l’intuitionisme positif”, Comptes Rendus de l’Académie des sciences, 226: 38–39. [Destouches-Février 1948 available online] • –––, 1949, “Connexions entre les calculs des constructions, des problèmes, des propositions”, Comptes Rendus de l’Académie des sciences, 228: 31–33. [Destouches-Février 1949 available online] • –––, 1951, “Sur l’intuitionnisme et la conception strictement constructive”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 54: 80–86. doi:10.1016/S1385-7258(51)50012-4 • Došen, Kosta, 1992, “The First Axiomatization of Relevant Logic”, Journal of Philosophical Logic, 21(4): 339–356. doi:10.1007/BF00260740 • Dummett, Michael A.E., 1973, “The Justification of Deduction”, British Academy, London. Page references to reprint in Dummett 1978: 290–318. [Dummett 1973 available online] • –––, 1978, Truth and Other Enigmas, Cambridge MA: Harvard University Press. • –––, 2000, Elements of Intuitionism, second revised edition, Oxford: Clarendon Press. • Ewald, William Bragg, 1996, From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols, Oxford: Oxford University Press. • Fitting, Melvin Chris, 1969, Intuitionistic Logic, Model Theory and Forcing, (Studies in Logic and the Foundations of Mathematics, 54), Amsterdam: North-Holland. • Fraenkel, Abraham A., Yehoshua Bar-Hillel, and Azriel Levy, 1973, Foundations of Set Theory, second revised edition, (Studies in Logic and the Foundations of Mathematics, 67), Amsterdam: North-Holland. The revision of the chapter on intuitionism (4) was done by Dirk van Dalen. 
• Franchella, Miriam, 1994a, “Heyting’s contribution to the change in research into the foundations of mathematics”, History and Philosophy of Logic, 15(2): 149–172. doi:10.1080/01445349408837229 • –––, 1994b, “Brouwer and Griss on intuitionistic negation”, Modern Logic, 4(3): 256–265. [Franchella 1994b available online] • –––, 1995, “L.E.J. Brouwer towards intuitionistic logic”, Historia Mathematica, 22(3): 304–322. doi:10.1006/hmat.1995.1026 • –––, 2008, Con gli occhi negli occhi di Brouwer, Monza: Polimetrica. • Freudenthal, Hans, 1937a, “Zum intuitionistischen Raumbegriff”, Compositio Mathematica, 4: 82–111. [Freudenthal 1937a available online] • –––, 1937b, “Zur intuitionistischen Deutung logischer Formeln”, Compositio Mathematica, 4: 112–116. [Freudenthal 1937b available online] • –––, 1937c, “Nachwort”, Compositio Mathematica, 4: 118. [Freudenthal 1937c available online] • –––, 1973, Mathematics as an Educational Task, Dordrecht:Reidel. • Gentzen, Gerhard, 1934, “Untersuchungen über das logische Schliessen”, Mathematische Zeitschrift, 39: 176–210, 405–431. English translation in Gentzen 1969: 68–131. doi:10.1007/BF01201353 • –––, 1969, The Collected Papers of Gerhard Gentzen, M.E. Szabo (ed. and trans.), Amsterdam: North-Holland. • Georgacarakos, G.N., 1982, “The Semantics of Minimal Intuitionism”, Logique et Analyse, 25(100): 383–397. [Georgacarakos 1982 available online] • Gilmore, P.C., 1953a, “The effect of Griss’ criticism of the intuitionistic logic on deductive theories formalized within the intuitionistic logic. I”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 56: 162–174. Also Indagationes Mathematicae, 15: 162–174. doi:10.1016/S1385-7258(53)50022-8 • –––, 1953b, “The effect of Griss’ criticism of the intuitionistic logic on deductive theories formalized within the intuitionistic logic. II”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 56: 175–186. Also Indagationes Mathematicae, 15: 175–186. doi:10.1016/S1385-7258(53)50023-X • –––, 1953c, “Griss’ criticism of the intuitionistic logic and the theory of order”, Proceedings of the 11th International Congress of Philosophy, Brussels 1953, Amsterdam: North-Holland, 5: • –––, 1956, “Mathématique stable et Mathématique affirmative, by D. van Dantzig”, Journal of Symbolic Logic, 21(3): 323–324. doi:10.2307/2269134 • Glivenko, V., 1928, “Sur la logique de M. Brouwer”, Académie Royale de Belgique, Bulletin de la classe des sciences, 14: 225–228. • –––, 1929, “Sur quelques points de la logique de M. Brouwer”, Académie Royale de Belgique, Bulletin de la classe des sciences, 5(15): 183–188. English translation in Mancosu 1998: 301–305. • Gödel, Kurt, 1932, “Zum intuitionistischen Aussagenkalkül”, Anzeiger der Akademie der Wissenschaften in Wien, 69: 65–66. Also, with English translation, in Gödel 1986: 222–225. • –––, 1932f, “Heyting, Arend: Die intuitionistische Grundlegung der Mathematik”, Zentralblatt für Mathematik und ihre Grenzgebiete, 2: 321–322. Also, with English translation, in Gödel 1986: • –––, 1933e, “Zur intuitionistischen Arithmetik und Zahlentheorie”, Ergebnisse eines mathematischen Kolloquiums, 4: 34–38. Also, with English translation, in Gödel 1986: 286–295. • –––, 1933f, “Eine Interpretation des intuitionistischen Aussagenkalküls”, Ergebnisse eines mathematischen Kolloquiums, 4: 39–40. Also, with English translation, in Gödel 1986: 300–303. 
• –––, *1933o, “The present situation in the foundations of mathematics”, text of a lecture in Cambridge, MA, in Gödel 1995: 45–53. • –––, *1941, “In what sense is intuitionistic logic constructive?”, text of a lecture at Yale, in Gödel 1995: 189–200. • –––, 1958, “Über eine bisher noch nicht benutzte Erweiterung des finiten Standpunktes”, Dialectica, 12(3–4): 280–287. Also, with English translation, in Gödel 1990: 240–251. doi:10.1111/ • –––, 1970, “On an extension of finitary mathematics which has not yet been used”, Circulated earlier version of Gödel 1972. • –––, 1972, “On an extension of finitary mathematics which has not yet been used”, Revised and expanded translation of Gödel 1958, first published in Gödel 1990: 271–280. • –––, 1986–, Collected Works, Solomon Feferman et al. (eds.), Oxford: Oxford University Press. □ –––, 1986, I: Publications 1929–1936 □ –––, 1990, II: Publications 1938–1974 □ –––, 1995, III: Unpublished Essays and Lectures □ –––, 2003a, IV: Correspondence A–G □ –––, 2003b, V: Correspondence H–Z • Griss, G.F.C., 1944, “Negatieloze intuitïonistische wiskunde”, Verslagen Nederlandse Akademie van Wetenschappen Amsterdam, 53: 261–268. • –––, 1946, “Negationless intuitionistic mathematics”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 49: 1127–1133. Also Indagationes Mathematicae, 8: 675–681. • –––, 1947, Idealistische Filosofie, Arnhem: Van Loghum Slaterus. • –––, 1948a, “Sur la négation (dans les mathématiques et la logique)”, Synthese, 7(1/2): 71–74. • –––, 1948b, “Logique des mathématiques intuitionistes sans négation”, Comptes Rendus de l’Académie des sciences, 227: 946–948. [Griss 1948b available online] • –––, 1950, “Negationless intuitionistic mathematics. II”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 53: 456–463. Also Indagationes Mathematicae, 12: 108–115. • –––, 1951a, “Negationless Intuitionistic Mathematics. III”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 54: 193–199. Also Indagationes Mathematicae, 13: 193–199. • –––, 1951b, “Negationless Intuitionistic Mathematics. IVa”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 54: 452–462. Also Indagationes Mathematicae, 13: 452–462. • –––, 1951c, “Negationless Intuitionistic Mathematics. IVb”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 54: 463–471. Also Indagationes Mathematicae, 13: 463–471. • –––, 1951d, “Logic of Negationless Intuitionistic Mathematics”, Proceedings Koninklijke Nederlandse Akademie van Wetenschappen Amsterdam, 54: 41–49. Also Indagationes Mathematicae, 13: 41–49. • van Heijenoort, Jean (ed.), 1967, From Frege to Gödel: A Sourcebook in Mathematical Logic, 1879–1931, Cambridge, MA: Harvard University Press. • Hazen, A.P., 1995, “Is Even Minimal Negation Constructive?”, Analysis, 55(2): 105–107. doi:10.1093/analys/55.2.105 • Herbrand, Jacques, 1931, “Sur la non-contradiction de l’arithmétique”, Journal für die reine und angewandte Mathematik, 166: 1–8. Also Herbrand 1968: 221–232. English translation in van Heijenoort 1967: 620–628, reprint in Herbrand 1971: 284–297. • –––, 1968, Écrits logiques, Jean van Heijenoort (ed.), Paris: Presses Unversitaires de France. • –––, 1971, Logical Writings, Warren D. Goldfarb (ed.), Cambridge, MA: Harvard University Press. • Hesseling, Dennis E., 2003, Gnomes in the Fog. The Reception of Brouwer’s Intuitionism in the 1920s, Basel: Birkhäuser. 
• Heyting, Arend, 1925, Intuïtionistische axiomatiek der projectieve meetkunde, Ph.D. thesis, Universiteit van Amsterdam. • –––, 1928, [Prize essay on the formalization of intuitionistic logic]. Expanded and revised version published as Heyting 1930, Heyting 1930A, Heyting 1930B. □ –––, 1930, “Die formalen Regeln der intuitionistischen Logik I”, Sitzungsberichte der Preussischen Akademie der Wissenschaften, 42–56. English translation in Mancosu 1998: 311–327. □ –––, 1930A, “Die formalen Regeln der intuitionistischen Logik II”, Sitzungsberichte der Preussischen Akademie der Wissenschaften, 57–71. □ –––, 1930B, “Die formalen Regeln der intuitionistischen Logik III”, Sitzungsberichte der Preussischen Akademie der Wissenschaften, 158–169. • –––, 1930C, “Sur la logique intuitionniste”, Académie Royale de Belgique, Bulletin de la Classe des Sciences, 16: 957–963. English translation in Mancosu 1998: 306–310. • –––, 1931, “Die intuitionistische Grundlegung der Mathematik” (The Intuitionist Foundations of Mathematics), Erkenntnis, 2: 106–115. English translation in Benacerraf & Putnam 1983: 52–61. doi:10.1017/CBO9781139171519.003 and doi:10.1007/BF02028143 • –––, 1932, “A propos d’un article de MM. Barzin et Errera”, Enseignement Mathématique, 31: 121–122. • –––, 1932C, “Anwendung der intuitionistischen Logik auf die Definition der Vollständigkeit eines Kalküls”, in Verhandlungen des Internationalen Mathematikerkongresses Zürich 1932, W. Saxer (ed.), Zürich: Orell Füssli, vol. 2, 344–345. • –––, 1934, Mathematische Grundlagenforschung, Intuitionismus, Beweistheorie, Berlin: Springer. • –––, 1937, “Bemerkungen zu dem Aufsatz von Herrn Freudenthal ‘Zur intuitionistischen Deutung logischer Formeln’”, Compositio Mathematica, 4: 117–118. [Heyting 1937c available online] • –––, 1955, Les fondements des mathématiques. Intuitionnisme. Théorie de la démonstration, Paris: Gauthier-Villars. Updated version of Heyting 1934, translated into French by P. • –––, 1953–1955, “G. F. C. Griss and His Negationless Intuitionistic Mathematics” Synthese, 9: 91–96. doi:10.1007/BF00567395 • –––, 1956, Intuitionism, an Introduction, Amsterdam: North–Holland. • –––, 1958A, “Blick von der intuitionistischen Warte”, Dialectica, 12(3–4): 332–345. doi:10.1111/j.1746-8361.1958.tb01468.x • –––, 1958C, “Intuitionism in Mathematics”, in La philosophie au milieu du vingtième siècle, Raymond Klibansky (ed.), Firenze: La nuova Italia, vol. 1, 101–115. • –––, 1968A, “L.E.J. Brouwer”, in Logic and Foundations of Mathematics, (Contemporary Philosophy. A Survey, Vol.1), Raymond Klibansky (ed.), Firenze: La Nuova Italia editrice, 308–315. • –––, 1974, “Intuitionistic views on the nature of mathematics”, Synthese, 27(1–2): 79–91. doi:10.1007/BF00660890 • –––, 1978, “History of the Foundations of Mathematics”, Nieuw Archief voor Wiskunde, 3rd series, 26(3): 1–21. • Hilbert, David, 1900, “Mathematische Probleme. Vortrag, gehalten auf dem internationalen Mathematiker-Kongress zu Paris 1900” (Mathematical Problems), Archiv der Mathematik und Physik (3), 1: 44–63,213–237. English translation in the Bulletin of the American Mathematical Society, 8(10): 437–479, 1902. doi:10.1090/S0002-9904-1902-00923-3 • –––, 1922, “Neubegründung der Mathematik (Erste Mitteilung)”, Abhandlungen aus dem Mathematischen Seminar der Hamburgischen Universität, 1: 157–177. English translation in Mancosu 1998: 198–214. • –––, 1923, “Die logischen Grundlagen der Mathematik”, Mathematische Annalen, 88(1–2): 151–165. English translation in Ewald 1996: 1134–1148. 
doi:10.1007/BF01448445 • Hilbert, D. and W. Ackermann, 1928, Grundzüge der theoretischen Logik, Berlin: Springer. doi:10.1007/978-3-642-52789-0 • Imai, Yasuyuki and Kiyoshi Iseki, 1966, “On Griss Algebra. I”, Proceedings of the Japan Academy, 42(3): 213–216. doi:10.3792/pja/1195522077 • de Iongh, J.J., 1949, “Restricted Forms of Intuitionistic Mathematics”, Proceedings of the 10th International Congress of Philosophy, Amsterdam 1948, Amsterdam: North-Holland, 2: 744–748. • Johansson, Ingebrigt, 1937, “Der Minimalkalkül, ein reduzierter intuitionistischer Formalismus”, Compositio Mathematica, 4: 119–136. [Johansson 1937 available online] • Joosten, Joost Johannes, 2004, Interpretability Formalized, Ph.D. thesis, Utrecht University. Quaestiones Infinitae vol. XLIX, [Joosten 2004 available from Universiteit Utrecht/ • Kennedy, Juliette, 2007, “Kurt Gödel”, in The Stanford Encyclopedia of Philosophy, Winter 2007 Edition, Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2007/entries/goedel/>. • Kleene, Stephen Cole, 1945, “On the Interpretation of Intuitionistic Number Theory”, Journal of Symbolic Logic, 10(4): 109–124. doi:10.2307/2269016 • –––, 1952, Introduction to Metamathematics, Amsterdam: North-Holland. • –––, 1973, “Realisability: A Retrospective Survey”, in Mathias & Rodgers 1973: 95–112. doi:10.1007/BFb0066772 • Kleene, Stephen Cole and Richard Eugene Vesley, 1965, The Foundations of Intuitionistic Mathematics, Especially in Relation to Recursive Functions, (Studies in logic and the foundations of mathematics, 39), Amsterdam: North-Holland. • Kolmogorov, A., 1925, “O principe tertium non datur”, Matematiceskij Sbornik, 32: 646–667. English translation in van Heijenoort 1967: 416–437. • –––, 1932, “Zur Deutung der Intuitionistischen Logik”, Mathematische Zeitschrift, 35: 58–65. English translation in Mancosu 1998: 328–334. doi:10.1007/BF01186549 • –––, 1937, “Freudenthal, Hans: Zur intuitionistischen Deutung logischer Formeln. Heyting, A.: Bemerkungen zu dem Aufsatz von Herrn Freudenthal ‘Zur intuitionistischen Deutung logischer Formeln’”, Zentralblatt für Mathematik und ihre Grenzgebiete, 0015.24201. • Korevaar, Jaap., 2016, “Enkele persoonlijke herinneringen aan L.E.J. Brouwer”, (“Some personal memories of L.E.J. Brouwer”), Nieuw Archief voor Wiskunde, 5th series, 17(4): 247–249. [Korevaar 2016 available online] • Kreisel, G., 1962, “Foundations of Intuitionistic Logic”, in Logic, Methodology and Philosophy of Science: Proceedings of the 1960 International Congress, Ernst Nagel, Patrick Suppes, and Alfred Tarski (eds.), Stanford: Stanford University Press, 198–210. doi:10.1016/S0049-237X(09)70587-7 • –––, 1987, “Gödel’s Excursions into Intuitionistic Logic”, in Gödel Remembered, P. Weingartner and L. Schmetterer (eds.), Napoli: Bibliopolis, 67–179. • Kripke, Saul A., 1965, “Semantical Analysis of Intuitionistic Logic I”, in Formal Systems and Recursive Functions, (Studies in Logic and the Foundations of Mathematics, 40), J.N. Crossley and M. Dummett (eds.), Amsterdam: North-Holland, 92–130. doi:10.1016/S0049-237X(08)71685-9 • Krivtsov, Victor N., 2000a, “A Negationless Interpretation of Intuitionistic Theories”, Erkenntnis, 53(1–2): 155–172. doi:10.1023/A:1005618302941 • –––, 2000b, “A Negationless Interpretation of Intuitionistic Theories. I”, Studia Logica, 64(3): 323–344. doi:10.1023/A:1005233526469 • –––, 2000c, “A Negationless Interpretation of Intuitionistic Theories. II”, Studia Logica, 65(2): 155–179. 
doi:10.1023/A:1005207512630 • Kuiper, Johannes John Carel, 2004, Ideas and Explorations. Brouwer’s Road to Intuitionism, Ph.D. thesis, Utrecht University. Quaestiones Infinitae vol. XLVI [Kuiper 2004 available from Universiteit Utrecht/Universiteitsbibliotheek]. • Lewis, C.I., 1914, “The Calculus of Strict Implication”, Mind, New Series, 23(90): 240–247. doi:10.1093/mind/XXIII.1.240 • Mancosu, Paolo (ed.), 1998, From Brouwer to Hilbert. The Debate on the Foundations of Mathematics in the 1920s, Oxford: Oxford University Press. • Mathias, A.R.D. and H. Rodgers (eds.), 1973, Cambridge Summer School in Mathematical Logic 1971, (Lecture Notes in Mathematics, 337), Heidelberg: Springer. doi:10.1007/BFb0066770 • McKinsey, J.C.C., 1939, “Proof of the Independence of the Primitive Symbols of Heyting’s Calculus of Propositions”, Journal of Symbolic Logic, 4(4): 155–158. doi:10.2307/2268715 • Mendez, José M., 1988, “A Note on the Semantics of Minimal Intuitionism”, Logique et Analyse, 31(123–124): 371–377. [Mendez 1988 available online] • Michel, David, 2008, Systèmes formels et systèmes fonctionnels pédagogiques, Ph.D. thesis, Université Metz. [Michel 2008 available from Université de Lorraine]. • Mints, Grigori, 2006, “Notes on Constructive Negation”, Synthese, 148(3): 701–717. doi:10.1007/s11229-004-6294-3 • van der Molen, Tim, 2016, “The Johansson/Heyting letters and the birth of minimal logic”, ILLC Publications, Technical Notes Series, X-2016-04 [van der Molen 2016 available online from ILLC]. • Moschovakis, Joan, 2007, “Intuitionistic Logic”, in The Stanford Encyclopedia of Philosophy, Spring 2007 Edition, Edward N. Zalta (ed.), URL=<https://plato.stanford.edu/archives/spr2007/entries/ • Myhill, John, 1966, “Notes Towards An Axiomatization of Intuitionistic Analysis”, Logique et Analyse, 9(35–36): 280–297. [Myhill 1966 available online] • Nelson, David, 1966, “Non-Null Implication”, Journal of Symbolic Logic, 31(4): 562–572. doi:10.2307/2269691 • Parsons, Charles, 1967, [Introduction to the translation of sections 1–3 of Brouwer 1927B], in van Heijenoort 1967: 446–457. • Pieri, Mario, 1898, “I principii della geometria di posizione composti in sistema logico deduttivo”, Memorie della Reale Accademia delle Scienze di Torino, series II, 48: 1–62. • Plisko, V.E., 1988, “The Kolmogorov calculus as a part of minimal calculus”, Russian Mathematical Surveys, 43(6): 95–110. doi:10.1070/RM1988v043n06ABEH001993 • Pos, H.J., 1953–1954, “G.F.C. Griss als wijsgerig humanist en als mens”, De Nieuwe Stem, 8: 654–663. • Posy, Carl J., 1992, “Review: Dirk van Dalen, Intuitionistic Logic; Walter Felscher, Dialogues as a Foundation for Intuitionistic Logic”, Journal of Symbolic Logic, 57(2): 754–756. doi:10.2307/ • Prawitz, Dag, 1965, Natural Deduction. A Proof-Theoretical Study, Stockholm: Almqvist & Wiksell. • Prawitz, D. and P.-E. Malmnäs, 1968, “A Survey of Some Connections Between Classical, Intuitionistic and Minimal Logic”, in Contributions to Mathematical Logic, (Studies in Logic and the Foundations of Mathematics, 50), H. Arnold Schmidt, K. Schütte, and H.-J. Thiele (eds.), Amsterdam: North-Holland, pp. 215–229. doi:10.1016/S0049-237X(08)70527-5 • Rasiowa, Helena, 1974, An Algebraic Approach to Non-Classical Logics, (Studies in logic and the foundations of mathematics, 78), Amsterdam: North-Holland/ • Ruitenburg, Wim, 1991, “The Unintended Interpretations of Intuitionistic Logic”, in Perspectives on the History of Mathematical Logic, Thomas Drucker (ed.), Basel: Birkhäuser, 134–160. 
• van Rootselaar, B., 1953–1954, “In memoriam Dr. G.F.C. Griss”, Euclides, 29(1): 42–45. • Russell, Bertrand, 1903, The Principles of Mathematics, London: Allen & Unwin. • Sato, Masahiko, 1997, “Classical Brouwer-Heyting-Kolmogorov interpretation”, RIMS Kokyuroku, 1021: 28–47. • Smirnov, V.J., 1925 [published 1932], “Kolmogorov, A.N.: Über das Prinzip tertium non datur”, Jahrbuch über die Fortschritte der Mathematik 51.0048.01. • van Stigt, Walter P., 1990, Brouwer’s Intuitionism, (Studies in the history and philosophy of mathematics, 2), Amsterdam: North-Holland. • Stone, Marshall Harvey, 1937, “Topological Representations of Distributive Lattices and Brouwerian Logics”, Časopis pro pěstování matematiky a fysiky, 67(1): 1–25. • Sundholm, Göran, 1983, “Constructions, Proofs and the Meaning of Logical Constants”, Journal of Philosophical Logic, 12(2): 151–172. doi:10.1007/BF00247187 • –––, 2004, “The Proof-Explanation of Logical Constants is Logically Neutral”, Revue Internationale de Philosophie, 58(4): 401–410. • Sundholm, Göran and Mark van Atten, 2008, “The proper explanation of intuitionistic logic: on Brouwer’s proof of the Bar Theorem”, in van Atten et al. 2008: 60–77. doi:10.1007/978-3-7643-8653-5_5 • de Swart, H.C.M., 1976, “Another Intuitionistic Completeness Proof”, Journal of Symbolic Logic, 41(3): 644–662. doi:10.1017/S0022481200051215 • –––, 1977, “An Intuitionistically Plausible Interpretation of Intuitionistic Logic”, Journal of Symbolic Logic, 42(4): 564–578. doi:10.2307/2271877 • Tarski, Alfred, 1938, “Der Aussagenkalkül und die Topologie”, Fundamentae Mathematicae, 31: 103–134. English translation in Tarski 1956, pp. 421–454. • –––, 1953, “A General Method in Proofs of Undecidability”, in Undecidable Theories, A. Tarski, A. Mostowski, and R. Robinson (eds.), Amsterdam: North-Holland, 3–35. • –––, 1956, Logic, Semantics, Metamathematics. Papers from 1923 to 1938, J.H. Woodger (trl.), Clarendon Press. • Troelstra, A.S., 1977, “Aspects of constructive mathematics”, in Handbook of Mathematical Logic, (Studies in Logic and the Foundations of Mathematics, 90), Jon Barwise (ed.), Amsterdam: North-Holland, 973–1052. doi:10.1016/S0049-237X(08)71127-3 • –––, 1983, “Logic in the Writings of Brouwer and Heyting”, in Atti del Convegne Internazionaledi Storia della Logica. San Gimignano, 4–8 dicembre 1982, V. Abrusci, E. Casari, and M. Mugnai, eds., Bologna: CLUEB, 193–210. • –––, 1990, “On the Early History of Intuitionistic Logic”, in Mathematical Logic, Petio P. Petkov (ed.), New York: Plenum Press, 3–17. • Troelstra, A.S., J. Niekus, and H. van Riemsdijk, 1981, “Bibliography of A. Heyting”, Nieuw Archief voor Wiskunde, 3rd series, 29: 24–35. • Troelstra, A.S. and D. van Dalen, 1988, Constructivism in Mathematics, 2 vols., (Studies in Logic and the Foundations of Mathematics, 121 & 123), Amsterdam: North-Holland. • Valpola, Veli, 1955, “Ein system der negationlosen Logik mit ausschliesslich realisierbaren Prädicaten”, Acta Philosophica Fennica, 9: 1–247. • Veldman, Wim, 1976, “An Intuitionstic Completeness Theorem for Intuitionistic Predicate Logic”, Journal of Symbolic Logic, 41(1): 159–166. doi:10.1017/S0022481200051859 • –––, 1982, “On the Continuity of Functions in Intuitionistic Real Analysis. Some Remarks on Brouwer’s Paper: ‘Ueber Definitionsbereiche von Funktionen’”, Tech. Rep. 8210, Mathematisch Instituut, Katholieke Universiteit Nijmegen. • –––, 2008, “The Borel Hierarchy Theorem from Brouwer’s Intuitionistic Perspective”, Journal of Symbolic Logic, 73(1): 1–64. 
doi:10.2178/jsl/1208358742 • Vesley, Richard, 1980, “Intuitionistic Analysis: the Search for Axiomatization and Understanding”, in The Kleene Symposium, (Studies in Logic and the Foundations of Mathematics, 101), J. Barwise, H. J. Keisler, and K. Kunen (eds.), Amsterdam: North-Holland, 317–331. doi:10.1016/S0049-237X(08)71265-5 • Vredenduin, P.G.J., 1953, “The Logic of Negationless Mathematics”, Compositio Mathematica, 11: 204–277. [Vredenduin 1953 available online] • –––, 1956, “G.F.C. Griss and his Negationless Intuitionistic Mathematics, by A. Heyting”, Journal of Symbolic Logic, 21(1): 91. doi:10.2307/2268511 • Wajsberg, Mordchaj, 1938, “Untersuchungen über den Aussagenkalkül [von A. Heyting]”, Wiadomosci Matematyczne, (old series) 46: 45–101. • Wang, Hao, 1987, Reflections on Kurt Gödel, Cambridge MA: MIT Press. • Wavre, Rolin, 1924, “Y a-t-il une crise des mathématiques? A propos de la notion d’existence et d’une application suspecte du principe du tiers exclu”, Revue de Métaphysique et de Morale, 31(3) • –––, 1926, “Logique formelle et logique empirique”, Revue de Métaphysique et de Morale, 33(1): 65–75. • Weyl, Hermann, 1921, “Über die neue Grundlagenkrise der Mathematik”, Mathematische Zeitschrift, 10(1–2): 39–79. English translation in Mancosu 1998: 86–118. doi:10.1007/BF02102305 • Whitehead, Alfred North, 1906, The Axioms of Projective Geometry, Cambridge: Cambridge University Press. • Whitehead, Alfred North and Bertrand Russell, 1910, Principia Mathematica, vol. 1, Cambridge: Cambridge University Press.
I am grateful to Dirk van Dalen and Göran Sundholm for discussions of some of the issues involved. Helpful comments by the editors and Rosalie Iemhoff led to various improved formulations, corrections, and clarifications. Van Dalen kindly granted permission to quote from materials in the Brouwer Archive (at Utrecht at the time, now at the Noord-Hollands Archief in Haarlem). Thanks also to van Dalen and Eckhart Menzler-Trott for their search for Bernays’ letter to Brouwer, and to Sundholm and Michael Hegarty for a copy of de Iongh 1949. In section 3.6 I have drawn on van Atten 2004a; I thank the Association for Symbolic Logic for granting permission to do so.
{"url":"https://plato.stanford.edu/ENTRIES/intuitionistic-logic-development/","timestamp":"2024-11-06T10:51:29Z","content_type":"text/html","content_length":"191484","record_id":"<urn:uuid:e17b914f-c423-4eef-806d-3d42e4b4f93f>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00336.warc.gz"}
How to profit as the world gets fuller, richer and older How to profit as the world gets fuller, richer and older Doomsayers have for centuries warned of the dangers of overpopulation. The good news is that they’ve been wrong so far, says Alice Gråhns – here’s what you should invest in to keep it that way. Fifty years ago biologist Paul Ehrlich’s controversial bestseller The Population Bomb hit the bookshelves. Ehrlich warned that the 1970s would be a decade of mass starvation and ecological disaster, with hundreds of millions of people dying of hunger due to the planet’s inability to feed the growing human population. His sensationalist presentation and drastic policy recommendations won him plenty of readers. In the US, he warned, “we must have population control… by compulsion if voluntary methods fail”. He continued: “no changes in behaviour or technology can save us unless we can achieve control over the size of the human population”. Population growth was a cancer, he said, that “must be cut out”. Plenty of people took him at his word. The 1968 book arguably helped to inspire such horrors as mass sterilisation programmes in both India and China. But Ehrlich was also wrong. Not so much on population growth itself, but on its consequences. Like the Reverend Thomas Malthus before him, who made similar forecasts in his 1798 work An Essay on the Principle of Population, Ehrlich underestimated human ingenuity, and in particular the impact of the green revolution (a term also coined, ironically enough, in 1968). This brought with it, among other things, improved fertilisers, pesticides, herbicides, new farming techniques, mechanisation and genetically modified organisms (GMO). These boosted humanity’s ability to produce food from a limited amount of land, and increased agricultural yields significantly. None of this is to say that we don’t face challenges. The current global population of 7.6 billion is expected to reach 8.6 billion in 2030, 9.8 billion in 2050 and 11.2 billion in 2100, according to The World Population Prospects: The 2017 Revision, a report published by the UN in July last year. However, in both emerging and developed markets the challenges we face are those that accompany growing prosperity, rather than imminent destruction. And the solutions are all about making better use of our resources, rather than population control. Baby booms and ageing populations The real problem we have with population growth today, says economist George Magnus, author of The Age of Aging: How Demographics are Changing the Global Economy and Our World (2008), is that it’s not “equally distributed around the world”. From 2017 to 2050 (according to the UN report mentioned above), half of the world’s population growth will stem from just nine countries: India, Nigeria, the Democratic Republic of the Congo, Pakistan, Ethiopia, Tanzania, the US, Uganda and Indonesia. Today, China (with 1.4 billion people) and India (with 1.3 billion) are the most populous countries and account for 19% and 18% of the total global population respectively. Nigeria, which has the fastest-growing population of the ten most densely populated countries in the world, is set to overtake the US as the third-most populous country on the planet before 2050. Africa as a whole is also expected to see its population at least double by 2050. However, it’s a very different story in many other countries. On average, fertility rates have fallen sharply in recent decades as societies have become more affluent. 
In 1964, the average woman gave birth to 5.068 children, according to the World Bank. By 2016 the global fertility level had decreased to 2.439 children per woman. Even in Africa, where fertility levels are the highest of any region, total fertility has fallen from 5.1 births per woman in 2000-2005 to 4.7 in 2010-2015. Europe has been an exception to this trend, but hardly a dramatic one – total fertility has risen from 1.4 births per woman in 2000-2005 to 1.6 in 2010-2015, which is still below the replacement rate (the level at which the birth rate offsets the death rate, estimated at around 2.1 births per woman). Indeed, between 2010 and 2015, fertility was below the replacement level in 83 countries that comprise 46% of the world’s population, according to the UN. A unique phenomenon in human history Populations in these countries are still growing (even excluding immigration), but that’s due to increased life expectancy rather than having children. As a result, by the end of this decade people aged 65 and older will outnumber children younger than five for the first time in human history, according to the World Health Organisation (WHO). Life expectancies worldwide are increasing by roughly one year every five years and will reach an average of 77 by 2050, from 48 in 1950. As a result, compared with 2017 the number of people aged 60 or over is expected to more than double by 2050 and more than triple by 2100, rising from 962 million globally in 2017 to 2.1 billion in 2050 and 3.1 billion in 2100. This is especially pronounced among rich and advanced economies, says Magnus. But it’s also happening in eastern Europe, China, Taiwan and South Korea. This combination of a growing but ageing population is “a unique phenomenon in human history”, says Magnus. “We’ve never actually experienced weak or falling fertility at the same time as we’re seeing rising life expectancy.” This will present several new economic problems. For a start, “people don’t have enough savings to be idle for the next 20-25 years after they retire”, Magnus notes. That’s a problem for individuals – and one solution will simply be working longer – but it’s also going to put a lot more pressure on public finances. If there are fewer young people earning taxable incomes, then the question of how to pay for state pensions becomes much trickier. And that’s far from the only future source of pressure on public spending – governments have also made plenty of promises to voters about healthcare and residential care. Part of the solution to that may be increased private provision, while automation may also play a part in performing tasks that older populations now struggle to manage. Rising affluence and increasing consumption Here’s another complication. Not only are populations getting older, but they’re also becoming wealthier. Rising affluence is in itself a major factor driving falling fertility rates (due to lower child mortality rates, better education, and better access to contraception, among other things). But as people get better off, they also consume more – their diet improves, they may acquire some form of transport, and their whole life becomes more energy-intensive. It will come as no surprise that (according to Oxfam) the richest 10% of human beings globally produce about half of the world’s fossil-fuel emissions, while the poorest half produce 10%. 
As a result, says Sarah Harper, professor of gerontology at Oxford University and author of How Population Change Will Transform Our World, “the debate is shifting away from overpopulation… to population and consumption”. As Harper puts it: “it’s very clear that we can sustain large numbers of people, but we have to change our basic consumption patterns”. Countries of the northern hemisphere (academic shorthand for “rich countries”) need to reduce their consumption to allow countries of the south (the poorer nations) – particularly in Africa – “to raise their consumption to levels that are appropriate for the 21st century”. In other words, we need to make sure that everyone can get rich without exhausting the planet in the process. That’s not going to be easy – asking Americans to consume less so that Africans can consume more is not a practical approach. So again, we need to make more efficient use of the available resources. Specifically, we have to make city life more sustainable. As Magnus points out, income per head correlates closely with greenhouse-gas emissions simply because, as countries get richer, “more people live in buildings in concentrated areas in towns and cities”. The World Bank reckons that, by 2030, 93% of the global middle class will be from developing countries, where they will largely live in cities. And by the time 2040 or 2050 rolls around, the UN estimates that the world’s biggest cities will mostly be located in today’s developing economies. If Nigeria’s population keeps growing, and urbanisation continues at current rates, then Lagos may become the world’s largest metropolis within 60 years, home to between 85 and 100 million people (60 years ago it had a population of fewer than 200,000 people). In an extreme scenario, say demographers Daniel Hoornweg and Kevin Pope at the University of Ontario Institute of Technology, by 2100 only 14 of the 101 largest global cities will be located in Europe or the Americas. We need more food and water Another obvious implication of a growing, more affluent global population is that it raises demand for finite resources, including the critical ones of food, water and energy. As we’ve noted already, underestimating humanity’s ability to adapt and continue to meet its needs on this front led both Malthus and Ehrlich to mistaken conclusions about our inevitable demise as a species. But clearly, as the population continues to grow, agricultural efficiency needs to continue to improve. The World Bank estimates that by 2030 demand for food will have risen by 50%. By 2050, the global population is expected to be just 25% larger than it is now. But farmers will need to have boosted food output by 50%-100% just to keep up. The main reason for this is because increased affluence and changing dietary habits mean more demand for animal products, such as dairy, fish and meat, which is a much more resource-intensive way to eat – growing feed for animals requires more land, water, and energy than producing food simply by growing and eating plants. A drying world and renewable energy It doesn’t help that urbanisation, rising affluence and greater consumption levels also result in rising pollution, which in turn is one reason that nearly a third of usable land on the planet is severely degraded, while many water systems have become stressed. Roughly 1.1 billion people worldwide lack reliable access to water, and a total of 2.7 billion live in areas where water is scarce for at least one month a year. 
By 2025, two-thirds of the world’s population may face water shortages, according to the World Wildlife Fund. This is particularly pertinent to agriculture, which consumes more water than any other industry and wastes much of that through inefficiencies. Meanwhile, to cut down on global carbon emissions – which governments across the world are keen to do – we are under pressure to make the shift from using fossil fuels where possible to using low-carbon energy sources (such as solar and wind power, which in turn could be used to charge batteries for electric vehicles, say). If the population keeps growing at this pace, we will need 50% more energy to sustain humanity by 2050. Again, this is skewed by wealth. The US, for example, has a population of just over 300 million – about 5% of the people on Earth – yet it consumes 20% of the world’s energy and produces 19% of the world’s greenhouse gases. Since Thomas Malthus predicted a disaster in 1798, we’ve made progress. Hopefully we will continue to prove the doomsaying Reverend and the likes of Ehrlich wrong. But it will require investment and innovation in everything from clean transport and renewable energy to agricultural technology and more efficient water usage. We look at how to profit from these trends in the box below. The seven stocks to buy now To support a growing and ageing population, the world needs to make more efficient use of its resources. In particular, the pressure on food and water resources will call for innovative technological solutions. Deere & Company (NYSE: DE) might be best known for its tractors, but it is also at the forefront of precision agriculture, which deploys everything from drones to sophisticated sensors to artificial intelligence to help automate farming and boost yields by optimising growing conditions in every inch of a field. The stock trades on a forward price/earnings (p/e) ratio of 16. Nutrien (Toronto: NTR) is the world’s biggest fertiliser company, and was created from the merger of PotashCorp and Agrium, which completed at the tail-end of last year. As populations grow, demand for fertilisers should rise, particularly if prices for food crops also pick up. The merger aims to deliver cost savings of $500m over the next two years. You can’t grow food without water. Solutions to water scarcity – a growing problem – include desalinating seawater, recycling wastewater for industrial applications, and capturing rainfall. But an even easier win is to patch up leaks and wastage due to faults in existing infrastructure. Aegion Corporation (Nasdaq: AEGN) enables leaky pipes to be fixed without digging up entire roads. The company also extends the life of infrastructure assets such as bridges, buildings and waterfront structures. Full-year revenues hit a record $1.36bn in 2017, up 11% on 2016. It trades on a forward p/ e of 19. Another interesting water-technology play is Xylem (NYSE: XYL), which operates in more than 150 countries. The group makes everything from pumps to smart meters. In the first quarter revenues rose by 14% and Xylem now expects its 2018 full-year revenues to reach at least $5.1bn. The company expects future growth to come from the adoption of smart-meter technology in many countries, and also from infrastructure improvement projects in China and India. Promises by politicians to slash carbon emissions mean that the shift from fossil fuels to renewables will continue to pick up pace, even as a growing, wealthier population puts ever more pressure on energy supplies. 
Vestas Wind Systems (Copenhagen: VWS) is the world’s largest wind-turbine company. Its shares are down roughly 30% from its peak in the past 12 months and it trades on a p/e of 14. Meanwhile, revenues at US solar-panel maker First Solar (Nasdaq: FSLR) are expected to grow by 22.34% over the next two years as demand for, and use of, solar energy picks up. The ageing population is putting pressure on healthcare provision. Spire Healthcare (LSE: SPI) is one of the UK’s largest private-hospital providers, with around 40 hospitals and 12 clinics. Last year was turbulent for the company, but with the NHS under pressure, particularly as the population ages, the UK’s independent healthcare sector should benefit. For Spire – a major provider of hip and knee operations in particular – the “self-pay” segment of the market is growing rapidly, with individuals (typically in the over-50s bracket) increasingly opting to pay for treatment to avoid lengthy waits. Spire trades on a p/e of 13.
{"url":"https://degreesatcapella.com/2018/06/07/how-to-profit-as-the-world-gets-fuller-richer-and-older/","timestamp":"2024-11-13T18:37:02Z","content_type":"text/html","content_length":"35135","record_id":"<urn:uuid:fb3f1431-8b1b-44e6-82ca-604495c2c4d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00811.warc.gz"}
From Encyclopedia of Mathematics
Definition and existence
The presentation in this article seems odd. We start with a $\mathbf{Z}$-module with a symplectic form $\psi$ and then assume (explicitly) that there is a $\psi_0$ which is a quadratic form on $L \otimes \mathbf{Z}/2\mathbf{Z}$, and then assume (implicitly) that any such $\psi_0$, should any exist, gives a consistent value of $\mathrm{Arf}$. It would make more sense to start with $\psi_0$ a quadratic form on a module over a field $k$ of characteristic two, define $\psi(x,y)$ by polarisation $\psi_0(x+y) - \psi_0(x) - \psi_0(y)$, define $\mathrm{Arf}$ with respect to some symplectic basis, and then assert that it is independent of choice of basis. This seems to be the more usual presentation. Richard Pinch (talk) 20:45, 24 December 2017 (CET)
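For reference, the presentation sketched above can be written out explicitly (this is the standard textbook formulation, not a quotation from the article): if $q$ is a quadratic form on a finite-dimensional space over a field $k$ of characteristic two whose polarisation $\psi(x,y) = q(x+y) - q(x) - q(y)$ is nondegenerate, choose a symplectic basis $a_1, b_1, \ldots, a_n, b_n$ (so $\psi(a_i,b_i) = 1$ and all other pairings vanish) and set $\mathrm{Arf}(q) = \sum_{i=1}^{n} q(a_i)\,q(b_i)$, read modulo the subgroup $\{t^2 + t : t \in k\}$; over $\mathbf{Z}/2\mathbf{Z}$ this is simply an element of $\{0,1\}$. Arf's theorem is precisely the statement that this value does not depend on the chosen symplectic basis.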
{"url":"https://encyclopediaofmath.org/wiki/Talk:Arf-invariant","timestamp":"2024-11-06T19:01:35Z","content_type":"text/html","content_length":"13564","record_id":"<urn:uuid:5d4e54dc-9d80-410a-bd09-be943028341e>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00507.warc.gz"}
Measured Variability Of Southern Yellow Pine - Manual for LS-DYNA Wood Material Model 143
2.3 FITTING THE MODEL TO THE DATA
1. Stiffnesses (EL, ET, GLT, and GTR) and the major Poisson’s ratio (nLT) are directly measured from the elastic portion of the stress-strain curves or are selected from data tabulated in the literature. If the parallel stiffness EL is known, then one can estimate the other elastic stiffnesses from tables documented in the literature for softwoods (e.g., pine).^(16) Typically, ET is 5 to 15 percent of EL. Care must be taken to measure or select the correct Poisson’s ratio. The major ratio, nLT, is typically greater than 0.1 and is about 10 times larger than the minor ratio, nTL.
2. Strengths (XT, XC, YT, YC, S||, and S[^]) are obtained from measurements of peak/yield stress from stress histories or are selected from data tabulated in the literature. Typically, the parallel tensile strength of pine is 30 to 50 times greater than the perpendicular tensile strength. The parallel compressive strength is four to five times greater than the perpendicular compressive strength. The parallel shear strength is about 10 to 15 percent of the parallel tensile strength. Data for perpendicular shear strength vary. One source suggests that the perpendicular shear strength, S[^], is 140 percent of the parallel shear strength, S||.^(7) Another source suggests that the percentage is only 18 to 28 percent.^(18)
3. The prepeak hardening parameters (N||, c||, N[^], and c[^]) are derived from the nonlinear portion of compressive stress-strain curves measured both parallel and perpendicular to the grain. Hardening is modeled separately for the parallel and perpendicular modes. The parameters N|| and N[^] determine the onset of nonlinearity. If the user wants prepeak nonlinearity to initiate at 70 percent of the yield stress, then the user would input N = 0.3 (for 30 percent), which is derived from N = 1 – 0.7 (100 percent – 70 percent). The parameters c|| and c[^] set the amount of nonlinearity and are selected through iteration: Pick a value for c, plot the results from a single-element simulation, and compare with the test data. Typical values of c are between 100 and 1000 (unitless). Gradual hardening is accomplished with smaller values of c. Rapid hardening is accomplished with larger values of c. If no data are available, prepeak hardening can be neglected (giving a linear stress-strain curve to yield) by setting both values of N equal to zero.
4. The fracture energy (Gf I ||, Gf II ||, Gf I [^], or Gf II[^]) is the area under the stress-displacement curve as it softens from peak stress to zero stress. The best approach is to measure fracture energy directly in both direct tension and simple shear. An alternative approach is to measure fracture intensity and convert it to fracture energy.
5. The postpeak softening parameters (B and D) are derived from the softening portion of the stress-strain or stress-displacement curves and are set in conjunction with the fracture energies (Gf I ||, Gf II ||, Gf I[^], and Gf II[^]) and the damage parameters (dmax|| and dmax[^]). The parameters B (for parallel modes) and D (for perpendicular modes) set the shape of the softening portion of the stress-strain or stress-displacement curves once the fracture energy is selected. Although each parameter (B or D) is intended to be simultaneously fit to both tension and shear data, shear data are rarely available.
The procedure is iterative: Pick a value for D or B, plot the results from a single-element simulation, and compare with the test data. Typical values for B and D are 10 to 50 (unitless). The smaller the value, the more gradual the initial softening.
6. The postpeak damage parameters (dmax|| and dmax[^]) are derived from the final softening portion of the stress-strain or stress-displacement curves and are set in conjunction with the fracture energies (Gf I ||, Gf II ||, Gf I[^], and Gf II[^]) and the softening parameters (B and D). The parameters dmax|| (for parallel modes) and dmax[^] (for perpendicular modes) set the maximum damage that can accumulate. Values between 0 and 1 are allowed. A value of 0 neglects softening, while a value of 1 models complete softening to 0 stress and stiffness. A typical value for parallel damage is dmax|| = 0.9999. Erosion is automatically modeled when parallel damage exceeds 0.99. Set dmax|| > 0.99 to model erosion at high damage levels. Set dmax|| ≤ 0.99 to bypass erosion (not recommended because this may cause mesh entanglement and element inversion). A typical value for perpendicular damage is dmax[^] = 0.99. Values greater than 0.99 are not recommended because elements have a tendency to tangle and invert with near zero stiffness, and erosion is not automatically modeled with perpendicular damage. Values less than 0.99 should be input if mesh entanglement and element inversion are a problem.
7. The rate-effect parameters (h||, h[c]||, n||, h[^], h[c][^], and n[^]) are obtained from fits to strength versus strain-rate measurements or are selected from data tabulated in the literature. Separate measurements must be made parallel and perpendicular to the grain to separately fit the parameters parallel (h||, h[c]||, and n||) and perpendicular (h[^], h[c][^], and n[^]) to the grain. Separate measurements may also be made in tension/shear versus compression to separately fit the tensile/shear (h|| or h[^]) and compression (h[c]|| or h[c][^]) parameters. If separate tension/shear versus compression measurements are not available, then the following ratios are recommended: h||/h[c]|| = h[^]/h[c][^] = QT / QC. Each set of parameters is obtained from dynamic strength measurements at two different strain rates (because there are two parameters), as well as from the static strength (zero strain rate). At each of the two strain rates, the difference between the dynamic and static strength is Ch, where C is the stiffness (EL, ET, GLT, or GTR) and h is the effective fluidity parameter, equal to h0 divided by the effective strain rate raised to the power n. Here, h0 (h|| or h[c]||, or h[^] or h[c][^]) and n (n|| or n[^]) are the two parameters to be fit (C is the stiffness already determined in step 1). Typical clear wood values are n|| = 0.107, with h|| = h[c]|| = 0.0045 for parallel rate effects, and n[^] = 0.104, with h[^] = h[c][^] = 0.0962 for perpendicular rate effects (for time in milliseconds). Typical graded wood values are n|| = 0.107, with h|| = 0.0045QT and h[c]|| = 0.0045QC (parallel), and n[^] = 0.104, with h[^] = 0.0962QT and h[c][^] = 0.0962QC (perpendicular).
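As a small illustration of the quality-factor scaling described in step 7, the helper below collects the typical graded wood rate-effect values quoted above, given user-supplied tension/shear and compression quality factors QT and QC. It is only a convenience sketch in plain Python (the function and key names are made up here, not part of the LS-DYNA input format), and it satisfies the recommended ratio h/h[c] = QT/QC by construction.

def graded_rate_parameters(QT, QC):
    # Typical graded wood rate-effect parameters from step 7 (time in milliseconds).
    return {
        "n_parallel": 0.107,
        "h_parallel_tension_shear": 0.0045 * QT,
        "h_parallel_compression": 0.0045 * QC,
        "n_perpendicular": 0.104,
        "h_perpendicular_tension_shear": 0.0962 * QT,
        "h_perpendicular_compression": 0.0962 * QC,
    }

# Example with hypothetical quality factors.
params = graded_rate_parameters(QT=0.47, QC=0.63)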
{"url":"https://www.fhwa.dot.gov/publications/research/safety/04097/sec23.cfm","timestamp":"2024-11-12T06:48:53Z","content_type":"application/xhtml+xml","content_length":"29619","record_id":"<urn:uuid:4dc1e235-c050-4380-a73d-ea79ea2f798b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00677.warc.gz"}
Wideband Two-Ray Channel
Wideband two-ray channel environment
Since R2021a
The Wideband Two-Ray Channel block propagates wideband signals from one point in space to multiple points or from multiple points back to one point via both the direct path and the ground reflection path. The block propagates wideband signals by (1) decomposing them into subbands, (2) propagating subbands independently, and (3) recombining the propagated subbands. The block models propagation time, propagation loss, and Doppler shift. The block assumes that the propagation speed is much greater than the object's speed, in which case the stop-and-hop model is valid.
X — Wideband input signal M-by-N complex-valued matrix | M-by-2N complex-valued matrix
Wideband nonpolarized scalar signal, specified as one of the following:
• M-by-N complex-valued matrix. The quantity M is the number of samples in the signal and N is the number of two-ray channels. Each channel corresponds to a source-destination pair. Each column contains an identical signal that is propagated along the line-of-sight and reflected paths.
• M-by-2N complex-valued matrix. The quantity M is the number of samples of the signal and N is the number of two-ray channels. Each channel corresponds to a source-destination pair. Each adjacent pair of columns represents a different channel. Within each pair, the first column represents the signal propagated along the line-of-sight path and the second column represents the signal propagated along the reflected path.
The quantity M is the number of samples of the signal and N is the number of two-ray channels. Each channel corresponds to a source-destination pair. The size of the first dimension of the input matrix can vary to simulate a changing signal length. A size change can occur, for example, in the case of a pulse waveform with variable pulse repetition frequency.
Example: [1,1;j,1;0.5,0]
Data Types: double
Complex Number Support: Yes
Pos1 — Position of signal origin 3-by-1 real-valued column vector | 3-by-N real-valued matrix
Origin of the signal or signals, specified as a 3-by-1 real-valued column vector or 3-by-N real-valued matrix. The quantity N is the number of two-ray channels. If Pos1 is a column vector, it takes the form [x;y;z]. If Pos1 is a matrix, each column specifies a different signal origin and has the form [x;y;z]. Position units are in meters. Pos1 and Pos2 cannot both be specified as matrices — at least one must be a 3-by-1 column vector.
Example: [1000;100;500]
Data Types: double
Pos2 — Position of signal destination 3-by-1 real-valued column vector | 3-by-N real-valued matrix
Destination of the signal or signals, specified as a 3-by-1 real-valued column vector or 3-by-N real-valued matrix. The quantity N is the number of two-ray channels. If Pos2 is a column vector, it takes the form [x;y;z]. If Pos2 is a matrix, each column specifies a different signal destination and has the form [x;y;z]. Position units are in meters. Pos1 and Pos2 cannot both be specified as matrices — at least one must be a 3-by-1 column vector.
Example: [-100;300;50]
Data Types: double
Vel1 — Velocity of signal origin 3-by-1 real-valued column vector | 3-by-N real-valued matrix
Velocity of signal origin, specified as a 3-by-1 real-valued column vector or 3-by-N real-valued matrix. The dimensions of Vel1 must match the dimensions of Pos1. If Vel1 is a column vector, it takes the form [Vx;Vy;Vz]. If Vel1 is a 3-by-N matrix, each column specifies a different origin velocity and has the form [Vx;Vy;Vz]. Velocity units are in meters per second.
Example: [-10;3;5]
Data Types: double
Vel2 — Velocity of signal destination 3-by-1 real-valued column vector | 3-by-N real-valued matrix
Velocity of signal destination, specified as a 3-by-1 real-valued column vector or 3-by-N real-valued matrix. The dimensions of Vel2 must match the dimensions of Pos2. If Vel2 is a column vector, it takes the form [Vx;Vy;Vz]. If Vel2 is a 3-by-N matrix, each column specifies a different destination velocity and has the form [Vx;Vy;Vz]. Velocity units are in meters per second.
Example: [-1000;300;550]
Data Types: double
Out — Propagated signal M-by-N complex-valued matrix | M-by-2N complex-valued matrix
• M-by-N complex-valued matrix. To return this format, set the CombinedRaysOutput property to true. Each matrix column contains the coherently combined signals from the line-of-sight path and the reflected path.
• M-by-2N complex-valued matrix. To return this format, set the CombinedRaysOutput property to false. Alternate columns of the matrix contain the signals from the line-of-sight path and the reflected path.
The output Out contains signal samples arriving at the signal destination within the current input time frame. Whenever it takes longer than the current time frame for the signal to propagate from the origin to the destination, the output may not contain all contributions from the input of the current time frame. The remaining output will appear in the next execution of the block.
Combine two rays at output — Option to combine two rays at output on (default) | off
Select this parameter to combine the two rays at channel output. Combining the two rays coherently adds the line-of-sight propagated signal and the reflected path signal to form the output signal. You can use this mode when you do not need to include the directional gain of an antenna or array in your simulation.
Example: on
When the origin and destination are stationary relative to each other, the block output can be written as y(t) = x(t – τ)/L. The quantity τ is the delay and L is the propagation loss. The delay is computed from τ = R/c where R is the propagation distance and c is the propagation speed. The free space path loss is given by $L_{fsp} = \frac{(4\pi R)^2}{\lambda^2},$ where λ is the signal wavelength. This formula assumes that the target is in the far-field of the transmitting element or array. In the near-field, the free-space path loss formula is not valid and can result in losses smaller than one, equivalent to a signal gain. For this reason, the loss is set to unity for range values R ≤ λ/4π. When there is relative motion between the origin and destination, the processing also introduces a frequency shift. This shift corresponds to the Doppler shift between the origin and destination. The frequency shift is v/λ for one-way propagation and 2v/λ for two-way propagation. The parameter v is the relative speed of the destination with respect to the origin.
Extended Capabilities
C/C++ Code Generation Generate C and C++ code using Simulink® Coder™.
Version History
Introduced in R2021a
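The delay, loss, and Doppler relations quoted above are easy to sanity-check outside the block. The sketch below is plain Python (not MATLAB and not the block itself); the function name is made up, the relative velocity is projected onto the line of sight to get the radial speed used for the Doppler term, and the near-field clamp follows the R ≤ λ/4π note above. The input values are taken from the Example lines for Pos1, Pos2, Vel1, and Vel2.

import math

C = 299792458.0  # propagation speed (free space), m/s

def one_way_path(pos1, pos2, vel1, vel2, fc):
    # Delay, free-space loss, and one-way Doppler shift for the direct path.
    lam = C / fc
    d = [p2 - p1 for p1, p2 in zip(pos1, pos2)]
    R = math.sqrt(sum(x * x for x in d))
    tau = R / C                                  # tau = R / c
    if R <= lam / (4.0 * math.pi):               # near field: loss set to unity
        loss = 1.0
    else:
        loss = (4.0 * math.pi * R / lam) ** 2    # L = (4*pi*R)^2 / lambda^2
    v_rel = [v2 - v1 for v1, v2 in zip(vel1, vel2)]
    v_radial = sum(v * x for v, x in zip(v_rel, d)) / R
    doppler = v_radial / lam                     # one-way shift, v / lambda
    return tau, loss, doppler

tau, loss, doppler = one_way_path([1000, 100, 500], [-100, 300, 50],
                                  [-10, 3, 5], [-1000, 300, 550], fc=300e6)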
{"url":"https://de.mathworks.com/help/radar/ref/widebandtworaychannel.html","timestamp":"2024-11-09T00:39:48Z","content_type":"text/html","content_length":"116690","record_id":"<urn:uuid:97e5950e-9055-440d-b187-b4e80d012b5a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00308.warc.gz"}
Today’s price of Marriott (MAR) is $75. MAR does not pay dividends. Each year, the stock price of MAR can either go up or down. If the stock price goes up, the gross rate of return is u = 2. If the stock price goes down, the gross rate of return is d = 0.5. The c.c. risk-free interest rate is zero percent. You are interested in a European put option on MAR with a strike of $80 and a maturity of two years. Assume there is no arbitrage. What is the price of the put option?
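One way to get the number is the standard two-period binomial (risk-neutral) valuation; the short sketch below applies it to the figures in the question. It is an illustrative recomputation, not the originally posted answer, and the variable names are made up.

import math

S0, K, u, d, r, steps = 75.0, 80.0, 2.0, 0.5, 0.0, 2

R = math.exp(r)                  # gross risk-free return per period (1.0 here)
p = (R - d) / (u - d)            # risk-neutral up probability, 1/3 with these inputs

price = 0.0
for ups in range(steps + 1):
    S_T = S0 * (u ** ups) * (d ** (steps - ups))          # terminal stock price
    payoff = max(K - S_T, 0.0)                            # European put payoff
    prob = math.comb(steps, ups) * p ** ups * (1 - p) ** (steps - ups)
    price += prob * payoff
price /= R ** steps              # discount two periods back (a no-op when r = 0)

print(round(price, 2))           # about 29.44 with these inputs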
{"url":"https://justaaa.com/finance/88279-todays-price-of-marriot-mar-is-75-mar-does-not","timestamp":"2024-11-12T09:54:23Z","content_type":"text/html","content_length":"39912","record_id":"<urn:uuid:c073be51-86fe-48d8-9d1c-31cbfa5438ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00656.warc.gz"}
Use the pth power mean to prove an inequality
Further, if Thus, the requested inequality holds for any
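For context, the pth power mean referred to in the title is presumably the usual one: for positive real numbers $x_1, \ldots, x_n$, $M_p = \left( \frac{x_1^p + \cdots + x_n^p}{n} \right)^{1/p}$, and the standard fact exercises of this kind rely on is the power mean inequality $M_p \leq M_q$ for $p < q$, with equality exactly when all the $x_i$ are equal.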
{"url":"https://www.stumblingrobot.com/2015/07/10/use-the-pth-power-mean-to-prove-an-inequality/","timestamp":"2024-11-09T13:04:35Z","content_type":"text/html","content_length":"59131","record_id":"<urn:uuid:95aebbbd-0601-4631-b535-78af769d3248>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00269.warc.gz"}
Lesson 17 Volume and Density • Let’s use volume and density to solve problems. 17.1: A Kilogram by Any Other Name Which has more mass, a thousand kilograms of feathers or a thousand kilograms of steel? Explain your reasoning. 17.2: Light as a Feather The feathers in a pillow have a total mass of 59 grams. The pillow is in the shape of a rectangular prism measuring 51 cm by 66 cm by 7 cm. A steel anchor is shaped like a square pyramid. Each side of the base measures 20 cm, and its height is 28 cm. The anchor’s mass is 30 kg. 1. What’s the density of feathers in kilograms per cubic meter? 2. What’s the density of steel in kilograms per cubic meter? 3. What’s the volume of 1,000 kg of feathers in cubic meters? 4. What’s the volume of 1,000 kg of steel in cubic meters? Iridium is one of the densest metals. How many times heavier would a standard pencil be if it were made out of iridium instead of wood? 17.3: A Fishy Situation An aquarium manager drew a blueprint for a cylindrical fish tank. The tank has a vertical tube in the middle in which visitors can stand and view the fish. The best average density for the species of fish that will go in the tank is 16 fish per 100 gallons of water. This provides enough room for the fish to swim while making sure that there are plenty of fish for people to see. The aquarium has 275 fish available to put in the tank. Is this the right number of fish for the tank? If not, how many fish should be added or removed? Explain your reasoning. Imagine you have a baseball and an apple the size of a baseball. If we weigh each, we’ll likely find that even though they’re the same size, the baseball weighs more. A baseball has volume 200 cubic centimeters and weighs 145 grams, while an apple the same volume might weigh about 100 grams. We say that the baseball is more dense than the apple because it has more mass packed into each unit of volume. The density of the apple in this example is 0.5 grams per cubic centimeter, because \(\frac{100\text{ grams}}{200\text{ cm}^3} = 0.5\) grams per cubic centimeter. For the baseball, the density is \(\frac{145\text{ grams}}{200\text{ cm}^3} = 0.725\) grams per cubic centimeter. In general, to find the density of an object, divide its mass by its volume. • density The mass of a substance per unit volume.
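The arithmetic for the “Light as a Feather” activity can be checked directly from the given measurements (values rounded): the feathers have density \(\frac{59 \text{ g}}{51 \cdot 66 \cdot 7 \text{ cm}^3} \approx 0.0025\) grams per cubic centimeter, or about 2.5 kilograms per cubic meter, while the anchor has density \(\frac{30{,}000 \text{ g}}{\frac{1}{3} \cdot 20 \cdot 20 \cdot 28 \text{ cm}^3} \approx 8\) grams per cubic centimeter, or about 8,000 kilograms per cubic meter. So 1,000 kg of feathers occupies roughly \(1000 \div 2.5 = 400\) cubic meters, while 1,000 kg of steel occupies only about \(1000 \div 8000 = 0.125\) cubic meters.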
{"url":"https://curriculum.illustrativemathematics.org/HS/students/2/5/17/index.html","timestamp":"2024-11-02T21:24:10Z","content_type":"text/html","content_length":"83049","record_id":"<urn:uuid:b20b2471-e959-444f-b23a-b61a27d9ffa9>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00360.warc.gz"}
How do I analyze turbulent flow patterns? | SolidWorks Assignment Help How do I analyze turbulent flow patterns? On a typical turbulent flow generator (TFG) is not possible to say or the shape of the flow. The information is very sensitive to turbulent flow lines and patterns. In a TFG it is possible to find another way to analyze the flow. At least in the wave-triggers that can contribute in the direction of an eddy we can isolate both the turbulent flow pattern and the eddy modes. What are the characteristics of a turbulent inlet here? To be able to analyze which turbulent flows are inlets, we need to understand the transversal coherence of the flow or to measure its turbulence. I will show you four examples of transversal coherence in TFGs. In order to analyze the transversal coherence one could identify transversal mode (MT) or multiple modes. If there is any distinction between a single TFPG-transversal mode and the other inlet and exit it. Example 3. Transversal Mode in a TFG Here is the TFG from an ordinary TFG. If we try to measure the turbulent fluid inlet (upper left) transversal coherence is $C$ and also if we look in the transversal coherence of all the components of the transversal layer (referred to as P1-head in FIG ). The transversal field is represented by a line containing $C$ and its derivatives. If we take this equation for a TFG, we have the same coherence in the transversal coherence. Now to analyze the transversal coherence one can divide the transversal coherence into three main phases: periodicity, dynamics of turbulence and eddy. Periodicity of transition {#periodicity} ————————– The density of the inlet fluid is a crucial element of the tube structure. With this condition the fluid begins to flow with a high linewidth (large eddy bandwidths) and then with a gradual change from this low linewidth to higher eddies. The transversal coherency of the flow is again $C$ with respect to such initial pattern. The transversal turbulence is represented by one large eddy profile at $z^\sigma = 0$. Then $C$ corresponds to a streamwise (not necessarily transversal) accumulation as seen in FIG (B-C). The transversal coherence $\{C\}$ is then a transversal distribution with profile $C(z)$ and its derivatives moving with respect to $z^\a$ and the $z$ axis. Paid Homework Help Online Then we have also a transversal velocity profile and its derivatives moving with respect to $z^\z$. What are the effect of small eddy profiles on transversal and evanescent flows? Notice that the transversalHow do I analyze turbulent flow patterns? Are turbulent flow patterns actually convective (i.e. Gaussian)? And how can I design flux-distributed turbulence transitions in a turbulent flow pattern? Example: We can track flow profiles (shocks) you could try here a laser light, just like gas velocity and density can be measured with this kind of hardware. Once we know how exactly the turbulent water diffuses into a parcel of water the analysis can be accomplished. So we can combine current laser light with a beam we expect turbulent flow pattern. Creating a laser beam: We can create a laser beam with a laser guide fitted to the location and velocity of the laser, fitting through-scales to the direction of turbulent flow and as an ’in situ’ measurement of the turbulent flow pattern (i.e. the location, in other words, the “windward” velocity). Here is a shot at a 2d image of the image at 1 millisecond with a random velocity, using an input velocity of 932kms$^{-1}$ on 10 images taken every 30 minutes. 
The diameter of the line ofsight for this sample was chosen as 0.75”. Once we know actual velocity the time dimension of the line of sight can be measured by the turbulence pattern in the laser spot. This is a snapshot of the flow model, which is pretty similar to how a high intensity filament is a moving thing within a thin filament. Mathematically this produces an ’overplicative’ result, not a ’mean’ one. I note that in these examples turbulence pattern for particles is modeled through moving flow and does not represent a ‘flow’. This is a simplification in a certain form (overplicative-mean turbulence pattern in our examples). Turbulence pattern in turbulent water? Here’s a picture of a typical spatio-temporal example flow pattern over a turbulent water flow (see Fig. 4). Images: Figure 4. Pay Someone To Take Test For Me In Person There was a small stream, much to the east-west, emerging from the river. The source of the stream was a high intensity blobs of dissolved water and a high-voltage filament. Part of this stream would initially be mostly stationary while the rest would remain stationary. The source temperature was the largest “in situ” melting point of all the water (i.e. the boiling point). The most significant of these was attributed to this stream. The solid layer was at the solid value and formed relatively slowly while the liquid and brine were essentially immobile. The brine “outflow” was composed of a short segment of water with a low melting point, and at this time non-blendingly formed a small fraction consisting largely of water with thick branches. But now we know from the figure that these blobs were not formed by meltingHow do I analyze turbulent flow patterns? J.G. Maude and J.W. Varshavin, “An Inequalities Approach to Flow Analysis and Related Questions,” PhD diss., Stanford University, July 2016 3.3D Model Example, D. C. Kim and J. R. Chisholm, “Two-dimensional Fourier Transform for Ordinary Differential Equations,” IEEE Trans. Pay Someone Do My Homework Flow, vol. 53, no. 9, September 2004, pp. 150-169. Fundamentally, ordinary differential equations are equivalent to ordinary Fourier transformation and are said to be of the Laplace–Beltrami type, whereas the Fourier transformation is known as integral equations. As a generalization, there is a known functionals approach for setting the representation of the integral of a 2-dimensional Euler number and, hence, an integral representation of the integral is known. One can also construct examples for general regularity sets and the Fourier transform of an ordinary differential equation in such a way it includes the Laplace–Beltrami functionals part. A generalization of the Laplace–Beltrami functionals part can also be constructed as well as some other commonly used functionals which are known as the Laplace regularity and Laplace fractional regularity. However, most of the known regularity and fractional regularity functions generally are of the Laplace type, which uses the Laplace transformation which consists of the differentiation of two types. In other words, more complex solutions to the Laplace and fractional systems can be obtained by explicitly considering complex equations, such as Hurwitz equations and Laplace-Beltrami system. Therefore, a system of ordinary differential equations taking the Laplace transform is known to have very complicated and quite non-uniform structure. 
On the contrary, another general approach for constructing regularity problems based on Laplace transforms is known as logarithmic functions and has very compact behavior about its critical point. 6.2 Differential Equations. In order to look for a function with multiple coefficients that satisfy certain integral equations regarding equation 3.3, one can basically have a different approach which utilizes two different methods. One is the 2D direct integral formalism. Apart from ordinary differential equations involving such equations as an example of the same-dimensional functional dependence, he can also be used for specific 2D partial differential equations or partial differential equations which are equivalent or similar to 2D differential equations. 6.3 Differential equations with the Laplace transform of arbitrary functions at infinity (3. Pay Someone To Fill Out 2.1..4.3) The usual 2D integral methods for the Laplace–Beltrami type in more details apply in the case of 1D differential equations. Such a two-dimensional integral equations can be recast into the following Laplace transforms [@1d]: [t12]{} = \
{"url":"https://solidworksaid.com/how-do-i-analyze-turbulent-flow-patterns-18496","timestamp":"2024-11-06T04:37:08Z","content_type":"text/html","content_length":"156324","record_id":"<urn:uuid:e85d7ed6-3b7c-45e1-bf6e-fb6a15e44bec>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00814.warc.gz"}
Neural Network L All NNabla functions are derived from the nnabla.function.Function class. class nnabla.function.Function Function interface class. Instances of nnabla.function.Function are not directly created by users. It is indirectly created by the functions available in nnabla.functions. These functions return nnabla.Variable (s) holding the created function instance as the parent property. Get args of the function. auto_grad_depends_input_data(self, int i, int j) auto_grad_depends_output_data(self, int i, int o) backward(self, inputs, outputs, accum=None) forward(self, inputs, outputs) grad_depends_input_data(self, int i, int j) grad_depends_output_data(self, int i, int o) inplace_data(self, int i) inplace_data_with(self, int i) need_setup_recompute(self, int o) recompute(self, inputs, outputs) set_active_input_mask(self, mask) setup(self, inputs, outputs) setup_recompute(self, inputs, outputs) Get tags of the function. class nnabla.function.PythonFunction(ctx=None) Creates a user-defined custom function in the subclsass. To implement the naive multiplicaiton function of two variables using PythonFunction, import nnabla as nn import nnabla.functions as F from nnabla.function import PythonFunction class Mul2(PythonFunction): def __init__(self, ctx): super(Mul2, self).__init__(ctx) def name(self): return self.__class__.__name__ def min_outputs(self): return 1 def setup_impl(self, inputs, outputs): i0 = inputs[0] i1 = inputs[1] assert i0.shape == i1.shape, "Shapes of inputs are different." o0 = outputs[0] o0.reset_shape(i0.shape, True) def forward_impl(self, inputs, outputs): x0 = inputs[0].data x1 = inputs[1].data y = outputs[0].data # We can also write like, y.copy_from(x0 * x1) y.copy_from(F.mul2(x0, x1)) def backward_impl(self, inputs, outputs, propagate_down, accum): # Data of inputs and outputs x0 = inputs[0].data x1 = inputs[1].data y = outputs[0].data # Grads of inputs and outputs dx0 = inputs[0].grad dx1 = inputs[1].grad dy = outputs[0].grad # backward w.r.t. x0 if propagate_down[0]: if accum[0]: dx0 += F.mul2(dy, x1) dx0.copy_from(F.mul2(dy, x1)) # backward w.r.t. x1 if propagate_down[1]: if accum[1]: dx1 += F.mul2(dy, x0) dx1.copy_from(F.mul2(dy, x0)) def grad_depends_output_data(self, i, o): return False def grad_depends_input_data(self, i, j): return True def mul2(x, y, ctx=None): func = Mul2(ctx) return func(x, y) __init__(self, ctx=None) ctx (nnabla.Context) – Context used for the forward and backward pass. If not specified, the current context is used. backward_impl(self, inputs, outputs, propagate_down, accum) Backward method. property ctx Context Return the context if the context is set in the constructor; otherwise return the global context forward_impl(self, inputs, outputs) Forward method. grad_depends_input_data(self, i, j) Checking if i-th input’ gradient computation requires j-th input’s data or not. grad_depends_output_data(self, i, o) Checking if i-th input’ gradient computation requires o-th output’s data or not. Minimum number of outputs of the function. property name Name of the function. setup_impl(self, inputs, outputs) Setup method. List of Functions The nnabla.functions module provides various types of functions listed below. These functions takes input nnabla.Variable (s) as its leading argument(s), followed by options specific to each The functions can also take NdArray (s) as inputs instead of Variable (s). It will execute the function operation immediately, and returns NdArray (s) as output(s) holding output values of the operation. 
We call this “Imperative Mode” (NdArray + Functions). Neural Network Activation nnabla.functions.constant(val=0, shape=[], n_outputs=-1, outputs=None)[source] Generate a constant-valued array. ☆ val (float) – Constant value. [default= 0 ] ☆ shape (tuple of int) – Shape of the output array. [default= [] ] N-D array where all values are the specified constant. Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.arange(start, stop, step=1, n_outputs=-1, outputs=None)[source] Generate a range of values within the half-open interval [start, stop) (the interval including start but excluding stop) with step increments. ☆ start (float) – Start value. ☆ stop (float) – End value. ☆ step (float) – Step value. [default= 1 ] 1-D array with the generated values. Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.abs(x, n_outputs=-1, outputs=None)[source] Element-wise absolute value function. \[y_i = |x_i|\] x (Variable) – Input variable Element-wise absolute variable Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.exp(x, n_outputs=-1, outputs=None)[source] Element-wise natural exponential function. \[y_i = \exp(x_i).\] x (Variable) – Input variable Element-wise exp variable Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.log(x, n_outputs=-1, outputs=None)[source] Element-wise natural logarithm function. \[y_i = \ln(x_i).\] x (Variable) – Input variable Element-wise log variable Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.round(x, n_outputs=-1, outputs=None)[source] Element-wise round function. In the forward pass, this function simply computes round to the nearest integer value. \[y_i = round(x_i).\] In the backward pass, the simple Straight-Through Estimator (STE) is applied, \[\frac{\partial y_i}{\partial x_i} = 1.\] x (Variable) – Input variable N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.ceil(x, n_outputs=-1, outputs=None)[source] Element-wise ceil function. 
In the forward pass, this function simply returns the smallest integer which is not less than the input. \[y_i = ceil(x_i).\] In the backward pass, the simple Straight-Through Estimator (STE) is applied, \[\frac{\partial y_i}{\partial x_i} = 1.\] x (Variable) – Input variable N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.floor(x, n_outputs=-1, outputs=None)[source] Element-wise floor function. In the forward pass, this function simply returns the largest integer which is not greater than the input. \[y_i = floor(x_i).\] In the backward pass, the simple Straight-Through Estimator (STE) is applied, \[\frac{\partial y_i}{\partial x_i} = 1.\] x (Variable) – Input variable N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.identity(x, n_outputs=-1, outputs=None)[source] Identity function. \[y = x\] x (Variable) – N-D array. N-D array Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.matrix_diag(x, n_outputs=-1, outputs=None)[source] Returns an array where the last two dimensions consist of the diagonal matrix. x (Variable) – N-D array with shape (\(M_0 \times \ldots \times M_N\)). N-D array with shape (\(M_0 \times \ldots \times M_N \times M_N\)). Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.matrix_diag_part(x, n_outputs=-1, outputs=None)[source] Returns an array in which the values of the last dimension consist of the diagonal elements of the last two dimensions of an input array. x (Variable) – N-D array with shape (\(M_0 \times \ldots \times M_N \times M_N\)). N-D array with shape (\(M_0 \times \ldots \times M_N\)). Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.batch_matmul(a, b, transpose_a=False, transpose_b=False, n_outputs=-1, outputs=None)[source] Batch matrix multiplication. Two of batchs of matrices are multiplied for each sample in a batch. A batch of matrices is composed as […, P, Q] where the last two dimensions compose matrix dimensions, and the first dimensions up to the third last dimension are considered as batch samples. These batch dimensions are internally broadcasted when the size of a dimension is 1. 
import nnabla as nn import nnabla.functions as F import numpy as np # Same batch size a = nn.Variable.from_numpy_array(np.random.rand(2, 2, 3, 4)) b = nn.Variable.from_numpy_array(np.random.rand(2, 2, 4, 3)) c = F.batch_matmul(a, b) # Different batch size with the broadcast a = nn.Variable.from_numpy_array(np.random.rand(2, 1, 3, 4)) b = nn.Variable.from_numpy_array(np.random.rand(1, 3, 4, 3)) c = F.batch_matmul(a, b) Since the version 1.13, the behavior of the batch dimensions changed, it supported the internal broadcast when the size of a dimension is 1. Accordingly, this function does not supports different batch dimensions between two inputs even if the total sample size for each input is same. Output of sample-wise matrix multiplication in a batch. When a is of a shape of [N, P, Q], b is of a shape of [N, Q, R], and transpose options are all False, the output will be a shape of [N, P, R]. Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.sin(x, n_outputs=-1, outputs=None)[source] Element-wise sine (sin) function. \[y_i = \sin (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.cos(x, n_outputs=-1, outputs=None)[source] Element-wise cosine (cos) function. \[y_i = \cos (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.tan(x, n_outputs=-1, outputs=None)[source] Element-wise tangent (tan) function. \[y_i = \tan (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.sinh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic sine (sinh) function. \[y_i = \sinh (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.cosh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic cosine (cosh) function. \[y_i = \cosh (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. 
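As an aside, here is a small sanity check (illustrative only, not part of the reference) of the element-wise functions above, run in the imperative NdArray mode described earlier: sin²x + cos²x should equal 1 everywhere.

import numpy as np
import nnabla as nn
import nnabla.functions as F

x = nn.NdArray.from_numpy_array(np.linspace(-3.0, 3.0, 7))

s = F.sin(x)
c = F.cos(x)
# Element-wise functions preserve the input shape, so this is also shape (7,).
one = F.add2(F.mul2(s, s), F.mul2(c, c))
print(np.allclose(one.data, 1.0))   # True, up to floating-point rounding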
nnabla.functions.tanh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic tangent (tanh) function. \[y_i = \tanh (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.asin(x, n_outputs=-1, outputs=None)[source] Element-wise arcsine (asin) function. \[y_i = \arcsin (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.acos(x, n_outputs=-1, outputs=None)[source] Element-wise arccosine (acos) function. \[y_i = \arccos (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.atan(x, n_outputs=-1, outputs=None)[source] Element-wise arctangent (atan) function. \[y_i = \arctan (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.atan2(x0, x1, n_outputs=-1, outputs=None)[source] Element-wise arctangent (atan) function with 2 input variables. \[y_i = \arctan2 (x_{i1}, x_{i2})\] N-D array with the same shape as input variables Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.asinh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic arcsine (asinh) function. \[y_i = \text{arcsinh} (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.acosh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic arccosine (acosh) function. \[y_i = \text{arccosh} (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.atanh(x, n_outputs=-1, outputs=None)[source] Element-wise hyperbolic arctangent (atanh) function. 
\[y_i = \text{arctanh} (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.cumsum(x, axis=None, exclusive=False, reverse=False, n_outputs=-1, outputs=None)[source] Cumulative sum along a given axis. N-D array Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.cumprod(x, axis=None, exclusive=False, reverse=False, n_outputs=-1, outputs=None)[source] Cumulative product along a given axis. Backward computation is not accurate in a zero value input. N-D array Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.batch_inv(x, n_outputs=-1, outputs=None)[source] Returns an array of inverted matrix x (Variable) – batched N-D array batched N-D array of inverted matrix Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.batch_det(x, n_outputs=-1, outputs=None)[source] Batch-wise determinant function. \[Y_b = \det(X_b),\] where \(X_b\) and \(Y_b\) are the \(b\)-th input and output, respectively. x (Variable) – batched N-D array batched N-D array of determinant Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.batch_logdet(x, n_outputs=-1, outputs=None)[source] Batch-wise log absolute determinant function. \[Y_b = \log(|\det(X_b)|),\] where \(X_b\) and \(Y_b\) are the \(b\)-th input and output, respectively. x (Variable) – batched N-D array batched N-D array of log absolute determinant Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.batch_cholesky(x, upper=False, n_outputs=-1, outputs=None)[source] Batch-wise cholesky decomposition of symmetric positive definite matrix. The gradient of this function will be a symmetric matrix. This function does not check whether given matrix is symmetric positive define matrix or not. ☆ x (Variable) – batched N-D array ☆ upper (bool) – If true, will return an upper triangular matrix. Otherwise will return a lower triangular matrix. [default= False ] batched N-D array of lower/upper triangular matrix. 
Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.erf(x, n_outputs=-1, outputs=None)[source] Element-wise Error function. \[y_i = \text{erf} (x_i)\] x (Variable) – N-D array N-D array with the same shape as x Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. Geometric Neural Network Layers nnabla.functions.affine_grid(theta, size, align_corners=False, n_outputs=-1, outputs=None)[source] Generate the source grid based on the normalized target grid with size. The target grid is first normalized in [-1, 1], then tranformed by the affine transformation \(\theta\) to generate the source grid. 2D and 3D grid are supported now. This function is normally used with the warp_by_grid function for constructing the spatial transformer. ☆ theta (Variable) – N-D array with the shape (\(B \times 2 \times 3\)), the sample-wise affine transformation matrix. ☆ size (repeated int64) – The grid size of (\(H \times W\)) for 2D and (\(D \times H \times W\)) for 3D. ☆ align_corners (bool) – If True, the top-left and bottom-right pixels correspond to (-1, -1) and (1, 1) respectively since a pixel is located on the corner of a grid, and the target grid is normalized in [-1, 1]. If False, the normalized target grid in [-1, 1] is scaled by size - 1 / size according to the respective spatial size (e.g., \(H\) and \(W\)) before the transformation since a pixel is located on a center of a cell in a grid. [default= False ] N-D array with the shape (\(B \times H \times W \times 2\)) for 2D and (\(B \times D \times H \times W \times 3\)) for 3D. The last dimension of 2 is for (x, y) and of 3 for (x, y, z). The gird is used as the source grid for the warping. Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.warp_by_grid(x, grid, mode='linear', padding_mode='zero', align_corners=False, channel_last=False, n_outputs=-1, outputs=None)[source] Warp the input data by the grid. This function is normally used with the generated normalized grid by the affine_grid function for constructing the spatial transformer. ☆ x (Variable) – Input data to be warped with the shape (\(B \times C \times H_{in} \times W_{in}\)) for 2D and (\(B \times C \times D_{in} \times H_{in} \times W_{in}\)) for 3D. ☆ grid (Variable) – Grid warping the input data with the shape (\(B \times H_{out} \times W_{out} \times 2\)) for 2D and (\(B \times D_{out} \times H_{out} \times W_{out} \times 3\)) for 3D. The last dimension of 2 is for (x, y) or 3 for (x, y, z). ☆ mode (string) – Interpolation mode, linear or nearest. [default= 'linear' ] ☆ padding_mode (string) – Padding mode when the grid value is outside [-1, 1]. If this is “zero”, 0 is used for padding. “reflect” uses the values reflected at the ends of the original input data like the mirror. “repeat” used the values at the ends of the original input data. 
[default= 'zero' ] ☆ align_corners (bool) – The target grid normalized in [-1, 1] is scaled by size - 1 / size according to the respective spatial size (e.g., \(H\) and \(W\)) before the transformation if this is False. If this is True, the top-left and bottom-right pixels correspond to (-1, -1) and (1, 1) respectively. [default= False ] ☆ channel_last (bool) – If True, the last dimension is considered as channel dimension, a.k.a NHWC order. [default= False ] Output data warped by the grid. Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. nnabla.functions.warp_by_flow(data, flow, n_outputs=-1, outputs=None)[source] Transform the image(s) data by flow field(s) of offset vectors such that each output pixel corresponds to the input image pixel at the relative offset location given by horizontal and vertical flow values (in other words, the flow field describes the coordinate displacements for each output pixel to the corresponding input pixel). Both data and flow are 4-D variables (in “NCHW” layout) with identical shape except the flow channel dimension (which is always 2). \[output_{n,c,y,x} = data_{n,c,y',x'},\] \[\begin{split}y' &=& y + flow_{n,1,y,x}, \\ x' &=& x + flow_{n,0,y,x}.\end{split}\] The output pixel values at \(y'\) and \(x'\) locations are obtained by bilinear interpolating between the 4 closest pixels of the input image. Pixel values outside of the input image are implicitly padded with the value of the closest boundary pixel. ☆ data (Variable) – Input image data with shape (N, Channels, Height, Width). ☆ flow (Variable) – Flow field vectors with shape (N, 2, Height, Width). Transformed image data with shape (N, Channels, Height, Width). Return type: All nnabla functions in nnabla.functions are decorated with the nnabla.function_bases.function_api decorator, which queries the current context and passes it into the first argument of the original function. The original function always takes a context as the first argument. Quantized Neural Network Layers
{"url":"https://nnabla.readthedocs.io/en/latest/python/api/function.html","timestamp":"2024-11-10T12:20:48Z","content_type":"text/html","content_length":"1049979","record_id":"<urn:uuid:ba447aec-a9fe-42c5-a9e4-39131ff9a5da>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00276.warc.gz"}
From Scholarpedia
Wilfried Buchmüller (2014), Scholarpedia, 9(3):11471. doi:10.4249/scholarpedia.11471 revision #144189

Leptogenesis relates the matter-antimatter asymmetry of the universe to neutrino properties. Decays of heavy Majorana neutrinos generate a lepton asymmetry which is partly converted to a baryon asymmetry via sphaleron processes. Consistency with the observed baryon asymmetry implies lower and upper bounds on the light neutrino masses. During the past decade a variety of models have been developed, which make use of the leptogenesis idea to explain the cosmological baryon asymmetry. Moreover, significant progress has been made towards a full quantum mechanical description of leptogenesis.

Matter-antimatter asymmetry

One of the mysteries of our universe is the observed density of baryons, i.e., protons and neutrons. Since there is good evidence that the universe is mostly made up of matter and essentially no antimatter, the baryon density corresponds to the cosmological matter-antimatter asymmetry. It can be inferred from the ratio of the number density of baryons to photons in the universe, which has been determined most precisely by the measurement of the angular distribution of the temperature fluctuations of the microwave background radiation (Komatsu, 2010fb):
\[\tag{1}\eta_B=\frac{n_B-n_{\bar{B}}}{n_{\gamma}} \simeq \frac{n_B}{n_{\gamma}} = (6.19 \pm 0.15) \times 10^{-10}.\]
Why does this ratio have this value? Starting from a matter-antimatter symmetric initial state at high temperatures, as suggested by an inflationary phase of the very early universe, one would expect a much smaller baryon asymmetry. An attractive explanation of the observed value for the ratio \(\eta_B\) is the leptogenesis mechanism proposed by Fukugita and Yanagida (Fukugita, 1986hr), which relates the baryon asymmetry to properties of neutrinos, the lightest elementary particles. The starting point is the set of general conditions for baryogenesis, which particle interactions and the cosmological evolution have to satisfy, as pointed out by Sakharov already in 1967 (Sakharov, 1967dj):
• baryon number violating interactions,
• \(C\) and \(C\!P\) violation,
• deviation from thermal equilibrium,
where \(C\) and \(P\) denote the discrete transformations charge conjugation and spatial reflection, respectively. A further crucial ingredient of baryogenesis is the connection between baryon number \(B\) and lepton number \(L\) in the high-temperature, symmetric phase of the Standard Model of particle physics. Due to the chiral nature of the weak interactions \(B\) and \(L\) are not conserved (tHooft, 1976up). At zero temperature this has no observable effect due to the smallness of the weak coupling. However, as the temperature reaches the critical temperature \(T_c\) of the electroweak phase transition, \(B\) and \(L\) violating processes come into thermal equilibrium (Kuzmin, 1985mm). The rate of these processes is related to the free energy of sphaleron-type field configurations which carry topological charge. In the Standard Model they lead to an effective interaction of all left-handed fermions (tHooft, 1976up) (cf. Figure 1), \[\tag{2} O_{B+L} = \prod_i \left(q_{Li} q_{Li} q_{Li} l_{Li}\right)\; ,\] which violates baryon and lepton number by three units, \[\Delta B = \Delta L = 3\;.
\tag{3}\] The interaction (2) leads to processes such as \[\tag{4}u^c + d^c + c^c \rightarrow d + 2 s + 2 b +t + \nu_e + \nu_\mu + \nu_\tau\;.\] The sphaleron transition rate in the symmetric high-temperature phase has been evaluated by combining analytical resummation with numerical lattice techniques. The result is, in accord with early estimates, that \(B\) and \(L\) violating processes are in thermal equilibrium for temperatures in the range \[\tag{5}T_{\rm EW} \sim 100\ \mbox{GeV} < T < T_{\rm SPH} \sim 10^{12}\ \mbox{GeV}\;.\] Sphaleron processes have a profound effect on the generation of the cosmological baryon asymmetry in the hot early universe. In a weakly coupled plasma, one can assign a chemical potential \(\mu\) to all fermionic and bosonic fields with the same gauge interactions. In the Standard Model, with one Higgs doublet \(H\) and three quark-lepton generations one then has \(16\) chemical potentials, for the Higgs doublet, the left-handed quark and lepton doublets \(q_i\) and \(\ell_i\), and the right-handed quark and lepton singlets \(u_i\), \(d_i\), and \(e_i\)\((i=1,\ldots,3)\). For a non-interacting gas the chemical potentials determine the asymmetries in the particle and antiparticle number densities. For massless particles the asymmetries at temperature \(T\) are given by \[\tag{6} n_i-\overline{n}_i={g T^3\over 6} \left\{\begin{array}{rl}\beta\mu_i +{\cal O}\left(\left(\beta\mu_i\right)^3\right)\ , &\mbox{fermions}\ ,\\ 2\beta\mu_i+{\cal O}\left(\left(\beta\mu_i\ right)^3\right)\;, &\mbox{bosons}\ , \end{array}\right.\] where \(\beta = 1/T\). In the case of leptogenesis all chemical potentials are very small, i.e., \(\beta \mu_i \ll 1\). Quarks, leptons and Higgs bosons interact via Yukawa and gauge couplings and, in addition, via the nonperturbative sphaleron processes. In thermal equilibrium all these processes yield constraints between the various chemical potentials. The effective interaction implies the relation \[\tag{7} \sum_i\left(3\mu_{qi} + \mu_{li}\right) = 0\;.\] The Yukawa interactions, supplemented by gauge interactions, yield relations between the chemical potentials of left-handed and right-handed fermions, which hold if the corresponding interactions are in thermal equilibrium. In the temperature range \(100\ \mbox{GeV} < T < 10^{12}\ \mbox{GeV}\), which is of interest for baryogenesis, this is the case for gauge interactions. On the other hand, Yukawa interactions are in equilibrium only in a more restricted temperature range that depends on the strength of the Yukawa couplings. In the simplest version of leptogenesis discussed below this complication is ignored, although this is not justified generically (see below). Using Eq. (6), the baryon number density \(n_B \equiv g B T^2/6\) and the lepton number densities \(n_{L_i} \equiv L_i gT^2/6\) can be expressed in terms of the chemical potentials: \[\begin{aligned}\tag{8} B &= \sum_i \left(2\mu_{qi} + \mu_{ui} + \mu_{di}\right)\;, \\ \end{aligned}\] \[\begin{aligned}\tag{9} L_i &= 2\mu_{li} + \mu_{ei}\;,\quad L=\sum_i L_i\;.\end{aligned}\] For massless neutrinos the asymmetries \(L_i-B/3\) are conserved. Using the relations implied by the Yukawa interactions one obtains \(\mu_{li} \equiv \mu_l\), \(\mu_{qi} \equiv \mu_q\), etc. 
Demanding furthermore that the vacuum carries no hypercharge, one can express the total baryon and lepton asymmetries in terms of a single chemical potential, for instance \(\mu_l\), \[\tag{10}B = -4\mu_l\;, \quad L = {51\over 7}\mu_l\;.\] This yields an important connection between the \(B\), \(B-L\) and \(L\) asymmetries, \[\tag{11} B = c_s (B-L)\ , \quad L= (c_s - 1)(B- L) \ ,\] where \(c_s = 28/79\). These relations hold in the high-temperature symmetric phase of the Standard Model. The relations (11) between the quantities \(B\), \(B\)\(-\)\(L\) and \(L\) suggest that violation of \(B\)\(-\)\(L\) is needed in order to generate a baryon asymmetry. Because the \(B\)\(-\)\(L\) current is conserved, the value of \(B\)\(-\)\(L\) at time \(t_f\), where the leptogenesis process is completed, determines the value of the baryon asymmetry today, \[\tag{12}B(t_0)\ =\ c_s (B-L)(t_f)\;.\] On the other hand, during the leptogenesis process the strength of interactions that violate \(B\)\(-\)\(L\), and therefore \(L\), can only be weak. Otherwise, because of Eq. (11), they would wash out any baryon asymmetry. As explained below, the interplay between these conflicting conditions leads to important constraints on the properties of neutrinos. Lepton-number violation and neutrino masses Lepton number violation is most easily realized by adding right-handed neutrinos to the Standard Model, which allow for an elegant explanation of the smallness of the light neutrino masses via the seesaw mechanism. The most general Lagrangian for couplings and masses of charged leptons and neutrinos is given by \[\begin{aligned} \tag{13} \mathcal{L} =& {\bar l}_{Li} i \partial\llap{/}l_{Li} + {\bar e}_{Ri} i \partial\llap{/}e_{Ri} + {\bar \nu}_{Ri} i \partial\llap{/}\nu_{Ri}\\ & +\; f_{ij}{\bar e}_{Ri}l_ {Lj}H^{\dagger} +h_{ij}{\bar \nu}_{Ri}l_{Lj}H - \frac{1}{2}M_{i}\nu_{Ri}\nu_{Ri} + {\rm h.c.}\ .\end{aligned}\] The right-handed neutrinos have no Standard Model gauge interactions and can therefore have Majorana masses that violate lepton number. The vacuum expectation value of the Higgs field, \(\langle H\ rangle=v_F\), generates Dirac masses \(m_e\) and \(m_D\) for charged leptons and neutrinos, \(m_e=f v_F\) and \(m_D=hv_F\). Since the Majorana masses are not controlled by electroweak symmetry breaking, they can be much larger than the Dirac neutrino masses, \(M \gg m_D\). The mass matrices \(m_D\) and \(M\) contain altogether 6 physical \(C\!P\) phases, which lead to \(C\!P\) violating decays and scatterings. Diagonalizing the \(6\times 6\) neutrino mass matrix one obtains three heavy and three light neutrino mass eigenstates, \[\tag{14}N\simeq \nu_R+\nu_R^c\quad,\qquad \nu\simeq V_{\nu}^T\nu_L+\nu_L^c V_{\nu}^*\, ,\] with masses \[m_N\simeq M\, \quad,\quad m_{\nu}\simeq- V_{\nu}^Tm_D^T{1\over M}m_D V_{\nu}\, . \tag{15}\] In a basis where the charged lepton mass matrix \(m_e\) is diagonal, \(V_{\nu}\) is the mixing matrix in the leptonic charged current. As an example, consider a hierarchical Dirac neutrino mass matrix, as in grand unified models (Langacker, 2012), with a third generation Yukawa coupling \({\cal O}(1)\), as it is the case for the top-quark. 
Identifying the heaviest Majorana mass with the mass scale of grand unification, one obtains the heavy and light neutrino masses \[\tag{16}M_3 \sim \Lambda_{\rm GUT} \sim 10^{15}\ {\rm GeV}\ , \quad m_3 \sim \frac{v^2}{M_3} \sim\ 0.01\ {\rm eV}\;.\] It is very remarkable that the light neutrino mass \(m_3\) is of the same order as the mass differences \((\Delta m^2_{sol})^{1/2}\) and \((\Delta m^2_{atm})^{1/2}\) inferred from neutrino oscillations. This suggests that, via the seesaw mechanism, neutrino masses indeed probe the grand unification scale. The difference between the charged current mixing matrices of quarks and leptons is a puzzle that can be explained in grand unified models. Like the quarks and charged leptons, the right-handed neutrinos may also have hierarchical masses. For instance, if their masses scale like the up-quark masses one has \(M_1 \sim 10^{-5} M_3 \sim 10^{10}\) GeV. The heavy neutrino \(N_1\) is of particular importance for leptogenesis. Its tree-level decay width into lepton-Higgs pairs reads \[\Gamma_{D1}=\Gamma\left(N_1\to H + l_L\right) +\Gamma\left(N_1\to H^{\dagger} + l_L^{\dagger}\right) ={1\over8\pi}(h h^\dagger)_{11} M_1\;. \tag{17}\] The interference between tree-level amplitude and one-loop vertex and self-energy corrections (see Figure 2) leads to a \(C\!P\) asymmetry in the decays. For hierarchical heavy neutrinos, \(M_1 \ll M_2,M_3\), one finds (Covi, 1996wh): \[\begin{aligned} \varepsilon_1 &=\frac{\Gamma\left(N_1\to H + l_L\right) -\Gamma\left(N_1\to H^{\dagger} + l_L^{\dagger}\right)} {\Gamma\left(N_1\to H + l_L\right) +\Gamma\left(N_1\to H^{\dagger} + l_L^{\dagger}\right)} \nonumber\\ &\simeq {3\over16\pi}\;{1\over\left(h h^\dagger\right)_{11}} \sum_{i=2,3}\mbox{Im}\left[\left(h h^\dagger\right)_{i1}^2\right] {M_1\over M_i}\; . \tag{18}\end{aligned}\] The \(C\!P\) asymmetry is conveniently expressed in terms of the light neutrino mass matrix \(m_{\nu}\), \[\varepsilon_1 \ \simeq\ - {3\over 16\pi} {M_1\over (h h^\dagger)_{11} v_F^2} \mbox{Im}\left(h^* m_\nu h^\dagger\right)_{11}\;, \tag{19}\] and satisfies the upper bound (Davidson, 2002qv; Hamaguchi, 2002b) \[|\varepsilon_1| \ \leq\ \frac{3}{16\pi} \frac{M_1m_3}{v_F^2}\;. \tag{20}\] For \(C\!P\) phases \(\mathcal{O}(1)\), values of \(\varepsilon_1\) close to the upper bound are typical in models of neutrino masses. Using the seesaw relation for light and heavy third generation neutrinos one then has \(\varepsilon_1\sim\ 0.1\ M_1/M_3\). Quark and charged lepton mass hierarchies vary from \(10^{-4}\) to \(10^{-5}\). A corresponding mass hierarchy for right-handed neutrinos implies a \(C\!P\) asymmetry \(\varepsilon_1 \sim 10^{-5} \ldots 10^{-6}\).

Thermal leptogenesis

The lightest of the heavy Majorana neutrinos, \(N_1\), is ideally suited to generate the cosmological baryon asymmetry. Since it has no Standard Model gauge interactions it can easily propagate out of equilibrium in a thermal bath of quarks, leptons and Higgs particles, thereby satisfying Sakharov's third condition. The Lagrangian (13) also violates lepton number and the discrete symmetries \(C\) and \(C\!P\), so that all Sakharov conditions are fulfilled for the generation of a lepton asymmetry. Sphaleron processes then partially convert the lepton asymmetry into a baryon asymmetry. At high temperatures, \(T>M_1\), the heavy Majorana neutrinos can efficiently erase any pre-existing lepton asymmetry.
Once the temperature of the universe drops below the mass \(M_1\), the heavy neutrinos are not able to follow the rapid change of the equilibrium distribution. Hence, the necessary deviation from thermal equilibrium ensues as a result of having a too large number density of heavy neutrinos, compared to the equilibrium density. Eventually, the heavy neutrinos decay, and a \(B\)\(-\)\(L\) asymmetry is generated owing to the presence of \(C\!P\)-violating processes. The resulting baryon asymmetry is \[\begin{aligned} \tag{21} \eta_B &= \frac{n_B - n_{\bar{B}}}{n_\gamma} \simeq -c_s\frac{n_L - n_{\bar{L}}}{n_\gamma} \equiv -\frac{c_s}{f}N_{B-L} \\ &= -\frac{3}{4}\frac{c_s}{f} \varepsilon_1 \kappa_f \simeq 10^{-2} \varepsilon_1 \kappa_f \;.\end{aligned}\] Here \(c_s\) is the fraction of \(B\)\(-\)\(L\) asymmetry converted into a baryon asymmetry by sphaleron processes, and \(f\) is a dilution factor accounting for the increase of the photon number density between the onset of leptogenesis and recombination. \(N_{B-L}\) is the amount of \(B\)\(-\)\(L\) asymmetry in a comoving volume element that contains one photon at the time of leptogenesis. The efficiency factor \(\kappa_f\) represents the effect of washout processes in the plasma and is obtained by solving the Boltzmann equations. Typical values are \(\kappa_f = 10^{-1} \ldots 10^{-2}\). Together with a \(C\!P\) asymmetry \(\varepsilon_1 = 10^{-5} \ldots 10^{-6}\) one then obtains for the baryon asymmetry \(\eta_B = 10^{-8} \ldots 10^{-10}\), consistent with observation. It is remarkable that a heavy neutrino mass hierarchy comparable to the one of quarks and charged leptons, which leads to a small \(C\!P\) asymmetry, together with the kinematical factors \(f\) and \(\kappa_f\) can explain the observed matter-antimatter asymmetry! Leptogenesis is a nonequilibrium process which takes place at temperatures \(T \lesssim M_1\). For a decay width small compared to the Hubble parameter, \(\Gamma_1(T) < H(T)\), heavy neutrinos are out of thermal equilibrium; otherwise, they are in thermal equilibrium. The borderline between the two regimes is given by \(\Gamma_1 = H|_{T=M_1}\), which is equivalent to the condition that the effective neutrino mass \[\begin{aligned} \tag{22} \mt = \frac{(m_D m_D^\dagger)_{11}}{M_1} \end{aligned}\] is equal to the equilibrium neutrino mass \[\begin{aligned} \tag{23} m_* = \frac{16\pi^{5/2}}{3\sqrt{5}} g_*^{1/2} \frac{v_F^2}{M_{\rm P}} \simeq 10^{-3}~\mbox{eV}\;,\end{aligned}\] where \(H(T) = \sqrt{8\pi^3 g_*/90}\,T^2/M_{\rm P}\) is the Hubble parameter, \(g_*=g_{\rm SM}=106.75\) is the total number of degrees of freedom and \(M_{\rm P}=1.22\times10^{19}\,{\rm GeV}\) is the Planck mass. It is intriguing that the equilibrium neutrino mass \(m_*\) is close to the neutrino masses suggested by solar and atmospheric neutrino oscillations, \(\sqrt{\Delta m^2_{\rm sol}} \simeq 8\times 10^{-3}\) eV and \(\sqrt{\Delta m^2_{\rm atm}} \simeq 5\times 10^{-2}\) eV. This encourages one to think that it may be possible to understand the cosmological baryon asymmetry via leptogenesis as a process close to thermal equilibrium. A necessary condition is that \(\Delta L=1\) and \(\Delta L=2\) lepton number violating processes are strong enough at temperatures above \(M_1\) to keep the heavy neutrinos in thermal equilibrium and weak enough to allow the generation of an asymmetry at temperatures below \(M_1\).
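As an illustration (not part of the original article), the quoted value of the equilibrium neutrino mass can be checked numerically by evaluating Eq. (23). The value \(v_F \simeq 174\) GeV for the Higgs vacuum expectation value is an assumed input here, since the article does not quote it explicitly:

import math

# Inputs quoted in the text: g_* = 106.75, M_P = 1.22e19 GeV.
# Assumed input: v_F ~ 174 GeV (Higgs vacuum expectation value).
g_star = 106.75
M_P = 1.22e19      # GeV
v_F = 174.0        # GeV (assumption)

m_star_GeV = (16.0 * math.pi**2.5) / (3.0 * math.sqrt(5.0)) * math.sqrt(g_star) * v_F**2 / M_P
print(m_star_GeV * 1e9)   # ~1.1e-3 eV, consistent with the ~10^-3 eV quoted in Eq. (23)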
In general, the generated baryon asymmetry is the result of a competition between production processes and washout processes that tend to erase any generated asymmetry. Unless the heavy Majorana neutrinos are partially degenerate, \(M_{2,3}-M_1 \ll M_1\), the dominant processes are decays and inverse decays of \(N_1\) and the usual off-shell \(\Delta L=1\) and \(\Delta L=2\) scatterings. The leptogenesis process is quantitatively described by Boltzmann equations. Summing over the three lepton flavours and neglecting the dependence on Yukawa couplings, they can be written as (Buchmuller, \[\tag{24} {dN_{N_1}\over dz} = -(D+S)\,(N_{N_1}-N_{N_1}^{\rm eq}) \;,\] \[\tag{25} {dN_{B-L}\over dz} = -\varepsilon_1\,D\,(N_{N_1}-N_{N_1}^{\rm eq})-W\,N_{B-L}\;,\] where \(z=M_1/T\). The density \(N_{N_1}\) is again defined as the number of heavy neutrinos in a portion of comoving volume containing one photon at the onset of leptogenesis, so that the relativistic equilibrium \(N_1\) number density is given by \(N_{N_1}^{\rm eq}(z \ll 1)=3/4\). Alternatively, one may normalize the number density to the entropy density \(s\) and consider \(Y_X = n_X/s\). If entropy is conserved, both normalizations are related by a constant. There are four classes of processes that contribute to the different terms in the above equations: decays, inverse decays, \(\Delta L=1\) scatterings of real heavy neutrinos and \(\Delta L~=~2\) processes mediated by virtual heavy neutrinos. The first three processes all modify the \(N_1\) abundance and try to push it towards its equilibrium value \(N_{N_1}^{\rm eq}\). Denoting by \(H\) the Hubble expansion rate, the term \(D = \Gamma_D/(H\,z)\) accounts for decays and inverse decays, whereas the scattering term \(S = \Gamma_S/(H\,z)\) represents the \(\Delta L~=~1\) scatterings. Decays also yield the source term for the generation of the \(B-L\) asymmetry, the first term in Eq. (25), whereas all other processes contribute to the total washout term \(W = \Gamma_W/(H\,z)\), which competes with the decay source term. The dynamical generation of the \(N_1\) abundance and the \(B-L\) asymmetry is shown in Figure 3 for typical parameters and different initial conditions, \(N_ {N_1}^{\rm in} = 3/4\) and \(N_{N_1}^{\rm in} = 0\), corresponding to thermal and zero initial \(N_1\) abundance, respectively. For the chosen parameters the generated \(B\)\(-\)\(L\) asymmetry is essentially independent of the initial conditions and entirely determined by neutrino parameters. Constraints on neutrino masses The \(\Delta L=2\) lepton number changing processes with heavy neutrino exchange generate a contribution to the washout rate that depends on the absolute neutrino mass scale, \(\Delta W \ \propto \ M_1\ \mb^2\ M_{\rm P}/v_F^4\), where \(\mb^2 = m_1^2 + m_2^2 + m_3^2\). As long as \(\Delta W\) can be neglected, the efficiency factor \(\kappa_f\) is independent of \(M_1\). With increasing \(\mb \), however, the washout rate \(\Delta W\) becomes important and eventually prevents successful leptogenesis. This leads to an upper bound on the absolute neutrino mass scale (Buchmuller, 2002rq). One can also obtain a lower bound on the heavy neutrino masses, because the upper bound on the \(C\!P\) asymmetry \(\varepsilon_1\), as well as the efficiency factor \(\kappa_f\) only depend on \(M_1 \), \(\mt\) and \(\mb\). 
Since the rates entering the Boltzmann equations are functions of the same quantities, there exists for arbitrary light neutrino mass matrices a maximal baryon asymmetry \(\ eta^{\rm max}_{B}(\mt,M_1,\mb)\). Requiring this to be larger than the observed one, \(\eta^{\rm max}_{B}(\mt,M_1,\mb) \geq \eta^{\rm obs}_{B}\), one obtains a constraint on the neutrino mass parameters \(\mt\), \(M_1\) and \(\mb\). For each value of \(\mb\) there is a domain in the (\(\mt\)-\(M_1\))-plane, which is allowed by successful baryogenesis. For \(\mb \geq 0.20\) eV this domain shrinks to zero, which can be translated into upper limits on the individual neutrino masses and a lower limit on \(M_1\), the smallest mass of the heavy Majorana neutrinos. A quantitative analysis yields (Buchmuller, 2004nz) \[\tag{26}m_i < 0.1\,{\rm eV}\;, \quad M_1 > 4 \times 10^8~\mbox{GeV}\ ,\] where a thermal initial \(N_1\) abundance has been assumed. For zero initial \(N_1\) abundance one obtains the more restrictive lower bound \(M_1 > 2 \times 10^9~\mbox{GeV}\). For \(\mt > m_*\), the baryon asymmetry is generated at a temperature \(T_B < M_1\). Hence the lower bound on the reheating temperature \(T_i\) is less restrictive than the lower bound on \(M_1\). The results of a detailed analytical and numerical analysis are summarized in Figure 4. In the so-called weak washout regime, where \(\mt < m_*\), the baryon asymmetry, and therefore the lower bound on \(M_1\), depends on the initial \(N_1\) abundance. On the contrary, in the strong washout regime, where \(\mt > m_*\), the baryon asymmetry \(\eta_B\) is independent of the initial \(N_1\) abundance. Furthermore, the final baryon asymmetry does not depend on the value of an initial baryon asymmetry generated by some other mechanism. Hence, the value of \(\eta_B\) is entirely determined by neutrino properties. In this way leptogenesis singles out the neutrino mass window \[\tag{27}10^{-3}~{\rm eV} < m_i < 0.1~{\rm eV}\ .\] What is the theoretical error on the lower and upper bounds for light neutrino masses due to leptogenesis? A rigorous answer would require a full quantum field theoretical treatment of the nonequilibrium leptogenesis process, which, despite much progress in recent years, is not yet available. It is known, however, that there is a significant dependence on the lepton flavours due to their different Yukawa couplings, which has been neglected in the treatment presented above. These effects can relax lower and upper bound by about one order of magnitude (see the reviews (Davidson, 2008bu; Blanchet, 2012bk)). In view of the constraints on light neutrino masses imposed by leptogenesis, knowledge of the absolute neutrino mass scale is of great importance. Hence, a measurement of the neutrino mass \(m_\beta \) in tritium \(\beta\)-decay and \(m_{0\nu\beta\beta}\) in neutrinoless double \(\beta\)-decay, or the determination of the sum \(\sum_i m_i\) from cosmology, consistent with the above neutrino mass window, would strongly support the leptogenesis mechanism. Further developments Early studies of leptogenesis were partly motivated by trying to find alternatives to electroweak baryogenesis, which did not seem to produce a big enough asymmetry. 
As described above, the simple case of hierarchical heavy neutrino masses with \(B\)\(-\)\(L\) broken at the unification scale \(M_{\rm GUT} \sim 10^{15}\) GeV, light neutrino masses \(m_{1,2} < m_{3} \sim 0.1\) eV, and a reheating temperature \(T_{\rm i}\sim 10^{10}\ {\rm GeV}\), yields a successful description of baryogenesis. During the past decade much progress has been made in further developing this picture. This includes the connection to dark matter and the role of nonthermal leptogenesis, where the heavy neutrinos are produced in decays of other heavy particles (see the review (Buchmuller, 2005eh)). Moreover, important flavour and spectator effects have been studied (see the reviews (Davidson,2008bu; Blanchet, 2012bk)), and significant progress has been made towards a full quantum mechanical description of leptogenesis based on Kadanoff-Baym equations (see the reviews (Blanchet, 2012bk; Fong, 2013wr)). One has to emphasize that the minimal leptogenesis scenario, as described above, is far from unique and that many interesting alternatives have been suggested. First of all, supersymmetric leptogenesis is as successful as ordinary leptogenesis (see the reviews (Buchmuller, 2005eh; Davidson, 2008bu)). The dominant contribution to the \(B\)\(-\)\(L\) asymmetry may also be produced in decays of the next-to-lightest heavy neutrino, as in \(N_2\)-leptogenesis. A strong enhancement of the \(C\!P\) asymmetry, and therefore a lower reheating temperature is possible for quasi-degenerate heavy neutrinos, as in resonant leptogenesis or soft leptogenesis (see the reviews (Davidson, 2008bu; Blanchet, 2012bk)). Finally, leptogenesis is possible without heavy Majorana neutrinos, as in Dirac leptogenesis, triplet scalar leptogenesis of triplet fermion leptogenesis (see the reviews (Davidson, 2008bu; Fong, 2013wr)). • [Buchmuller, 2002rq] Buchmuller W., P. Di Bari and M. Plumacher, "Cosmic microwave background, matter-antimatter asymmetry and neutrino masses," Nucl. Phys. B 643 (2002) 367 [Erratum-ibid. B 793 (2008) 362]. • [Buchmuller, 2004nz] Buchmuller W., P. Di Bari and M. Plumacher, "Leptogenesis for pedestrians," Annals Phys. 315 (2005) 305. • [Covi, 1996wh] Covi L., E. Roulet and F. Vissani, "CP violating decays in leptogenesis scenarios," Phys. Lett. B 384 (1996) 169. • [Davidson, 2002qv] Davidson S. and A. Ibarra, "A Lower bound on the right-handed neutrino mass from leptogenesis," Phys. Lett. B 535 (2002) 25 • [Fukugita, 1986hr] Fukugita M. and T. Yanagida, "Baryogenesis Without Grand Unification," Phys. Lett. B 174 (1986) 45. • [Hamaguchi, 2002b] Hamaguchi K., H. Murayama and T. Yanagida, “Leptogenesis from N dominated early universe,” Phys. Rev. D 65 (2002) 043512. • [Harvey, 1990qwb] Harvey J. A. and M. S. Turner, "Cosmological baryon and lepton number in the presence of electroweak fermion number violation," Phys. Rev. D 42 (1990) 3344. • [tHooft, 1976up] 't Hooft G. , "Symmetry Breaking Through Bell-Jackiw Anomalies," Phys. Rev. Lett. 37 (1976) 8. • [Komatsu, 2010fb] Komatsu E. et al. [WMAP Collaboration], "Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation," Astrophys. J. Suppl. 192 (2011)18 • [Kuzmin, 1985mm] Kuzmin V. A., V. A. Rubakov and M. E. Shaposhnikov, "On the Anomalous Electroweak Baryon Number Nonconservation in the Early Universe," Phys. Lett. B 155 (1985) 36. • [Sakharov, 1967dj] Sakharov A. D., "Violation of CP Invariance, c Asymmetry, and Baryon Asymmetry of the Universe," Pisma Zh. Eksp. Teor. Fiz. 5 (1967) 32 [JETP Lett. 
5 (1967) 24] [Sov. Phys. Usp. 34 (1991) 392] [Usp. Fiz. Nauk 161 (1991) 61]. Internal references Further reading • [Blanchet, 2012bk] Blanchet S. and P. Di Bari, "The minimal scenario of leptogenesis," New J. Phys. 14 (2012) 125012 • [Buchmuller, 2005eh] Buchmuller W., R. D. Peccei and T. Yanagida, "Leptogenesis as the origin of matter," Ann. Rev. Nucl. Part. Sci. 55 (2005) 311 • [Davidson, 2008bu] Davidson S., E. Nardi and Y. Nir, "Leptogenesis," Phys. Rept. 466 (2008) 105 • [Fong, 2013wr] Fong C. S., E. Nardi and A. Riotto, "Leptogenesis in the Universe," Adv. High Energy Phys. 2012 (2012) 158303 External links See also
{"url":"http://www.scholarpedia.org/article/Leptogenesis","timestamp":"2024-11-06T18:47:48Z","content_type":"text/html","content_length":"65714","record_id":"<urn:uuid:497201df-da65-40ab-ae57-46900025d929>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00111.warc.gz"}
How to trade with the Rate of Change (ROC) indicator? Hi. There are various technical indicators used to predict price movements and develop trading strategies in financial markets. One of these indicators is the Rate of Change (ROC) indicator. In this article, you will learn how to calculate the Rate of Change (ROC) indicator and how to use it in forex trading. What is the Rate of Change (ROC) indicator? The Rate of Change (ROC) is a momentum indicator that measures how much the price of an asset has changed over a specific period. ROC indicates whether price changes are large or small. Larger ROC values represent larger price movements, while smaller ROC values indicate more stable and smaller price changes. The most important component of this indicator is the 0 (zero) line. The area above the 0 (zero) line is known as the positive zone, while the area below the 0 (zero) line is the negative zone. ROC measures the speed of price changes and helps us identify an existing trend in an asset. A positive ROC value indicates an uptrend, while a negative ROC value indicates a downtrend. In other words, ROC is often used to determine both the strength and direction of a trend. If ROC is above the zero line, it means the price is rising. If ROC is below the zero line, it means the price is falling. Rate of Change (ROC) indicator calculation The ROC (Rate of Change) indicator is used to measure the rate of change in the price of an asset over a specific period and is typically expressed as a percentage. In other words, ROC is calculated by dividing the current price by the price from a certain number of periods ago and then expressing the result as a percentage. The result represents the percentage change in price over the last few periods. The ROC indicator is calculated using the following formula: ROC = [(Current Price - Price X periods ago) / Price X periods ago] • 100 • ROC: The value of the Rate of Change indicator • Current Price: The current price of the asset • Price X periods ago: The price of the asset X periods ago This calculation is done automatically, and when trading, we pay attention to the value of this indicator. We use it to assess market trends and momentum. Trading with the Rate of Change (ROC) indicator ROC indicator is commonly used to identify buy and sell signals, and there are several main methods for doing so. I've listed them below with live chart examples: Zero Line Crossover. When ROC crosses below the zero line from above, it can be interpreted as a sell signal. Conversely, when it crosses above the zero line from below, it can be interpreted as a buy signal. Example: Look at the 4-hour chart of the Australian Dollar/Singapore Dollar: Trading Signals with ROC in AUD/SGD chart Overbought and Oversold Conditions. We can trade by identifying overbought and oversold conditions using the ROC indicator. When ROC is in an overbought condition, we may consider selling, and in oversold conditions, we may consider buying orders. Example: Take a look at the 1-hour chart of the US Dollar/Japanese Yen: ROC indicator for USD/JPY overbought and oversold Divergences. Divergences are identified between the ROC and the price chart. For example, if prices are making new highs while ROC is decreasing (negative divergence), this is considered a sell signal. Conversely, if prices are making new lows while ROC is increasing (positive divergence), this is considered a buy signal. 
Example: Take a look at the daily chart of Amazon stock: ROC Divergences for Trading Amazon Stock example Horizontal Markets. The ROC indicator may not be very effective during market consolidation. During such times, we often see many zero line crossovers, but prices do not show a sustained movement in any direction. An example of this can be seen in the 4-hour chart of Bitcoin/USD: ROC during Market Consolidation on BTC/USD Chart Remember that the Forex market is risky, and you should carefully consider all trading decisions. The ROC indicator is a powerful tool that can be used in forex trading. However, using any technical indicator alone may not help reduce risk. Using it in conjunction with other analysis tools can help reduce risk even further.
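For readers who prefer code, here is a minimal, hedged sketch of the ROC calculation described above using pandas; the 12-period setting and the made-up price list are purely illustrative, not trading advice:

import pandas as pd

def rate_of_change(close: pd.Series, periods: int = 12) -> pd.Series:
    # ROC = (current price - price `periods` ago) / price `periods` ago * 100
    return (close - close.shift(periods)) / close.shift(periods) * 100.0

# Illustrative prices (not real market data)
prices = pd.Series([100, 102, 101, 105, 107, 104, 108, 110, 109, 111, 113, 112, 115])
roc = rate_of_change(prices, periods=12)
print(roc.iloc[-1])   # (115 - 100) / 100 * 100 = 15.0

# Zero-line crossover signals as described above: +1 = buy, -1 = sell, 0 = no signal
cross_up = (roc > 0) & (roc.shift(1) <= 0)
cross_down = (roc < 0) & (roc.shift(1) >= 0)
signal = cross_up.astype(int) - cross_down.astype(int)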
{"url":"https://www.forexeduline.com/2023/10/how-to-trade-with-rate-of-change-roc.html","timestamp":"2024-11-12T08:49:40Z","content_type":"text/html","content_length":"154274","record_id":"<urn:uuid:09e9a589-b65d-4307-aafd-da58a21f16f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00867.warc.gz"}
How to append an element into an array in NumPy? That is correct! The append() method in numpy accepts two parameters: the array you want to append to, and the element you want to append. In the code example you provided, the arr array is first defined with four elements [1, 2, 3, 4]. Then, the append() method is called to add the element 5 to the array. Finally, the updated array [1, 2, 3, 4, 5] is printed using the print() function.
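The answer above refers to a code example that is not reproduced in this excerpt; a minimal version consistent with the description might look like the following (note that numpy.append returns a new array rather than modifying the original in place):

import numpy as np

arr = np.array([1, 2, 3, 4])

# np.append takes the array and the value(s) to append and returns a NEW array;
# reassignment is needed because the original array is not modified in place.
arr = np.append(arr, 5)

print(arr)   # [1 2 3 4 5]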
{"url":"https://devhubby.com/thread/how-to-append-an-element-into-an-array-in-numpy","timestamp":"2024-11-10T15:09:39Z","content_type":"text/html","content_length":"119367","record_id":"<urn:uuid:25d1ae09-e32c-4350-9324-57e6cc333cb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00409.warc.gz"}
Makes Number Negative Javascript With Code Examples In this article, we will look at how to get the solution for the problem, Makes Number Negative Javascript With Code Examples How do you change negative to positive in Java? To convert negative number to positive number (this is called absolute value), uses Math. abs(). This Math. abs() method is work like this “ number = (number < 0 ? -number : number); ". Math.abs(num) => Always positive -Math.abs(num) => Always negative Does parseInt work with negative numbers JavaScript? parseInt understands exactly two signs: + for positive, and - for negative. What does number NEGATIVE_INFINITY mean in JavaScript? The value of Number. NEGATIVE_INFINITY is the same as the negative value of the global object's Infinity property. This value behaves slightly differently than mathematical infinity: Any positive value, including POSITIVE_INFINITY , multiplied by NEGATIVE_INFINITY is NEGATIVE_INFINITY . What is float (' inf ')? But in python, as it is a dynamic language, float values can be used to represent an infinite integer. One can use float('inf') as an integer to represent it as infinity. Below is the list of ways one can represent infinity in Python. How do you convert to positive? Multiply with Minus One to Convert a Positive Number All you have to do just multiply a negative value with -1 and it will return the positive number instead of the negative. How do you make a number positive? You just have to simply multiply by −1. For example if you have a number −a then multiply by −1 to get −a×−1=a. If the number is positive then multiply by 1. How do you make a number positive in JavaScript? Use the abs() method In this approach, we'll use a built-in method provided by the Math module in JavaScript. The method Math. abs() converts the provided negative integer to a positive integer. How do you add a positive and a negative? To get the sum of a negative and a positive number, use the sign of the larger number and subtract. For example: (–7) + 4 = –3. 6 + (–9) = –3. How do you make a number negative? Method #1 – Multiply by Negative 1 with a Formula We can write a formula to multiply the cell's value by negative 1 (-1). This works on cells that contain either positive or negative numbers. The result of the formula is: Positive numbers will be converted to negative numbers. How To Access App.Config Globally In Flask App With Code Examples In this article, we will look at how to get the solution for the problem, How To Access App.Config Globally In Flask App With Code Examples How do I use .ENV files in Flask? env file inside the Python Flask application. Reads the key-value pair from the . env file and adds them to the environment variable. It is great for managing app settings during development and in production using 12-factor principles. from flask import current_app @api.route("/info") def get_account_num(): num = current_ First Remove Active Class From Classlist And Append To Current Element Using Javascript With Code Examples In this article, we will look at how to get the solution for the problem, First Remove Active Class From Classlist And Append To Current Element Using Javascript With Code Examples How do you clear a classList in JavaScript? To remove all classes from an element, set the element&#x27;s className property to an empty string, e.g. box. className = &#x27;&#x27; . Setting the element&#x27;s className property to an empty string empties the element&#x27;s class list. 
function myFunction(e) { if (do Encapsulation With Code Examples In this article, we will look at how to get the solution for the problem, Encapsulation With Code Examples What is polymorphism in OOPs? Polymorphism is one of the core concepts of object-oriented programming (OOP) and describes situations in which something occurs in several different forms. In computer science, it describes the concept that you can access objects of different types through the same interface. What is encapsulation? Encapsulation protects your code by controlling the ways valu Redirect To Html Page In Javascript With Code Examples In this article, we will look at how to get the solution for the problem, Redirect To Html Page In Javascript With Code Examples How do you load a page in Javascript? Approach: We can use window. location property inside the script tag to forcefully load another page in Javascript. It is a reference to a Location object that is it represents the current location of the document. We can change the URL of a window by accessing it. // similar behavior as an HTTP redirect window.location.replace("h Asyncstorage Getallkeys Seperately With Code Examples In this article, we will look at how to get the solution for the problem, Asyncstorage Getallkeys Seperately With Code Examples Can we store data in diamond? Unlike the DVD, which has only one surface, a diamond can store data in multiple layers, like a whole stack of DVDs. This storage would also work differently than a magnetic hard drive, because diamonds, as they say, are forever. importData = async () => { try { const keys = await AsyncStorage.getAllKeys(); const result = await AsyncSto
{"url":"https://www.isnt.org.in/makes-number-negative-javascript-with-code-examples.html","timestamp":"2024-11-11T00:22:43Z","content_type":"text/html","content_length":"148995","record_id":"<urn:uuid:339973ce-216c-4e1f-8388-2592067193ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00832.warc.gz"}
Problem 1: A museum gathers data about the ages of its patrons on one day. Below is a stemplot based on their data:

1 | 0 5 8
2 | 1 1 4 6 8 9
3 | 0 5 6 6
4 | 2 5 7
5 | 3 4
6 | 2
7 | 8 0

Part 1(a): How many visitors did the museum have on this day? Answer:
Part 1(b): What was the median age of museum visitors on this day? Answer:
Part 1(c): What is the overall shape of this distribution? Answer:
Part 1(d): Is the mean of this data set likely to be larger, smaller, or about the same as the median? Why? Answer:

There are 3 steps involved in it. Step 1: A great problem. Part 1(a): How many visitors did the museum have on this day? The stemplot shows the dat...
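The step-by-step answer above is cut off in the scraped page, so here is a small sketch of how the stemplot can be read programmatically. It is not part of the SolutionInn solution, and the leaf digits for the last two stems were garbled in the source, so that part of the reconstruction is an assumption.

```python
import statistics

stemplot = {
    1: [0, 5, 8],
    2: [1, 1, 4, 6, 8, 9],
    3: [0, 5, 6, 6],
    4: [2, 5, 7],
    5: [3, 4],
    6: [2],
    7: [8, 0],   # assumed reading of the garbled last row
}

# Expand stems and leaves into individual ages (stem = tens digit).
ages = sorted(stem * 10 + leaf for stem, leaves in stemplot.items() for leaf in leaves)

print(len(ages))                # number of visitors, Part 1(a)
print(statistics.median(ages))  # median age, Part 1(b)
print(statistics.mean(ages))    # compare with the median for Part 1(d)
```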
{"url":"https://www.solutioninn.com/study-help/questions/problem-1-a-museum-gathers-data-about-the-ages-of-1010129","timestamp":"2024-11-07T12:26:07Z","content_type":"text/html","content_length":"105361","record_id":"<urn:uuid:9d33bc59-2a0e-404b-8edf-1a86a4ebdfb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00647.warc.gz"}
waveguide termination

On the completeness of eigenmodes in a parallel plate waveguide with a perfectly matched layer termination free download An explicit proof of the completeness of the eigenmodes of a grounded parallel plate waveguide with a perfectly matched layer termination is given. The proof is based on a general theorem governing the completeness of sets of complex exponentials. The Green's

It is suggested to use a basis of discontinuous solutions of the wave equation (the generalized Sommerfeld solution) to describe scattering effects by a thin half-plane. This makes it possible to construct the self-consistent wave spectrum of a semi-infinite plate

Waveguide Termination with a Magnetic Wall on the Mushroom-Shaped Metamaterial Modeling free download Waveguide termination modeling with a mushroom-shaped wall was carried out. Voltage standing-wave ratio (VSWR) and reflection coefficient (S11) frequency dependencies were obtained. High values of the mushroom-shaped structure unloaded quality factor are the

Planar Metamaterial for Matched Waveguide Termination. free download Abstract- We present the design, fabrication, and characterization of a novel matched waveguide termination based on a planar artificial metamaterial. This matched termination is realized by a recently developed planar metamaterial absorber, which can be designed to near
{"url":"https://www.engpaper.com/ece/waveguide-termination.html","timestamp":"2024-11-08T18:56:18Z","content_type":"text/html","content_length":"8303","record_id":"<urn:uuid:6d38f650-e630-4996-8fb5-6b2300e15a90>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00534.warc.gz"}
Adena Montessori 5 wooden boxes with lids contain triangles in different sizes, shapes and colors. Triangle Box 1 The Directress shows the child how to take any one of the box individually to the floor mat. 2 She sits beside the child and places the box of shapes in front of the child. 3 Directress shows how to lay out all the contents on the mat. 4 Box 1 -Trace the lines on the shapes with fingers. 5 Join up the triangles along the line. 6 Then, use the gray triangle as control of error by holding it just above all the shapes made earlier, (not touching) in turn. 7 The Directress then invites the child to put the shapes together, matching the black lines. the resulting shapes are discussed and named. 8 Directress may use the Three Period lesson to teach the child the name of the shapes that could be formed with the various shapes, for e.g. hexagon, triangle, rhombus,...etc. Rectangular Box 1 1 Invite the child to work with the 'rectangular box'. 2 Show the child where the rectangular box is located on the shelf. 3 When the child has made the selection of the material, begin with the third step after naming the material for the child. 4 Indicate the procedure for carrying the material: one hand on each side of the rectangular box with thumbs on top and fingers on the bottom. 5 Place the box in the upper left side of the rug. 6 Remove the lid and place to the right of the box. 7 Say, "I will make shapes from the triangles." 8 Remove from the box the two green and two yellow isosceles triangles one at a time and place them in a straight line at the bottom of the rug. 9 With the index and middle fingers, trace the black lines on the green triangles. 10 Slide them together, forming a square. 11 Move the square below the rectangle box. 12 With the index and middle fingers, trace the black lines on the yellow triangles. 13 Slide them together, forming a rhomboid. 14 Move the rhomboid to the right of the square. 15 Select from the box ,the two grey, two green, and two yellow scalene right triangles one at a time and place them in a straight line at the bottom of the rug. 16 Trace the black line on the grey triangles. 17 Slide them together, forming a rhomboid. 18 Trace the black line on the yellow triangles. 19 Slide them together, forming a rhomboid. 20 Select from the box the two yellow equilateral triangles and the red scalene triangles one at a time and place them in a straight line at the bottom of the rug. 21 Trace the black line on the two yellow triangles. 22 Slide them together, forming a rhombus. 23 Trace the black line on the red triangles. 24 Slide them together, forming a trapezoid. 25 Replace the material into the box beginning with the trapezoid to the lower left, rhombus in upper left, right angle triangles in right as rectangles, and isosceles triangles as a square. 26 Return the material to the shelf in the manner as indicated in #3. Rectangular Box 2 1 Invite the child to work with the 'rectangular box'. 2 Show the child where the rectangular box is located on the shelf. 3 When the child has made the selection of the material, begin with the third step after naming the material for the child. 4 Indicate the procedure for carrying the material: one hand on each side of the rectangular box with the thumbs on the top and fingers on the bottom. 5 Place the box in the upper left side of the rug. 6 Remove the lid and place it to the right of the box. 7 Say, "I will make shapes from the triangles." 
8 Remove the two isosceles right triangles one at a time, and place them in a straight line below the lid. 9 Slide them against each other, forming a square. 10 Again, slide them together to form a rhomboid. 11 Slide them together, to form a different rhomboid. 12 Select the two scalene right triangles one at a time. 13 Slide them against each other, forming a rectangle. 14 Again, slide them together to form a rhomboid. 15 Slide them together to form a different rhomboid. 16 Select the two equilateral triangles one at a time. 17 Slide them against each other, to form a rhombus. 18 Select the one scalene right triangle and the one scalene obtuse triangle one at a time. 19 Slide them together, to form a trapezoid. 20 Replace the material into the box beginning with the trapezoid on the right. 21 Return the material to the shelf in the manner described in #3.
Small Hexagonal Box 1 Invite the child to work with the 'Small hexagonal box'. 2 Show the child where the small hexagonal box is located on the shelf. 3 When the child has made the selection of the material, begin with the third step after naming the material for the child. 4 Indicate the procedure for carrying the material: one hand on each side of box with thumbs on the top and the fingers on the bottom. 5 Place the box on a rug. 6 Place the box in the upper left side of the rug. 7 Say, "I will make shapes with the triangles." 8 Remove the lid. Select the yellow equilateral triangle and place it to the right of the lid. 9 Select six isosceles obtuse triangles, one at a time placing them in a straight line below the box. 10 Trace the black line with the index and middle fingers. 11 Slide the triangles together to form a rhombus and continue until you have made three rhombi. 12 With two hands, make the first two rhombi point to the top of the rug. 13 Slide the third rhombus below, forming the hexagon. 14 Slide the hexagon to the left. 15 Select the six grey equilateral triangles, one at a time placing them in a straight line at the bottom of the rug. 16 Trace the black lines with the index and middle fingers. 17 Slide the triangles to form a hexagon. 18 Select the three green triangles, one at a time placing them in a straight line to the right of the grey hexagon. 19 Trace the black lines with the index and middle fingers. 20 Slide the triangles together to form a trapezoid. 21 Select two red equilateral triangles one at a time placing them in a straight line to the right of the red hexagon. 22 Trace the black lines with the index and middle fingers. 23 Slide the triangles together to form a rhombus. 24 Replace the material into the box beginning with the red rhombus. 25 Return the material to the shelf in the manner indicated in #3.
Large Hexagonal Box 1 Invite the child to work with the 'large hexagonal box'. 2 Show the child where the large hexagonal box is located on the shelf. When the child has made the selection of the material, begin with the third step after naming the material for the child. 3 Indicate the procedure for carrying the material: one hand on each side of the hexagonal box, with the thumbs on top and fingers on the bottom. 4 Place the box on a rug. 5 Place the box in the upper left side of the rug. 6 Remove the lid and place it to the right of the box. 7 Say, "I will make shapes with triangles." 8 Remove the yellow equilateral triangle and place it at the bottom of the rug.
9 Remove the three isosceles obtuse triangles with the black line on the side opposite to the obtuse angle and place them in a straight line to the right of the equilateral triangle. 10 With the index and middle fingers trace the black line of the equilateral triangle and the black line of one of the isosceles obtuse triangles. 11 Slide the black line of the second triangle against the black line of the equilateral triangle. 12 Continue with the remaining triangles in the same manner, thus forming a hexagon. 13 Indicate the equilateral triangle inscribed in the hexagon by tracing the black line with the index and forefinger. 14 Select from the box the yellow isosceles obtuse triangles with the black line on the two sides adjacent to the obtuse angle and place them in a straight line to the right of the yellow hexagon just formed. 15 With the index and middle fingers, trace the black line of the first isosceles triangle and the black line of the second isosceles triangle. 16 Continue in the same manner with the third triangle. 17 Superimpose the large yellow equilateral triangle over the equilateral triangle formed by the three isosceles triangles. 18 Return the equilateral triangle. 19 Select the two red triangles one at a time and place them in a straight line to the right of the equilateral triangle. 20 With the index and middle fingers, trace the black line on each triangle. 21 Then slide the triangles together at the black lines forming a rhombus. 22 Select from the box two grey triangles one at a time and place them in a straight line to the right of the red rhombus. 23 With the index and middles fingers, trace the black line on each triangle. 24 Then slide the triangles together at the black lines forming a rhomboid. 25 Replace the material into the box, beginning with the grey rhombus. 26 Return the material to the shelf in the manner indicated in step #3. To show the various plane figures included in the hexagon. Refinement of the discrimination of geometric shapes. Development of concentration, order, coordination, and independence. Preparation for geometry. Development of the appreciation of line and form. Development of creativity. Main Material
{"url":"https://www.adenamontessori.com/wap/details.php?did=909","timestamp":"2024-11-04T18:06:47Z","content_type":"text/html","content_length":"19327","record_id":"<urn:uuid:d2ce3383-d47d-4706-b073-f2b9681adde8>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00767.warc.gz"}
Ten Facts Multiplication Picture Worksheets 3rd Grade Mathematics, particularly multiplication, develops the foundation of many academic techniques and real-world applications. Yet, for numerous learners, grasping multiplication can position an obstacle. To resolve this hurdle, educators and parents have embraced a powerful device: Ten Facts Multiplication Picture Worksheets 3rd Grade. Intro to Ten Facts Multiplication Picture Worksheets 3rd Grade Ten Facts Multiplication Picture Worksheets 3rd Grade Ten Facts Multiplication Picture Worksheets 3rd Grade - 3rd Grade Multiplication Facts of 10 Learning Resources Strengthen your child s multiplication facts of 10 skills with interactive educational resources for multiplication facts of 10 for 3rd graders online These learning resources include fun games and worksheets with eye catching visuals and characters Multiplication Facts of 10 Worksheets for 3rd Graders Help your child practice multiplication facts of 10 with worksheets for 3rd graders In these printable worksheets children will solve vertical multiplication problems horizontal multiplication problems and solve to find the missing number to practice the multiplication facts of 10 Value of Multiplication Method Recognizing multiplication is pivotal, laying a strong structure for sophisticated mathematical concepts. Ten Facts Multiplication Picture Worksheets 3rd Grade use structured and targeted method, fostering a much deeper comprehension of this essential arithmetic operation. Advancement of Ten Facts Multiplication Picture Worksheets 3rd Grade Multiplication Problems 3rd Grade Multiplication Worksheets Multiplication Problems 3rd Grade Multiplication Worksheets Sheet 1 Answers Multiplication Division Facts Sheet 2 Sheet 2 Answers 3rd Grade Multiplication Tables to 10x10 Multiplication Tables to 10x10 Sheet 1 Sheet 1 Answers Multiplication Tables to 10x10 Sheet 2 Sheet 2 Answers Multiplication Tables to 10x10 Sheet 3 Sheet 3 Answers More Recommended Math Worksheets This third grade math worksheet asks students to look at picture representations of addition and multiplication problems Highlighting one digit multiplication facts it shows how addition facts can help kids understand multiplication Show your students the multiplication process using this colorful picture multiplication Download Free From standard pen-and-paper exercises to digitized interactive formats, Ten Facts Multiplication Picture Worksheets 3rd Grade have progressed, satisfying varied discovering designs and preferences. Types of Ten Facts Multiplication Picture Worksheets 3rd Grade Standard Multiplication Sheets Simple exercises concentrating on multiplication tables, assisting learners construct a strong arithmetic base. Word Issue Worksheets Real-life circumstances incorporated into problems, enhancing critical thinking and application abilities. Timed Multiplication Drills Examinations created to improve rate and precision, aiding in rapid psychological mathematics. 
Advantages of Using Ten Facts Multiplication Picture Worksheets 3rd Grade
Multiplication facts worksheets for Grade 3 are essential tools for teachers to help their students master the fundamental math skill of multiplication. These worksheets provide a variety of exercises, ranging from simple multiplication problems to more complex word problems, ensuring that students are well equipped to tackle any multiplication task.
Enhanced Mathematical Abilities: Consistent practice sharpens multiplication proficiency, strengthening overall math ability.
Enhanced Problem-Solving Abilities: Word problems in worksheets develop analytical thinking and the ability to apply what has been learned.
Self-Paced Learning Advantages: Worksheets suit individual learning paces, fostering a comfortable and adaptable learning environment.
How to Create Engaging Ten Facts Multiplication Picture Worksheets 3rd Grade
Incorporating Visuals and Colors: Vibrant visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios: Connecting multiplication to everyday situations adds relevance and practicality to exercises.
Tailoring Worksheets to Different Skill Levels: Customizing worksheets for varying proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games: Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications: Online platforms provide varied and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Various Learning Styles
Visual Learners: Visual aids and diagrams support comprehension for learners inclined toward visual understanding.
Auditory Learners: Spoken multiplication problems or mnemonics serve students who grasp ideas through auditory means.
Kinesthetic Learners: Hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Use in Learning
Consistency in Practice: Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety: A mix of repeated exercises and diverse problem formats maintains interest and understanding.
Providing Constructive Feedback: Feedback helps identify areas for improvement and encourages continued progress.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges: Tedious drills can lead to disengagement; creative approaches can reignite motivation.
Overcoming Fear of Math: Negative perceptions of mathematics can hinder progress; building a positive learning environment is important.
Impact of Ten Facts Multiplication Picture Worksheets 3rd Grade on Academic Performance
Studies and Research Findings: Research shows a positive correlation between consistent worksheet use and improved math performance.
Ten Facts Multiplication Picture Worksheets 3rd Grade are flexible tools that cultivate mathematical proficiency in students while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only improve multiplication skills but also promote critical thinking and problem-solving abilities.

Multiplication Facts of 10 Worksheets for 3rd Graders
https://www.splashlearn.com/math/multiplication...
Help your child practice multiplication facts of 10 with worksheets for 3rd graders. In these printable worksheets, children will solve vertical multiplication problems, horizontal multiplication problems, and solve to find the missing number to practice the multiplication facts of 10.

10 Free Multiplication Coloring Worksheets 3rd Grade
https://youvegotthismath.com/multiplication...
Coloring Hidden Pictures with Multiplication Facts: One-by-one multiplication problems and occasionally two-by-one multiplication problems are included in these hidden picture coloring worksheets. Just multiply the numbers given on each part of the image and color according to the color code.

Frequently Asked Questions (FAQs)
Are Ten Facts Multiplication Picture Worksheets 3rd Grade appropriate for all age groups? Yes, worksheets can be tailored to different ages and ability levels, making them versatile for many learners.
How often should students practice using Ten Facts Multiplication Picture Worksheets 3rd Grade? Regular practice is essential. Routine sessions, preferably a couple of times a week, can yield substantial improvement.
Can worksheets alone improve math skills? Worksheets are a valuable tool but should be supplemented with varied learning methods for comprehensive skill development.
Are there online platforms offering free Ten Facts Multiplication Picture Worksheets 3rd Grade? Yes, numerous educational websites provide free access to a wide range of Ten Facts Multiplication Picture Worksheets 3rd Grade.
How can parents support their children's multiplication practice at home? Encouraging regular practice, providing support, and creating a positive learning environment are valuable steps.
{"url":"https://crown-darts.com/en/ten-facts-multiplication-picture-worksheets-3rd-grade.html","timestamp":"2024-11-13T20:46:16Z","content_type":"text/html","content_length":"29226","record_id":"<urn:uuid:3e06126e-ba53-48de-97da-af949272974f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00186.warc.gz"}
Coding assignment sample - FNCE 30007 | Xueba Union (学霸联盟)

FNCE 30007 Derivative Securities Lecture – Forward & futures pricing
Investment v consumption assets; short selling; investment assets: simple relation (no income), known income (bond futures), known yield (stock index futures, currency futures); consumption assets; environmental, social and governance movement; futures & expected spot prices. Reading: Chapter 5

Introduction: No arbitrage pricing
Lecture 1 considered speculators, lecture 2 considered hedgers. We now focus on the other major participant – the arbitrageur. Arbitrage takes advantage of a price differential between two or more markets: no up-front payment and a positive cash flow later. Arbitrage is possible if: the same asset does not trade at the same price on all markets (the law of one price); or 2 assets with identical cash flows do not trade at the same price; or an asset with a known price in the future does not today trade at its future price discounted at the risk-free interest rate.

Introduction: Forward v Futures prices
Forward and futures prices are usually equal. We therefore price futures contracts as if they are forward contracts. Greatly simplifies the problem. Slight differences may arise from transaction costs; futures are more liquid & have no counterparty risk.

Introduction: Investment v Consumption assets
Investment assets: held by significant numbers of people purely for investment purposes (gold, silver, stocks, bonds). Can price forwards & futures off spot via
Consumption assets: held primarily for consumption & not usually for investment purposes (copper, oil, pork bellies, soybeans). Not able to price forwards & futures off spot via

Introduction: Short selling
Short selling: sell securities you do not own. Your broker borrows the securities from another client and sells them in the spot market. At some stage you buy the securities back & return them. If the price falls: sell high, buy low. You pay dividends & other benefits to the owner of the securities.

Introduction: Assumptions & notation
No transaction costs; same tax rate; borrowing/lending at the risk-free rate; no-arbitrage opportunities (arbitrage opportunities are taken advantage of immediately). S0: spot price today; F0: futures or forward price today; T: time until delivery date (years); r: risk-free interest rate for maturity T (continuously compounded rate p.a.).

Investment assets - Simple pricing
The relationship between F0 and S0 is F0 = S0erT. If F0 > S0erT, arbitrageurs buy the asset and short forward contracts. If F0 < S0erT, arbitrageurs short the asset and buy forward contracts.

Investment assets - Example 1
The spot price of gold is US$600
The quoted 1-year futures price of gold is
1-year US$ interest rate is 5% p.a
No income or storage costs for gold
Is there an arbitrage opportunity?

Investment assets - Example 2
The spot price of gold is US$600
The quoted 1-year futures price of gold is
1-year US$ interest rate is 5% p.a
No income or storage costs for gold
Is there an arbitrage opportunity?

Investment assets - known $ income
Investment asset has income during the life of a forward contract: F0 = (S0 – I)erT, where I = present value of the income during the life of the forward contract. If F0 > (S0 – I)erT, arbitrageurs buy the asset and short the forward contract.
If F0 < (S0 – I)erT, arbitrageurs short the asset and buy the forward contract.

Example: known $ income
Consider a long forward contract to buy a coupon bond with a current price of $900. The forward contract has a 9-month maturity & a coupon payment of $40 in 4 months. The continuously compounded 4- and 9-month risk-free rates are 3% and 4% p.a. The forward contract price should be $886.60. To show why, we will consider 2 situations: (i) where the futures is overvalued at $910; (ii) where the futures is undervalued at $870.

Example: known $ income (i)
The forward price is $910. The arbitrageur will: borrow $900. The coupon payment has a present value of 40e(-0.03)(4/12) = $39.60, so $39.60 is borrowed for four months and the rest ($860.40) is borrowed for nine months (at 4%). Buy the bond. Enter into a forward contract to sell the asset for $910. In four months: receive the $40 coupon payment; use $40 to repay the first loan with interest. In nine months: sell the bond & receive $910 under the terms of the forward contract; use $886.60 to repay the second loan with interest. Profit: $910 – $886.60 = $23.40

Example: known $ income (ii)
The forward price is $870. The arbitrageur will: short the bond ($900). Enter into a forward contract to buy the bond for $870 in nine months. Of the $900 realized from shorting the bond, $39.60 is invested for four months at 3% per annum (grows to $40). The remaining $860.40 is invested for nine months at 4% per annum (grows to $886.60). In four months: receive $40 from the four-month investment; use $40 to pay the coupon on the bond. In nine months: receive $886.60 from the nine-month investment; buy the bond for $870 under the terms of the forward contract; close out the short position in the bond. Profit: $886.60 – $870 = $16.60

Investment assets - known yield
If the asset underlying a forward contract has a known yield: F0 = S0e(r–q)T, where q = average yield over the life of the contract. Two examples: stock index futures, currency futures.

Known yield – stock index futures
Investment asset paying a dividend yield. The price relationship is F0 = S0e(r–q)T, where q = dividend yield (over the life of the contract) on the portfolio represented by the index. The index must represent an investment asset: changes in the index must correspond to changes in the value of a tradable portfolio.

Known yield – stock index futures
If F0 > S0e(r–q)T, an arbitrageur buys the stocks underlying the index and sells futures. If F0 < S0e(r–q)T, an arbitrageur buys futures and sells the stocks underlying the index. Index arbitrage involves simultaneous trades in futures & many different stocks. Very often a computer is used to generate the trades.

Known yield – stock index futures
Let's consider the E-mini S&P500 futures contract. S&P500 close Feb 3/2020: 3225.52. 4-week US T-bill rate: 1.53% p.a. Dividend yield: 1.79% p.a. Contract specs reveal it expires on the 3rd Friday of the month, therefore 34 trading days prior to expiration:
F0 = 3225.52 × exp[(0.0153 – 0.0179) × 34/252] = 3224.39

Known yield – currency
Foreign currency provides a continuous yield - the foreign risk-free interest rate rf. The underlying asset is 1 unit of foreign currency: F0 = S0e(r – rf)T

Known yield – currency example
2-year interest rates in Australia and the U.S. are 5% and 7%. The spot exchange rate is 0.62 USD per AUD. Find the two-year forward rate. Given that the FX rate is quoted per 1 unit of foreign currency, we consider the problem from the perspective of a US investor. Can also redo from the perspective of an Australian investor. Consider when the 2-year forward rate is 0.63 & 0.66.
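The gold examples above leave the comparison as an exercise (and the quoted gold futures prices did not survive in the scraped slides), and the index and currency examples stop short of final numbers. The sketch below is not part of the original lecture; it applies the formulas just stated, treating the $700 and $610 gold quotes as illustrative assumptions.

```python
import math

def forward_price(spot, r, T, q=0.0):
    # Investment-asset forward/futures price with a continuous yield q:
    # F0 = S0 * exp((r - q) * T); q = 0 gives the simple no-income case.
    return spot * math.exp((r - q) * T)

# Gold (simple pricing): S0 = 600, r = 5% p.a., T = 1 year, no income or storage.
fair_gold = forward_price(600, 0.05, 1.0)            # about 630.77
for quote in (700.0, 610.0):                          # illustrative quotes; the slides omit them
    if quote > fair_gold:
        print(quote, "-> futures overpriced: borrow, buy gold, short futures")
    elif quote < fair_gold:
        print(quote, "-> futures underpriced: short gold, invest proceeds, buy futures")

# E-mini S&P 500 (known dividend yield): reproduces the slide's 3224.39.
print(round(forward_price(3225.52, 0.0153, 34 / 252, q=0.0179), 2))

# Currency forward (known yield = foreign rate), from the US investor's view:
# S0 = 0.62 USD per AUD, domestic r = 7%, foreign rf = 5%, T = 2 years.
fair_aud = forward_price(0.62, 0.07, 2.0, q=0.05)     # about 0.6453 USD per AUD
print(round(fair_aud, 4))
# Quoted 2-year forwards of 0.63 (below fair) or 0.66 (above fair) would each
# open a covered interest arbitrage in the corresponding direction.
```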
Consumption assets F0 ≤ S0 e(r+u )T where u is the storage cost per unit time as a percent of the asset value. F0 ≤ (S0+U )erT where U = present value of the storage Consumption assets - example Consider a 1 year copper futures contract. Assume no income and that it costs $2 per ounce per year to store copper, with payment being made at the end of the year. The spot price is $600 and the risk-free rate is 5% p.a for all maturities. Find the futures price. Consider when futures price is $700 & $610 Consumption assets – convenience yield Convenience yield is the benefit from holding the physical asset. Reflects market expectations of future availability High inventory expectations low conv yield Low inventory expectations high conv yield Convenience yield is F0eyT = (S0+U )erT If U is proportional to the spot price F0eyT = S0e(r + u)T Cost of carry Cost of carry, c, is the storage cost plus the interest costs less the income earned Investment asset F0 = S0ecT Consumption asset F0 ≤ S0ecT Convenience yield on consumption asset, y, is defined so that F0 = S0 e(c–y )T Forward valuation The value of a futures contract is zero - value reflected in the margin account. K is delivery price in a forward contract entered into previously. F0 is forward price that would apply to the contract today. Value of a long forward contract, ƒ, is ƒ = (F0 – K )e–rT Value of a short forward contract is ƒ = (K – F0 )e–rT Forwards have a value of zero at the time first entered into because F0=K. As time passes, the forward price and the value of the contract change. Forward valuation: example Consider a long forward contract on a non-dividend paying stock entered into some time ago that has 5 months left to maturity. The risk-free rate with continuous compounding is 9% p.a. The current stock price is $30 and the delivery price is $28. The value of the contract is: F0 = 30e(0.09)(5/12) = $31.15 ƒ = (F0 – K )e–rT = (31.15 – 28)e(-0.09)(5/12) = $3.03 ƒ = S0 – Ke–rT = 30 - 28e(-0.09)(5/12) = $3.03 Will use this in the BSM lecture Environmental, social and governance (ESG) movement is now emerging in the derivatives ESG companies typically less exposed to environmental and regulatory tail risks (Value at Risk lecture) Eurex and Nasdaq have launched ESG futures Similar plans for many other markets China will soon open an exchange solely dedicated to the trading of carbon finance futures Futures on the STOXX Europe 600 ESG-X index Index screens out companies with low ESG rankings Index enables investors to easily switch portfolio to an ESG compliant benchmark with low cost and tracking error Futures on the index now traded Futures can be used for hedging and speculative purposes This should add liquidity to the underlying index In October 2019 Nasdaq launched a futures based on the OMXS30 responsible index Also excludes companies with poor ESG standards Futures Prices & Expected Spot Prices So far we have focused on the contemporaneous relation between F0 and S0. What about the relation between F0 and E (ST )? F0 = E (ST ) F0 unbiased estimate of ST F0>E (ST ) contango
{"url":"https://www.xuebaunion.com/detail/1328.html","timestamp":"2024-11-05T19:52:08Z","content_type":"text/html","content_length":"22938","record_id":"<urn:uuid:3cf2dad2-7957-49ec-981a-3f192449626a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00816.warc.gz"}
Structured Semidefinite Programming for Recovering Structured Preconditioners Structured Semidefinite Programming for Recovering Structured Preconditioners Arun Jambulapati · Jerry Li · Christopher Musco · Kirankumar Shiragur · Aaron Sidford · Kevin Tian Great Hall & Hall B1+B2 (level 1) #1124 Abstract: We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems including:Diagonal preconditioning. We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz} (\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time $\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star,\epsilon^{-1}))$, where $\kappa^ \star$ is the optimal condition number of the rescaled matrix.Structured linear systems. We give an algorithm which, given $\mathbf{M} \in \mathbb{R}^{d \times d}$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $\mathbf{M}$ in $\widetilde{O}(d^2)$ time. Our diagonal preconditioning results improve state-of-the-art runtimes of $\Omega(d^{3.5})$ attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $\Omega(d^{\omega})$ where $\omega > 2.3$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery.
{"url":"https://nips.cc/virtual/2023/poster/71233","timestamp":"2024-11-11T23:50:51Z","content_type":"text/html","content_length":"48060","record_id":"<urn:uuid:12390df8-1641-4069-8a07-2acab1325e92>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00363.warc.gz"}
Unlock Math Mastery: The Ultimate Guide to Spectrum Grade 6 Answer Key Math | Mastering Success: The Power of Answer Keys and QuizzesUnlock Math Mastery: The Ultimate Guide to Spectrum Grade 6 Answer Key Math Unlock Math Mastery: The Ultimate Guide to Spectrum Grade 6 Answer Key Math Spectrum Grade 6 Answer Key Math offers detailed solutions and explanations for the Spectrum Grade 6 Math workbook. It provides step-by-step guidance to help students understand and solve mathematical problems, covering various topics in the Grade 6 math curriculum. The answer key is designed to assist students in checking their work, identifying areas for improvement, and reinforcing their understanding of mathematical concepts. It helps improve accuracy, build confidence, and prepare students for standardized tests. The topics addressed in the Spectrum Grade 6 Answer Key Math include number sense, measurement, geometry, algebra, and data analysis. By utilizing this resource, students can strengthen their problem-solving skills, enhance their critical thinking abilities, and develop a strong foundation for further mathematical studies. Spectrum Grade 6 Answer Key Math Spectrum Grade 6 Answer Key Math is a valuable resource for students and educators, providing accurate solutions and comprehensive explanations for the Spectrum Grade 6 Math workbook. • Comprehensive: Covers all topics in the Grade 6 math curriculum. • Accurate: Ensures correctness of solutions and explanations. • Step-by-step: Breaks down problems into manageable steps. • Reinforcing: Strengthens understanding of mathematical concepts. • Confidence-building: Enhances students’ belief in their mathematical abilities. • Test preparation: Prepares students for standardized tests. • Problem-solving: Develops critical thinking and problem-solving skills. • Foundation building: Provides a strong base for future mathematical studies. In conclusion, Spectrum Grade 6 Answer Key Math plays a multifaceted role in supporting students’ mathematical journey. It offers comprehensive solutions, fosters confidence, enhances problem-solving abilities, and prepares students for academic success. The comprehensiveness of Spectrum Grade 6 Answer Key Math is a defining feature that sets it apart. It meticulously covers all topics encompassed within the Grade 6 math curriculum, ensuring that students have access to a comprehensive resource that addresses every aspect of their mathematical studies. This comprehensive coverage is of paramount importance as it provides students with a well-rounded understanding of the subject matter. By engaging with the answer key’s solutions and explanations for every topic, students can reinforce their grasp of fundamental concepts, strengthen their problem-solving abilities, and develop a deeper appreciation for the interconnectedness of mathematical Furthermore, the comprehensiveness of Spectrum Grade 6 Answer Key Math empowers students to approach standardized tests with confidence. By familiarizing themselves with the diverse range of topics covered in the curriculum, they are better equipped to tackle any mathematical challenge that may arise. In conclusion, the comprehensiveness of Spectrum Grade 6 Answer Key Math is a cornerstone of its effectiveness. It ensures that students have access to a thorough and reliable resource that supports their mathematical journey throughout Grade 6 and beyond. 
Accuracy is the cornerstone of Spectrum Grade 6 Answer Key Math, ensuring that the solutions and explanations provided are precise and free from errors. This characteristic is of paramount importance as it enables students to rely on the answer key as a trustworthy source of information. The accuracy of Spectrum Grade 6 Answer Key Math is achieved through a rigorous review process conducted by experienced educators and subject matter experts. This process involves meticulously checking each solution and explanation against the corresponding problem in the workbook, verifying its correctness, and ensuring that the steps taken are clear and logical. The practical significance of accuracy in Spectrum Grade 6 Answer Key Math cannot be overstated. It empowers students to identify and correct their mistakes, fostering a growth mindset and encouraging them to strive for excellence. Moreover, it instills confidence in their mathematical abilities, knowing that they can depend on the answer key for reliable guidance. In conclusion, the accuracy of Spectrum Grade 6 Answer Key Math is an indispensable component that contributes to its effectiveness as a valuable resource for students. It provides a solid foundation for students to build upon, promoting their mathematical understanding and problem-solving skills. The step-by-step approach employed in Spectrum Grade 6 Answer Key Math is a pedagogical strategy that decomposes complex mathematical problems into a series of smaller, more manageable steps. This technique is particularly beneficial for students in Grade 6 as they encounter increasingly challenging mathematical concepts. • Clarity and Understanding: Breaking down problems into steps enhances clarity and facilitates a deeper understanding of the problem-solving process. Each step provides a clear and logical progression, allowing students to follow the solution method systematically. • Reduced Complexity: By dividing a complex problem into smaller steps, the answer key reduces its perceived complexity. This makes the problem more approachable and less daunting for students, fostering a positive attitude towards problem-solving. • Error Identification and Correction: The step-by-step approach allows students to identify and correct errors at each stage of the problem-solving process. This promotes self-assessment and encourages students to take ownership of their learning. • Cognitive Development: Breaking down problems into manageable steps aligns with the cognitive development of Grade 6 students. It supports their ability to think critically, analyze information, and develop problem-solving strategies. In summary, the step-by-step approach in Spectrum Grade 6 Answer Key Math plays a vital role in enhancing students’ mathematical understanding, problem-solving abilities, and overall cognitive In the context of Spectrum Grade 6 Answer Key Math, the reinforcing aspect plays a pivotal role in solidifying students’ grasp of mathematical concepts. By providing step-by-step solutions and detailed explanations, the answer key serves as an invaluable tool that reinforces learning and promotes a deeper understanding of mathematical principles. • The answer key strengthens students’ foundational knowledge by providing clear and concise explanations of mathematical concepts. This reinforcement helps students build a strong base upon which to tackle more complex mathematical challenges. • Through the step-by-step solutions, students learn effective problem-solving strategies and techniques. 
This reinforcement enables them to approach mathematical problems with confidence and develop their critical thinking skills. • The detailed explanations provided in the answer key help students develop a comprehensive understanding of mathematical concepts. This reinforcement fosters a deeper level of comprehension, allowing students to make connections between different mathematical ideas. • The answer key encourages students to think mathematically by providing a structured approach to problem-solving. This reinforcement promotes logical reasoning, analytical thinking, and the ability to apply mathematical concepts in various contexts. In conclusion, Spectrum Grade 6 Answer Key Math’s reinforcing aspect is a cornerstone of its effectiveness. It provides students with a valuable resource to deepen their understanding of mathematical concepts, develop problem-solving skills, and cultivate a strong mathematical foundation. Spectrum Grade 6 Answer Key Math fosters confidence-building by providing students with a reliable resource that enhances their belief in their mathematical abilities. By offering accurate and comprehensive solutions, the answer key empowers students to tackle mathematical challenges with greater assurance. • Overcoming Math Anxiety: The answer key helps alleviate math anxiety by providing a structured approach to problem-solving. Students gain confidence as they successfully navigate mathematical problems, reducing their apprehension towards the subject. • Empowering Independent Learning: The answer key supports independent learning by enabling students to check their work and identify areas for improvement. This empowers them to take ownership of their learning and builds their confidence in their ability to solve problems without relying solely on external assistance. • Fostering a Growth Mindset: The answer key promotes a growth mindset by encouraging students to learn from their mistakes. By providing step-by-step solutions, students can analyze their errors and develop strategies to improve their problem-solving approach. • Preparing for Success: The answer key helps build confidence by preparing students for standardized tests and classroom assessments. By familiarizing themselves with diverse problem types and solution methods, students develop the confidence to face mathematical challenges with greater preparedness. In conclusion, Spectrum Grade 6 Answer Key Math plays a pivotal role in confidence-building by providing students with a reliable resource, fostering independent learning, promoting a growth mindset, and preparing them for success. By enhancing students’ belief in their mathematical abilities, the answer key empowers them to approach mathematics with greater assurance and achieve their full Test preparation The connection between “Test preparation: Prepares students for standardized tests.” and “Spectrum Grade 6 Answer Key Math” lies in the crucial role that the answer key plays in preparing students for standardized tests. Standardized tests, such as state assessments and college entrance exams, require students to demonstrate their proficiency in various mathematical concepts and skills. Spectrum Grade 6 Answer Key Math provides students with a valuable resource to enhance their test preparation. The answer key offers comprehensive solutions and detailed explanations for the problems in the Spectrum Grade 6 Math workbook, which is aligned with the content and format of many standardized tests. 
By utilizing the answer key, students can: • Identify areas for improvement: The answer key helps students pinpoint their strengths and weaknesses by providing immediate feedback on their work. This allows them to focus their studies on areas where they need additional support. • Practice problem-solving techniques: The step-by-step solutions in the answer key demonstrate effective problem-solving strategies and techniques. Students can learn from these solutions and apply them to similar problems on standardized tests. • Build confidence: Using the answer key to check their work and identify areas for improvement can boost students’ confidence in their mathematical abilities. This confidence is essential for success on standardized tests. In conclusion, the “Test preparation: Prepares students for standardized tests.” component of “Spectrum Grade 6 Answer Key Math” is of paramount importance. By providing students with comprehensive solutions and detailed explanations, the answer key empowers them to identify areas for improvement, practice problem-solving techniques, and build confidence, ultimately enhancing their preparedness for standardized tests. Spectrum Grade 6 Answer Key Math provides students with a valuable resource to develop critical thinking and problem-solving skills, which are essential for success in mathematics and beyond. • Enhances Analytical Thinking: The answer key guides students through the step-by-step process of solving mathematical problems, fostering analytical thinking and the ability to break down complex problems into smaller, manageable steps. • Promotes Logical Reasoning: By providing detailed explanations and solutions, the answer key helps students understand the logical reasoning behind mathematical concepts. This strengthens their ability to think logically and apply mathematical principles to solve problems. • Develops Problem-Solving Strategies: The answer key exposes students to a variety of problem types and solution methods, enabling them to develop a repertoire of problem-solving strategies. This empowers them to approach new mathematical challenges with confidence. • Cultivates Perseverance: Solving mathematical problems often requires perseverance and resilience. The answer key provides support and encouragement, guiding students through challenging problems and helping them develop a growth mindset that embraces perseverance. In conclusion, Spectrum Grade 6 Answer Key Math plays a crucial role in developing students’ critical thinking and problem-solving skills. By providing step-by-step solutions, detailed explanations, and exposure to diverse problem types, the answer key empowers students to think analytically, reason logically, develop effective problem-solving strategies, and cultivate perseverance, ultimately fostering their mathematical success. Foundation building The connection between “Foundation building: Provides a strong base for future mathematical studies.” and “Spectrum Grade 6 Answer Key Math” lies in the significant role that the answer key plays in establishing a solid mathematical foundation for students. A strong foundation in Grade 6 mathematics is essential for success in higher-level mathematics courses and beyond. Spectrum Grade 6 Answer Key Math provides students with a comprehensive resource to reinforce mathematical concepts, develop problem-solving skills, and build a deep understanding of the subject matter. 
The detailed solutions and explanations in the answer key help students grasp mathematical concepts more effectively, enabling them to build a strong foundation upon which to advance their mathematical knowledge. By utilizing the answer key, students can: • : The answer key provides step-by-step solutions and clear explanations, reinforcing students’ understanding of mathematical concepts and principles. • : The answer key exposes students to a variety of problem types, helping them develop critical thinking and problem-solving skills that are essential for success in future mathematical endeavors. • : The answer key provides immediate feedback on students’ work, helping them identify areas for improvement and build confidence in their mathematical abilities. Investing in a strong mathematical foundation in Grade 6 through the use of Spectrum Grade 6 Answer Key Math can have long-lasting benefits for students. A solid foundation prepares students for the more complex mathematical concepts they will encounter in higher grades and sets them on a path towards mathematical success. Frequently Asked Questions about Spectrum Grade 6 Answer Key Math This section addresses common questions and concerns regarding Spectrum Grade 6 Answer Key Math. Question 1: What is Spectrum Grade 6 Answer Key Math? Spectrum Grade 6 Answer Key Math is a comprehensive resource that provides accurate solutions and detailed explanations for the Spectrum Grade 6 Math workbook. It is designed to assist students in checking their work, identifying areas for improvement, and reinforcing their understanding of mathematical concepts. Question 2: Is Spectrum Grade 6 Answer Key Math aligned with the curriculum? Yes, Spectrum Grade 6 Answer Key Math is meticulously aligned with the Grade 6 math curriculum, covering all essential topics and concepts. This ensures that students have access to relevant and up-to-date solutions for their math assignments and homework. Question 3: How can Spectrum Grade 6 Answer Key Math help students? Spectrum Grade 6 Answer Key Math offers numerous benefits for students, including improved accuracy, enhanced problem-solving skills, increased confidence, and thorough preparation for standardized tests. It also promotes independent learning and provides a valuable tool for parents and educators. Question 4: Is Spectrum Grade 6 Answer Key Math easy to use? Yes, Spectrum Grade 6 Answer Key Math is designed to be user-friendly and accessible to both students and parents. The solutions are presented in a clear and concise manner, making it easy to understand and follow the problem-solving process. Question 5: Can Spectrum Grade 6 Answer Key Math help students prepare for standardized tests? Absolutely. Spectrum Grade 6 Answer Key Math is an effective resource for preparing students for standardized tests by providing practice with various question types, improving problem-solving skills, and boosting confidence in mathematical abilities. Question 6: Is Spectrum Grade 6 Answer Key Math a reliable resource? Yes, Spectrum Grade 6 Answer Key Math is a reliable and trustworthy resource. The solutions are carefully reviewed and verified by experienced educators to ensure accuracy and adherence to curriculum In summary, Spectrum Grade 6 Answer Key Math is an invaluable resource that provides comprehensive solutions, enhances mathematical understanding, promotes problem-solving abilities, and prepares students for success in Grade 6 math and beyond. 
For further inquiries, please refer to the official Spectrum website or consult with your child's teacher.
Tips for Utilizing Spectrum Grade 6 Answer Key Math
Spectrum Grade 6 Answer Key Math is a valuable tool that can greatly enhance students' mathematical understanding and problem-solving abilities. Here are some tips to maximize its effectiveness:
Tip 1: Utilize the Answer Key as a Learning Resource. The answer key is not merely a means to check answers but a valuable learning resource. Encourage students to refer to the explanations and solutions to gain insights into mathematical concepts and problem-solving strategies.
Tip 2: Promote Independent Learning. Allow students to use the answer key independently to check their work and identify areas for improvement. This fosters self-assessment skills and promotes a sense of responsibility for their own learning.
Tip 3: Address Misconceptions and Errors. Use the answer key to identify and address misconceptions or errors in students' work. By analyzing incorrect solutions, students can understand their mistakes and develop more effective problem-solving approaches.
Tip 4: Encourage Step-by-Step Problem-Solving. The answer key provides step-by-step solutions. Encourage students to follow these steps carefully to develop a systematic approach to problem-solving and avoid careless errors.
Tip 5: Integrate Answer Key Usage into Homework and Classwork. Incorporate the answer key into homework assignments and classroom activities. This provides students with opportunities to practice using the answer key and reinforces their understanding of mathematical concepts.
By following these tips, students can effectively utilize Spectrum Grade 6 Answer Key Math to improve their mathematical skills, boost their confidence, and develop a strong foundation for future mathematical endeavors.
In conclusion, Spectrum Grade 6 Answer Key Math is an indispensable resource for students, parents, and educators alike. It offers accurate solutions, detailed explanations, and a structured approach to problem-solving, making it an invaluable tool for enhancing mathematical understanding and problem-solving abilities.
By utilizing this answer key effectively, students can reinforce their grasp of mathematical concepts, develop critical thinking and problem-solving skills, and build a solid foundation for future mathematical success. Its alignment with curriculum standards and comprehensive coverage ensure that students have access to reliable and up-to-date solutions for their mathematical endeavors.
Spectrum Grade 6 Answer Key Math empowers students to approach mathematical challenges with confidence, fostering a positive attitude towards learning and equipping them with the skills necessary to excel in Grade 6 math and beyond.
{"url":"https://sncollegecherthala.in/spectrum-grade-6-answer-key-math/","timestamp":"2024-11-03T13:47:00Z","content_type":"text/html","content_length":"149358","record_id":"<urn:uuid:e604931d-5fe0-4615-8769-b9262851d939>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00139.warc.gz"}
Addition - Numbers | Term 1 Chapter 2 | 5th Maths

Addition − Numbers and Operations

Numbers and Operations: Appreciate the role of place value in the addition, subtraction and multiplication algorithms.

"Ananthan, come fast," Ananthan's mother shouted. "The bus will come early." "I am here, mummy, I am ready," he said. The whole family was very busy preparing for Ananthan's sister's marriage. They had to buy new clothes for their relatives and family members. They finished their purchases and returned home. Ananthan asked his father, "How much did you spend on our dresses?" His father said, "The cost of the dresses for gents is ₹25,050, for ladies ₹47,025, for kids ₹7,125, and for the bride and groom ₹17,500. Now you can tell me the total amount."

Ananthan took a paper and pen, and he wrote all the amounts one by one according to their place values. Check whether the total amount, ₹96,700, is correct or not. Yes, Ananthan is correct. Notice the cost for the kids, ₹7,125: there is an empty place in the ten thousands place, so Ananthan wrote down the numbers according to their place value.

We have learnt about the place values of numbers. Now we are going to use this method to add numbers of different sizes. Add the following numbers, writing them down one by one: 137462 + 4005 + 38 + 56734.

Step 1: Start by adding the ones. We have 19 ones in the ones place.
Step 2: We must regroup 19 ones as 1 ten and 9 ones.
Step 3: Now we can put the 1 ten with the tens and write 9 in the ones place. Similarly, we continue with the tens, hundreds, thousands, and so on.

Arrange all the given numbers according to their place value. We can do all addition problems in this manner. When writing the numbers, we can avoid mistakes by starting from the right side, that is, from the units place.
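The regrouping procedure described in the steps above can also be written as a short program. The sketch below (Python; it is not part of the original lesson) adds numbers column by column, carrying into the next place value, and checks both totals used in this chapter.

```python
def column_addition(numbers):
    """Add whole numbers digit by digit, starting from the units place,
    carrying (regrouping) into the next place value."""
    width = max(len(str(n)) for n in numbers)            # number of columns needed
    digits = [str(n).rjust(width, "0") for n in numbers]  # pad the empty places with zeros
    carry, result = 0, ""
    for col in range(width - 1, -1, -1):                 # right to left: ones, tens, hundreds, ...
        column_sum = carry + sum(int(d[col]) for d in digits)
        result = str(column_sum % 10) + result           # digit written in this place
        carry = column_sum // 10                         # regrouped into the next place
    if carry:
        result = str(carry) + result
    return int(result)

print(column_addition([25050, 47025, 7125, 17500]))      # 96700, the total cost of the dresses
print(column_addition([137462, 4005, 38, 56734]))        # 198239
```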
{"url":"https://www.brainkart.com/article/Addition_44491/","timestamp":"2024-11-03T10:41:31Z","content_type":"text/html","content_length":"38118","record_id":"<urn:uuid:7ab66dca-2f5c-4f94-9940-c25fb6581b08>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00200.warc.gz"}
Recurrent Event Data Analysis — reda-package

The R package reda provides functions for simulating, exploring, and modeling recurrent event data. The main functions are summarized as follows:
• simEventData: simulating survival, recurrent event, and multiple event data from a stochastic-process point of view.
• mcf: estimating the mean cumulative function (MCF) from a fitted gamma frailty model, or from a sample of recurrent event data using the nonparametric MCF estimator (the Nelson-Aalen estimator of the cumulative hazard function).
• mcfDiff: comparing two-sample MCFs by pseudo-score tests and estimating their difference over time.
• rateReg: fitting a gamma frailty model with a spline baseline rate function.
See the package vignettes for more introduction and demonstration.
{"url":"https://wwenjie.org/reda/reference/reda-package","timestamp":"2024-11-08T11:37:07Z","content_type":"text/html","content_length":"8681","record_id":"<urn:uuid:eeaa7994-166c-4778-8d8d-9db5dd88d593>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00240.warc.gz"}
Welcome to the Mathematics of the Twenties!

Thanks to a recent Twitter post by Brette Garner at the University of Denver, I came across an interesting article published by the NCTM (National Council of Teachers of Mathematics) in The Mathematics Teacher. The article is "Effect of Certain Types of Speed Drills in Arithmetic" by A.I. Messick. The study split 272 fourth and fifth graders into two groups while teaching addition: one focused on speed and the other focused on accuracy. Here are a few of the conclusions: (1) Accuracy is more important than speed, (2) From the viewpoint of speed, it makes little difference which is emphasized, speed or accuracy, (3) From the viewpoint of accuracy, it is much better to emphasize accuracy, and (4) In teaching addition to pupils of the fourth and fifth grades of the elementary schools it is better to emphasize accuracy rather than speed.

Given recent work by Professor Jo Boaler and her group at Stanford and the NCTM's definition of math fluency as being accurate, flexible, and efficient with computations, research concluding that accuracy should be taught over speed does not come as a surprise. What I did find surprising, however, is that this article was published in 1926. Not 1996, not 2006 or even 2016, but 1926.

Since 1926, how many thousands of students have suffered through timed math often known as "mad minutes" that focus on speed? How many students were discouraged from studying engineering, science, medicine, or other disciplines because they thought that being good at math meant memorizing procedures and regurgitating facts quickly? How many scientists did we fail to create since 1926 because memorizing mathematical procedures was a gatekeeper to further study of all sciences?

But, wait. 8 x 7 is always 56, right? You just have to know that quickly. If you can't recall this immediately, then how can you possibly go on to study fractions or solve differential equations? I would argue that knowing how to figure out 8 x 7 flexibly, accurately, and efficiently is much more meaningful than rapid recall or memorization. When I was a high school teacher, I found too many students who could not answer questions like, "What is 8 x 7?" When students are taught their times tables through "mad minutes" or other techniques that rely on speed, they might remember them in the short term, but will not be able to recall them years later. In particular, many high schoolers struggled to recall the 8 times tables. When I would ask them the answer to 8 x 7 they would blurt out 52, 58, or some other incorrect answer. When I said, "no," they usually would look at me with a puzzled look, shrug their shoulders and say, "Hmmmm...I don't know," or reach for their calculator.

Instead of giving up or relying on technology for simple facts, I want to live in a society of people who, when they forget a simple fact like 8 x 7, have another way to figure it out efficiently. One possible scenario: I can't recall 8 x 7, but I know that 10 x 7 is 70 and I need to take away 2 x 7, which is 14, leaving me with 56. So, 8 x 7 is 56. There are multiple other ways to answer this question without using pencil & paper or relying on a calculator. This is true mental math because it relies on problem-solving and making connections between various numbers and operations. Regrettably, many people equate mental math to "mad minutes" and rapid regurgitation of facts.
A student who understands multiple ways to calculate 8 x 7 exhibits true number sense. If we rely solely on rote procedures and memorization, we produce students and adults who only have one way to approach a question like 8 x 7. When their memory fails, they are at a loss as to what to do to find the answer.

Making Math Moments that Matter, a podcast and wonderful website curated by Kyle Pearce & Jon Orr, recently tweeted that "teaching math exclusively from your textbook is like teaching with a bag over your head." That statement really resonates with me because when teachers teach solely from a text, they are teaching a very one-dimensional view of mathematics. What other types of activities might students engage with to learn the content? What questions might they come up with on their own if they are given fewer prompts & procedures to mimic? What technology currently exists that might not have been available when the textbook was published? Given what we know about current best practices in math, how can teachers lift the paper bags off of their heads?

To explore these ideas, I reflect back upon my journey as a high school math teacher. When I first started teaching, I was very much a follower of the textbook and had that paper bag firmly planted over my head. In fact, I remember the first few weeks at my first international school in Caracas in 1999. The book order was delayed and had not been delivered. How could I teach IB math without a textbook? The mere thought paralyzed me. I thought I was a good teacher because I was good at explaining procedures found in the textbooks to my students. If I made my explanations clear in their delivery, then I was doing my job. I had missed the whole point by focusing on what I was doing rather than on what students were learning.

As a mathematics consultant, I now support teachers in discovering how math can be learned through applications and problem solving. As a new teacher, I only gave problem-solving lip service. My early students solved the word problems found at the end of the chapter and did very little true problem solving. What changed in my thinking and in my practice? How did I remove the paper bag from my head? I had to learn to let go by starting small and trusting that my students would still learn the necessary content. This was not easy for me to do and it can be very scary. I loved being the sage on the stage. I was comfortable there and received positive feedback from students who told me they enjoyed my math classes. Looking back on my early days as a teacher, I wasn't teaching much mathematics. I was teaching tricks, procedures, and "how to's." I was an expert at helping students make sense of textbook problems, which meant explaining procedures and tricks to them clearly. What I wasn't doing was creating experiences for my students to be true problem-solvers.

How can we provide all students the opportunity to problem-solve and make meaning of the mathematics they are learning? How can teachers shift their math instruction? I offer three suggestions: 1. Start small. 2. Find a professional learning network (PLN). 3. Include students and parents in the conversation.

Starting small is often one of the most difficult things for educators to wrap our heads around. If there is a better way of doing things, then I need to throw out the textbook tomorrow, find all my own resources based on the standards, and spend five hours preparing for every lesson. No.
Instead, take one thing from one lesson and change it. Consider one learning outcome from a lesson and find an interesting problem (lots of places to start here) that involves more student thinking and less teacher direction. If you've never done this before, it can be very hard to let go. Note: you don't have to make up your own problems at the start; you can try an Open Middle or youcubed problem. Here is an example from Jo Boaler (the two tasks appear as an image in the original post). Both questions explore the concept of area. But the second allows for student exploration, creativity, and conversations about what area means.

Secondly, join a PLN with other educators who are also interested in teaching math differently. They might be at your school or across the globe. It might be through a Critical Friends Group or through work with an instructional coach. There is nothing better than being able to share a success, no matter how small, with supportive colleagues. Twitter is a great place to start. I learn so much on Twitter from the math and education people whom I follow. There is a plethora of information out there and you can be selective over what pops up on your feed. You might find a fellow teacher who lives in a different country, but with whom you connect and share great ideas. There are Facebook groups devoted to teaching math. Don't be afraid to put yourself out there virtually or in person.

Finally, inform your students about what you are doing. If they have been taught traditionally for years, they are used to sitting and getting the information from the sage on the stage. This type of passive learning is much easier than having to engage in problem solving. If they don't understand why you are doing things differently, and you don't prepare them for it, there might be unnecessary pushback. Sometimes, when using a "student-as-worker, teacher-as-coach" approach to instruction, students assume you don't know the answers. If you tell them at the start that you do in fact know the answers, I find they are more willing to dig into the problems. I used to tell my students, "Yes, I know the answers to the questions you are working on. No, I'm not going to tell you even though I know." I also shared this information during parents' night. I had very little parent pushback because I invited them into the conversation and gave them the resources to support their children outside of the classroom. If their child's math class looks different, it's our job as educators to help parents navigate this. If not, it can lead to viral misinformation, like the father who wrote a check using Common Core math.

My journey from teacher of procedures, to teacher of mathematics, to mathematics consultant has not been linear. In fact, it has been anything but that. I am continually growing and learning as I continue along this path. I enjoy supporting teachers as they find their own paths to teaching more of the beauty of mathematics and fewer procedures. I became a math consultant because I am passionate about empowering educators with the mathematical expertise needed to inspire the problem solvers of the future. I love sharing my triumphs and failures in a vulnerable way to help both beginning and experienced teachers grow. Is there one thing you'd like to do differently in your math classroom? Try it today. Take that first step and let me know how it goes.
{"url":"http://www.mathematique.us/blog/welcome-to-the-mathematics-of-the-twenties","timestamp":"2024-11-14T00:55:06Z","content_type":"text/html","content_length":"49743","record_id":"<urn:uuid:51026d7d-d8f5-4fa4-81a5-4243a77f3e48>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00446.warc.gz"}
Momentum, Work and Energy

At this point, we introduce some further concepts that will prove useful in describing motion. The first of these, momentum, was actually introduced by the French scientist and philosopher Descartes before Newton.

Descartes’ idea is best understood by considering a simple example: think first about someone (weighing say 45 kg) standing motionless on high quality (frictionless) rollerskates on a level smooth floor. A 5 kg medicine ball is thrown directly at her by someone standing in front of her, and only a short distance away, so that we can take the ball’s flight to be close to horizontal. She catches and holds it, and because of its impact begins to roll backwards. Notice we’ve chosen her weight so that, conveniently, she plus the ball weigh just ten times what the ball weighs by itself. What is found on doing this experiment carefully is that after the catch, she plus the ball roll backwards at just one-tenth the speed the ball was moving just before she caught it, so if the ball was thrown at 5 meters per second, she will roll backwards at one-half meter per second after the catch.

It is tempting to conclude that the “total amount of motion” is the same before and after her catching the ball, since we end up with ten times the mass moving at one-tenth the speed. Considerations and experiments like this led Descartes to invent the concept of “momentum”, meaning “amount of motion”, and to state that for a moving body the momentum was just the product of the mass of the body and its speed. Momentum is traditionally labeled by the letter p, so his definition was:

momentum = p = mv

for a body having mass m and moving at speed v. It is then obvious that in the above scenario of the woman catching the medicine ball, total “momentum” is the same before and after the catch. Initially, only the ball had momentum, an amount 5x5 = 25 in suitable units, since its mass is 5 kg and its speed is 5 meters per second. After the catch, there is a total mass of 50 kg moving at a speed of 0.5 meters per second, so the final momentum is 0.5x50 = 25, the total final amount is equal to the total initial amount. We have just invented these figures, of course, but they reflect what is observed experimentally.

There is however a problem here—obviously one can imagine collisions in which the “total amount of motion”, as defined above, is definitely not the same before and after. What about two people on rollerskates, of equal weight, coming directly towards each other at equal but opposite velocities—and when they meet they put their hands together and come to a complete halt? Clearly in this situation there was plenty of motion before the collision and none afterwards, so the “total amount of motion” definitely doesn’t stay the same! In physics language, it is “not conserved”.

Descartes was hung up on this problem a long time, but was rescued by a Dutchman, Christian Huygens, who pointed out that the problem could be solved in a consistent fashion if one did not insist that the “quantity of motion” be positive. In other words, if something moving to the right was taken to have positive momentum, then one should consider something moving to the left to have negative momentum.
With this convention, two people of equal mass coming together from opposite directions at the same speed would have total momentum zero, so if they came to a complete halt after meeting, as described above, the total momentum before the collision would be the same as the total after—that is, zero—and momentum would be conserved. Of course, in the discussion above we are restricting ourselves to motions along a single line. It should be apparent that to get a definition of momentum that is conserved in collisions what Huygens really did was to tell Descartes he should replace speed by velocity in his definition of momentum. It is a natural extension of this notion to think of momentum as defined by

momentum = mass x velocity

in general, so, since velocity is a vector, momentum is also a vector, pointing in the same direction as the velocity, of course.

It turns out experimentally that in any collision between two objects (where no interaction with third objects, such as surfaces, interferes), the total momentum before the collision is the same as the total momentum after the collision. It doesn’t matter if the two objects stick together on colliding or bounce off, or what kind of forces they exert on each other, so conservation of momentum is a very general rule, quite independent of details of the collision.

Momentum Conservation and Newton’s Laws

As we have discussed above, Descartes introduced the concept of momentum, and the general principle of conservation of momentum in collisions, before Newton’s time. However, it turns out that conservation of momentum can be deduced from Newton’s laws. Newton’s laws in principle fully describe all collision-type phenomena, and therefore must contain momentum conservation. To understand how this comes about, consider first Newton’s Second Law relating the acceleration a of a body of mass m with an external force F acting on it:

F = ma, or force = mass x acceleration

Recall that acceleration is rate of change of velocity, so we can rewrite the Second Law:

force = mass x rate of change of velocity.

Now, the momentum is mv, mass x velocity. This means for an object having constant mass (which is almost always the case, of course!)

rate of change of momentum = mass x rate of change of velocity.

This means that Newton’s Second Law can be rewritten:

force = rate of change of momentum.

Now think of a collision, or any kind of interaction, between two objects A and B, say. From Newton’s Third Law, the force A feels from B is of equal magnitude to the force B feels from A, but in the opposite direction. Since (as we have just shown) force = rate of change of momentum, it follows that throughout the interaction process the rate of change of momentum of A is exactly opposite to the rate of change of momentum of B. In other words, since these are vectors, they are of equal length but pointing in opposite directions. This means that for every bit of momentum A gains, B gains the negative of that. In other words, B loses momentum at exactly the rate A gains momentum so their total momentum remains the same. But this is true throughout the interaction process, from beginning to end. Therefore, the total momentum at the end must be what it was at the beginning.

You may be thinking at this point: so what? We already know that Newton’s laws are obeyed, so why dwell on one special consequence of them?
The answer is that although we know Newton’s laws are obeyed, this may not be much use to us in an actual case of two complicated objects colliding, because we may not be able to figure out what the forces are. Nevertheless, we do know that momentum will be conserved anyway, so if, for example, the two objects stick together, and no bits fly off, we can find their final velocity just from momentum conservation, without knowing any details of the collision.

The word “work” as used in physics has a narrower meaning than it does in everyday life. First, it only refers to physical work, of course, and second, something has to be accomplished. If you lift up a box of books from the floor and put it on a shelf, you’ve done work, as defined in physics. If the box is too heavy and you tug at it until you’re worn out but it doesn’t move, that doesn’t count as work. Technically, work is done when a force pushes something and the object moves some distance in the direction it’s being pushed (pulled is ok, too).

Consider lifting the box of books to a high shelf. If you lift the box at a steady speed, the force you are exerting is just balancing off gravity, the weight of the box, otherwise the box would be accelerating. (Of course, initially you’d have to exert a little bit more force to get it going, and then at the end a little less, as the box comes to rest at the height of the shelf.) It’s obvious that you will have to do twice as much work to raise a box of twice the weight, so the work done is proportional to the force you exert. It’s also clear that the work done depends on how high the shelf is. Putting these together, the definition of work is:

work = force x distance

where only distance traveled in the direction the force is pushing counts. With this definition, carrying the box of books across the room from one shelf to another of equal height doesn’t count as work, because even though your arms have to exert a force upwards to keep the box from falling to the floor, you do not move the box in the direction of that force, that is, upwards.

To get a more quantitative idea of how much work is being done, we need to have some units to measure work. Defining work as force x distance, as usual we will measure distance in meters, but we haven’t so far talked about units for force. The simplest way to think of a unit of force is in terms of Newton’s Second Law, force = mass x acceleration. The natural “unit force” would be that force which, pushing a unit mass (one kilogram) with no friction or other forces present, accelerates the mass at one meter per second per second, so after two seconds the mass is moving at two meters per second, etc. This unit of force is called one newton (as we discussed in an earlier lecture). Note that a one kilogram mass, when dropped, accelerates downwards at ten meters per second per second. This means that its weight, its gravitational attraction towards the earth, must be equal to ten newtons. From this we can figure out that a one newton force equals the weight of 100 grams, just less than a quarter of a pound, a stick of butter.

The downward acceleration of a freely falling object, ten meters per second per second, is often written g for short. (To be precise, g = 9.8 meters per second per second, and in fact varies somewhat over the earth’s surface, but this adds complication without illumination, so we shall always take it to be 10.)
If we have a mass of m kilograms, say, we know its weight will accelerate it at g if it’s dropped, so its weight is a force of magnitude mg, from Newton’s Second Law. Now back to work. Since work is force x distance, the natural “unit of work” would be the work done be a force of one newton pushing a distance of one meter. In other words (approximately) lifting a stick of butter three feet. This unit of work is called one joule, in honor of an English brewer. Finally, it is useful to have a unit for rate of working, also called “power”. The natural unit of “rate of working” is manifestly one joule per second, and this is called one watt. To get some feeling for rate of work, consider walking upstairs. A typical step is eight inches, or one-fifth of a meter, so you will gain altitude at, say, two-fifths of a meter per second. Your weight is, say (put in your own weight here!) 70 kg. (for me) multiplied by 10 to get it in newtons, so it’s 700 newtons. The rate of working then is 700 x 2/5, or 280 watts. Most people can’t work at that rate for very long. A common English unit of power is the horsepower, which is 746 watts. Energy is the ability to do work. For example, it takes work to drive a nail into a piece of wood—a force has to push the nail a certain distance, against the resistance of the wood. A moving hammer, hitting the nail, can drive it in. A stationary hammer placed on the nail does nothing. The moving hammer has energy—the ability to drive the nail in—because it’s moving. This hammer energy is called “kinetic energy”. Kinetic is just the Greek word for motion, it’s the root word for cinema, meaning movies. Another way to drive the nail in, if you have a good aim, might be to simply drop the hammer onto the nail from some suitable height. By the time the hammer reaches the nail, it will have kinetic energy. It has this energy, of course, because the force of gravity (its weight) accelerated it as it came down. But this energy didn’t come from nowhere. Work had to be done in the first place to lift the hammer to the height from which it was dropped onto the nail. In fact, the work done in the initial lifting, force x distance, is just the weight of the hammer multiplied by the distance it is raised, in joules. But this is exactly the same amount of work as gravity does on the hammer in speeding it up during its fall onto the nail. Therefore, while the hammer is at the top, waiting to be dropped, it can be thought of as storing the work that was done in lifting it, which is ready to be released at any time. This “stored work” is called potential energy, since it has the potential of being transformed into kinetic energy just by releasing the hammer. To give an example, suppose we have a hammer of mass 2 kg, and we lift it up through 5 meters. The hammer’s weight, the force of gravity, is 20 newtons (recall it would accelerate at 10 meters per second per second under gravity, like anything else) so the work done in lifting it is force x distance = 20 x 5 = 100 joules, since lifting it at a steady speed requires a lifting force that just balances the weight. This 100 joules is now stored ready for use, that is, it is potential energy. 
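As a quick check of the two worked examples above (not part of the original lecture), the following few lines recompute the stair-climbing power and the potential energy stored in the raised hammer, using the lecture's rounded value g = 10 meters per second per second.

```python
g = 10.0                          # m/s^2, the rounded value used throughout the lecture

# Rate of working (power) while climbing stairs: weight x vertical speed
person_mass = 70.0                # kg
vertical_speed = 2.0 / 5.0        # m/s (one-fifth of a meter per step, two steps per second)
print(person_mass * g * vertical_speed)   # 280.0 watts

# Potential energy stored in the raised hammer: work = force x distance = m*g*h
hammer_mass, height = 2.0, 5.0    # kg, meters
print(hammer_mass * g * height)   # 100.0 joules
```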
Upon releasing the hammer, the potential energy becomes kinetic energy—the force of gravity pulls the hammer downwards through the same distance the hammer was originally raised upwards, so since it’s a force of the same size as the original lifting force, the work done on the hammer by gravity in giving it motion is the same as the work done previously in lifting it, so as it hits the nail it has a kinetic energy of 100 joules. We say that the potential energy is transformed into kinetic energy, which is then spent driving in the nail. We should emphasize that both energy and work are measured in the same units, joules.

In the example above, doing work by lifting just adds energy to a body, so-called potential energy, equal to the amount of work done. From the above discussion, a mass of m kilograms has a weight of mg newtons. It follows that the work needed to raise it through a height h meters is force x distance, that is, weight x height, or mgh joules. This is the potential energy.

Historically, this was the way energy was stored to drive clocks. Large weights were raised once a week and as they gradually fell, the released energy turned the wheels and, by a sequence of ingenious devices, kept the pendulum swinging. The problem was that this necessitated rather large clocks to get a sufficient vertical drop to store enough energy, so spring-driven clocks became more popular when they were developed. A compressed spring is just another way of storing energy. It takes work to compress a spring, but (apart from small frictional effects) all that work is released as the spring uncoils or springs back. The stored energy in the compressed spring is often called elastic potential energy, as opposed to the gravitational potential energy of the raised weight.

Kinetic Energy

We’ve given above an explicit way to find the potential energy increase of a mass m when it’s lifted through a height h, it’s just the work done by the force that raised it, force x distance = weight x height = mgh. Kinetic energy is created when a force does work accelerating a mass and increases its speed. Just as for potential energy, we can find the kinetic energy created by figuring out how much work the force does in speeding up the body.

Remember that a force only does work if the body the force is acting on moves in the direction of the force. For example, for a satellite going in a circular orbit around the earth, the force of gravity is constantly accelerating the body downwards, but it never gets any closer to sea level, it just swings around. Thus the body does not actually move any distance in the direction gravity’s pulling it, and in this case gravity does no work on the body. Consider, in contrast, the work the force of gravity does on a stone that is simply dropped from a cliff. Let’s be specific and suppose it’s a one kilogram stone, so the force of gravity is ten newtons downwards. In one second, the stone will be moving at ten meters per second, and will have dropped five meters. The work done at this point by gravity is force x distance = 10 newtons x 5 meters = 50 joules, so this is the kinetic energy of a one kilogram mass going at 10 meters per second.

How does the kinetic energy increase with speed? Think about the situation after 2 seconds. The mass has now increased in speed to twenty meters per second. It has fallen a total distance of twenty meters (average speed 10 meters per second x time elapsed of 2 seconds).
So the work done by the force of gravity in accelerating the mass over the first two seconds is force x distance = 10 newtons x 20 meters = 200 joules. So we find that the kinetic energy of a one kilogram mass moving at 10 meters per second is 50 joules, moving at 20 meters per second it’s 200 joules. It’s not difficult to check that after three seconds, when the mass is moving at 30 meters per second, the kinetic energy is 450 joules.

The essential point is that the speed increases linearly with time, but the work done by the constant gravitational force depends on how far the stone has dropped, and that goes as the square of the time. Therefore, the kinetic energy of the falling stone depends on the square of the time, and that’s the same as depending on the square of the velocity. For stones of different masses, the kinetic energy at the same speed will be proportional to the mass (since weight is proportional to mass, and the work done by gravity is proportional to the weight), so using the figures we worked out above for a one kilogram mass, we can conclude that for a mass of m kilograms moving at a speed v the kinetic energy must be:

kinetic energy = ½mv²

Exercises for the reader: both momentum and kinetic energy are in some sense measures of the amount of motion of a body. How do they differ? Can a body change in momentum without changing in kinetic energy? Can a body change in kinetic energy without changing in momentum? Suppose two lumps of clay of equal mass traveling in opposite directions at the same speed collide head-on and stick to each other. Is momentum conserved? Is kinetic energy conserved? As a stone drops off a cliff, both its potential energy and its kinetic energy continuously change. How are these changes related to each other?
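The derivation above can also be verified numerically. This short script (not part of the original lecture) compares the work done by gravity on the falling one-kilogram stone with ½mv² after one, two, and three seconds, again taking g = 10.

```python
g = 10.0      # m/s^2, as in the lecture
mass = 1.0    # kg, the falling stone

for t in [1.0, 2.0, 3.0]:
    speed = g * t                      # speed after t seconds of free fall
    distance = 0.5 * g * t**2          # distance fallen (average speed x time elapsed)
    work_by_gravity = mass * g * distance
    kinetic_energy = 0.5 * mass * speed**2
    print(t, work_by_gravity, kinetic_energy)   # 50, 200, 450 joules; the two columns agree
```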
{"url":"http://galileoandeinstein.physics.virginia.edu/lectures/momentum.html","timestamp":"2024-11-11T07:20:38Z","content_type":"text/html","content_length":"23078","record_id":"<urn:uuid:c97e775d-16d7-4e88-98e5-9c51bbb6baa2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00868.warc.gz"}
News and updates

Contact details

Slack channel for discussions
Join here (send me an email if the link has expired).

Student reps
A first mid-term course evaluation will take place Sept 25 10.10-10.30 (immediately after a morning lecture) and a final meeting on Nov 27 13.00-13.45 (over Zoom).

Course purpose
In this course you will learn scientific approaches, i.e., research methods, and the statistics needed to analyze the data we collect. The analysis can then become the basis for decision support in initiatives to improve performance in software development organizations. The course prepares students for the master thesis project.

The exam will likely be on Oct. 23 (you must always register for an exam some weeks before!)

The course is at 50% speed, and we thus expect you to spend 20h/week on the course. You will need those hours, and the first two weeks will have a high workload. All lectures and labs will be on campus. I strongly recommend you try out Exercise 1 as soon as possible since it's about installing all the stuff you will need in this course!

Course overview (E are exercises, LN are lecture notes)

| Week | Chapter(s) | Notes | E | LN | Videos | Papers¹ | Presentations |
|---|---|---|---|---|---|---|---|
| 1 | 1–3 | A high-level introduction to the concepts we use in the course. | E1 | LN1 | V1, V2 | The ABC of SE research | How to build a case (Aug. 30 08:15-12:00); dat246-L1.pdf (Aug. 30 10:15-12:00); Lab (Aug. 30 13.00-15.00) |
| 2 | 4–6 | Math notation of model specifications. | E2 | LN2 | V3, V4, V5, V6 | | Research ethics (Sept. 4 08:15-10:00); dat246-L2.pdf (Sept. 6 10:15-12:00); Lab (Sept. 6 13.15-15.00) |
| 3 | 7 | We ground our assumptions on information theory and the concept of maximum entropy. | E3 | LN3, LN3.1 | V7 | | Validity threats (Sept. 11 08:15-10:00); dat246-L3.pdf (Sept. 13 10:15-12:00); Lab (Sept. 13 13.15-15.00) |
| 4 | 8–9 | Interactions, Ch. 8, we won't emphasize in the course. However, understanding MCMC is important! | E4 | LN4 | V8 | Guidelines for conducting and reporting case study research | Case study research (Sept. 18 08:15-10:00); dat246-L4.pdf (Sept. 20 10:15-12:00); Lab (Sept. 20 13.15-15.00) |
| 5 | 10–11 | GLMs is what we eat for breakfast :) We must understand the maxent principle! Binomial, Poisson, and Multinomial models. | E5 | LN5 | V9, V10 | A crash course in good and bad controls | Evidence-based software engineering and systematic reviews (Sept. 25 08:15-10:00); dat246-L5.pdf (Sept. 27 10:15-12:00); Lab (Sept. 27 13.15-15.00) |
| 6 | 12 | Over-dispersed and zero-inflated outcomes, and ordered categorical outcomes and predictors (e.g. Likert scale values). | E6 | LN6 | V11 | Survey research in software engineering | Survey research (Oct. 2 08:15-10:00); dat246-L6.pdf (Oct. 4 10:15-12:00); Lab (Oct. 4 13.15-15.00) |
| 7 | 13 | Multilevel models! | E7 | LN7 | V12, V13 | | Guest lecture - Action Research (Oct. 9 08:15-10:00); dat246-L7.pdf (Oct. 11 10:15-12:00); Lab (Oct. 11 13.15-15.00) |
| 8 | 14 | Modeling covariance. Continuous varying intercepts, e.g., Gaussian Processes! Chapters 15–16 we will not cover in the course. | E8 | LN8 | V14, V15 | Applying Bayesian analysis guidelines to empirical software engineering data (with replication package) | Guest lecture - Design Science Research (Oct. 16 08:15-10:00); dat246-L8.pdf (Oct. 18 10:15-12:00); Lab (Oct. 18 13.15-15.00) |

1. These papers are compulsory reading.

Course literature
In this course, we will use one book: R. McElreath. Statistical Rethinking: A Bayesian Course with Examples in R and STAN. 2nd edition. ISBN: 9780367139919. We will study Chapters 1-7 and 9-14. Chapter 8 we will not emphasize much, while Chapters 15-16 we will not cover in this course. The book uses R and Stan through the rethinking package to specify and sample statistical models. The first exercise (E1) provides instructions on how to set things up in Windows, OS X, and Linux. Additionally, a number of papers will be used, which you'll be able to find in the File area. We will clearly indicate if a paper is not compulsory reading (if we don't say anything, assume that it is compulsory reading).

Course design
Each week we will cover 1-3 chapters in the book. Before we meet we expect you to have read the chapters for that week and, in addition, gone through the videos connected to the chapters. The lectures will focus on three things:
• Covering the most important things from each chapter.
• Going through practical hands-on examples.
• Presenting research methods commonly used in empirical software engineering (i.e., things you will use for your master thesis).
I cannot stress enough how important it is that you a) read the chapter(s), and b) go through the videos connected to each chapter, before each lecture. I also expect you to go to Canvas (i.e., this course home page) every day and check if things have been added or changed. If there are any changes to the lectures (e.g., if one were canceled for some reason), this will be announced above under News and updates.

Learning objectives and syllabus
• Knowledge and understanding:
□ Describe, understand, and apply empiricism in software engineering.
□ Describe, understand, and partly apply the principles of case study research/experiments/surveys.
□ Describe and understand the underlying principles of meta-analytical studies.
□ Explain the importance of research ethics.
□ Recognize and define codes of ethics when conducting research in software engineering.
□ State and explain the importance of threats to validity and how to control said threats.
□ Describe and explain the concepts of probability space (incl. conditional probability), random variable, expected value, and random processes, and know a number of concrete examples of the above.
□ Describe Markov chain Monte Carlo methods such as Metropolis.
□ Describe and explain Hamiltonian Monte Carlo.
□ Explain and describe multicollinearity, post-treatment bias, collider bias, and confounding.
□ Describe and explain ways to avoid overfitting.
• Skills and abilities:
□ Assess the suitability of and apply methods of analysis on data.
□ Analyze descriptive statistics and decide on appropriate analysis methods.
□ Use and interpret codes of ethics for software engineering research.
□ Design statistical models mathematically and implement said models in a programming language.
□ Make use of random processes, i.e., Bernoulli, Binomial, Gaussian, and Poisson distributions, with over-dispersed outcomes.
□ Make use of ordered categorical outcomes (ordered-logit) and predictors.
□ Assess the suitability of, from an ontological (natural process) and epistemological (maxent) perspective, various statistical distributions.
□ Make use of and assess directed acyclic graphs to argue causality.
□ State and discuss the tools used for data analysis and, in particular, judge their output.
□ Judge the appropriateness of particular empirical methods and their applicability to attack various and disparate software engineering problems.
□ Question and assess common ethical issues in software engineering research.
□ Assess diagnostics from Hamiltonian Monte Carlo and quadratic approximation using information theoretical concepts, i.e., information entropy, WAIC, and PSIS-LOO.
□ Judge posterior probability distributions for out-of-sample predictions and conduct posterior predictive checks.

Study plan (Chalmers)
Study plan (GU)

Examination form
If you copy any text at all you must reference it appropriately - if in doubt, ask. Plagiarism is something we do not look positively upon, and each year students are suspended because of it.

This course is examined in two components. First, a written exam at the end of the course. Second, an individual assignment during the course. If you pass the written exam you are given 5 credits. If you pass the individual assignment you are given 2.5 credits. Students are given the grades fail, 3, 4, or 5 in the course. In order to pass the course, you will need to pass both the assignment and the exam, but we set the final course grade only according to the grade you got on the written exam. For the written exam, all learning outcomes, as listed above, can be tested. For the individual assignment, there is mainly a focus on Bayesian data analysis (i.e., skills and abilities, and judgment and approach).

Additionally, we recommend students go through the exercises found in the file area. Even though conducting these exercises will not give any extra credits or bonus points that can be added to the written exam or assignment, you will be in a much better place once you've done the exercises.

Written exam deadlines (note these are preliminary!):
• Oct. 23 PM
• Jan. 5 AM
• Aug. 27 PM

Assignment deadlines:
{"url":"https://chalmers.instructure.com/courses/25331/assignments/syllabus","timestamp":"2024-11-12T15:51:21Z","content_type":"text/html","content_length":"95618","record_id":"<urn:uuid:0c9e1089-6a32-49e3-9897-ba79d7032511>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00277.warc.gz"}
Linear Problems (with Extended Range) Have Linear Optimal Algorithms
1984 Reports

Abstract. Let F1 and F2 be normed linear spaces and S: F0 → F2 a linear operator on a balanced subset F0 of F1. If N denotes a finite dimensional linear information operator on F0, it is known that there need not be a linear algorithm φ: N(F0) → F2 which is optimal in the sense that ||φ(N(f)) − S(f)|| is minimized. We show that the linear problem defined by S and N can be regarded as having a linear optimal algorithm if we allow the range of φ to be extended in a natural way. The result depends upon imbedding F2 isometrically in the space of continuous functions on a compact Hausdorff space X. This is done by making use of a consequence of the classical Banach-Alaoglu theorem.

More About This Work
Academic Units: Department of Computer Science, Columbia University
Published Here: February 15, 2012
{"url":"https://academiccommons.columbia.edu/doi/10.7916/D8PN9DSG","timestamp":"2024-11-12T16:17:52Z","content_type":"text/html","content_length":"16263","record_id":"<urn:uuid:d0f2505b-7dd2-4781-bbf8-403cc46fc0b9>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00323.warc.gz"}
Optimal Stimulation Protocol in a Bistable Synaptic Consolidation Model

School of Computer and Communication Sciences and School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland

Synaptic changes induced by neural activity need to be consolidated to maintain memory over a timescale of hours. In experiments, synaptic consolidation can be induced by repeating a stimulation protocol several times, and the effectiveness of consolidation depends crucially on the repetition frequency of the stimulations. We address the question: is there an understandable reason why induction protocols with repetitions at some frequency work better than sustained protocols—even though the accumulated stimulation strength might be exactly the same in both cases? In real synapses, plasticity occurs on multiple time scales, from seconds (induction), to several minutes (early phase of long-term potentiation), to hours and days (late phase of synaptic consolidation). We use a simplified mathematical model of just two time scales to elucidate the above question in a purified setting. Our mathematical results show that, even in such a simple model, the repetition frequency of stimulation plays an important role for the successful induction, and stabilization, of potentiation.

1. Introduction

Synaptic plasticity, i.e., the modification of the synaptic efficacies due to neural activity, is considered the neural correlate of learning (Hebb, 1949; Martin et al., 2000; Caroni et al., 2012; Nabavi et al., 2014; Hayashi-Takagi et al., 2015; Holtmaat and Caroni, 2016). It involves several biochemical mechanisms which interact on multiple timescales. The induction protocols for short-term plasticity (STP, on the order of hundreds of milliseconds) (Turrigiano et al., 1996; Markram et al., 1998) and for the early phase of long-term potentiation or depression (LTP or LTD, on the order of minutes to hours) (Levy and Stewart, 1983; Brown et al., 1989; Artola et al., 1990; Bliss and Collingridge, 1993; Markram et al., 1997; Sjöström et al., 2001) are well-established and have led to numerous models (Bienenstock et al., 1982; Gerstner et al., 1996; Song et al., 2000; Van Rossum et al., 2000; Senn et al., 2001; Shouval et al., 2002; Rubin et al., 2005; Pfister and Gerstner, 2006; Brader et al., 2007; Graupner and Brunel, 2007; Clopath et al., 2010; Gjorgjieva et al., 2011; Nicolas and Gerstner, 2016). On the other hand, various experiments have shown that the further evolution of synaptic efficacies on the timescale of hours depends in a complex way on the stimulation protocol (Frey and Morris, 1997; Dudai and Morris, 2000; Nader et al., 2000; Redondo and Morris, 2011). This phenomenon is called synaptic consolidation, to be distinguished from memory consolidation, which is believed to take place through the interaction between hippocampus and cortex and which occurs on an even longer timescale (Hasselmo, 1999; Roelfsema, 2006; Brandon et al., 2011). Such a richness of plasticity mechanisms across multiple timescales has been hypothesized to be fundamental in explaining the large storage capacity of memory networks (Fusi et al., 2005; Benna and Fusi, 2016). Synaptic consolidation is often studied in hippocampal or cortical slices, in which it is induced by extra-cellular stimulation of afferent fibers with short current pulses (Frey and Morris, 1997; Sajikumar and Frey, 2004a,b).
Experimental protocols are typically organized in multiple repetitions of stimulation episodes, with variable repetition frequency and duration of each episode (Figure 1A). The dependence of the consolidation dynamics on the parameters of the experimental protocol is complex and has remained elusive. Both the intra-episode pulse frequency and the inter-episode delay play an important role in determining whether a synapse gets potentiated or not after the stimulation (Kumar and Mehta, 2011; Larson and Munkácsy, 2015). Furthermore, recent evidence suggests the existence of optimal parameters to achieve consolidation (Kumar and Mehta, 2011; Larson and Munkácsy, 2015). Existing models succeeded in reproducing experimental results on early and late LTP (Clopath et al., 2008; Barrett et al., 2009; Ziegler et al., 2015; Kastner et al., 2016) by a mathematical description of the interaction of different synaptic mechanisms. However, the complexity of those models prevents a complete characterization of the dynamics that links stimulation protocols to synaptic consolidation. Here we address the following question: why is the temporal structure of stimulation, i.e., the timing of repetitions, so important for synaptic consolidation (Zhou et al., 2003; Kramár et al., 2012; Benna and Fusi, 2016)?

Figure 1. Schematic experimental setup and modeling framework. (A) Schematic of extra-cellular stimulation in experiments. The plasticity-inducing stimulus consists of several episodes of duration t_on with inter-episode interval t_off. Zoom: each episode contains several high-frequency pulses. (B) Schematic of the single-synapse consolidation model. The synapse is described by a weight variable w with time constant τ_w and a slower consolidation variable z with time constant τ_z ≥ τ_w. Each episode corresponds to a rectangular plasticity-inducing stimulus I(t). (C) Phase-plane for a specific choice of f(w, z) and g(w, z), I(t) = 0, and τ_z = 7τ_w. The fixed points at (w, z) = (−1, −1) and (w, z) = (1, 1) are stable and correspond to an unpotentiated and a potentiated synapse, respectively. The black line separates the basins of attraction of the two stable fixed points. (D) Evolution of the system dynamics in the phase-plane. The system is initialized in the unpotentiated state and it evolves under the effect of a plasticity-inducing stimulus made of three pulses.

We introduce a phenomenological model of synaptic consolidation (Figures 1B–D) in which, as suggested by experiments (Petersen et al., 1998; O'Connor et al., 2005; Bosch et al., 2014), both model variables are bistable. We find that, despite the simplicity of our model, potentiation of a synapse depends in a complex way on the temporal profile of the stimulation protocol. Our results suggest that not just the total number of stimulation pulses, but also the precise timing within an episode and across repetitions of episodes are important, in agreement with anecdotal evidence that changes in protocols can have unexpected consequences.

2. Methods

In what follows, we introduce the synaptic consolidation model that we analyze in the Results section. Since describing the details of molecular interactions inside a synapse as a system of differential equations (Bhalla and Iyengar, 1999; Lisman and Zhabotinsky, 2001) would be far too complicated for our purpose, we aim to capture the essential dynamics responsible for synaptic consolidation with an effective low-dimensional dynamical system.
In this view, variables are mathematical abstractions that represent the global state of a network of biochemical molecules inside a synapse, e.g., during a transition from one metastable configuration to another (Bosch et al., 2014).

2.1. Choice of the Model

A one-dimensional dynamical system is not expressive enough to capture experimental data. Indeed, in a one-dimensional differential equation, it would be sufficient to know the instantaneous state of a single variable of the synapse (such as the weight) to predict its evolution, while this is not the case in experiments. As a natural step toward more complexity, we consider a general autonomous two-dimensional model

$$\frac{dw}{dt} = f(w, z), \qquad \frac{dz}{dt} = g(w, z), \qquad (1)$$

where w represents the measured efficacy of a synaptic contact point (e.g., the amplitude of the EPSP caused by pre-synaptic spike arrival), while z is an abstract auxiliary variable. For simplicity, both variables will be considered unit-less. We choose the functions f and g such that

$$\tau_w \frac{dw}{dt} = -K_w (w - w_0)(w + w_0)\,w + C_w\Big(z - \frac{z_0}{w_0}\,w\Big) + I,$$
$$\tau_z \frac{dz}{dt} = -K_z (z - z_0)(z + z_0)\,z + C_z\Big(w - \frac{w_0}{z_0}\,z\Big), \qquad (2)$$

where (w, z) = ±(w_0, z_0) are the stable fixed points of the two-dimensional system in the presence of a fixed coupling C_w ≥ 0, C_z ≥ 0 and in the absence of a drive, i.e., I = 0. In our simulations, we always choose w_0 = z_0 = 1. For K_w ≠ 0 and K_z ≠ 0, we could divide Equation (2) by K_w and K_z to further reduce the number of parameters. However, we will stick to a notation with explicit K_w and K_z since we do not want to exclude the choice K_w = 0 or K_z = 0. Without loss of generality, we will choose K_{w,z} ∈ {0, 1}, i.e., either zero or unity. Note that the choice K_z = 0 implies that the dynamics of the auxiliary variable z are linear, while K_z = 1 implies full non-linearity. The choice of the model is explained in the next section.

2.2. Simplification Steps of the 2D-Dynamics

In this section we present the arguments leading from Equation (1) to (2). Readers not interested in the details may jump to the next sections. One way to tackle the very general system in Equation (1) is to perform a Taylor expansion around w = 0 for the first equation

$$\frac{dw}{dt} = A(z) + B(z)\,w + C(z)\,w^2 + D(z)\,w^3 + \dots \qquad (3)$$

and around z = 0 for the second one

$$\frac{dz}{dt} = A'(w) + B'(w)\,z + C'(w)\,z^2 + D'(w)\,z^3 + \dots \qquad (4)$$

An expansion up to the third order enables us to implement the bistable dynamics (Petersen et al., 1998; O'Connor et al., 2005) of single contact points. Bistability requires the system to have at least two stable fixed points at finite values. This condition cannot be met by degree 1 or degree 2 polynomials since they can have at most one stable fixed point. Therefore, bistability requires a polynomial of degree 3 or higher in at least one equation. To be more general, we will consider a system in which both polynomials are of degree 3. We restrict our analysis to the situation in which we have linear coupling between the two variables, of the form A(z) = A_0 + A_1·z, B(z) = B, C(z) = C, and D(z) = D. Analogously, in the second equation we set A'(w) = A'_0 + A'_1·w, B'(w) = B', C'(w) = C', and D'(w) = D'. Bistability is obtained with a negative coefficient of the third power in both equations. Before we start the analysis, we rewrite Equations (3) and (4) in a more symmetric form. To do so we proceed in three steps.
(i) Assuming that the degree 3 polynomial has three real roots, we rewrite our system in the more intuitive form

$$\tau_w \frac{dw}{dt} = -K_1 (w - w_1)(w - w_2)(w - w_3) + C_1 z,$$
$$\tau_z \frac{dz}{dt} = -K_2 (z - z_1)(z - z_2)(z - z_3) + C_2 w, \qquad (5)$$

where C_1 and C_2 are coupling constants and the roots w_1, w_2, w_3 correspond to the fixed points of the equations in the uncoupled case (C_1 = C_2 = 0). The parameters τ_w and τ_z can be interpreted as time constants since they do not influence the location of the fixed points but only the speed of the dynamics. K_1 and K_2 are two positive constants that scale the whole polynomial, while C_1 and C_2 are positive constants that control the amount of coupling between the two variables. If we exclude the coupling terms, each equation corresponds to an over-damped particle moving in a double-well potential (Strogatz, 2014). The parameters K_1, K_2, τ_w, τ_z, C_1, C_2, w_1, w_2, w_3 are simple transformations of the parameters A_0, A_1, B, C, D, A'_0, A'_1, B', C', D' of the original system. For example K_1 = D.

(ii) In order to further simplify our study, we assume that in both equations one of the three roots is zero, one is positive and one negative, equally distant from zero. Following (Zenke et al., 2015), we add a plasticity induction term to the first equation that describes the drive provided by an LTP induction protocol. The equations now read

$$\tau_w \frac{dw}{dt} = -K_1 (w - \bar{w})(w + \bar{w})\,w + C_1 z + I,$$
$$\tau_z \frac{dz}{dt} = -K_2 (z - \bar{z})(z + \bar{z})\,z + C_2 w. \qquad (6)$$

In the absence of coupling, the double well potential related to Equation (6) has minima at w = ±w̄, z = ±z̄ and a local maximum at w = 0 (z = 0). Notice that this seems to imply that a synaptic weight can take both positive and negative values, which is biologically implausible. However, this choice simplifies the calculations without loss of generality, since it is always possible to go back to a system with positive weights by applying a coordinate translation.

(iii) In the absence of a drive (I = 0), the system has eight free parameters, which all influence the location of the fixed points. In a final transformation step we rewrite Equation (6) such that the location of the two stable fixed points becomes independent of the coupling constants C_1 and C_2. The reason for doing this is that the stable fixed points of the system are easier to access experimentally than other constants. In particular, the value of w at the stable fixed point should be related to the synaptic weight measured experimentally. We, therefore, rewrite the system in the form of Equation (2), where w_0 and z_0 are the absolute values of the stable fixed point and the parameters can be mapped from Equation (6) to (2); for example, K_w = K_1 and C_w = (K_1 w̄² − K_1 w_0²) w_0/z_0, and analogously for C_z and K_z.

2.3. Nullclines and Phase-Plane Analysis

Since the system is two-dimensional, it can be studied using phase-plane analysis, following a well-established tradition in computational neuroscience (Wilson and Cowan, 1972; Ermentrout, 1996, 2002; Rinzel and Ermentrout, 1998). The fixed points of the system are graphically represented by the intersections of the nullclines (i.e., the curves defined by either dw/dt = 0 or dz/dt = 0), which in our system are:

$$w\text{-nullcline:} \quad z = \frac{z_0}{w_0}\,w + \frac{K_w}{C_w}(w - w_0)(w + w_0)\,w - \frac{I}{C_w},$$
$$z\text{-nullcline:} \quad w = \frac{w_0}{z_0}\,z + \frac{K_z}{C_z}(z - z_0)(z + z_0)\,z. \qquad (7)$$

The maximum number of fixed points for the system in Equation (2) can be easily computed.
To do so, consider a more general form of two nullclines:

$w\text{-nullcline:} \quad z = P_n(w), \qquad z\text{-nullcline:} \quad w = Q_m(z),$  (8)

where P[n](w) is a polynomial of degree n in w and, analogously, Q[m](z) is a polynomial of degree m in z (cf. Equation 7). To find the fixed points of the system (Equation 8) we need to solve:

$w = Q_m(P_n(w)).$  (9)

Equation (9) is a polynomial equation of degree n · m in w and therefore it allows a number of real solutions s, 0 ≤ s ≤ n · m. Applying this formula to our case, we find that we can have a maximum of nine fixed points. In order to reduce the number of parameters from 8 to 4, we first consider the symmetric case (section 2.4) in which the two equations have the same parameters. Moreover, since we make the choice z[0] = w[0] = 1, the actual number of free parameters is three. In the next section, we show the effect of changing the coupling coefficients. Then, we briefly comment on the effect of the time constants and of a constant plasticity-inducing stimulus I. We will move to the analysis of the asymmetric cases in section 2.5.

2.4. Symmetric Changes of Coupling Coefficients Reveal Two Bifurcations

We study the case of symmetric coupling C[w] = C[z] = C and analyze how a change of coupling strength influences the dynamics of the system. As an aside, we note that for symmetric coupling we can define a pseudopotential (Cohen and Grossberg, 1983)

$V(w, z) = \frac{K_w}{4} w^4 + \frac{K_z}{4} z^4 - \frac{1}{2 w_0}\left(K_w w_0^3 - z_0 C\right) w^2 - \frac{1}{2 z_0}\left(K_z z_0^3 - w_0 C\right) z^2 - C w z - I w$  (10)

in which the dynamical variables move according to $\tau_w \frac{dw}{dt} = -\frac{\partial V}{\partial w}$ and $\tau_z \frac{dz}{dt} = -\frac{\partial V}{\partial z}$. We fix τ[w] = τ[z], K[w] = K[z] = 1, I = 0, w[0] = z[0] = 1 and vary C in Equation (2). In the case C = 1, the system is in a rather simple regime: there are two stable fixed points in (w, z) = (−1, −1) and (w, z) = (1, 1) and a saddle fixed point at the origin (Figure 2). The basins of attraction of the stable fixed points are separated by the z = −w diagonal.

FIGURE 2
Figure 2. Phase-plane diagram and basins of attraction for the symmetric case with equal coupling constants, C[w] = C[z] = C. The plasticity-inducing stimulus is null, I = 0. (A) C = 1, phase-plane with field arrows. The color of the arrows is proportional to the field strength. w− and z− nullclines are indicated in red and blue, respectively. The line that separates the two basins of attraction is indicated in black. (B) Same as (A), but C = 0.4. Compared to (A), we notice the creation of two saddle points. (C) Same as (A), but C = 0.2. The maximum number of fixed points is achieved. In this case we have four basins of attraction.

If we decrease the coupling C, we encounter two bifurcations. A first pitchfork bifurcation takes place at C = 1/2, when the two nullclines are tangent to each other at the saddle point. Beyond this bifurcation point of the coupling coefficient, we observe the creation of two additional saddle points (Figure 2B). The stability properties, the location and the basins of attraction of the other two fixed points remain unchanged, but the local field strength changes, as shown by the colored arrows. The second pitchfork bifurcation takes place at C = 1/3. For this coupling value, each of the two new saddle points splits into a stable fixed point and two further saddle points. Therefore, for very weak coupling we observe four basins of attraction, whose shape is shown in Figure 2C. The stability of the fixed points in (w, z) = (−1, −1) and (w, z) = (1, 1) is not affected by the bifurcations.
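As a quick numerical cross-check of these fixed-point counts (a sketch of ours, not part of the original analysis), one can count the intersections of the two nullclines of Equation (7) on a dense grid, using the composition argument of Equations (8)-(9):

```python
import numpy as np

def count_fixed_points(C, w0=1.0, z0=1.0):
    """Count real solutions of w = Q(P(w)) in the symmetric case
    K_w = K_z = 1, I = 0, C_w = C_z = C, by sign changes on a dense grid."""
    w = np.linspace(-2.0, 2.0, 200000)   # grid chosen so the known roots 0 and +-1 are not exact grid points
    P = lambda x: (z0 / w0) * x + (1.0 / C) * (x - w0) * (x + w0) * x   # w-nullcline: z = P(w)
    Q = lambda x: (w0 / z0) * x + (1.0 / C) * (x - z0) * (x + z0) * x   # z-nullcline: w = Q(z)
    F = Q(P(w)) - w
    return int(np.count_nonzero(np.sign(F[:-1]) != np.sign(F[1:])))

for C in (1.0, 0.4, 0.2):
    print(C, count_fixed_points(C))   # expected: 3, 5, and 9 fixed points (cf. Figure 2)
```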
On the other hand, if we increase the coupling coefficient to a value C > 1, then the two nullclines will progressively flatten, but the location of the three fixed points is unchanged with respect to the case C = 1. These observations have been summarized in the bifurcation diagram of Figure 3A. We observe that there are actually three pitchfork bifurcations, but that two of them are degenerate since they happen for the same value of C. FIGURE 3 Figure 3. Bifurcations diagrams. (A) Fixed points in the symmetric case. Dashed lines indicate unstable fixed points while continuous lines indicate stable fixed points. Orange and green dots indicate bifurcation points. (B) Bifurcation points in the C[w] – C[z] plane (black) for the general (asymmetric) case. The dashed gray line corresponds to C[w] = C[z]. The orange and green dots indicate the corresponding bifurcations in (A). Note that, in (B), the bifurcation at ${C}_{w}={C}_{z}=\frac{1}{3}$ (green dot) is a degenerate point. 2.5. Asymmetric Parameter Choices Shape the Basins of Attraction As a more general case, we consider asymmetric coupling C or timescale τ. When the coupling coefficients are asymmetric, we can plot the position of the bifurcation points in the C[w]—C[z] plane ( Figure 3B). The choice C[w] = C[z] of the previous section corresponds to the dashed gray line. We notice that in the asymmetric case it is possible to have three distinct bifurcations (for example, we can fix C[w] = 0.3 and decrease C[z], from 1 to 0). We find that, for C[w] + C[z] > 1, the number of fixed points is always three and no bifurcation is possible. On the other hand, if C[w] + C[z] < 1, the system enters in the regime with minimum five fixed points. Moreover, we can analytically compute the bifurcation value of one coupling constant, given the other. An asymmetric choice C[w] ≠ C[z] influences the shape of the basins of attraction (Figure 4A). FIGURE 4 Figure 4. Asymmetric parameter choices. (A) In the case C[w] = 3 > C[z] = 1, the curvature of the w−nullcline (red) is smaller than that of the z−nullcline (blue) and the basins of attraction are deformed compared to Figure 2A. (τ[z] = τ[w] = 1) (B) For τ[z]/τ[w] = 3 and C[w] = C[z] = 1, nullclines are not affected (compare to Figure 2A) but the basins of attraction are. (C) For I = 0.5 (all other parameters set to 1), the basin of attraction of the fixed point at (−1, −1) is smaller than of the fixed point at (1, 1). If we keep C[w] = C[z] but consider instead τ[z] > τ[w], the system in Equation (2) may be interpreted as two different molecular mechanisms that act on different timescales. For example, the variable z can be interpreted as a tagging mechanism or a consolidation variable while w is the weight variable or amplitude of a post-synaptic potential. A comparison of Figure 2A and Figure 4B shows that the changes in τ do not affect the nullclines but change the flow field and the basin of attraction. Another way by which we can introduce asymmetry in the system is by adding a plasticity-inducing stimulus I. It follows from Equation (7) that a value I > 0 will cause a down shift of the w −nullcline. The case of C[w] = C[z] = 1, τ[w] = τ[z] = 1 s, K[w] = K[z] = 1 and I > 0 is shown in Figure 4C. A plasticity-inducing stimulus I > 0 also implies a reduction of the basin of attraction of the lower stable fixed point in favor of an increase of the basin of attraction of the upper stable fixed point. 
For high values of I, the basin of attraction of the lower fixed point disappears via a steady state bifurcation. Therefore, when I > 0 is large enough, the system is forced to move to the upper fixed point that can be interpreted as a potentiated state of the synapse. Analogously, when I < 0, the attraction basin of the lower fixed point is enlarged and leads, eventually, to a bifurcation in which the upper fixed point and the saddle point are lost.

A possible generalization of the model would be to consider the coupling coefficients C[w] and C[z] as dynamical variables, as has been explored in previous work (Ziegler et al., 2015). In these models, the coupling parameters C[w] and C[z] of the two dynamical variables alternate between C[w] = 0 and C[z] = 1 or C[w] = 1 and C[z] = 0, implementing a write-protection mechanism. The price we pay is the introduction of additional differential equations and parameters for the dynamics of the coupling coefficients. In the specific implementation of Ziegler et al. (2015), the dynamical coupling is controlled by a low-pass filter of the plasticity-inducing stimulus I and by the concentration of neuromodulators.

2.6. Numerical Simulations

All figures were obtained using Python 2.7, except for the bifurcation plot in Figure 3, which was created with Wolfram Mathematica. In the phase-plane plots, the separatrix between the basins of attraction was obtained by doing a mesh-grid search: we initialized the dynamical system (Equation 2) at each point of a 100 × 100 grid in the w, z space (w, z ∈ [−1.5, 1.5]) and checked to which stable fixed point it converges. From these points, we then interpolated the separation line. The trajectory of the system in the phase-plane was obtained by solving the system in Equation (2) using the Runge-Kutta 4 method with integration step dt = 0.01. In Figures 6, 7, we inject an external stimulus into the dynamical equations. The system trajectory is always initialized in the depotentiated state (−1, −1) and the simulation is stopped when the trajectory enters into the basin of attraction of the potentiated state (1, 1). The position of the stable fixed points depends on the choice w[0] = z[0] = 1, which we made for simplicity. In fact, we can remap the values of the synaptic weight w to the desired (positive) range with an affine transformation, without loss of generality.

3. Results

The two-dimensional model, introduced in the Methods section, predicts a complex dependence of the synaptic consolidation dynamics upon the parameters of the experimental protocol. This complex dependence has similarities with the behavior observed in experiments (Sajikumar et al., 2005; Larson and Munkácsy, 2015; cf. Figure 1). First, we describe how we abstract the experimental protocol into a time-dependent plasticity-inducing stimulus I(t). Then, we show the response of our model to different stimulation protocols. In our model, the plasticity-inducing stimulus I(t) drives the synaptic weight w via a non-linear equation characterized by a time constant τ[w]. The weight w is coupled to a second variable z with time constant τ[z] (Equation 2). The variable z is an abstract description of the complex metastable states (potentiated or unpotentiated) caused by consolidation (Redondo and Morris, 2011; Bosch et al., 2014). After an analysis of a single rectangular stimulation (one episode), in section 3.2, we will move to the more realistic case of repetitive stimulation across multiple episodes.
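Section 2.6 describes the numerical procedure only in words; the following Python sketch (ours, written for illustration — it is not the authors' code) shows one possible implementation of the Runge-Kutta-4 integration and a coarse version of the mesh-grid search for the basins of attraction.

```python
import numpy as np

def rhs(w, z, I=0.0, tau_w=1.0, tau_z=1.0):
    # Equation (2) with K_w = K_z = C_w = C_z = w_0 = z_0 = 1.
    return np.array([(-(w - 1.0) * (w + 1.0) * w + (z - w) + I) / tau_w,
                     (-(z - 1.0) * (z + 1.0) * z + (w - z)) / tau_z])

def rk4_step(state, dt, I=0.0):
    """One classical Runge-Kutta-4 step, as in section 2.6."""
    k1 = rhs(state[0], state[1], I)
    s2 = state + 0.5 * dt * k1
    k2 = rhs(s2[0], s2[1], I)
    s3 = state + 0.5 * dt * k2
    k3 = rhs(s3[0], s3[1], I)
    s4 = state + dt * k3
    k4 = rhs(s4[0], s4[1], I)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

def attractor(w_init, z_init, dt=0.01, t_max=30.0):
    """Integrate with I = 0 and report the reached attractor: +1, -1 (or 0 for the saddle)."""
    state = np.array([w_init, z_init], dtype=float)
    for _ in range(int(t_max / dt)):
        state = rk4_step(state, dt)
    return int(np.sign(state[0]))

# Coarse version of the paper's 100 x 100 mesh-grid search (11 x 11 here to keep it fast):
grid = np.linspace(-1.5, 1.5, 11)
basin_map = np.array([[attractor(w, z) for w in grid] for z in grid])
print(basin_map)
```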
Throughout the results section, we will focus on synaptic potentiation. Since the self-interaction term in Equation (2) is symmetric with respect to w = z = 0, synaptic depression of a potentiated state is the mirror image of synaptic potentiation of a unpotentiated state. 3.1. Abstraction of the Stimulation Protocol In their seminal work, Bliss and Lømo (1973) showed that repeated high-frequency stimulation of afferent fibers can lead to long-lasting synaptic potentiation. In later work it was shown that low-frequency stimulation can lead to long-lasting synaptic depression (Bashir and Collingridge, 1994). In order to keep the analysis transparent, we use a time-dependent, real-valued quantity I(t) as an abstraction for such experimental protocols. In what follows, we will refer to I(t) as to the plasticity-inducing stimulus. Note that, we do not perform an explicit mapping from the electrical current used in LTP experiments for the stimulation of pre-synaptic fibers onto the plasticity-inducing stimulus I(t) that influences the dynamics of Equation (2). A precise mapping would require additional assumptions on (i) how extra-cellular stimulation triggers axonal spikes in multiple fibers, (ii) how pre-synaptic spike arrivals cause post-synaptic firing and (iii) how pre- and post-synaptic neural activity leads, potentially via a Hebbian model, to the induction of early-LTP. This means that, in principle, the model's dynamics is rich enough to reproduce the four classical synaptic-consolidation experiments (Frey and Morris, 1997; Nader et al., 2000), however, we would need to set at least four free parameters, corresponding to the amplitudes of the external input I, needed for strong and weak LTP and LTD. Instead, we model a set of extra-cellular high-frequency pulses as a single rectangular plasticity-inducing stimulus of positive amplitude (Figure 1B). The larger the stimulation frequency, the larger the amplitude of I(t). Analogously, a set of extra-cellular current pulses at low frequency is modeled as a single negative rectangular plasticity-inducing stimulus. The compression of multiple extra-cellular pulses into a single rectangular episode I(t) is justifiable since the time between single pulses, even in the case of low-frequency stimulation, is very short compared to the timescale of plasticity. This implies that multiple short pulses in experiments can be well approximated by a single episode, described by one prolonged rectangular stimulus in our model (Figures 1A,B). In agreement with well-established plasticity models (Bienenstock et al., 1982; Senn et al., 2001; Pfister and Gerstner, 2006; Clopath et al., 2010; Gjorgjieva et al., 2011), we use I > 0 to describe a high-frequency stimulation since a positive I leads to potentiation (see section Methods and Figure 4C). Conversely, a negative I favors depotentiation. On the other hand, experiments that involve global variables, such as cross-tagging (Sajikumar and Frey, 2004a,b), can not be explained by our model. 3.2. One Episode We consider the case in which our two-variable synapse model is stimulated with a single rectangular plasticity-inducing stimulus I(t) of variable amplitude and duration t[on] (Figure 5A). Experimentally, this would correspond to single-episode, high-frequency protocols of variable stimulation intensity (i.e., pulse frequency) and duration. 
For each choice of duration and amplitude, we initialize the system in the unpotentiated state, defined by the initial value (w, z) = (−1, −1) and we numerically integrate the system dynamics until convergence. We then measure the final state of the synapse, i.e., whether it converged to the potentiated or returned to the unpotentiated state. In Figure 5, we plot the curve that separates the region of the parameter space that yields potentiation (shaded area) from the one that does not. Different curves correspond to different time constants τ[w] and τ[z] of the synaptic variables w and z in Equation (2). FIGURE 5 Figure 5. Potentiation during a single episode. Different curves correspond to different ratios of the time constant τ[z] and τ[w] in Equation (2). (A) Schematic representation of single-episode stimulations, corresponding to different choices of t[on]. (B) Schematic representation of different single-episode stimulations with constant area. The black line is proportional to 1/amplitude in order to stress that all pulses have the same area. (C) Separation curves between regions of successful or unsuccessful potentiation as a function of amplitudes and duration t[on] of a the plasticity-inducing stimulus. The initial state is always the unpotentiated synapse (w = z = −1). The shaded region of the parameter space is the one in which the synapse gets potentiated. (D) Same as (C) but as a function of amplitude and area of the stimulus during the episode. The two insets show examples of trajectories (green lines) in the phase-plane for two different parameters choices. The solid green lines represent the dynamical evolution of the system during the application of the external stimulus, while the dotted green line shows the relaxation of the system to a stable fixed point after the stimulation. Red: w-nullcline; blue: z-nullcline; black: separatrix. The parameters that are not specified in the figure are: C[w] = C[z] = 1, I = 0. Figure 5C illustrates a rather intuitive result, i.e., if the amplitude of the plasticity-inducing stimulus is increased, the duration needed for potentiation decreases. Moreover, if the amplitude is too small, we cannot achieve potentiation, even for an infinite pulse duration. The limit of infinite pulse duration is in the following called the “DC” limit. The effect of DC-stimulation can be easily understood from a phase-plane analysis (Figure 4). Indeed, the introduction of a constant term I > 0 in the w equation (DC term), yields a shift in the w-nullcline vertically downward. However, if the term is too small to cause the loss of the low fixed point, potentiation cannot be achieved (Figure 4C). The separation curves in Figure 5C indicate that the minimal duration of an episode necessary for potentiation decreases as the intensity of the plasticity-inducing stimulus increases. We wondered whether the relevant parameter for potentiation is the area under the rectangular plasticity-inducing stimulus. To study this, we performed a similar analysis, with the amplitude of the plasticity-inducing stimulus and its area as independent variables. For each choice of area and amplitude, the duration of the episode is given by t[on] = area/amplitude (see Figure 5B). The results are shown in Figure 5D. If there were a regime in which the relevant parameter is the area of the pulse, then the curve separating parameters of successful from unsuccessful potentiation would be horizontal. 
However, we find a near-horizontal curve only for τ[z] = τ[w], limited to the high-amplitude region. For τ[z] > τ[w] we find the existence of an optimal value of the amplitude that yields potentiation with the minimal area. If we increase the amplitude beyond this optimal value, the necessary area under the stimulus curve I(t) starts to increase again. In order to understand this effect, we look again at the phase-plane, in particular at the dependence of the separatrix on the timescale separation. In the limit in which the amplitude goes to infinity and the duration goes to 0 while the area of the whole plasticity-inducing stimulus stays the same, the stimulus can be described by a Dirac-δ function. In Figure 5D, we can see that, if τ[z ] ≫ τ[w], the separatrix tends to an horizontal line for w ≫ 1. Since a δ-pulse input is equivalent to an instantaneous horizontal displacement of the momentary system state in the phase-plane, a single δ-pulse cannot bring the system across the separatrix. The δ-pulse stimulation is, of course, a mathematical abstraction. In a real experimental protocol, such a stimulation can be approximated by a short episode of very intense high frequency stimulation. Due to the necessary finite duration of an episode, the system response in the phase-plane will not be a perfectly horizontal displacement. However, achieving potentiation with short pulses can still be considered as difficult, because it would require a disproportionately large stimulation amplitude. Our findings highlight the fact that changing parameters, such as the ratio of τ[w] and τ[z], gives rise to different behaviors of the model in response to changes in the stimulation protocols. We may use this insight to design optimal experimental protocols for single-episode plasticity induction. In particular, a model with timescale separation would predict the existence of an optimal stimulus intensity for which the total stimulus area necessary for potentiation is minimized. We emphasize that any model where consolidation works on a timescale that is slower than that of plasticity induction will exhibit timescale separation and be therefore sensitive to details of the stimulation protocol. 3.3. Repeated Episodes As a second case, we consider the potentiation of a synapse induced by repetitions of several stimulation episodes. In an experimental setting, this type of stimulation would correspond to several episodes of high-frequency stimulations, characterized by three parameters: the intensity of stimulation during each episode (amplitude), the duration (t[on]) of each episode and the inter-episode interval, t[off] (cf. Figures 6A,B). To keep the analysis transparent we apply a number of repetitions large enough to decide whether potentiation is successful or not given the three parameters. Notice that if t[off] = 0 we are back to the DC stimulation as defined in the previous section. FIGURE 6 Figure 6. Potentiation with repeated episodes. (A) Schematic representation of stimulation protocols characterized by different t[off], while t[on] = τ[w] is fixed. (B) Schematic representation of stimulation protocols with t[on] = 0.01τ[w]. (C) Potentiation region for stimulation with long episodes for fixed t[on] = τ[w]. The curves for different ratios τ[z]/τ[w] (see color code) indicate the separation between the region that yields potentiation (shaded) and the region that does not (white) as a function of intensity of stimulation in each episode (amplitude) and inter-episode interval ( t[off]). 
(D) Potentiation regions for protocols with shorter episodes, t[on] = 0.01τ[w]. The potentiation region is shaded. The two insets show examples of trajectories (green lines) in the phase-plane for the same choice of stimulation parameters but different timescale separation. The solid green lines represent the dynamical evolution of the system during the application of the external stimulus, while the dotted green line shows the relaxation of the system to a stable fixed point after the end of the stimulation protocol. The parameters that are not specified otherwise are: C[w] = C[z] = 1, I = 0.

The curves in Figures 6C,D show the separation between parameters that lead to successful potentiation (shaded) or not (white) in the amplitude-t[off] space for fixed values of t[on] and for different τ[z]/τ[w] ratios. In Figure 6C, we fix t[on] = τ[w]. We observe that, at least for low timescale ratios, there exists an amplitude above which the synapse gets potentiated independently of t[off], which suggests that, for this intensity of the stimulation, the potentiation happens during the first episode. The amplitude necessary to obtain potentiation in one pulse, however, increases rapidly with the τ[z]/τ[w] ratio (see section 3.2). On the other hand, if the value of t[off] is small enough (i.e., for high repetition frequency), potentiation can be achieved with smaller amplitudes and the timescale ratio is less important (notice the superimposed lines in the bottom left part of the plot). If we decrease the pulse duration to t[on] = 0.01τ[w], we obtain qualitatively similar separation curves, but potentiation now requires much larger values for the amplitude of the plasticity-inducing stimulus (see Figure 6B) than for t[on] = τ[w] (see Figure 6A). Importantly, in the case of timescale separation (e.g., τ[z] = 7τ[w]) several repetitions are needed before the consolidation variable z has sufficiently increased so that the synapse state crosses the separatrix (Figure 6, insets).

In analogy to the analysis performed in section 3.2, we search for an optimal stimulation protocol in the case of repeated episodes. In order to allow a direct comparison between single and repetitive episodes, we measured the total area under the stimulation curve I(t) in the repetitive episode scenario, limited to the minimal number of episodes sufficient to achieve potentiation. In Figure 7A, we show the minimum stimulation area (number of episodes times the area of each rectangular plasticity-inducing stimulus) required to achieve potentiation, as a function of the amplitude and the frequency of the stimulus for strong timescale separation (τ[z]/τ[w] = 7). We notice that the minimum stimulation area (white star) corresponds to t[off] ~ 10t[on], i.e., the waiting time between episodes is ten times longer than each episode. In real experimental conditions, however, it might be difficult to control the intensity of the stimulation. For this reason, we consider a fixed intensity (e.g., amplitude I = 10 in Figure 7A) and vary the inter-episode time t[off]. We find that there exists an optimal stimulation frequency to obtain potentiation with minimal total area (see Figure 7B). These results highlight two main facts: (i) for many stimulation intensities (only two are shown in the graph), one can find an optimal repetition frequency, (ii) there is a broad region in the parameter space (t[on], amplitude, and area) where the number of pulses needed to achieve consolidation is constant.
Indeed, the broad region around the minima in Figure 7B (fixed amplitude and t[on]) where the area is approximately constant corresponds to a constant number of pulses (n[pulses] = area/t[on]).

FIGURE 7
Figure 7. Stimulation effort needed to achieve potentiation for τ[z]/τ[w] = 7 and t[on] = 0.01τ[w]. (A) The potentiation domain (shaded region in Figure 6) is colored proportionally to the stimulation area needed to achieve potentiation with a repetitive pulse stimulus. The minimum stimulation area is ≃ 8.34; it is indicated by the white star and corresponds to the parameter values t[off] = 0.11τ[w], amplitude = 17.75 and 47 pulses. (B) Slices of the diagram in (A) for amplitude = 10 (dashed line) and for amplitude = 20 (dash-dotted line) are plotted against t[off]. One can notice that for a fixed stimulation amplitude, there is an optimal repetition frequency $f = \frac{1}{t_{\mathrm{on}} + t_{\mathrm{off}}}$ that minimizes the stimulus area required to achieve potentiation. The parameters that are not specified otherwise are: C[w] = C[z] = 1, I = 0.

4. Discussion

We introduced and analyzed a minimal mathematical model of synaptic consolidation that consists of two ODEs with linear coupling terms and cubic non-linearity. Since it is a two-dimensional model, the system can be studied using phase-plane techniques. While our model can have up to four stable fixed points, we focused on the case of two stable fixed points, to allow the physical interpretation of the fixed points as an unpotentiated or potentiated synapse. The weight variable w should be seen as the bistable building block of complex synapses. While there is evidence that the potentiation of a single synapse is an all-or-none process (Petersen et al., 1998; O'Connor et al., 2005; Bosch et al., 2014), recent results challenge this view in favor of a modular structure of the synapse (Lisman, 2017). Either way, it is possible to identify a bistable basic element of the synapse.

We showed that our minimal model responds to stimulation protocols in a non-trivial way: we quantified the total stimulation strength by the stimulus area defined as duration times intensity, where intensity is a combination of intra-episode frequency and current amplitude of extra-cellular pulses. We found that the minimal stimulus area necessary to induce potentiation depends non-monotonically on the choice of stimulus parameters. In particular, we found that, for both single-episode and multiple-episode stimulation, it is possible to choose the stimulation parameters (intensity, duration, and inter-episode frequency) optimally, so as to minimize the stimulus area (Figures 5, 7 and Table 1). Figure 7 can be used to compare the minimum stimulation area needed to achieve potentiation in a single episode (corresponding to the choice t[off] = 0) to the case of repetitive pulses (t[off] ≠ 0). We conclude that, for a fixed stimulation area, stimulation over several episodes is advantageous to achieve potentiation, in agreement with some widely used protocols (Larson and Munkácsy, 2015). The effect is stronger if the consolidation variable z is slow compared to the weight variable (τ[z] ≫ τ[w]). Note that in experiments it is often impossible to have fine control of the stimulation amplitude: extra-cellular stimulation of fibers must be strong enough to excite the post-synaptic neuron, but there is no control of the post-synaptic firing rate, which could undergo adaptation or exhibit other time-dependent mechanisms.
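To make the repeated-episode protocol of section 3.3 concrete, here is a small Python sketch (ours; the parameter values are purely illustrative and are not the ones behind Figure 7) that drives Equation (2) with a train of rectangular episodes and checks whether the synapse ends up potentiated.

```python
def potentiated(amplitude, t_on, t_off, n_episodes, dt=0.01, tau_w=1.0, tau_z=7.0):
    """Drive Equation (2) (with K_w = K_z = C_w = C_z = w_0 = z_0 = 1) by a train of
    rectangular episodes and report whether the synapse ends up potentiated."""
    def rhs(w, z, I):
        dw = (-(w - 1.0) * (w + 1.0) * w + (z - w) + I) / tau_w
        dz = (-(z - 1.0) * (z + 1.0) * z + (w - z)) / tau_z
        return dw, dz
    w, z = -1.0, -1.0                      # start in the unpotentiated state
    period = t_on + t_off
    t_stim = n_episodes * period
    t, t_end = 0.0, t_stim + 10.0 * tau_z  # let the system relax after the protocol
    while t < t_end:
        I = amplitude if (t < t_stim and (t % period) < t_on) else 0.0
        dw, dz = rhs(w, z, I)
        w, z = w + dt * dw, z + dt * dz    # simple forward Euler is enough for a sketch
        t += dt
    return w > 0.0 and z > 0.0

# Illustrative numbers only: the same episode repeated several times can succeed
# where a single episode of the same shape fails.
print(potentiated(amplitude=10.0, t_on=1.0, t_off=1.0, n_episodes=1))   # expected: False
print(potentiated(amplitude=10.0, t_on=1.0, t_off=1.0, n_episodes=10))  # expected: True
```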
In summary, the existence of an optimal stimulation frequency is the direct consequence of two very fundamental synaptic properties: (i) the bistability of a synaptic basic element, and (ii) the timescale separation between the internal synaptic mechanisms.

TABLE 1
Table 1. Parameter values used in Figures 5–7 unless otherwise indicated in the figure captions.

The minimum of the total stimulus area is particularly pronounced in the regime of strong separation of timescales (τ[z] ≫ τ[w]), which is the relevant regime in view of the experimental consolidation literature, which suggests multiple consolidation mechanisms with a broad range of timescales (Bliss and Collingridge, 1993; Reymann and Frey, 2007). Assuming that the timescale τ[w] is on the order of a few seconds, as suggested by some plasticity induction experiments at the level of single contact points (Petersen et al., 1998), we can interpret a short stimulation episode of duration 0.01 · τ[w] ~ 20 ms as a burst of few pulses at high frequency. For example, one particularly interesting protocol is the theta burst stimulation, where each burst consists of 4 pulses at 100 Hz corresponding to a burst of 30 ms duration (Larson and Munkácsy, 2015). Assuming that this stimulation does not correspond to an extremely small amplitude value (a reasonable assumption since the experimentalists want to induce LTP), our model predicts an optimal frequency (see Figure 7) on the order of t[off] = 0.11τ[w] ~ 200 ms, which is in rough agreement with the experimental protocols where theta bursts are repeated every 200 ms (Larson and Munkácsy, 2015).

When comparing the optimal stimulation frequency obtained by our model to experimental data, we should keep in mind that in experiments timing effects come from different sources. In Larson and Munkácsy (2015), the key factor that determines the optimal stimulation protocol is the feed-forward inhibition. Moreover, in Sajikumar et al. (2005), Kumar and Mehta (2011), and Larson and Munkácsy (2015) the position of the stimulation (apical vs. basal) plays a fundamental role, together with priming of NMDA channels. Finally, the fraction of NMDA vs. AMPA receptors is another fundamental element. None of these factors is taken into account in our simplified model.

We have described the simplified dynamics of a bistable basic element of synaptic consolidation (which can be interpreted as a single contact point or a synaptic sub-unit). However, in most experiments, we can only observe the collective effect of many such elements together (Malenka, 1991; Bliss and Collingridge, 1993). Such a collective effect can be interpreted as the average number of potentiated contact points. For a detailed comparison between our model and these experiments, one would need to simulate the dynamics of the pre- and post-synaptic neuron groups and of their contact points, in order to obtain an average quantity that can be compared with the continuous change of EPSP observed in experiments. Such an approach has been taken in Ziegler et al. (2015) and it requires several assumptions, among others the specification of the dependence of the plasticity induction current I on the pre- and post-synaptic activity, the parameters of the two populations and possible recurrent interactions (see also section 3.1) (see Supplementary Material for a qualitative comparison). For these reasons, such a comparison goes beyond the aim of this work.
Moreover, since the model describes a single synaptic contact, it cannot be applied to more complex experiments that involve cross-tagging, where effects of protein synthesis are shared between several synapses ( Sajikumar and Frey, 2004b). On the other hand, our results highlight the fact that our model shares similar response properties with the population-averaged quantities measured in experiments, such as its sensitivity to the stimulation frequency and the preference for multiple repetitions. Altogether, these findings suggest that our model possesses the necessary dynamical repertoire to reproduce some of the experimental results (such as Malenka, 1991; Bliss and Collingridge, 1993). Using our model we can only make some qualitative predictions on experimentally measurable quantities. For example, by comparing Figures 6C,D, we can see that the optimal stimulation parameters change when varying the episode duration t[on]. More precisely, our model predicts that for shorter t[on] the optimal stimulus requires a large stimulus intensity during each episode. The proposed framework is related to a number of previous modeling approaches to synaptic consolidation. In particular, the memory formation in networks of excitatory and inhibitory neurons in Zenke et al. (2015) is based on a synaptic plasticity model with a linear weight variable and a slower consolidation variable, corresponding to a choice of K[w] = 0 in Equation (2) of our model. If we exploit this relation between the two models, the coupling term C[w] should depend on the post-synaptic activity. Such a time-dependent coupling coefficient is similar to the gating variable in the write-protected model (Ziegler et al., 2015). The write-protected model (Ziegler et al., 2015) can be considered as a three-dimensional generalization of our framework. In our model the weight variable w is directly coupled to the consolidation variable z whereas in the write-protected model w is coupled to an intermediate tag-related variable which is then coupled to z. The dynamical understanding of the interplay between stimulation protocol and autonomous dynamics gained here by studying the two-dimensional system can be also applied to a three-(or higher-) dimensional generalization, under the assumption that coupling exists only between pairs of variables and that there is timescale separation. Using such a multi-dimensional generalization, it would be possible to explain a much larger set of experimental results. In addition, the model presented in Ziegler et al. (2015) features coupling coefficients that are dynamically adjusted as a function of the induction protocol itself. A change of coupling C makes a model at the same time more expressive and harder to analyze (cf. section 2.3). The cascade model (Fusi et al., 2005) can be related to the model in the present paper by introducing several slow variables z[1], …, z[n] with time constants τ[1], …, τ[n]. The coupling from k to k + 1 is analogous to the coupling of w to z in Equation (2). Even though this extended model and the cascade model share the concept of slower variables, there are some important differences between the two. First, the cascade model (Fusi et al., 2005) is intrinsically stochastic, i.e., the stochasticity due to spiking events is combined with the stochasticity of plasticity itself. Second, the transitions among states in the cascade model are instantaneous (Fusi et al., 2005). 
In our framework instead, even though there are discrete stable states, the transitions need some time to happen and this is exactly why the frequency of a repetitive stimulus matters in our model. Similarly to the cascade model (Fusi et al., 2005), the "communicating vessels" model (Benna and Fusi, 2016) relies on multiple hidden variables. However, in contrast to the cascade model (Fusi et al., 2005), the dynamics in the "communicating-vessels" model are determined by continuous variables that obey continuous-time differential equations (Benna and Fusi, 2016). If we truncate the "communicating-vessels" model to a single hidden variable, the resulting dynamics fall into our framework, with the simple choice K[w] = K[z] = 0. Extensions to multiple hidden variables with progressively longer timescales are possible analogously to our discussion above. Indeed, experimental results show that the internal bio-chemical mechanisms of the synapse are characterized by different timescales (Reymann and Frey, 2007; Bosch et al., 2014).

Similar to the cascade model, the "state based model" proposed in Barrett et al. (2009) consists of synapses whose state can shift from e-LTP to l-LTP according to some transition rates. The model captures two internal mechanisms (tagging and anchor for AMPAR). The probability that a particular synapse is in a specific state is a continuous quantity that depends on the transition probabilities. The main similarity to our model is the presence of a bistable basic synaptic element. Finally, the synaptic plasticity model proposed in Shouval et al. (2002) features non-linear dynamics for the synaptic weights, similarly to our model. The main goal of Shouval et al. (2002) is to relate the amount of NMDAR to the calcium level in the synapse. However, their model is not bistable and no attempt is made to capture the internal synaptic state.

To summarize, our model focuses on a single transition using two variables. If these variables have different intrinsic timescales, the temporal pattern of the stimulation protocol plays a crucial role. We believe that these insights are applicable beyond our two-variable model in situations where multiple variables covering multiple timescales are pair-wise coupled to each other. This includes well-known consolidation models such as the cascade model (Fusi et al., 2005), the communicating vessels model (Benna and Fusi, 2016), and the write-protected synapse model (Ziegler et al., 2015).

Data Availability Statement

All datasets analyzed for this study are included in the article/Supplementary Material.

Author Contributions

All authors contributed to the conception of the study. CG implemented the code needed to produce all figures, except for Figure 3. SM performed the bifurcation analysis presented in Figure 3 and contributed to the revision and improvement of the figures. CG and SM wrote the first draft of the manuscript. All authors contributed to correcting and improving the manuscript. All authors contributed to manuscript revision, read, and approved the submitted version.

This research was supported by the Swiss National Science Foundation, grant agreement 200020_165538, and by the HBP.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

The authors would like to thank Tilo Schwalger for useful comments and discussions.
Supplementary Material The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/fncom.2019.00078/full#supplementary-material Artola, A., Bröcher, S., and Singer, W. (1990). Different voltage dependent thresholds for inducing long-term depression and long-term potentiation in slices of rat visual cortex. Nature 347, 69–72. doi: 10.1038/347069a0 Barrett, A. B., Billings, G. O., Morris, R. G. M., and van Rossum, M. C. W. (2009). State based model of long-term potentiation and synaptic tagging and capture. PLoS Comput. Biol. 5:e1000259. doi: Bashir, Z. I., and Collingridge, G. L. (1994). An investigation of depotentiation of long-term potentiation in the CA1 region of the hippocampus. Exp. Brain Res. 79, 437–443. doi: 10.1007/BF02738403 Benna, M. K., and Fusi, S. (2016). Computational principles of synaptic memory consolidation. Nat. Neurosci. 19, 1697–1706. doi: 10.1038/nn.4401 Bhalla, U. S., and Iyengar, R. (1999). Emergent properties of networks of biological signaling pathways. Science 283, 381–387. doi: 10.1126/science.283.5400.381 Bienenstock, E. L., Cooper, L. N., and Munro, P. W. (1982). Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. J. Neurosci. 2, 32–48. doi: 10.1523/JNEUROSCI.02-01-00032.1982 Bliss, T. V. P., and Collingridge, G. L. (1993). A synaptic model of memory: long-term potentiation in the hippocampus. Nature 361, 31–39. doi: 10.1038/361031a0 Bliss, T. V. P., and Lømo, T. (1973). Long-lasting potentation of synaptic transmission in the dendate area of anaesthetized rabbit following stimulation of the perforant path. J. Physiol. 232, 351–356. doi: 10.1113/jphysiol.1973.sp010273 Bosch, M., Castro, J., Saneyoshi, T., Matsuno, H., Sur, M., and Hayashi, Y. (2014). Structural and molecular remodeling of dendritic spine substructures during long-term potentiation. Neuron 82, 444–459. doi: 10.1016/j.neuron.2014.03.021 Brader, J. M., Senn, W., and Fusi, S. (2007). Learning real-world stimuli in a neural network with spike-driven synaptic dynamics. Neural Comput. 19, 2881–2912. doi: 10.1162/neco.2007.19.11.2881 Brandon, M. P., Bogaard, A. R., Libby, C. P., Connerney, M. A., Gupta, K., and Hasselmo, M. E. (2011). Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning. Science 332, 595–599. doi: 10.1126/science.1201652 Brown, T. H., Ganong, A. H., Kairiss, E. W., Keenan, C. L., and Kelso, S. R. (1989). “Long-term potentation in two synaptic systems of the hippocampal brain slice,” in Neural Models of Plasticity, eds J. H. Byrne and W. O. Berry (San Diego, CA: Academic Press Inc.), 266–306. Caroni, P., Donato, F., and Muller, D. (2012). Structural plasticity upon learning: regulation and functions. Nat. Rev. Neurosci. 13, 478–490. doi: 10.1038/nrn3258 Clopath, C., Busing, L., Vasilaki, E., and Gerstner, W. (2010). Connectivity reflects coding: a model of voltage-based spike-timing-dependent-plasticity with homeostasis. Nat. Neurosci. 13, 344–352. doi: 10.1038/nn.2479 Clopath, C., Ziegler, L., Vasilaki, E., Busing, L., and Gerstner, W. (2008). Tag-trigger-consolidation: a model of early and late long-term-potentiation and depression. PLoS Comput. Biol. 4:e1000248. doi: 10.1371/journal.pcbi.1000248 Cohen, M. A., and Grossberg, S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Trans. Syst. Man. Cybern. Syst. 13, 815–823. 
doi: 10.1109/TSMC.1983.6313075 Dudai, Y., and Morris, R. G. M. (2000). “To consolidate or not to consolidate: what are the questions,” in Brain, Perception, Memory Advances in Cognitive Sciences, ed J. J. Bolhuis (Oxford: Oxford University Press), 149–162. Ermentrout, B. (1996). Type I membranes, phase resetting curves, and synchrony. Neural Comput. 8, 979–1001. doi: 10.1162/neco.1996.8.5.979 Ermentrout, B. (2002). Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students, Vol. 14. Philadelphia, PA: Siam. Frey, U., and Morris, R. G. M. (1997). Synaptic tagging and long-term potentiation. Nature 385, 533–536. doi: 10.1038/385533a0 Fusi, S., Drew, P. J., and Abbott, L. F. (2005). Cascade models of synaptically stored memories. Neuron 45, 599–611. doi: 10.1016/j.neuron.2005.02.001 Gerstner, W., Kempter, R., van Hemmen, J. L., and Wagner, H. (1996). A neuronal learning rule for sub-millisecond temporal coding. Nature 383, 76–78. doi: 10.1038/383076a0 Gjorgjieva, J., Clopath, C., Audet, J., and Pfister, J. P. (2011). A triplet spike-timing–dependent plasticity model generalizes the Bienenstock–Cooper–Munro rule to higher-order spatiotemporal correlations. Proc. Natl. Acad. Sci. U.S.A. 108, 19383–19388. doi: 10.1073/pnas.1105933108 Graupner, M., and Brunel, N. (2007). STDP in a bistable synapse model based on CaMKII and associate signaling pathways. PLoS Comput. Biol. 3:e221. doi: 10.1371/journal.pcbi.0030221 Hasselmo, M. (1999). Neuromodulation: acetylcholine and memory consolidation. Trends Cogn. Sci. 3, 351–359. doi: 10.1016/S1364-6613(99)01365-0 Hayashi-Takagi, A., Yagishita, S., M. Nakamura, F. S., Wu, Y. I., Loshbaugh, A. L., Kuhlman, B., et al. (2015). Labelling and optical erasure of synaptic memory traces in the motor cortex. Nature 525, 333–338. doi: 10.1038/nature15257 Hebb, D. O. (1949). The Organization of Behavior. Oxford: Wiley. Holtmaat, A., and Caroni, P. (2016). Functional and structural underpinnings of neuronal assembly formation in learning. Nat. Neurosci. 19, 1553–1562. doi: 10.1038/nn.4418 Kastner, D. B., Schwalger, T., Ziegler, L., and Gerstner, W. (2016). A model of synaptic reconsolidation. Front. Neurosci. 10:206. doi: 10.3389/fnins.2016.00206 Kramár, E. A., Babayan, A. H., Gavin, C. F., Cox, C. D., Jafari, M., Gall, C. M., et al. (2012). Synaptic evidence for the efficacy of spaced learning. Proc. Natl. Acad. Sci. U.S.A. 109, 5121–5126. doi: 10.1073/pnas.1120700109 Kumar, A., and Mehta, M. R. (2011). Frequency-dependent changes in nmdar-dependent synaptic plasticity. Front. Comput. Neurosci. 5:38. doi: 10.3389/fncom.2011.00038 Larson, J., and Munkácsy, E. (2015). Theta-burst LTP. Brain Res. 1621, 38–50. doi: 10.1016/j.brainres.2014.10.034 Levy, W. B., and Stewart, D. (1983). Temporal contiguity requirements for long-term associative potentiation/depression in hippocampus. Neuroscience 8, 791–797. Lisman, J. (2017). Glutamatergic synapses are structurally and biochemically complex because of multiple plasticity processes: long-term potentiation, long-term depression, short-term potentiation and scaling. Philos. Trans. R. Soc. B Biol. Sci. 372:20160260. doi: 10.1098/rstb.2016.0260 Lisman, J. E., and Zhabotinsky, A. M. (2001). A model of synaptic memory: a CaMKII/PP1 switch that potentiates transmission by organizing an AMPA receptor anchoring assembly. Neuron 31, 191–201. doi: Malenka, R. C. (1991). Postsynaptic factors control the duration of synaptic enhancement in area CA1 of the hippocampus. Neuron 6, 53–60. 
doi: 10.1016/0896-6273(91)90121-F Markram, H., Lübke, J., Frotscher, M., and Sakmann, B. (1997). Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science 275, 213–215. doi: 10.1126/science.275.5297.213 Markram, H., Wu, Y., and Tosdyks, M. (1998). Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci. U.S.A. 95, 5323–5328. doi: 10.1073/pnas.95.9.5323 Martin, S. J., Grimwood, P. D., and Morris, R. G. M. (2000). Synaptic plasticity and memory: an evaluation of the hypothesis. Annu. Rev. Neurosci. 23, 649–711. doi: 10.1146/annurev.neuro.23.1.649 Nabavi, S., Fox, R., Proulx, C. D., Lin, J. Y., Tsien, R. Y., and Malinow, R. (2014). Engineering a memory with LTD and LTP. Nature 511, 348–352. doi: 10.1038/nature13294 Nader, K., Schafe, G. E., and LeDoux, J. E. (2000). Reply-reconsolidation: the labile nature of consolidation theory. Nat. Rev. Neurosci. 1, 216–219. doi: 10.1038/35044580 Nicolas, F., and Gerstner, W. (2016). Neuromodulated spike-timing-dependent plasticity, and theory of three-factor learning rules. Front. Neural Circuits 9:85. doi: 10.3389/fncir.2015.00085 O'Connor, D. H., Wittenberg, G. M., and Wang, S. S. H. (2005). Graded bidirectional synaptic plasticity is composed of switch-like unitary events. Proc. Natl. Acad. Sci. U.S.A. 102, 9679–9684. doi: Petersen, C. C. H., Malenka, R. C., Nicoll, R. A., and Hopfield, J. J. (1998). All-or-none potentiation at CA3-CA1 synapses. Proc. Natl. Acad. Sci. U.S.A. 95, 4732–4737. doi: 10.1073/pnas.95.8.4732 Pfister, J. P., and Gerstner, W. (2006). Triplets of spikes in a model of spike timing-dependent plasticity. J. Neurosci. 26, 9673–9682. doi: 10.1523/JNEUROSCI.1425-06.2006 Redondo, R. L., and Morris, R. G. M. (2011). Making memories last: the synaptic tagging and capture hypothesis. Nat. Rev. Neurosci. 12, 17–30. doi: 10.1038/nrn2963 Reymann, K. G., and Frey, J. U. (2007). The late maintenance of hippocampal LTP: requirements, phases,synaptic tagging, late associativity and implications. Neuropharmacology 52, 24–40. doi: 10.1016/ Rinzel, J., and Ermentrout, G. B. (1998). Analysis of neural excitability and oscillations. Methods Neuronal Model. 2, 251–292. Roelfsema, P. R. (2006). Cortical algorithms for perceptual grouping. Annu. Rev. Neurosci. 29, 203–227. doi: 10.1146/annurev.neuro.29.051605.112939 Rubin, J. E., Gerkin, R. C., Bi, G. Q., and Chow, C. C. (2005). Calcium time course as a signal for spike-timing-dependent plasticity. J. Neurophysiol. 93, 2600–2613. doi: 10.1152/jn.00803.2004 Sajikumar, S., and Frey, J. U. (2004a). Late-associativity, synaptic tagging, and the role of dopamine during LTP and LTD. Neurobiol. Learn. Mem. 82, 12–25. doi: 10.1016/j.nlm.2004.03.003 Sajikumar, S., and Frey, J. U. (2004b). Resetting of synaptic tags is time- and activity dependent in rat hippocampal CA1 in vitro. Neuroscience 129, 503–507. doi: 10.1016/j.neuroscience.2004.08.014 Sajikumar, S., Navakkode, S., Sacktor, T. C., and Frey, J. U. (2005). Synaptic tagging and cross-tagging: the role of protein kinase Mζ in maintaining long-term potentiation but not long-term depression. J. Neurosci. 25, 5750–5756. doi: 10.1523/JNEUROSCI.1104-05.2005 Senn, W., Tsodyks, M., and Markram, H. (2001). An algorithm for modifying neurotransmitter release probability based on pre- and postsynaptic spike timing. Neural Comput. 13, 35–67. doi: 10.1162/ Shouval, H. Z., Bear, M. F., and Cooper, L. N. (2002). A unified model of NMDA receptor-dependent bidirectional synaptic plasticity. Proc. Natl. 
Acad. Sci. U.S.A. 99:10831. doi: 10.1073/ Sjöström, P. J., Turrigiano, G., and Nelson, S. B. (2001). Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron 32, 1149–1164. doi: 10.1016/S0896-6273(01)00542-6 Song, S., Miller, K. D., and Abbott, L. F. (2000). Competitive Hebbian learning through spike-time-dependent synaptic plasticity. Nat. Neurosci. 3, 919–926. doi: 10.1038/78829 Strogatz, S. H. (2014). Nonlinear Dynamics and Chaos: With Applications to Physics, Biology, Chemistry, and Engineering. Boca Raton, FL: Hachette. Turrigiano, G. G., Marder, E., and Abbott, L. F. (1996). Cellular short-term memory from a slow potassium conductance. J. Neurophysiol. 75, 963–966. doi: 10.1152/jn.1996.75.2.963 Van Rossum, M. C. W., Bi, G. Q., and Turrigiano, G. G. (2000). Stable Hebbian learning from spike timing-dependent plasticity. J. Neurosci. 20, 8812–8821. doi: 10.1523/JNEUROSCI.20-23-08812.2000 Wilson, H. R., and Cowan, J. D. (1972). Excitatory and inhibitory interactions in localized populations of model neurons. Biophys. J. 12, 1–24. doi: 10.1016/S0006-3495(72)86068-5 Zenke, F., Agnes, E. J., and Gerstner, W. (2015). Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks. Nat. Commun. 6:6922. doi: 10.1038/ Zhou, Q., Tao, H. W., and Poo, M. M. (2003). Reversal and stabilization of synaptic modifications in a developing visual system. Science 300, 1953–1957. doi: 10.1126/science.1082212 Ziegler, L., Zenke, F., Kastner, D. B., and Gerstner, W. (2015). Synaptic consolidation: from synapses to behavioral modeling. J. Neurosci. 35, 1319–1334. doi: 10.1523/JNEUROSCI.3989-14.2015 Keywords: synaptic consolidation, plasticity, LTP, stimulation frequency, bistability Citation: Gastaldi C, Muscinelli S and Gerstner W (2019) Optimal Stimulation Protocol in a Bistable Synaptic Consolidation Model. Front. Comput. Neurosci. 13:78. doi: 10.3389/fncom.2019.00078 Received: 10 April 2019; Accepted: 21 October 2019; Published: 13 November 2019. Edited by: Mayank R. Mehta , University of California, Los Angeles, United States Copyright © 2019 Gastaldi, Muscinelli and Gerstner. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Chiara Gastaldi, chiara.gastaldi@epfl.ch
{"url":"https://www.frontiersin.org/journals/computational-neuroscience/articles/10.3389/fncom.2019.00078/full","timestamp":"2024-11-11T07:22:36Z","content_type":"text/html","content_length":"668977","record_id":"<urn:uuid:0171c2c9-baef-456e-bc7e-d50c7904b0fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00392.warc.gz"}
Calculate time difference in hours as decimal value in Excel

November 12, 2024 - Excel Office

To get the duration between two times as a decimal value in hours (i.e. 3 hrs, 4.5 hrs, 8 hrs, etc.) you can use a formula based on the MOD function. In the example shown, the formula in D5 is:

Excel hours

In Excel, one day is the number 1, so 1 hour = 1/24 = 0.041666667. In other words, hours are just fractional parts of a day:

Time      Fraction   Hours
3:00 AM   0.125      3
6:00 AM   0.25       6
9:00 AM   0.375      9
12:00 PM  0.5        12
3:00 PM   0.625      15
6:00 PM   0.75       18
9:00 PM   0.875      21
12:00 AM  1          24

To convert these fractional values to decimal hours, just multiply by 24. For example .5 * 24 = 12 hours, .25 * 24 = 6 hours, etc.

Times that cross midnight

When times cross midnight, the problem becomes more tricky, since the end time will often be less than the start time. One elegant way to handle this challenge is to add the MOD function to the formula. For example, to calculate hours between 9 PM and 3 AM:

The MOD function takes care of the negative problem by "flipping" negative values to the required positive value. (In this way, the MOD function works a bit like a clock.)

Hours between times

To calculate hours between times, you can simply subtract the start time from the end time when both times are in the same day. For example, with a start time of 9:00 AM and an end time of 3:00 PM, you can simply use this formula:

=(3:00 PM - 9:00 AM) * 24
= .25 * 24
= 6
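The same idea — subtract the times, wrap the result modulo one whole day, and multiply by 24 — can be sketched outside of Excel. The short Python snippet below is our own illustration, not part of the original article:

```python
from datetime import datetime, timedelta

def decimal_hours(start, end):
    """Hours between two clock times as a decimal, wrapping across midnight
    (the same idea as Excel's MOD of the difference by one day, times 24)."""
    delta = (end - start) % timedelta(days=1)   # modulo one day handles end < start
    return delta.total_seconds() / 3600.0

day = datetime(2024, 1, 1)
print(decimal_hours(day.replace(hour=9), day.replace(hour=15)))  # 6.0
print(decimal_hours(day.replace(hour=21), day.replace(hour=3)))  # 6.0 (crosses midnight)
```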
{"url":"https://www.xlsoffice.com/excel-functions/date-and-time-functions/calculate-time-difference-in-hours-as-decimal-value-in-excel/","timestamp":"2024-11-12T05:52:05Z","content_type":"text/html","content_length":"65280","record_id":"<urn:uuid:b2c9def3-f2f6-49e4-bb02-98e07780893b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00747.warc.gz"}
ACM Other Conferences

Deep Multilevel Graph Partitioning

Partitioning a graph into blocks of "roughly equal" weight while cutting only a few edges is a fundamental problem in computer science with a wide range of applications. In particular, the problem is a building block in applications that require parallel processing. While the number of available cores in parallel architectures has significantly increased in recent years, state-of-the-art graph partitioning algorithms do not work well if the input needs to be partitioned into a large number of blocks. Often, currently available algorithms compute highly imbalanced solutions, solutions of low quality, or have excessive running time for this case. This is due to the fact that most high-quality general-purpose graph partitioners are multilevel algorithms which perform graph coarsening to build a hierarchy of graphs, initial partitioning to compute an initial solution, and local improvement to improve the solution throughout the hierarchy. However, for a large number of blocks, the smallest graph in the hierarchy that is used for initial partitioning still has to be large.

In this work, we substantially mitigate these problems by introducing deep multilevel graph partitioning and a shared-memory implementation thereof. Our scheme continues the multilevel approach deep into initial partitioning - integrating it into a framework where recursive bipartitioning and direct k-way partitioning are combined such that they can operate with high performance and quality. Our integrated approach is stronger, more flexible, arguably more elegant, and reduces bottlenecks for parallelization compared to existing multilevel approaches. For example, for a large number of blocks our algorithm is on average at least an order of magnitude faster than competing algorithms while computing partitions with comparable solution quality. At the same time, our algorithm consistently produces balanced solutions. Moreover, for small numbers of blocks, our algorithms are the fastest among competing systems with comparable quality.
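The recursive bipartitioning idea mentioned above can be illustrated with a deliberately naive Python toy — our sketch only, with none of the coarsening, refinement, or balance guarantees of the actual deep multilevel partitioner: split the node set in two, then recurse on each half until k blocks remain.

```python
from collections import defaultdict

def bisect(nodes, adj):
    """Split a node set into two roughly equal halves by growing a connected
    region from an arbitrary seed (a crude stand-in for one bipartitioning step)."""
    nodes = list(nodes)
    node_set = set(nodes)
    target = len(nodes) // 2
    part_a, seen, frontier = [], set(), [nodes[0]]
    while frontier and len(part_a) < target:
        v = frontier.pop()
        if v in seen:
            continue
        seen.add(v)
        part_a.append(v)
        frontier.extend(u for u in adj[v] if u in node_set and u not in seen)
    in_a = set(part_a)
    part_b = [v for v in nodes if v not in in_a]
    return part_a, part_b

def recursive_partition(nodes, adj, k):
    """Partition `nodes` into k blocks by recursive bipartitioning."""
    nodes = list(nodes)
    if k <= 1 or len(nodes) <= 1:
        return [nodes]
    part_a, part_b = bisect(nodes, adj)
    k_a = k // 2
    return recursive_partition(part_a, adj, k_a) + recursive_partition(part_b, adj, k - k_a)

# Tiny example: a 6-node path graph 0-1-2-3-4-5 cut into 3 blocks.
adj = defaultdict(list)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]:
    adj[u].append(v)
    adj[v].append(u)
print(recursive_partition(range(6), adj, 3))   # e.g. [[0, 1, 2], [3], [4, 5]]
```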
{"url":"https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.ESA.2021.48/metadata/acm-xml","timestamp":"2024-11-04T21:58:23Z","content_type":"application/xml","content_length":"19599","record_id":"<urn:uuid:64055d70-0812-43c7-b12e-451affac8971>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00707.warc.gz"}
[Solved] Consider the following grammar along with translation rules.

Consider the following grammar along with translation rules.

S → S1 # T {S.val = S1.val * T.val}
S → T {S.val = T.val}
T → T1 % R {T.val = T1.val ÷ R.val}
T → R {T.val = R.val}
R → id {R.val = id.val}

Here # and % are operators and id is a token that represents an integer, and id.val represents the corresponding integer value. The set of non-terminals is {S, T, R, P} and a subscripted non-terminal indicates an instance of the non-terminal. Using this translation scheme, the computed value of S.val for the root of the parse tree for the expression 20#10%5#8%2%2 is _____________.

Answer (Detailed Solution Below) 80

The correct answer is 80.

The given grammar along with translation rules is:

S → S1 # T {S.val = S1.val * T.val}
S → T {S.val = T.val}
T → T1 % R {T.val = T1.val ÷ R.val}
T → R {T.val = R.val}
R → id {R.val = id.val}

The given string is 20#10%5#8%2%2.

Rule 1: In the given SDT, % has higher precedence than #, because it is farther away from the starting symbol.
Rule 2: Both % and # are left-associative, because the recursive non-terminal appears leftmost on the right-hand side of each production.

Evaluating with # as × and % as ÷:
20#10%5#8%2%2
= 20 × (10 ÷ 5) × ((8 ÷ 2) ÷ 2)
= 20 × 2 × 2
= 80

Hence the correct answer is 80.
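As a sanity check of the 80 (our own snippet, not part of the original solution), the precedence and associativity implied by the grammar can be implemented directly:

```python
def evaluate(tokens):
    """Evaluate an expression over '#' (multiplication) and '%' (division),
    with '%' binding tighter than '#' and both left-associative, as implied
    by the grammar (S handles '#', T handles '%')."""
    # First reduce every maximal run of '%' (T -> T % R), left to right.
    terms, i = [], 0
    while i < len(tokens):
        value = tokens[i]
        i += 1
        while i < len(tokens) and tokens[i] == '%':
            value = value // tokens[i + 1]   # all divisions in this example are exact
            i += 2
        terms.append(value)
        if i < len(tokens):
            assert tokens[i] == '#'
            i += 1
    # Then combine the T-values with '#' (S -> S # T), also left to right.
    result = terms[0]
    for t in terms[1:]:
        result = result * t
    return result

tokens = [20, '#', 10, '%', 5, '#', 8, '%', 2, '%', 2]
print(evaluate(tokens))   # 80
```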
{"url":"https://testbook.com/question-answer/consider-the-following-grammar-along-with-translat--62172866621149b04c01a94c","timestamp":"2024-11-08T15:32:03Z","content_type":"text/html","content_length":"188819","record_id":"<urn:uuid:038d811c-0345-4b7b-80ce-ae4b72bc88c6>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00805.warc.gz"}
Flexible Procedures for Clustering Various methods for clustering and cluster validation. Fixed point clustering. Linear regression clustering. Clustering by merging Gaussian mixture components. Symmetric and asymmetric discriminant projections for visualisation of the separation of groupings. Cluster validation statistics for distance based clustering including corrected Rand index. Standardisation of cluster validation statistics by random clusterings and comparison between many clustering methods and numbers of clusters based on this. Cluster-wise cluster stability assessment. Methods for estimation of the number of clusters: Calinski-Harabasz, Tibshirani and Walther's prediction strength, Fang and Wang's bootstrap stability. Gaussian/multinomial mixture fitting for mixed continuous/categorical variables. Variable-wise statistics for cluster interpretation. DBSCAN clustering. Interface functions for many clustering methods implemented in R, including estimating the number of clusters with kmeans, pam and clara. Modality diagnosis for Gaussian mixtures. For an overview see package?fpc.
{"url":"https://www.rdocumentation.org/packages/fpc/versions/2.2-10","timestamp":"2024-11-06T10:49:14Z","content_type":"text/html","content_length":"110103","record_id":"<urn:uuid:d7b0db05-b20b-4420-ba57-5242ffe2f489>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00896.warc.gz"}
0.1 M HCl solution
Chemistry Forums for Students > Undergraduate General Chemistry Forum
0.1 M HCl solution
I'm a clinical lab dude, and it's been a long time (11 years) since general chemistry. I'm trying to make a 0.1 M HCl solution to lyse RBCs in body fluid. I came up with 1.0375 mL HCl diluted to 125 mL with H2O (volume needed). Is this correct? If not, please show your work so I can do it correctly in the future.
I think your data is insufficient. I mean, of what molarity is the 1.0375 mL of HCl that you are diluting to 125 mL with water? If you have V1 mL of HCl of molarity M1 and you are diluting it to V2 mL to obtain a solution of molarity M2, then use the dilution equation: M1V1 = M2V2.
The concentrated HCl that you buy is about 12 M. If that's the stuff you are using then your calculation checks out. Here's my work:
12 M HCl * 0.0010375 L = 0.01245 mol HCl
0.01245 mol HCl / 0.125 L = 0.0996 M HCl
Probably close enough for your purposes. I think the conc. HCl is a little more than 12 M, but that's the number I was taught to remember. (BTW, if you assume the conc. HCl is 12.2 M then the solution would end up being 0.101 M. See this thread for why the 12.2 M number matters: http://www.chemicalforums.com/index.php?board=2;action=display;threadid=857).
Thanks, that was a correct assumption; I was using 12 M concentrated HCl (diluting 1.0375 mL to 125 mL with H2O). Sorry for not stating that in my original inquiry. Thanks again for checking my work and relieving my doubts!!
--- Quote from: movies on August 18, 2004, 12:38:04 PM ---The concentrated HCl that you buy is about 12 M. If that's the stuff you are using then your calculation checks out. Here's my work:
12 M HCl * 0.0010375 L = 0.01245 mol HCl
0.01245 mol HCl / 0.125 L = 0.0996 M HCl
Probably close enough for your purposes. I think the conc. HCl is a little more than 12 M, but that's the number I was taught to remember. (BTW, if you assume the conc. HCl is 12.2 M then the solution would end up being 0.101 M. See this thread for why the 12.2M number matters: http://www.chemicalforums.com/index.php?board=2;action=display;threadid=857).
--- End quote ---
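For completeness, the same dilution check can be scripted. This is just a quick calculation assuming a 12 M stock, mirroring the arithmetic in the reply above.

stock_molarity = 12.0       # mol/L, assumed concentration of concentrated HCl
stock_volume_l = 1.0375e-3  # 1.0375 mL expressed in litres
final_volume_l = 0.125      # 125 mL expressed in litres

moles_hcl = stock_molarity * stock_volume_l   # ~0.01245 mol
final_molarity = moles_hcl / final_volume_l   # ~0.0996 M

print(f"{moles_hcl:.5f} mol HCl -> {final_molarity:.4f} M after dilution")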
{"url":"https://www.chemicalforums.com/index.php?topic=895.0;wap2","timestamp":"2024-11-08T02:23:48Z","content_type":"application/xhtml+xml","content_length":"4060","record_id":"<urn:uuid:285bf06d-a2ab-4a4e-94bc-68f160eeebd9>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00863.warc.gz"}
A 689 N person lies down on a 2 m, 30 N board. Their feet are exactly at the fulcrum. They are 1.75 m tall. The scale reads 330 N. Where is their center of mass relative to their feet? Where is their center of mass as a percent of their height? Please show all work and steps.
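Here is a hedged sketch of the standard approach, assuming the usual setup for this kind of problem: the feet rest on the fulcrum at one end of the 2 m board, the scale supports the other end 2 m away, and the board's own 30 N weight acts at its midpoint. With those assumptions, taking torques about the fulcrum gives the center-of-mass distance directly.

# Torque balance about the fulcrum (at the feet), assuming:
#   - the scale force acts 2.0 m from the fulcrum (far end of the board)
#   - the board weight (30 N) acts at the board's midpoint, 1.0 m from the fulcrum
#   - the person's weight (689 N) acts at distance x from the fulcrum
# Sum of torques = 0:  scale * 2.0 = board * 1.0 + person * x

person_weight = 689.0   # N
board_weight = 30.0     # N
scale_reading = 330.0   # N
board_length = 2.0      # m
height = 1.75           # m

x = (scale_reading * board_length - board_weight * (board_length / 2)) / person_weight
print(f"center of mass ~ {x:.3f} m from the feet")            # ~0.914 m
print(f"as a percent of height ~ {100 * x / height:.1f} %")   # ~52.2 %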
{"url":"https://www.homeworklib.com/question/2105524/a-689n-person-lies-down-on-a-2m-30n-board-their","timestamp":"2024-11-08T18:04:00Z","content_type":"text/html","content_length":"48748","record_id":"<urn:uuid:35c22777-596d-4343-8fbb-736ed0521b44>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00269.warc.gz"}
Numbers that are not divisible by the square of any smaller number (square-free numbers). First, see this example: the number 8 is divisible by 4, which is the square of 2, and 27 is divisible by 9, which is the square of 3, so neither of them is square-free. I need a solution for input numbers from 1 to 10^18, and I am hoping for an efficient program written in any programming language. Thanks in advance.
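One common approach for a single n up to 10^18 is sketched below in Python: trial-divide by every candidate up to the cube root of n; if any prime divides n twice, n is not square-free. Whatever remains after stripping those factors has at most two prime factors, so it contains a square only if it is itself a perfect square. This is an illustrative sketch of that idea, not a tuned solution.

import math

def is_squarefree(n: int) -> bool:
    # Return True if no perfect square > 1 divides n (works for n up to ~10**18).
    if n < 1:
        raise ValueError("n must be positive")
    d = 2
    # Strip every prime factor up to the cube root of the remaining value.
    while d * d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:        # d divides n twice -> square factor d*d
                return False
        else:
            d += 1
    # Every remaining prime factor is large, so n is 1, p, p*q, or p*p;
    # only the last case hides a square factor.
    r = math.isqrt(n)
    return n == 1 or r * r != n

for n in (8, 27, 10, 1, 2**60, 999999999999999989):
    print(n, is_squarefree(n))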
{"url":"https://www.queryhome.com/tech/129259/numbers-that-are-not-divided-by-square-of-its-lower-terms","timestamp":"2024-11-13T21:56:59Z","content_type":"text/html","content_length":"105426","record_id":"<urn:uuid:98c5dcf9-f696-4c20-bc58-e791907b590c>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00578.warc.gz"}
1.3. Kernel ridge regression
Kernel ridge regression (KRR) [M2012] combines Ridge regression and classification (linear least squares with l2-norm regularization) with the kernel trick. It thus learns a linear function in the space induced by the respective kernel and the data. For non-linear kernels, this corresponds to a non-linear function in the original space.
The form of the model learned by KernelRidge is identical to support vector regression (SVR). However, different loss functions are used: KRR uses squared error loss while support vector regression uses \(\epsilon\)-insensitive loss, both combined with l2 regularization. In contrast to SVR, fitting KernelRidge can be done in closed-form and is typically faster for medium-sized datasets. On the other hand, the learned model is non-sparse and thus slower than SVR, which learns a sparse model for \(\epsilon > 0\), at prediction-time.
The following figure compares KernelRidge and SVR on an artificial dataset, which consists of a sinusoidal target function and strong noise added to every fifth datapoint. The learned model of KernelRidge and SVR is plotted, where both complexity/regularization and bandwidth of the RBF kernel have been optimized using grid-search. The learned functions are very similar; however, fitting KernelRidge is approximately seven times faster than fitting SVR (both with grid-search). However, prediction of 100000 target values is more than three times faster with SVR since it has learned a sparse model using only approximately 1/3 of the 100 training datapoints as support vectors.
The next figure compares the time for fitting and prediction of KernelRidge and SVR for different sizes of the training set. Fitting KernelRidge is faster than SVR for medium-sized training sets (less than 1000 samples); however, for larger training sets SVR scales better. With regard to prediction time, SVR is faster than KernelRidge for all sizes of the training set because of the learned sparse solution. Note that the degree of sparsity and thus the prediction time depends on the parameters \(\epsilon\) and \(C\) of the SVR; \(\epsilon = 0\) would correspond to a dense model.
[M2012] Murphy, K. P., "Machine Learning: A Probabilistic Perspective", The MIT Press, 2012, chapter 14.4.3, pp. 492-493.
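As a quick illustration of the API difference described above, the snippet below fits both models on a small noisy sine dataset; the hyperparameter values are arbitrary placeholders rather than the grid-searched settings used for the figures.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

rng = np.random.RandomState(0)
X = 5 * rng.rand(100, 1)
y = np.sin(X).ravel()
y[::5] += 3 * (0.5 - rng.rand(X.shape[0] // 5))  # strong noise on every fifth point

# Closed-form fit; dense model (uses every training point at prediction time).
krr = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5).fit(X, y)

# epsilon-insensitive loss; sparse model (only the support vectors are kept).
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma=0.5).fit(X, y)

X_plot = np.linspace(0, 5, 10)[:, None]
print("KRR predictions:", krr.predict(X_plot).round(2))
print("SVR predictions:", svr.predict(X_plot).round(2))
print("SVR support vectors:", svr.support_.shape[0], "of", X.shape[0])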
{"url":"https://scikit-learn.qubitpi.org/modules/kernel_ridge.html","timestamp":"2024-11-07T13:51:57Z","content_type":"text/html","content_length":"40019","record_id":"<urn:uuid:d4bbda59-1368-4633-9947-d8427b519d85>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00127.warc.gz"}
Writing Better Proofs Part 3: Notation This post first appeared 17 May 2021. When writing a proof we often have a choice of notation. Perhaps we are naming variables or deciding if functions act from the left \( \theta(x) \), or right \( (x)\vartheta\). We may be referring to existing theorems or constructions (is it the HOMFLY polynomial or the HOMFLYPT?) or may need to introduce our own. In all cases there is a single golden rule: be consistent. As long as you set out your notation early and stick to it, you are being clear. Change notation half way through or be vague about it, and you are just going to lose your reader. However, there are a few “silver rules” you should also follow. Use Existing Standards If you are answering a question or proving a lemma or statement, you should always use the notation set out in the question. If the current literature favours one standard of notation, unless you have very good reason not to, use that notation in your work. The only reason to break this rule is if your notation is clearer or conveys more specific information. If so, introduce it clearly at the outset and stick to it. Problem Prove that any map from \(M\) to a filtered module \(N\) is surjective. Solution Let \(N\) be filtered by \(N = N_0 \supset N_1\supset\ldots \supset N_m\). Then let \(f : M \to N_0\) be any map. We will show that \(f\) is an epimorphism and hence surjective. Perhaps the most common example of this is the \((\epsilon,\delta)\) proof-type in analysis. Here, almost always, \(\epsilon\) is fixed before \(\delta\). Please do not start a proof For each \(\ delta\) there exists an \(\epsilon\) unless you are making a joke^1. Choose Variable Names Carefully Most often (in English proof writing) certain variable names are tacitly “reserved” for certain uses. In particular, variables \(i\) and \(j\) are often indices that are summed over, \(n\) and \(m\) are likely fixed. The variables \(x\) and \(y\) are unknowns and \(p\) is often a prime. These are to be taken as guidelines only (nobody will complain too loudly if you sum over \(n\) or if \(x\) is fixed) but breaking these conventions can lead to confusion. Consider the equation \[i = \sum_{n = 0}^j n\cdot (m_x +j) \] versus the semantically equivalent \[x = \sum_{i = 0}^n i\cdot (m_j +y). \] The latter is far easier to read because the reader is expecting the various elements to go where they do. Not being surprising to the reader is a key element in writing a good proof. Be Aware of How Variables Look Some variables look very similar, especially if handwritten. Consider the notation below. Let \(a_0, a_1,\ldots, a_n\) be chosen such that \[ a_0 + a_1\alpha + \cdots a_n\alpha^n = 0.\] Here the alpha’s and a’s are just begging to be mixed up. Best to pick different coefficient names^2. On the other hand, the letters \(i\) and \(j\) are also easy to mix up. If reasonable, other “summation” variables such as \(r\) and \(s\) could be used. Otherwise be careful to make sure it is clear which is which. Avoid the dreaded \(i_j\) and \(j_i\) at all costs. Be Sparing with \(\Sigma\) and \(\Pi\) The two capital Greek letters \(\Sigma\) and \(\Pi\) have special meaning in most subjects. They represent summation and products respectively. Having said that, sometimes they can be used as variable names. Its status as a “super capital S” means that \(\Sigma\) is often used for a set of sets. The letter \(\Pi\) sometimes is used to contain a set of indices or denote a particularly special prime ideal. 
LaTeX sets these apart by adjusting their aspect ratios very subtly. The string $\sum \Sigma \prod \Pi$ renders as \(\sum \Sigma \prod \Pi\). Sometimes this is sufficient to hint to the reader the difference between the symbols, but when handwritten this is almost impossible. Don't let any of this dissuade you from coming up with your own notation, though. As long as it is clear and consistent, you are in the right. It may be better than the existing notation and may make your life substantially easier. Perhaps new notation allows you to distinguish between two subtly different situations, or simplifies your equations from a horrible case statement to a single, simple expression.
{"url":"https://robertandrewspencer.com/writing-proofs-notation/","timestamp":"2024-11-09T20:41:29Z","content_type":"text/html","content_length":"11850","record_id":"<urn:uuid:cffa5760-5a5c-4d98-82b0-82b7cf035f4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00869.warc.gz"}
Grade 3 Mathematics Term 1 The Standards for Mathematics are statements about what students should know and be able to do in order to meet the requirements of The National Standards Curriculum. These standards are structured according to the content and process strands identified in the Curriculum. For each content and process strand, a standard has been developed which is aligned with the Curriculum Attainment Targets. The Curriculum has outlined the expectations for progress through each grade level. It, therefore, serves as a guide for monitoring the progress of each student based on the standards for each grade as students will be performing at varying levels throughout the year, and will be working at a different pace. In light of this, these Standards therefore, provide support for the development of assessment programmes to assess students’ achievement in relation to the targets set by the Curriculum. Each content strand (number, measurement, algebra, geometry, statistics and probability) has a related standard outlining what students should know and be able to do in order to meet the requirements of the Curriculum. Aligned to each standard is the Curriculum Attainment Targets which specifically breaks down the content strand to several measurable goals aimed at achieving the standard.
{"url":"https://pepelearningacademy.com/catalog/info/id:127,cms_featured_course:1","timestamp":"2024-11-09T04:12:38Z","content_type":"text/html","content_length":"60623","record_id":"<urn:uuid:82b23452-7fe6-444c-b375-6c38bcf43fb8>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00515.warc.gz"}
Linear Algebra/Basis Vectors
A basis of a vector space V is a set of vectors with the following properties:
• They are linearly independent.
• Their linear combinations build up every vector of V.
A vector space is of dimension d if there exist d linearly independent vectors and any d+1 vectors are linearly dependent.
In a vector space of dimension d, any d linearly independent vectors form a basis for that vector space. Proof: let those d linearly independent vectors be given, and let x be any other vector. Then those d vectors together with x are d+1 vectors, hence linearly dependent, so x is a linear combination of the d vectors. Since every vector is spanned in this way, the d vectors form a basis.
If a vector space has d vectors for a basis, then it is of dimension d.
If you have m linearly independent vectors in a vector space of dimension n (with m <= n), then you can choose n - m further vectors which, together with the starting m vectors, form a basis of the vector space. Proof sketch: if m = n, the m vectors already form a basis by the result above. If m < n, the m vectors cannot form a basis, so they do not span the space and there exists a vector linearly independent of them; keep choosing vectors independent of all previous ones in this fashion until there are n vectors, which then form a basis.
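To make the "d linearly independent vectors form a basis" statement concrete, here is a small numerical check using NumPy. Treating vectors as rows of a matrix, they are linearly independent exactly when the matrix rank equals the number of vectors, and they form a basis of R^n when that rank also equals n. The specific vectors are just examples.

import numpy as np

def forms_basis(vectors):
    # vectors: list of sequences, all of the same length n (one vector per row).
    A = np.array(vectors, dtype=float)
    n = A.shape[1]                        # dimension of the ambient space
    independent = np.linalg.matrix_rank(A) == A.shape[0]
    return independent and A.shape[0] == n

print(forms_basis([[1, 0, 0], [0, 1, 0], [0, 0, 1]]))   # True: standard basis of R^3
print(forms_basis([[1, 0, 0], [0, 1, 0], [1, 1, 0]]))   # False: third row is dependent
print(forms_basis([[1, 0, 0], [0, 1, 0]]))              # False: only 2 vectors in R^3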
{"url":"https://en.m.wikibooks.org/wiki/Linear_Algebra/Basis_Vectors","timestamp":"2024-11-03T20:25:50Z","content_type":"text/html","content_length":"27652","record_id":"<urn:uuid:0869d3e0-5841-4937-b92f-cea690c47a06>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00744.warc.gz"}
Minimax algorithm and alpha-beta pruning This article will teach you about the minimax algorithm and alpha-beta pruning, from a beginner's perspective. Photo by Faye Cornish on Unsplash This article aims at providing the reader with an introduction to the minimax search algorithm, and to alpha-beta pruning – an optimisation over that same algorithm. I am writing this article as a complete beginner with regards to this topic, which hopefully will benefit you and me: • it will benefit me because, in order to write a good article, I will have to push myself to really understand the algorithms; and • it will benefit you because, as I am a beginner in this topic, I will be extra careful to explain all the little details. The article will introduce the theoretical concepts needed to understand the minimax algorithm, as well as the alpha-beta pruning optimisation. I will be implementing the algorithms in Python. Feel free to join me in writing some Python code, do it in any other language you prefer, or just skip the coding parts altogether! On top of this, we will be taking a look at these algorithms from the perspective of a game. That is, we want to implement these algorithms so that we can use them as artificial intelligence algorithms to play games against humans. Therefore, the language I will be using will also revolve around that: “players”, “moves”, “scores”, “winning/losing”, etc. So, without further ado, let's start! Prior knowledge Like I just said, I am a complete beginner at this topic. The only things I have done are as follows: • I read this Wikipedia article; and • watched two YouTube videos at 2× speed. If you only want to watch one, I recommend you watch this one. If you have the time and patience to watch both, then start with this one and only then watch this. The minimax algorithm The minimax algorithm is the algorithm around which this whole article revolves, so it is best if we take some time to really understand it. In a short, but unhelpful sentence, the minimax algorithm tries to maximise my score, while taking into account the fact that you will do your best to minimise my score. Suppose that you and me are playing Tic Tac Toe, I'm the crosses (X), it's my turn to play, and the game looks like this: A game of Tic Tac Toe I have to pick a move and, for that, I analyse all of my possibilities: My three possible moves. So, I play out these three moves inside my head, and I see that none of the moves gives me a win. Therefore, there is no obvious choice that I should make. What does this mean? It means I need to keep playing out the moves inside my head, and I need to predict what you would do in each situation: My three possible moves together with the two possible replies for each. Now, this reveals something interesting! If we look at the third and fifth games of the bottom row, we see that you could win the game: Your winning positions are highlighted. This shows that we probably don't want to go down those paths! But you and me are humans, and we are taking a look at the whole drawing all at once. Let's just play out the rest of the other alternative games to see what could happen: The whole game played out. Now that we played out the whole game in our heads, we need to see what choices each player will make. This is straightforward: • I want to play what is best for me; I want to maximise my score, because I want to win; and • you want to play what is worst for me; you want to minimise my score, because you want me to lose. 
So, in order to evaluate all these alternatives, I have to start at the bottom. If I manage to reach any of the games in the fourth row, we will end in a draw: We draw if we get to the fourth row. Now, I need to evaluate the game positions in the third row. In other words, I need to look at the third row, and figure out how good each position is for myself. What does that depend on? It depends on how well I can do after we reach that position. What we see is that some games in the third row will result in a draw and some in a loss for me: In the third row, some games will result in a draw and some in a loss. Now, I need to go up one level and examine the games in the second row. Remember, all I am doing is trying to figure out what move I should make. In order to do that, I played out all possibilities in my head, and am now trying to predict how you will play. If I want to examine the positions in the second row, now I have to think like you! Why is that? Because the arrows from the second row to the third represent your moves. What does that mean? It means that I am the one who chooses to which position of the second row we go, but you are the one who picks where to go next! For example, if I decide to make the middle move, what would you do in response? Hypothetical scenario in which I play the middle move. If I decide to make the middle move, then we can ignore the left and right parts of the sketch, and you can focus on only two positions from the third row. One of them makes me lose, the other one ends in a draw. Which one do you pick? The one that makes me lose, obviously! Therefore, when I examine the second row, I need to figure out what's the worst-case scenario, for me, for each position. Why? Because if we do go to that position in the second row, I know you will pick what is best for you/worst for me. This means that the second row of the diagram ends up looking like this: Value of the game positions in the second row. Finally, this diagram brings some clarity! Now I understand that: • if I make the move on the left, the game ends in a draw; • if I make the centre move, I lose; and • if I make the move on the right, I lose. Thus, the obvious choice is to go with the move on the left. Tree structure abstraction In order to practise what we are still trying to grasp about the minimax algorithm, we will implement it. Trying to implement the algorithm will push you towards understanding, so let's do it. In order to focus on the details of the minimax algorithm, we will abstract away the context of a game, and instead will focus on the tree structure of the sketch I showed above. Let me take the sketch above and refactor it: I replaced all the specific drawings by circles. When it is my turn, the circle has an arrow pointing up: that's because I want to increase my score as much as possible. When it is your turn, the circle has an arrow pointing down: that's because you want to decrease my score as much as possible. Then, I gave a score of 0 to the positions that ended in a draw and a score of -1 to the positions that ended in a loss. What happens next is that we need to make the information flow upwards, so that I can make a move. When the information flows upwards, we need to fill in the circles: • a circle with an arrow pointing down picks the lowest number below it; and • a circle with an arrow pointing up picks the highest number below it. This corresponds to: • you pick the move that is worst for me; and • I pick the move that is best for me, respectively. 
This is the structure that we want to work with. Minimax dummy implementation In order to solidify our knowledge, let's make a dummy implementation of the minimax algorithm. To make it as simple as possible, we will implement the minimax algorithm over a tree. The tree will have nodes that branch out, and it will have terminal positions with fixed values. Our job is to implement an algorithm that traverses the tree and figures out which moves will be played out. Here is the simple tree structure, in Python: class Choice: def __init__(self, left, right): self.left = left self.right = right class Terminal: def __init__(self, value): self.value = value With these two classes, we can build a tree like the following: tree = Choice( This corresponds to the following tree: A sketch of the `tree` variable. Now we implement our minimax algorithm. The algorithm needs to accept the tree root node of the tree structure and a Boolean flag to tell which one of us is playing. Why is that? Because, when I go down the tree, I want to call the function recursively. As we go down the tree, we switch back and forth between being my turn or your turn and, as we do that, we switch the objective of the algorithm: • when it's my turn, the algorithm is trying to maximise the scores; and • when it's your turn, the algorithm is trying to minimise the scores. Then, this is the structure of the algorithm: • if we have a choice, then we need to ask the algorithm to evaluate both branches, and we pick the appropriate one; • otherwise, just return the value of the terminal. All in all, the code looks like this: def minimax(tree, maximising_player): if isinstance(tree, Choice): lv = minimax(tree.left, not maximising_player) rv = minimax(tree.right, not maximising_player) if maximising_player: return max(lv, rv) return min(lv, rv) return tree.value Notice that we use not maximising_player when calling minimax recursively to switch back and forth between the players. Now, we can call the minimax function with the previous tree, and see what we get. If we start by saying that maximising_player = True, that means that the top of the tree has an arrow pointing up. In that case, the result should be 5: Diagram showing the final score if the first player tries to maximise. If we call minimax with maximising_player = False, that means that the top of the tree has an arrow pointing down. In that case, the result should be -2: Diagram showing the final score if the first player tries to minimise. To check this, add the two calls to your script and run it: print(minimax(tree, True)) print(minimax(tree, False)) A better minimax The dummy minimax algorithm we implemented above worked on trees with a very specific structure. Now we will try to make it slightly more generic, by allowing tree nodes to have an arbitrary number of children. For that, we can start by improving the classes that represent the tree structure: class Tree: def __init__(self, children): self.children = children class Terminal(Tree): def __init__(self, value): # A terminal state is a tree with no children: self.value = value Now that each tree may have multiple subtrees, we can no longer evaluate the left and right subtrees by hand. Instead, we need to use a for loop to traverse all the subtrees. 
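For a fully runnable check, here is one concrete tree built with the Choice and Terminal classes whose minimax values match the results above (5 when maximising, -2 when minimising). The specific leaf values are an assumption chosen to be consistent with those results, so they may differ from the leaves in the original sketch.

class Choice:
    def __init__(self, left, right):
        self.left = left
        self.right = right

class Terminal:
    def __init__(self, value):
        self.value = value

def minimax(tree, maximising_player):
    if isinstance(tree, Choice):
        lv = minimax(tree.left, not maximising_player)
        rv = minimax(tree.right, not maximising_player)
        return max(lv, rv) if maximising_player else min(lv, rv)
    return tree.value

# Leaf values chosen so that a maximising root gets 5 and a minimising root gets -2.
tree = Choice(
    Choice(Terminal(9), Terminal(5)),
    Choice(Terminal(-3), Terminal(-2)),
)

print(minimax(tree, True))   # 5
print(minimax(tree, False))  # -2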
We also need to distinguish between the case when we are maximising and minimising: def minimax(tree, maximising_player): if isinstance(tree, Terminal): return tree.value if maximising_player: max_ = float("-inf") for subtree in tree.children: max_ = max(minimax(subtree, not maximising_player), max_) return max_ min_ = float("+inf") for subtree in tree.children: min_ = min(minimax(subtree, not maximising_player), min_) return min_ However, we can see that there is some duplicated code. What we need to do is realise that the code under the if statement has the exact same structure as the code under the else, it's just that we start with a different value and use a different function to update the running value. By noticing that, we can rewrite the function to look like this: def minimax(tree, maximising_player): if isinstance(tree, Terminal): return tree.value val, func = (float("-inf"), max) if maximising_player else (float("+inf"), min) for subtree in tree.children: val = func(minimax(subtree, not maximising_player), val) return val The code above uses a conditional expression to initialise the running value and the function that we use to update the value. The for loop that follows has the same structure that the two loops in the previous version of minimax. Personally, I would go one step further and rewrite the function as def minimax(tree, maximising_player): if isinstance(tree, Terminal): return tree.value v, f = (float("-inf"), max) if maximising_player else (float("+inf"), min) return f((minimax(sub, not maximising_player) for sub in tree.children), default=v) Or, now that I did, maybe I wouldn't..? If you want to study Python, try to understand why this function still works. Now, what we need to do is test our generic minimax function. Let's create a tree: tree = Tree([ which corresponds to this tree: A heterogeneous tree. Now, take your time to walk through this tree and figure out what should be the result of applying the minimax algorithm to the tree, both as the maximising player and not as the maximising player. This is what you should get: print(minimax(tree, True)) # 7 print(minimax(tree, False)) # 5 Alpha-beta pruning rationale Now that we have a basic understanding of the minimax algorithm, let's introduce the alpha-beta pruning optimisation. But first, why? So far, we only applied the minimax algorithm to very small trees. For trees of this size, the algorithm has no trouble going through the whole tree. However, suppose we were playing chess. In chess, players can generally make plenty of different moves, which means that the trees that I have been drawing would get very huge, very fast. On top of that, chess matches last for much longer than just three or four moves, which means that the trees can also get very deep. In short, it is a lot of work to traverse the whole tree. Having said this, it would be nice if we could optimise our minimax algorithm in some way. For example, what if we managed to avoid traversing certain parts of the tree, if we know those parts are irrelevant? That would be great! But how could we ever realise that a certain part of the tree is irrelevant..? Let's look at the heterogeneous tree from before, from the point of view of the maximising player: Previous tree, from the point of view of the maximiser. And now that we are looking at the tree, let's try to evaluate the tree. We will start at the bottom left, where there is a node that is maximising, and gets to choose between a 3 and a 4. 
Obviously, that node picks the 4: The value `4` is picked. Now it's time to evaluate the node to the right of that one, which I highlighted here: The next node to be evaluated is highlighted. Not only did I highlight the next node, but I also drew a little arrow from the terminal value 8 to that highlighted node. Think with me: • the highlighted node is a maximising node; and • the terminal value 8 is connected to the highlighted node. Therefore, we know that the highlighted node will evaluate to 8 or higher. However, the node immediately above is a minimising node: The highlighted node evaluates to `8` or more, and is under a minimising node. What can we tell, then? We already know that the minimising node will pick the path on the left! Why? Because that node wants to minimise the score, and it has two alternatives: • it goes left, picking a path worth 4; or • it goes right, picking a path that is worth 8 or more! If you want to keep the score as low as possible, what alternative do you pick? The first one, of course! And, in doing this, you managed to completely ignore a portion of the tree: A part of the tree was left unvisited/unevaluated. This is the essence of alpha-beta pruning! The intuitive explanation of alpha-beta pruning is, well, fairly intuitive. Now, we need to turn it into something objective, so that we can translate it into code. In the example above, why were we able to ignore a part of the tree? We managed to ignore a part of the tree because, at some point, we realised that the maximising node would result in a move that is too good for the minimising node immediately above, which already knows of a move that has a lower score. Slow down and re-read the paragraph above, please. To make sure you understand, here is a small tree: An incomplete tree from the maximiser's perspective. The root node is seen from the maximiser's perspective, there is a terminal node on the left (represented by the question mark) and there is a subtree to the right. In turn, that subtree is seen from the minimiser's perspective and has a terminal node on the left, whose value is 5. To its right, the remainder of the subtree is represented by an ellipsis (“...”). Question: what value(s) could the ? have, which would allow us to completely ignore the “...” subtree to the right, after we find the terminal node of value 5? I'll help you out. The minimiser node sees a 5, so what do we know? We know that the minimiser node will evaluate to 5 or less: The minimiser node evaluates to 5 or less. Immediately above it, is a maximiser node, who already visited the ? node. The maximiser node has two options: • it goes with the option on the left, which is worth ?; or • it goes with the option on the right, which is worth 5 or less. If it happens that ? is better than a 5 or less, then we can ignore the “...” subtree and just go with the option on the left! In other words, ? can be any value larger than 5: If `?` is greater than 5, the “...” subtree can be ignored. Was this easy? If not, re-read it carefully, grab a pen and a piece of paper, and make the drawings, the arrows, etc. Let's try another one (bear with me, these two exercises are handy!): An incomplete tree from the minimiser's perspective. In this new exercise, the root node is seen from the minimiser's perspective, and contains two children. On the left, a terminal node with an unknown value. On the right, a subtree. 
In turn, the subtree is seen from the maximiser's perspective, and contains three children: • a terminal of value 5; • a terminal of value 7; and • a subtree “...” that was omitted from the drawing. Question: what values can the subtree ? have so that these two restrictions apply: • the algorithm has to keep going after it finds the 5; and • the algorithm can stop after it finds the 7. Let's work this out. When we first reach the terminal 5, we know that the maximiser node will evaluate to 5 or more: At first, we know that the maximiser node will evaluate to 5 or more. The maximiser node is 5 or more, but we don't want the algorithm to stop. The algorithm would be able to stop if 5 were already too large when compared to ? because, in that case, the minimiser node would know that going left is always better. Therefore, ? has to be greater than 5: The left terminal has to be greater than 5. If you can't see why, imagine that the terminal node is less than 5. For example, imagine it's 4. Then convince yourself that, in that case, the algorithm could stop as soon as the 5 is hit. Alright, so if the left terminal is greater than 5, the algorithm doesn't stop and then looks at the 7. When it does, it realises that the maximiser node will evaluate to 7 or more: The terminal 7 increases the value of the maximiser node. But, at this point, we wanted our algorithm to be able to stop. Therefore, the terminal node on the far left can't be that high. It had to be high enough so that the 5 wouldn't stop the algorithm, but it has to be low enough so that the 7 can stop the algorithm! Therefore, if the left terminal has a value between 5 (exclusive) and 7 (inclusive), we satisfy the restrictions of my problem statement. For example, if the left terminal had a value of 6, then the restrictions of the problem statement would be satisfied: The left terminal containing a hypothetical value of 6. Alpha and beta These two exercises we just did had us thinking about the algorithm in an interesting way: we were thinking about what the algorithm had to have seen earlier on, so that it could stop at a particular point in time. Both exercises had the same focus: when evaluating a node \(n\), when can I know the node above will never pick \(n\)? From the exercises above, we figured this out: • if a minimising node \(n\) has a value of \(v\) or lower, and the maximising node above knows a path with a value greater than or equal to \(v\), then we can stop; and, similarly, • if a maximising node \(n\) has a value of \(v\) or higher, and the minimising node above knows a path with value less than or equal to \(v\), then we can stop. So, for us to be able to optimise our minimax algorithm, we need to find a way to keep track of two things: • when we are inside a minimising node, we need to know what's the value of the highest node that the maximising node above has seen; and • when we are inside a maximising node, we need to know what's the value of the lowest node that the minimising node above has seen. I know this is too many words, so let's focus on one case: If a minimising node \(n\) has a value of \(v\) or lower, and the maximising node above knows a path with a value greater than or equal to \(v\), then we can stop. Therefore, when we are inside a minimising node, we need to know what's the value of the highest node that the maximising node above has seen. This is what we call “alpha” (\(\alpha\)), in alpha-beta pruning. \(\alpha\) will be the best value that a maximiser node has found elsewhere. 
(The “best” value of a maximiser is also the highest.) When we are inside a minimising node, we run through its children, evaluating them, and updating the value \(v\) to always be the lowest one found so far. If, at any point in time, \(v \leq \alpha\), we can stop. We can stop because we know that the maximiser node above will have an option that is worth \(\alpha\). If the current minimiser has a value of \(v\), that can only go down (because we are in a minimising node), then we can save ourselves the trouble to trying to get \(v\) to be even lower. We will never go down that subtree, either way. In contrast, we use “beta” (\(\beta\)) to keep track of the lowest option that a minimiser node has found so far. When a maximiser node finds its value above \(\beta\), we can stop. Implementing alpha-beta pruning The first step to implementing alpha-beta pruning is modifying the minimax algorithm so that it also accepts values for alpha and beta, which can have default values of \(-\infty\) and \(+\infty\), def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): Then, we need to make sure that these values are passed down the recursion calls: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): if isinstance(tree, Terminal): return tree.value val, func = (float("-inf"), max) if maximising_player else (float("+inf"), min) for subtree in tree.children: val = func(pruning(subtree, not maximising_player, alpha, beta), val) return val But these are useless if we never really use them to prune branches of the tree. So, we can do the checks that we talked about earlier: • in a maximising node, we can see if the value is too high when compared to beta; and • in a minimising node, we can see if the value is too low when compared to alpha. This gives us this code: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): if isinstance(tree, Terminal): return tree.value val, func = (float("-inf"), max) if maximising_player else (float("+inf"), min) for subtree in tree.children: val = func(pruning(subtree, not maximising_player, alpha, beta), val) if (maximising_player and val >= beta) or (not maximising_player and val <= alpha): return val This implements the logic to prune the branches, but it won't prune anything yet. Why not? Because we never update alpha and beta, that's why! Updating them is similar to updating val, we just need to distinguish between the maximising/minimising cases: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): if isinstance(tree, Terminal): return tree.value val, func = (float("-inf"), max) if maximising_player else (float("+inf"), min) for subtree in tree.children: val = func(pruning(subtree, not maximising_player, alpha, beta), val) if maximising_player: alpha = max(alpha, val) beta = min(beta, val) if (maximising_player and val >= beta) or (not maximising_player and val <= alpha): return val And that's it! We can make it cleaner, but let's take it for a test drive. First, let's make sure that it gives the correct results: tree = Tree([ print(pruning(tree, True)) # 7 print(pruning(tree, False)) # 5 Now, we need to make sure that it actually prunes a couple of things. We can check that with ease. First, implement the dunder method __str__ on the Tree and Terminal classes. 
This allows them to be printed in a nice way: class Tree: def __init__(self, children): self.children = children def __str__(self): return f"Tree({', '.join(str(sub) for sub in self.children)})" class Terminal(Tree): def __init__(self, value): self.value = value def __str__(self): return f"T({self.value})" With this in place, printing the tree above should result in this: # Tree(Tree(Tree(T(3), T(4)), Tree(T(8), Tree(T(-2), T(10)), T(5))), T(7)) Now, we add a print(tree) statement at the top of the pruning algorithm. This way, we can see what trees/terminals are being visited by the function: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): Remember that, in the maximising case, we could avoid visiting a subtree: A subtree that can be ignored. For example, the terminal with value 10 should never be visited. Let's run the algorithm as the maximising player, and see what the output is: print(pruning(tree, True)) """ Output is: Tree(Tree(Tree(T(3), T(4)), Tree(T(8), Tree(T(-2), T(10)), T(5))), T(7)) Tree(Tree(T(3), T(4)), Tree(T(8), Tree(T(-2), T(10)), T(5))) Tree(T(3), T(4)) Tree(T(8), Tree(T(-2), T(10)), T(5)) In the output above, each line corresponds to a recursive call. When the output just looks like T(?), that's because we reached a terminal... and, as you can see, a couple of terminals were never The terminals T(-2), T(10), and T(5), were never visited! The tree Tree(T(-2), T(5)) was also never visited, as expected. Refactoring the alpha-beta pruning implementation The algorithm is working just fine, but it doesn't look as good as it could! The first thing we can do is refactor the condition that allows us to break out of the loop: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): for subtree in tree.children: if maximising_player: alpha = max(alpha, val) beta = min(beta, val) if (maximising_player and val >= beta) or (not maximising_player and val <= alpha): return val Notice that when we are the maximising player, alpha keeps getting updated to get increasingly higher values, that come from val. Then, we check if val >= beta. When we are not the maximising player, beta keeps getting updated to get increasingly lower values, that come from val. Then, we check if val <= alpha. 
We can combine these two cases together, and just write beta <= alpha: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): if isinstance(tree, Terminal): return tree.value val, func = (float("-inf"), max) if maximising_player else (float("+inf"), min) for subtree in tree.children: val = func(pruning(subtree, not maximising_player, alpha, beta), val) if maximising_player: alpha = max(alpha, val) beta = min(beta, val) if beta <= alpha: return val By the way, now that we have an explicit if maximising_player statement, we can get rid of the assignment to func: def pruning(tree, maximising_player, alpha=float("-inf"), beta=float("+inf")): if isinstance(tree, Terminal): return tree.value val = float("-inf") if maximising_player else float("+inf") for subtree in tree.children: sub_val = pruning(subtree, not maximising_player, alpha, beta) if maximising_player: val = max(val, sub_val) alpha = max(alpha, sub_val) val = min(val, sub_val) beta = min(beta, sub_val) if beta <= alpha: return val In this article, you: • explored a game scenario as a tree with branches representing possible moves; • learned the minimax algorithm; • implemented the minimax algorithm for: □ homogeneous tree structures; and □ trees with an arbitrary number of children. • understood when the minimax algorithm can save time; • solved challenges that made you think about the values that we want to keep track of in the alpha-beta pruning algorithm; • implemented the alpha-beta pruning algorithm; and • verified that your implementation is able to prune irrelevant search paths. Next steps We implemented alpha-beta pruning over explicit trees, but I said I wanted to use this to play games. Therefore, I need to implement a game, and then create an abstraction that allows the alpha-beta pruning algorithm to work its magic without having to worry about the exact game that we implemented. Then, we piece everything together! This will be material for a couple of future articles, so subscribe to my newsletter not to miss them. Become a better Python 🐍 developer 🚀 +35 chapters. +400 pages. Hundreds of examples. Over 30,000 readers! My book “Pydon'ts” teaches you how to write elegant, expressive, and Pythonic code, to help you become a better developer. >>> Download it here 🐍🚀.
{"url":"https://mathspp.com/blog/minimax-algorithm-and-alpha-beta-pruning?ref=sangkon.com","timestamp":"2024-11-11T04:44:03Z","content_type":"text/html","content_length":"75136","record_id":"<urn:uuid:1dfe899c-36a4-4e39-8704-21ae55dcc117>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00694.warc.gz"}
High-Pressure Brake Systems for Heavy-Duty Vehicles in context of brake pressure 26 Sep 2024 Title: Optimization of High-Pressure Brake Systems for Heavy-Duty Vehicles: A Study on Brake Pressure Dynamics Abstract: High-pressure brake systems are crucial for heavy-duty vehicles to ensure safe and efficient braking performance. This article presents a comprehensive analysis of high-pressure brake systems, focusing on the dynamics of brake pressure. Theoretical models and mathematical formulations are developed to understand the relationships between various system parameters and their impact on brake pressure. Introduction: Heavy-duty vehicles, such as trucks and buses, require advanced braking systems to ensure safe stopping distances and prevent accidents. High-pressure brake systems have become increasingly popular due to their ability to provide consistent and reliable braking performance under varying load conditions. However, the dynamics of high-pressure brake systems are complex and influenced by multiple factors, including system pressure, fluid viscosity, and component geometry. Theoretical Background: The brake pressure (P) in a high-pressure brake system can be described using the following formula: P = (ΔP \* A) / (L + ΔL) • P is the brake pressure • ΔP is the pressure drop across the brake chamber • A is the effective area of the brake piston • L is the length of the brake hose • ΔL is the additional length due to compression or expansion Brake Pressure Dynamics: The dynamics of high-pressure brake systems are influenced by the following factors: 1. System pressure: The system pressure (P) affects the brake pressure (P) and can be described using the formula: P = P \* (A / A') where A' is the effective area of the master cylinder. 2. Fluid viscosity: The fluid viscosity (μ) affects the pressure drop across the brake chamber and can be described using the formula: ΔP = (L \* μ) / (A \* Δt) where Δt is the time interval between brake applications. 3. Component geometry: The component geometry, including the brake piston area (A) and the brake hose length (L), affects the brake pressure (P) and can be described using the formula: P = P \* (A / A') \* (L + ΔL) Conclusion: High-pressure brake systems for heavy-duty vehicles are complex systems that require a deep understanding of their dynamics. Theoretical models and mathematical formulations have been developed to describe the relationships between various system parameters and their impact on brake pressure. These findings can be used to optimize high-pressure brake systems, ensuring safe and efficient braking performance under varying load conditions. 1. System design: Brake system designers should consider the effects of system pressure, fluid viscosity, and component geometry on brake pressure when designing high-pressure brake systems. 2. Component selection: Brake component manufacturers should select materials and designs that minimize pressure drop and maximize effective area to ensure optimal braking performance. 3. Testing and validation: Brake system testing and validation protocols should be developed to ensure that high-pressure brake systems meet safety and performance standards. Future Work: Further research is needed to investigate the effects of other factors, such as temperature and humidity, on high-pressure brake system dynamics. Additionally, experimental studies can be conducted to validate the theoretical models and mathematical formulations presented in this article. 
{"url":"https://blog.truegeometry.com/tutorials/education/363bc0b48b6a8b3a7a84550ed8e5844e/JSON_TO_ARTCL_High_Pressure_Brake_Systems_for_Heavy_Duty_Vehicles_in_context_of_.html","timestamp":"2024-11-12T06:55:54Z","content_type":"text/html","content_length":"18595","record_id":"<urn:uuid:6bee1d57-6fd1-4910-99ad-f8ef74b81b4e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00226.warc.gz"}
Correlated Binomial Process Cohen and Kontorovich (COLT 2023) initiated the study of what we call here the Binomial Empirical Process: the maximal empirical mean deviation for sequences of binary random variables (up to rescaling, the empirical mean of each entry of the random sequence is a binomial hence the naming). They almost fully analyzed the case where the binomials are independent, which corresponds to all random variable entries from the sequence being independent. The remaining gap was closed by Blanchard and Voráček (ALT 2024). In this work, we study the much more general and challenging case with correlations. In contradistinction to Gaussian processes, whose behavior is characterized by the covariance structure, we discover that, at least somewhat surprisingly, for binomial processes covariance does not even characterize convergence. Although a full characterization remains out of reach, we take the first steps with nontrivial upper and lower bounds in terms of covering numbers. • concentration • convergence • empirical process • high dimension • subgaussian ASJC Scopus subject areas • Artificial Intelligence • Software • Control and Systems Engineering • Statistics and Probability
{"url":"https://cris.bgu.ac.il/en/publications/correlated-binomial-process","timestamp":"2024-11-03T15:51:51Z","content_type":"text/html","content_length":"49035","record_id":"<urn:uuid:cb5be0ba-7a39-487a-b80a-4c0a8e06c0f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00852.warc.gz"}
Finding Unknown in a Table Using Proportional Relations Question Video: Finding Unknown in a Table Using Proportional Relations Mathematics In the table below, the numbers in the first row are proportional to the corresponding numbers in the second row. What is the value of 𝑥? Video Transcript In the table below, the numbers in the first row are proportional to the corresponding numbers in the second row. What is the value of 𝑥? So again, the numbers in the first row are proportional to the corresponding numbers in the second row. Therefore, three is to four as 𝑥 is to six. And we can solve by cross multiplying. Three times six is 18 and four times 𝑥 is four 𝑥. So to solve for 𝑥, we divide both sides by four. The fours cancel and then 18 divided by four is 4.5. So the value of 𝑥 is 4.5.
{"url":"https://www.nagwa.com/en/videos/743154372635/","timestamp":"2024-11-06T18:53:27Z","content_type":"text/html","content_length":"240820","record_id":"<urn:uuid:e76672c3-40ec-4101-a29d-2888a0ec37be>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00101.warc.gz"}
Physics - Recall for Chapter 22; Electric Fields
What is an electric field? A region of space where a charge will experience an electrostatic force.
What is electric field strength? The force per unit charge on a positive test charge placed at a point in a field.
What are the two different units of electric field strength? \[\text{NC}^{-1} \text{ or } \text{Vm}^{-1}\]
What does the direction of an electric field line show? The direction of the force exerted on a positive test charge.
What does the spacing of electric field lines show? The strength of the field: the closer together the lines, the stronger the field.
What is a test charge? A charge small enough that it does not disturb the field it is used to probe.
When is the equation $E = \frac{V}{d}$ used? To find the uniform field strength between two parallel plates.
What is Coulomb's law in words? The force between two point charges is:
• Proportional to the product of the two charges.
• Inversely proportional to the square of their separation.
What is electric potential difference? The work done per unit positive charge in moving between two points.
What is the electric potential of a point? The work done per unit positive charge to move from infinity to that point.
What is a Volt? One joule of work done per coulomb of charge.
What is the relationship between the electrostatic force between two point charges and their separation distance? It is inversely proportional to the square of separation.
What is the relationship between electric potential and distance from a point charge? \[\text{electric potential} \propto \frac{1}{\text{distance}}\]
What is a uniform electric field? A region of space in which the electric field strength is the same at all points.
What is a radial field? An electric field generated by a point charge.
What sort of electric fields are characterised by the inverse square law? Radial fields.
What is a neutral point in an electric field? A point at which the resultant field strength is zero.
Is electric field strength a vector or a scalar quantity? A vector.
Is electric potential a vector or a scalar quantity? A scalar.
What are two similarities between electric and gravitational fields?
• Both obey the inverse square law for field strength.
• Both have infinite range.
What are two differences between electric and gravitational fields?
• Gravitational fields act on masses, electric fields act on charges.
• Gravitational forces are always attractive, whereas electric forces can be attractive or repulsive (and gravitational potential is always negative, whereas electric potential can be positive or negative).
\[V = \frac{Q}{4\pi \epsilon_0 r}\] When is this equation used? To find the potential in a radial field.
\[C = 4 \pi \epsilon_0 r\] What is this equation used for? To calculate the capacitance of an isolated sphere.
What is the relationship between electric potential energy $E_p$ and electric potential $V$? \[E_p = Vq\]
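As a quick numerical illustration of two of the formulas listed above (the radial-field potential V = Q/(4πε₀r) and the relationship E_p = Vq), here is a short calculation; the charge and distance values are made up purely for illustration.

import math

epsilon_0 = 8.854e-12      # F/m, permittivity of free space
Q = 2e-6                   # C, example point charge (made-up value)
r = 0.05                   # m, example distance from the charge (made-up value)
q = 1e-9                   # C, example small charge placed at that point

V = Q / (4 * math.pi * epsilon_0 * r)   # potential in a radial field
Ep = V * q                              # electric potential energy, Ep = Vq

print(f"V  = {V:.3e} V")    # ~3.6e5 V
print(f"Ep = {Ep:.3e} J")   # ~3.6e-4 J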
{"url":"https://ollybritton.com/notes/a-level/physics/recall-questions/chapter-22-electric-fields/","timestamp":"2024-11-05T09:09:27Z","content_type":"text/html","content_length":"509216","record_id":"<urn:uuid:da3634ba-f406-45cf-8ba6-b5a85de5be7e>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00572.warc.gz"}
Inverse Correlation Explained - Finance Reference

Inverse correlation is a type of relationship between two variables. The term refers to the fact that higher values of one variable tend to occur with lower values of the other. This relationship is also known as negative correlation. An inverse correlation can be just as strong and just as informative as a positive one, and it is commonly used to describe how two variables move relative to each other. There are many types of inverse relationships, and we will discuss the two most popular types here: negative (linear) inverse correlation and parabolic inverse correlation.

Negative inverse correlation

Negative correlation, also known as an inverse relationship, occurs when a higher value of one variable is associated with a lower value of another. Inverse correlation is an important concept when studying statistics, and it is helpful to have some background knowledge before interpreting the relationship between two variables. Listed below are some examples of the relationship between variables and their values; this article also explains how to interpret inverse correlation charts. The sign of the correlation matters: if two stocks move in the same direction but at different rates, the correlation is positive but less than one (a partial positive correlation), whereas an inverse correlation requires the two to tend to move in opposite directions. It is also important to remember that correlation is not a causal relationship. When assessing a correlation, the first step is to determine its sign (positive or negative), and the second is to determine whether its magnitude is large enough to matter; the strength of a correlation is judged from how close the coefficient is to −1 or +1.

Parabolic inverse correlation

The inverse correlation of two variables can also be examined using parabolic graphs. In a parabolic inverse relationship, an increase in the independent variable has a smaller impact on the dependent variable at higher levels than it does at lower levels. In graphing the correlation, the curve flattens as the independent variable increases while the dependent variable declines. This relationship can be applied to many types of data. In the study of statistical data, the reverse correlation method can be used to answer a wide variety of questions. Its most common application is the evaluation of faces for a single trait: in face-space reverse correlation studies, researchers make use of a database of images of faces varying in demographics, age, and emotion, and first determine the number of trials to present to participants and how many stimuli to present at one time. Another parabolic inverse correlation arises when the change in the dependent variable is steep at lower values of the independent variable and becomes more gradual as the independent variable increases. This form of inverse correlation is a useful tool for characterising the strength of a specific relationship between two variables, and it can also be used for estimating and interpreting mental representations.

Relationship between two variables

A positive or negative correlation between two variables indicates that the two variables move together or in opposite directions, but it does not by itself establish a cause-and-effect relationship. Only the negative case is referred to as an inverse correlation, and only in that case do the plotted variables show a downward slope.
A bad romantic partner, on the other hand, will not do anything to make you feel better on a bad day. While these types of relationships are far from perfect, they illustrate how two things can move in opposite directions. In statistics, the inverse correlation between two variables describes how strongly the pair move against each other. For example, Coca-Cola and PepsiCo compete closely for the same market: when one of them gains market share, for instance through a successful new product launch, the other tends to lose it. This type of relationship is common in highly competitive markets, such as the consumer goods industry. Inverse correlation is a statistical measurement of the relationship between two variables. It is used to estimate the benefits of portfolio diversification; specifically, it can be used to quantify the risk-reduction benefits of negatively correlated assets. Inverse correlation is calculated in the same way as any other correlation, and it can be used to assess the risk-reduction benefits of a particular asset, such as gold or oil, relative to a portfolio. There are various methods for calculating the inverse correlation of a pair of variables, the most common being Pearson's r, which measures the strength and direction of the linear relationship between two variables. An r value below zero indicates an inverse relationship, and the closer r is to −1, the stronger that inverse relationship is. An r of zero means the two variables have no linear relationship, while an r close to +1 indicates that they move together.
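To make the Pearson's r discussion concrete, here is a minimal Python sketch; the two price series are made up purely for illustration.

import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

stock_a = [10, 12, 13, 15, 18, 20]   # rising series
stock_b = [20, 18, 17, 15, 12, 10]   # falls as stock_a rises

print(pearson_r(stock_a, stock_b))   # -1.0: a perfect inverse correlation
print(pearson_r(stock_a, stock_a))   # +1.0: a perfect positive correlation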
{"url":"https://www.financereference.com/inverse-correlation/","timestamp":"2024-11-06T07:55:22Z","content_type":"text/html","content_length":"128830","record_id":"<urn:uuid:eab5e55c-52ff-4fa9-ad49-f00facc09664>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00826.warc.gz"}
Which of the following represent a function • Thread starter Jaco Viljoen • Start date In summary: These three equations might be treated in Calculus, but equation (b) better not be referred to as a function because it does not meet the vertical line test. Homework Statement Which of the following represent a function: Homework Equations The Attempt at a Solution b) is a circle so has 2 values for y and is not a function. I know that functions only have 1 y relation to x but don't know how to prove whether a and c are functions or not. Thank you Last edited: If domain and codomain of the functions are f: R-->R . Then "a" can't be function because there is no answer for x=0. And also like you said b can't be a function because there are two reflections of Hi Thermo, Thank you for your reply, Thermo said: If domain and codomain of the functions are f: R-->R . Then "a" can't be function because there is no answer for x=0. The others can be a function. Others? meaning b and c? I don't agree or I am missing the plot. B cannot be a function because it is a circle... Yes I realized it later sorry for that. So the answer must be only c is a function. However domains and codomains matter in this case. Would I be able to solve the equation to prove this? Jaco Viljoen said: Homework Statement Which of the following represent a function: a) [itex]\ \ y=-1/2x+3[/itex] b) [itex]\ \ x^2+y^2=25[/itex] c) [itex]\ \ y=2x^2+7x+3[/itex] Homework Equations The Attempt at a Solution b) is a circle so has 2 values for y and is not a function. I know that functions only have 1 y relation to x but don't know how to prove whether a and c are functions or not. Thank you By the way: is incorrect. What you have for (a) literally means ##\displaystyle\ y=-\frac{1}{2}x+3\ .\ ## Is that what you mean? Or do you mean ##\displaystyle\ y=-\frac{1}{2x}+3\ ?\ ## You can graph the equations , (a) and (c) . You should be able to identify what figures the graphs produce. Hi Sammy, you are correct, I get a u shape for c and upside down u shape for a. So a and c are functions. Thank you so much for your input. Jaco Viljoen said: Hi Sammy, you are correct, I get a u shape for c and upside down u shape for a. So a and c are functions. Thank you so much for your input. That U shape is called a parabola. What is the graph of equation (a) called ? Sorry for the wrong info I wasn't so sure I think I am mistaken for continuity or differentiability. SammyS said: That U shape is called a parabola. What is the graph of equation (a) called ? a Hyperbola Thermo said: Sorry for the wrong info I wasn't so sure I think I am mistaken for continuity or differentiability. no problem, Luckily there are many smart guys and gals to double check one another. there is a second part to the question: which statements do not define a one-to-one function? My answer is B because it is a circle and not a function. Homework Helper Gold Member Jaco Viljoen said: there is a second part to the question: which statements do not define a one-to-one function? My answer is B because it is a circle and not a function. Basic equation of a function is y=f(x).. Aren't all of them functions?? B isn't a one to one function and A isn't a continuous function.. They are just the types of function.. Last edited: From my textbook: A function f between two sets of real numbers A and B is a relation in which each element of A is paired with a unique element of B. If you draw a vertical line on the graph, is it possible that the line can intersect the graph at 2 places? 
If it does the equation is not a function. The function y = f ) is a function if it passes the vertical line test . It is a one-to-one function if it passes both the vertical line test and the horizontal line test. Then none of these are a 1-1 function. Could someone agree/disagree with this? Thank you, A and C because B is not a function. I am not sure anymore. Last edited: Homework Helper Gold Member Jaco Viljoen said: From my textbook: A function f between two sets of real numbers A and B is a relation in which each element of A is paired with a unique element of B. If you draw a vertical line on the graph, is it possible that the line can intersect the graph at 2 places? If it does the equation is not a function. Well is it due this "precalculus" concept? Because as per my knowledge, these are all treated as functions in calculus. cnh1995 said: Well is it due this "precalculus" concept? Because as per my knowledge, these are all treated as functions in calculus. These three equations might be treated in Calculus, but equation (b) better not be referred to as a function. Jaco Viljoen said: The function y = f ) is a function if it passes the vertical line test . It is a one-to-one function if it passes both the vertical line test and the horizontal line test. Then none of these are a 1-1 function. Could someone agree/disagree with this? The first one, which I am assuming is y = -(1/2)x + 3, is a function, and is one-to-one. cnh1995 said: Basic equation of a function is y=f(x).. That is just function notation, but isn't any sort of basic equation. cnh1995 said: Aren't all of them functions?? No, not all of them are functions. cnh1995 said: B isn't a one to one function and A isn't a continuous function.. The equation in a) is a continuous function. In clearer form, the equation is y = -(1/2)x + 3. cnh1995 said: They are just the types of function.. cnh1995 said: Well is it due this "precalculus" concept? Because as per my knowledge, these are all treated as functions in calculus. SammyS said: These three equations might be treated in Calculus, but equation (b) better not be referred to as a function. Sammy is correct. Thermo said: I believe it was meant as (-1/2)x, not (-1)/(2x). hinted that my thought is correct, but he didn't explicitly say that. In any case, what he wrote was this: y=−1/2x+3. The correct interpretation of this is (-1/2)x + 3, not -1/(2x) + 3. (-1/2)x + 3, not -1/(2x) + 3 as mark says, Sorry for the typo, still getting used to typing like this. Jaco Viljoen said: (-1/2)x + 3, not -1/(2x) + 3 as mark says, Sorry for the typo, still getting used to typing like this. In that case, when you said the graph for equation (a) was a hyperbola, you were incorrect. Jaco Viljoen said: which was in response to SammyS said: What is the graph of equation (a) called ? Equation (a) : ##\ y=(-1/2)x + 3\ ## gives the graph of a line with slope of ##\ -1/2\ ## . By the way: Did you say that of the equations gave a 1 to 1 function ? SammyS said: In that case, when you said the graph for equation (a) was a hyperbola, you were incorrect. which was in response to Equation (a) : ##\ y=(-1/2)x + 3\ ## gives the graph of a line with slope of ##\ -1/2\ ## . By the way: Did you say that none of the equations gave a 1 to 1 function ? Hi Sammy, I realized It is not a hyperbola, was thinking of something I was watching. It is a line it is a 1-1 function. FAQ: Which of the following represent a function 1. What is a function? 
A function is a mathematical relationship between two sets of numbers, where each input (or independent variable) maps to exactly one output (or dependent variable). 2. How can I tell if a given set of points represents a function? To determine if a set of points represents a function, you can use the vertical line test. If a vertical line can be drawn through the graph and only intersects the graph at one point, then the set of points represents a function. If the vertical line intersects the graph at more than one point, then the set of points does not represent a function. 3. Can a function have more than one output for a given input? No, by definition, a function can only have one output for each input. If there are multiple outputs for a given input, then the relationship between the two sets of numbers is not a function. 4. What is the difference between a linear and non-linear function? A linear function is a function where the output varies in direct proportion to the input. This means that the graph of a linear function is a straight line. On the other hand, a non-linear function is a function where the output does not vary in direct proportion to the input. The graph of a non-linear function is not a straight line. 5. How are functions used in real life? Functions are used in various fields, such as engineering, physics, economics, and statistics, to model and analyze relationships between different variables. For example, in economics, functions are used to represent the relationship between supply and demand. In physics, functions are used to describe the motion of objects. In everyday life, functions can also be used to calculate things like interest rates, distances, and probabilities.
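The vertical-line and horizontal-line tests discussed in the thread can be phrased directly in terms of sampled points. Here is a minimal Python sketch; the sampled points are illustrative only.

def is_function(points):
    """Vertical line test: no x value may map to two different y values."""
    seen = {}
    for x, y in points:
        if x in seen and seen[x] != y:
            return False
        seen[x] = y
    return True

def is_one_to_one(points):
    """One-to-one: passes the vertical line test and no y value comes from two different x values."""
    if not is_function(points):
        return False
    seen_y = {}
    for x, y in points:
        if y in seen_y and seen_y[y] != x:
            return False
        seen_y[y] = x
    return True

# (a) y = -(1/2)x + 3 is a line: a one-to-one function
line = [(x, -0.5 * x + 3) for x in range(-3, 4)]
# (b) x^2 + y^2 = 25 is a circle: fails the vertical line test (x = 3 gives y = 4 and y = -4)
circle = [(3, 4), (3, -4), (0, 5), (0, -5), (4, 3), (4, -3)]
# (c) y = 2x^2 + 7x + 3 is a parabola: a function, but not one-to-one
#     (x = -3.5 and x = 0 both give y = 3)
parabola = [(x, 2 * x ** 2 + 7 * x + 3) for x in (-3.5, -2, -1.75, 0, 1)]

print(is_function(line), is_one_to_one(line))          # True True
print(is_function(circle), is_one_to_one(circle))      # False False
print(is_function(parabola), is_one_to_one(parabola))  # True False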
{"url":"https://www.physicsforums.com/threads/which-of-the-following-represent-a-function.817425/","timestamp":"2024-11-07T09:13:07Z","content_type":"text/html","content_length":"207255","record_id":"<urn:uuid:2f62f750-2791-4283-b7d3-466069250f85>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00799.warc.gz"}
Optimization of Pervious Geopolymer Concrete Using TOPSIS-Based Taguchi Method Department of Civil and Environmental Engineering, UAE University, Al Ain P.O. Box 15551, United Arab Emirates Department of Civil Engineering, Universiti Malaya, Kuala Lumpur 50603, Malaysia Author to whom correspondence should be addressed. Submission received: 22 June 2022 / Revised: 15 July 2022 / Accepted: 16 July 2022 / Published: 18 July 2022 This paper evaluates the effect of mix design parameters on the mechanical, hydraulic, and durability properties of pervious geopolymer concrete (PGC) made with a 3:1 blend of granulated blast furnace slag (GBFS) and fly ash (FA). A total of nine PGC mixtures were designed using the Taguchi method, considering four factors, each at three levels, namely, the binder content, dune sand addition, alkaline-activator solution-to-binder ratio (AAS/B), and sodium hydroxide (SH) molarity. The quality criteria were the compressive strength, permeability, and abrasion resistance. The Taguchi and TOPSIS methods were adopted to determine the signal-to-noise (S/N) ratios and to optimize the mixture proportions for superior performance. The optimum mix for the scenarios with a compressive strength and abrasion resistance at the highest weights was composed of a binder content of 500 kg/m^3, dune sand addition of 20%, AAS/B of 0.60, and SH molarity of 12 M. Meanwhile, the optimum mix for the permeability-dominant scenario included a 400 kg/m^3 of binder content, 0% of dune sand addition, 0.60 of AAS/B, and 12 M of SH molarity. For a balanced performance scenario (i.e., equal weights for the responses), the optimum mix was similar to the permeability scenario with the exception of a 10% dune sand addition. An ANOVA showed that the binder content and dune sand addition had the highest contribution toward all the quality criteria. Multivariable regression models were established to predict the performance of the PGC using the mix design factors. Experimental research findings serve as a guide for optimizing the production of PGC with a superior performance while conducting minimal experiments. 1. Introduction For the past few decades, the demand for cement has considerably increased to establish new infrastructure and sustain constant growth in the global population. The projected increase is estimated to reach up to 23% by 2050, leading to numerous environmental and economic concerns [ ]. These concerns include greenhouse gas emissions, as the production of cement contributes to 8–10% of the global carbon dioxide (CO ) footprint and the consumption of nearly 1.6 tons of natural resources per ton of cement produced [ ]. Thus, there is a need to lessen the utilization of cement in concrete by identifying alternative sustainable binders that reduce the CO footprint and replenish natural resources. Other global problems involve the currently impervious concrete pavement systems. These systems prevent water and air percolation [ ]. The impervious nature of these surfaces also significantly affects environmental processes, including stormwater runoff and urban heat islands [ ]. Meanwhile, the harmful disposal of industrial wastes in landfills poses environmental distress [ ]. To improve the sustainable development of pavements, the environmental concerns induced by the production of cement, the use of impervious pavements, and the disposal of industrial by-products should be addressed and mitigated. 
One technique involves the incorporation of supplementary cementitious materials (SCMs) obtained as waste materials from various industries in pervious concrete (PC). This system (i.e., PC) is regarded as a sustainable urban drainage system (SUDS) that promises to alleviate said environmental challenges and promote the environmental sustainability of cities, by improving the percolation of air and water, mitigating CO emissions and the consumption of natural resources, and beneficially recycling industrial wastes [ PC is a gap-graded concrete with zero slump and interconnected pores, allowing water to infiltrate and be collected as groundwater [ ]. In addition to the evaluation of its strength properties, as in conventional concrete, the performance of PC is assessed through its hydraulic properties. Particular PC features include its pore sizes of 2–8 mm, a high volume of pores ranging from 15–30%, and a permeability of 2–6 mm/s [ ]. Due to its porous nature, its mechanical performance is inferior to conventional concrete, with compressive strength ranging between 2 to 28 MPa, suitable for low-to-medium traffic pavement [ ]. This system, i.e., PC, is regarded as the best practice for managing stormwater runoff [ ]. Other benefits linked to the use of PC are noise reduction from tire-surface interaction, improved skid resistance, the protection of native ecosystems, and the recharging of groundwater [ The current literature converges on the notion that industrial waste materials can be incorporated in PC as a replacement to cement to enhance its sustainability [ ]. Common materials used as SCMs include silica fume, fly ash (FA), granulated blast furnace slag (GBFS), and metakaolin, among others. The use of SCMs, such as fly ash and silica fume, also reduce the corrosion rate of steel rebar and prolong the service life of concrete structures [ ]. In fact, the utilization of such waste materials in PC production promises to reduce cement consumption and recycle these wastes rather than disposing of them in stockpiles or landfills [ ]. The complete replacement of cement with alkali-activated binders, either as single, binary, or ternary combinations of SCMs, has also shown promising results [ ]. Indeed, with its superior performance, researchers have shown great interest in utilizing alkali-activated binders to produce an alternative construction material known as geopolymer [ Geopolymers are formed by activating the aluminosilicate precursors such as FA, GBFS, or other materials with alkaline solutions [ ]. This causes the precipitation of three-dimensional polymeric Si-O-Al rigid bonds. Geopolymer materials have been advocated as a sustainable replacement for cementitious binders in concrete, as they can be formulated with considerably less CO emissions compared to ordinary Portland cement [ ]. Even though PC is mainly made with cement, several studies have reported the integration of geopolymers in PC [ ]. The results showed that fly ash-based geopolymers possessed high early strength, reduced shrinkage, and a good resistance to sulfate attack [ ]. Additionally, pervious geopolymer concrete (PGC) made with FA or GBFS displayed superior strength and durability responses compared to its cement-based counterparts [ ]. When cured at 60 °C, the PGC produced with GBFS and FA exhibited an improved compressive strength, signifying an increased polymerization rate at higher curing temperatures [ ]; however, this practice is not feasible on-site. 
In PGC made with a blended geopolymer binder of GBFS and FA, a higher content of GBFS yielded higher strength responses. Indeed, respective increases of 19, 49, and 47% in compressive, tensile, and flexural strengths were reported, respectively, when the GBFS content increased from 450 to 490 kg/m ]. Accordingly, PGC is considered a promising material in concrete industries, yet further studies are needed to explore the effect of mixture design factors on the strength, durability, and hydraulic properties of PGC. The production of PGC highly depends on predefining factors that affect its strength, permeability, and durability responses. Several factors augment the complexity of the design, including the curing conditions, composition of the raw materials, soluble silicate content, and the activation solution and its alkalinity. As such, mixes become cumbersome and practically inconclusive without an excessive trial-and-error analysis [ ]; thus, extensive experiments are needed to deduce the optimum mix while satisfying the desired performance [ ]. Optimization techniques can be employed based on the desired performances or criteria to overcome such complex processes and excessive trial mixes and deduce the desired optimum mix using a limited number of experiments [ ]. Some of the optimization techniques used in engineering are the Taguchi and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution) methods. Dr. Taguchi developed the Taguchi method based on orthogonal arrays in the factorial technique to optimize the experimental mixtures, rationalizing multiple process variables for distinct factors and levels while reducing the uncontrolled responses (i.e., parameters) [ ]. Due to the lack of flexibility in optimizing several parameters simultaneously using the Taguchi method, the TOPSIS method was further developed to integrate various parameters at once, obtaining a single substantive optimum mix [ ]. Nevertheless, there are no studies on optimizing the mix proportions of PGC for specific desired performances using the Taguchi and TOPSIS methods. This study evaluates the effect of mix design parameters on the performance of sustainable cement-free pervious geopolymer concrete (PGC) derived from waste materials, while also optimizing the mixture proportions for superior performance. The experiments were designed using the Taguchi method, whereby the PGC mixes were proportioned considering various factors, including the binder content, dune sand addition, the ratio of alkaline activator solution-to-binder (AAS/B), and sodium hydroxide (SH) solution molarity. The optimization process was carried out using the TOPSIS method for different performance criteria that represented the mechanical and hydraulic properties and durability of PGC, comprising the compressive strength, permeability, and abrasion resistance, respectively. Multivariate regressions were also developed to predict the properties of the so-produced PGC. These findings can be particularly interesting to engineers that seek to better understand pervious geopolymer concrete, thereby promoting their adoption by the construction industry. 2. Materials and Methods 2.1. Materials Class F FA [ ] and GBFS were locally sourced for use as the precursor binding agents. Their physical and chemical characteristics are presented in Table 1 , while their mineralogy and morphology can be found elsewhere [ ]. Dune sand and crushed dolomitic limestone were employed as the fine and coarse aggregates, respectively. 
Their respective gradations varied from 0 to 4 mm and 4 to 10 mm, conforming to the ASTM C33 requirements [ ]. The respective dry rodded density, specific gravity, surface area, and fineness modulus of the dune sand and crushed limestone were 1660/1663 kg/m , 2.77/2.82, 141.5/2.49 cm /g, and 1.45/6.82. The sodium silicate (SS) and sodium hydroxide (SH) solutions were blended to form the alkaline activator solution (AAS). The Grade N SS solution had a composition of H O of 6.2:2.5:1.0, by mass. SH solutions, with different molarities of 8, 10, and 12 M, were obtained by dissolving 97% SH flakes in tap water. 2.2. Development of Pervious Concrete Mixes The PGC mixtures were formulated following the Taguchi method. Based on previous studies on geopolymer concrete, the performance was mainly impacted by four factors, including the binder content, dune sand addition, AAS/B, and SH molarity [ ]. The impact of the factors on the performance of the PGC was evaluated through three levels. Table 2 summarizes the factors with their corresponding levels. A 3:1 blend of GBFS and FA was considered, as this blend showed superior properties compared to others [ ]. The binder content represented the amount of binding material (i.e., GBFS and FA) in the PGC mix and ranged between 400 and 500 kg/m . Dune sand was added to the mix in replacement of the coarse aggregates to study its effect on the properties of PGC. It ranged between 0 and 20%, by mass of the total aggregate. Furthermore, the AAS/B represented the quantity of solution added to the mix with respect to the binder content, ranging from 0.5 to 0.6. The SH solution molarity was the last factor considered in the design phase, ranging from 8 to 12 M. The suggested levels were based on typical values adopted in previous work on geopolymer concrete and the ACI 522R-10 guide on pervious concrete [ ]. The ratio of SS-to-SH was fixed to 1.5, by mass, as this mix design parameter was found to have a limited impact on the properties of slag-fly ash geopolymer concrete [ ]. Accordingly, an L9 orthogonal array consisting of four factors, each at three levels, was developed, as shown in Table 3 . If the selected approach were a factorial design, the mixes required to attain the optimum would have been 81 (3 ). This underlines the importance of utilizing the Taguchi method for designing the mixes and minimizing the experimental work. 2.3. Sample Preparation The PGC mixtures were prepared and cast at ambient temperature and a relative humidity of 24 ± 2 °C and 50 ± 5%, respectively. Twenty-four hours before the concrete mixing, the AAS was prepared to ensure the dissipation of heat generated from the chemical reaction between the SH and SS solutions. The mixing sequence consisted of blending the dry components (i.e., crushed limestone, dune sand, and binder) in a mixer for 3 min, followed by the gradual addition of the AAS and further mixing for another 3 min. The obtained fresh concrete was subsequently placed into 100 mm cubic and 100 mm × 200 mm (diameter × height) cylindrical molds to assess the mechanical, hydraulic, and durability properties. The specimens were cast into 2 layers, compacted manually, sealed with plastic wrap, and cured under the same conditions until the testing time. A representative hardened cube sample is shown in Figure 1 2.4. Test Methods The 7-day cube compressive strength ( ) was determined using 100 mm cubes, as per BS EN-12390-3 [ ]. 
This was defined as the ultimate compressive strength of the cubic samples tested using a Wykeham Farrance compression machine with a loading capacity up to 2000 kN and at a loading rate of 2 kN/s, as shown in Figure 1b. Three replicate samples were tested for each mix to compute the average value. The preliminary test results showed limited change in the strength between 7 and 28 days; therefore, the analysis was carried out on the 7-day results only. Other past work had also implemented this approach [ ]. The permeability of the PGC was evaluated using the falling head permeability test, as per the recommendation of ACI 522R-10 [ ]. The setup is shown in Figure 1c. The 7-day cylindrical sample with a cross-sectional area (A) was positioned in a closed container while its sides were coated and sealed with an epoxy mortar to avoid water leakage. The specimen was subjected to axial water flow from the top. Once the water valve was opened, the time (t) needed for the water to flow from the initial to the final head was recorded. Hence, the permeability coefficient (k) was determined using Equation (1), where k is the coefficient of permeability, Q is the discharge of water into the collection unit, t is the time elapsed (s), L is the length of the specimen, A is the cross-sectional area of the specimen, and h is the applied pressure head (m).

The mass loss of the 7-day PGC due to abrasive forces was determined using a Los Angeles abrasion machine, as per ASTM C131 [ ]. A Utest UTA-0602A Los Angeles abrasion machine consisted of a rolled steel drum that was rotated by a speed reducer driven by an electric motor at a speed of 32 rpm. The machine was equipped with an automatic counter, which can be preset to the required number of revolutions of the drum or the total working time, as shown in Figure 1d. The mass of the samples was measured prior to testing (W_1) and after 500 revolutions (W_2). Hence, the abrasion resistance was calculated using Equation (2):

$\text{Abrasion resistance} \, (\%) = \frac{W_2}{W_1} \times 100\%$

2.5. Framework for the Selection of Optimum PGC Mixes

The developed framework comprised three stages (Figure 2). The first stage aimed to design the PGC mixtures by considering various factors and subsequent levels while limiting the number of experiments and understanding the effect of the design factors. The Taguchi and TOPSIS methods were employed in the second stage to find the optimum mix proportions, considering single and multiple performance criteria, respectively. The final phase included establishing regression models to predict the properties of the PGC.

2.5.1. Taguchi Analysis

The Taguchi method limits the number of experiments using an orthogonal array based on Gauss's quadratic function [ ]. This method entails considering the desired property with factors and corresponding levels and evaluating it using a signal-to-noise ratio (S/N) [ ]. Optimization occurs when there is variation between the desired response value and noise factors with the required result. The S/N values are calculated using the target parameter (i.e., response) optimization characteristic, namely "larger is better," "smaller is better," and "nominal is better", as shown in Equations (3)–(5), respectively. "Larger is better" signifies the maximization of the response, while "smaller is better" means that the response is minimized.
Meanwhile, in the "nominal is better" characteristic, the standard deviation of the results is used to determine the target value [ ]. The analysis was carried out using the results of the considered properties obtained from the nine experimental mixes. The S/N ratio of "larger is better" was chosen for all properties, i.e., compressive strength, permeability, and abrasion resistance, as they should be maximized in pervious concrete applications:

$S/N_S = -10 \times \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n} Y_i^2\right) \rightarrow (\text{Smaller is better})$

$S/N_L = -10 \times \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n} \frac{1}{Y_i^2}\right) \rightarrow (\text{Larger is better})$

$S/N_N = -10 \times \log_{10}\left(\frac{1}{n}\sum_{i=1}^{n} (Y_i - Y_o)^2\right) \rightarrow (\text{Nominal is better})$

where S/N denotes the signal-to-noise ratio, n represents the number of experiments, Y_i is the optimized response, and Y_o characterizes the response mean.

2.5.2. TOPSIS Analysis

The Taguchi method of analysis optimizes the levels of factors using the calculated S/N ratios for a single property or response; however, to optimize the mixture proportions for multiple properties simultaneously, a TOPSIS-based Taguchi analysis was employed. Hwang and Yoon [ ] developed the TOPSIS method that examines the distance between the target responses and ideal positive and negative solutions. The TOPSIS optimization process was carried out as follows:
• The decision matrix was normalized to develop a comparison within the results of the different criteria. The S/N ratios obtained from the Taguchi method were utilized in the process, as per Equation (6), where $a_{ij}$ is the S/N ratio for a performance criterion (response) and $r_{ij}$ denotes the normalized vector of the decision matrix:

$r_{ij} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^{m} a_{ij}^2}}$

• After normalization, the weights of the performance criteria were assigned relevant to their importance. Different scenarios could be developed accordingly. The highest weight was given to the most desired or significant criteria to the user, and equal weights were assigned when the criteria were equally important to the user. A weighted normalized matrix was obtained by multiplying the normalized matrix values by the corresponding assigned weights.
• Furthermore, the maximum and minimum weighted normalized matrices were allotted as the positive ($v_j^+$) and negative ($v_j^-$) ideal solutions, respectively, and calculated as per Equations (7) and (8):

$v_j^+ = \left\{ \max(v_{ij}) \mid j \in J ,\ \min(v_{ij}) \mid j \in J' \right\}$

$v_j^- = \left\{ \min(v_{ij}) \mid j \in J ,\ \max(v_{ij}) \mid j \in J' \right\}$

• Respective separation measures from the ideal solutions, $S^+$ and $S^-$, were further obtained using Equations (9) and (10):

$S^+ = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^+\right)^2}$

$S^- = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^-\right)^2}$

• The optimal mix was then deduced from the ranking score or closeness coefficient ($C_i$) obtained using Equation (11):

$C_i = \frac{S^-}{S^+ + S^-}$

The values of $C_i$ for each scenario, ranging from 0 to 1, were then inputted as Taguchi responses. Using the "larger-is-better" characteristic, the Taguchi method analysis was performed to determine the S/N ratios, whereby the maximum S/N ratio for each level corresponded to the optimum mix.

3. Results and Discussion

3.1. Properties of PGC

3.1.1. Compressive Strength

The compressive strength of the PGC mixtures ranged between 12.7 and 40.7 MPa, as summarized in Table 4. Mix 1, comprising a binder content of 400 kg/m^3 and dune sand addition of 0%, exhibited a low strength response of less than 15 MPa. Conversely, mix 6, with a binder content of 450 kg/m^3, dune sand addition of 20%, AAS/B of 0.50, and SH solution molarity of 10 M, exhibited the highest strength response of 40.7 MPa.
For each group of mixes with a constant binder content (400, 450, and 500 kg/m ), those made with a 0% dune sand addition experienced the lowest strength, i.e., mixes 1, 4, and 7, respectively. Similarly, in each group of mixes, the highest strength was attained with a 20% dune sand addition (i.e., mixes 3, 6, and 9). Such a finding is independent of the AAS/B or SH molarity, indicating the critical influence of dune sand compared to these two factors. Contour plots, showing the effect of the different mix design factors on the compressive strength, are presented in Figure 3 . The highest compressive strength response was attained by adopting a specific range of factors. For instance, a strength response higher than 30 MPa could be achieved for PGC mixtures made with 450–500 kg/m of the binder content, 10–20% of a dune sand addition, 0.50–0.55 of AAS/B, and 8–12 M of SH molarity. The use of a high binder content (i.e., ≥450 kg/m ) led to an increase in strength, owing to a higher hydraulic reaction capacity with the presence of more CaO in the binding matrix [ ]. On the other hand, a low AAS/B ratio and the addition of dune sand (10–20%) increased the strength responses due to the improved particle packing density and reduced void content [ ]. The contribution of the dune sand to strength can be also attributed to its fineness, leading to a denser and more homogenous geopolymer concrete. In fact, mortar and concrete made with dune sand had an equivalent strength to that of their counterparts made with natural aggregates [ ]. Lastly, the incorporation of a SH solution with a molarity of 10 M led to a higher dissolution of hydroxide ions, reflecting higher strength responses (>35 MPa) [ ]. Higher or lower molarities reduced the strength response. It is noteworthy that adopting one level of factors without considering the others could lead to inferior strength responses. For example, the use of a low binder content of 400 kg/m with a 0% dune sand addition in the PGC yielded inferior strength responses below 15 MPa. 3.1.2. Permeability In order to secure adequate water percolation in PC, the permeability coefficient should fall within the range of 1.3–12.2 mm/s [ ]. As summarized in Table 4 , all concrete mixes were within this range. Of these mixes, mix 1, made with 400 kg/m of the binder, a 0% dune sand addition, AAS/B of 0.50, and SH solution of 8 M, attained the highest permeability of 7.23 mm/s. Conversely, mix 6, including a binder content of 450 kg/m , dune sand addition of 20%, AAS/B of 0.50, and SH molarity of 10 M, achieved the lowest permeability response of 3.02 mm/s. The results show that incorporating a higher binder content decreased the permeability, owing to the refinement of the pore structure that hindered the ease of water percolation [ ]. Furthermore, the addition of dune sand to the PGC mixes reduced the permeability. In fact, for each group of binder content (400, 450, and 500 kg/m ), the mixes made with a 20% dune sand addition had the lowest permeability. These findings highlight the more significant impact of binder content and dune sand addition on the permeability of PGC compared to AAS/B and SH molarity, evidenced by the contribution shown in the ANOVA section later. Analogous findings have been noted in conventional pervious concrete made with cement, where the hydraulic performance was greatly affected by the amount of fine aggregate and binder incorporated into the mix [ ]. In addition, the permeability results were in line with those of compressive strength. 
As such, an exponential relationship was developed between the two properties, as shown in Figure 4 . This correlation can be used to predict the permeability of PGC from its values with a high accuracy (coefficient of determination, R = 0.99). Figure 5 shows contour plots highlighting the effect of various mix design factors on the permeability of the PGC mixtures. It can be observed that high permeability values, exceeding 6 mm/s, were obtained while having a binder content, dune sand addition, AAS/B, and SH solution of 400–450 kg/m , 0–10%, 0.50–0.55, and 8–10 M, respectively. Further increases in the binder content, dune sand addition, AAS/B, and SH molarity led to lower permeability responses, with values lower than 4 mm/s being attained. Such a response could be owed to the higher hydraulic reaction capabilities with the increase in binder content, which leads to less pores in the binding matrix [ ]. Similarly, the increase in dune sand content lowered the percolation capacity of the PGC due to the granular structure and void-filling capacity of the dune sand [ ]. Additionally, the permeability was reduced to below 4 mm/s when the SH solution molarity increased and AAS/B decreased, owing to an increase in the dissolution of hydroxide ions and poor packing density, respectively [ ]. Nevertheless, all 9 mixes tested herein had permeability coefficients between 3.02 and 7.23 mm/s, which are acceptable for pervious concrete applications [ ]. Moreover, it is worth noting that designing a PGC mix with one suitable factor while neglecting others may lead to insufficient permeability. For example, using 500 kg/m of binder content with a dune sand addition of 10–20% resulted in a permeability value below 4 mm/s. 3.1.3. Abrasion Resistance Table 4 summarizes the abrasion resistance of the PGC mixes. The abrasion resistance ranged between 14.4 and 55.0%, with the lowest and highest values being those of mixes 1 and 6, respectively. The results show that the abrasion resistance followed a synonymous pattern to that of the compressive strength. As such, a relationship was established between the two properties. Figure 6 shows that the abrasion resistance and were correlated through a linear equation that could be used in predicting one performance criterion or response from the other with a high accuracy (R = 0.90). To understand the effect of the mix design parameters on the abrasion resistance, bi-variate contour plots were developed, as presented in Figure 7 . The PGC mixes attained an abrasion resistance above 40% when using 450–500 kg/m of the binder content, a 10–20% dune sand addition, 0.50–0.55 of AAS/B, and 8–12 M of SH solution. It can be noticed that these levels of mix design factors are similar to those for attaining a compressive strength exceeding 30 MPa. Furthermore, a decrease in the abrasion resistance was noted when a lower binder content and dune sand addition were employed. Meanwhile, the variation in SH molarity and AAS/B did not seem to significantly impact the abrasion resistance responses, evidenced by their low contributions presented in the ANOVA section later. 3.2. Optimization Results 3.2.1. Taguchi Analysis The Taguchi optimization process was implemented to seek the targeted experimental response of one performance criterion at a time. The optimum mixture proportions for each of the three performance criteria, including compressive strength, permeability, and abrasion resistance, were obtained using the signal-to-noise ratios (S/N), as illustrated in Figure 8 . 
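The two computational steps behind this analysis, the "larger is better" S/N ratio of Equation (4) and the TOPSIS closeness coefficient of Section 2.5.2, are straightforward to script. The following is a minimal Python sketch using made-up response values for illustration; it is not the authors' code and the numbers do not reproduce Tables 6 and 7.

import math

def sn_larger_is_better(values):
    """Taguchi 'larger is better' S/N ratio: -10*log10(mean(1/Y^2))."""
    n = len(values)
    return -10 * math.log10(sum(1.0 / y ** 2 for y in values) / n)

def topsis_closeness(matrix, weights):
    """TOPSIS closeness coefficients, treating every criterion as a benefit criterion."""
    m, k = len(matrix), len(matrix[0])
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(k)]
    v = [[weights[j] * row[j] / norms[j] for j in range(k)] for row in matrix]
    v_pos = [max(v[i][j] for i in range(m)) for j in range(k)]
    v_neg = [min(v[i][j] for i in range(m)) for j in range(k)]
    closeness = []
    for i in range(m):
        s_pos = math.sqrt(sum((v[i][j] - v_pos[j]) ** 2 for j in range(k)))
        s_neg = math.sqrt(sum((v[i][j] - v_neg[j]) ** 2 for j in range(k)))
        closeness.append(s_neg / (s_pos + s_neg))
    return closeness

# Made-up responses for three hypothetical mixes:
# [compressive strength (MPa), permeability (mm/s), abrasion resistance (%)]
responses = [
    [15.0, 7.0, 18.0],
    [30.0, 5.0, 40.0],
    [40.0, 3.5, 52.0],
]

# Step 1: convert each response to a 'larger is better' S/N ratio (one value per cell here).
sn_matrix = [[sn_larger_is_better([y]) for y in row] for row in responses]

# Step 2: rank the mixes for a balanced-performance scenario (equal weights of 1/3).
scores = topsis_closeness(sn_matrix, [1 / 3, 1 / 3, 1 / 3])
print([round(c, 3) for c in scores])  # the mix with the highest score ranks best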
The optimum mixes for the compressive strength, permeability, and abrasion resistance were A3B3C1D3, A1B1C3D1, and A3B3C1D3, respectively. Owing to their high degree of correlation, the optimum mixes for superior compressive strength and abrasion resistance were the same, having a binder content of 500 kg/m , a dune sand addition of 20%, AAS/B of 0.50, and SH molarity of 12 M. Conversely, the optimum mix to secure the best permeability corresponded to a 400 kg/m binder content, dune sand addition of 0%, AAS/B of 0.60, and SH solution molarity of 8 M. Hence, the Taguchi method revealed that a particular combination of levels was essential to provide either a superior strength and abrasion resistance or permeability of the PGC. 3.2.2. Analysis of Variance (ANOVA) The contribution of each factor toward the strength, permeability, and abrasion resistance responses was assessed through the analysis of variance (ANOVA) at a confidence level of 95%. Figure 9 shows that the dune sand addition had the highest contribution toward strength and abrasion resistance with a value of 59% for each, followed by the binder content having contributions of 35 and 34%, respectively. Increasing the dune sand and binder contents led to a higher strength and abrasion resistance due to a higher hydraulic reaction capacity with the presence of more CaO in the matrix, improved particle packing density, and a reduced void content [ ]. Conversely, the AAS/B and SH molarity had low contributions of less than 6%. Such results were confirmed with the strength results, whereby concrete with adequate strength, i.e., above 20 MPa, could be produced regardless of the AAS/B and SH molarity. These findings show that the binder content and dune sand addition predominantly controlled the mechanical and durability properties of the Furthermore, the contribution of the dune sand addition and binder content toward the permeability were the highest at 56 and 40%, respectively. Indeed, adding dune sand and increasing the binder content were associated with better particle packing and a higher degree of hydraulic reaction, leading to less pores in the binding matrix [ ]. The AAS/B and SH molarity had less significant impacts on the permeability with lower contributions of 3 and 1%, respectively. Evidently, the binder content was more impactful on the hydraulic performance of the PGC than on the mechanical and durability performance. Meanwhile, the opposite was noted for the impact of the dune sand addition; however, using a higher SH solution molarity or reducing the AAS/B resulted in a lower permeability of the PGC due to the respective increased dissolution of hydroxide ions and improved particle packing density [ ]. Nevertheless, it is possible to attain acceptable PGC permeability for concrete pavement applications. 3.2.3. TOPSIS Analysis Four different optimization scenarios were designed. The weights were assigned to the quality criteria as a value out of 10. The normalized weights were then obtained as the ratio of the weight of each criterion to the total weight of the investigated criteria. The normalized weights assigned to each criterion are presented in Table 5 . The first optimization scenario served to maximize the compressive strength while providing less weight to the permeability and abrasion resistance. Conversely, the second and third optimization scenarios aimed to maximize the permeability and abrasion resistance, respectively, while giving lesser weight to the other properties. 
A fourth optimization scenario, i.e., balanced performance, was carried out with equal weights being assigned to each criterion. The TOPSIS-based Taguchi optimization method aimed to maximize the three criteria. As summarized in Table 6, the S/N ratios for compressive strength, permeability, and abrasion resistance were calculated using Equation (4). These values were then employed in calculating the closeness coefficients, shown in Table 7. Then, the average S/N ratios of the respective levels for each factor were computed. The maximum S/N value represented the levels of the optimum PGC mix. As shown in Figure 10a,c, the first and third optimization scenarios (i.e., compressive strength and abrasion resistance with higher weights) yielded the same optimum level of factors of A3B3C3D3. In fact, the optimum mix for superior compressive strength and abrasion resistance comprised a binder content of 500 kg/m^3, dune sand addition of 20%, AAS/B of 0.60, and SH molarity of 12 M. Alternatively, the second scenario (i.e., permeability being dominant) revealed that A1B1C3D3 were the optimum levels of factors (Figure 10b), representing a binder content, dune sand addition, AAS/B, and SH solution molarity of 400 kg/m^3, 0%, 0.60, and 12 M, respectively. Meanwhile, the levels of the factors for optimization based on the fourth scenario (i.e., the three quality criteria were assigned equal weights of 0.33) were A1B2C3D3 (Figure 10d). Hence, the TOPSIS optimization process produced a mix of PGC having a binder content of 400 kg/m^3, dune sand addition of 10%, AAS/B of 0.60, and SH molarity of 12 M with a balanced performance among the compressive strength, permeability, and abrasion resistance.

3.3. Prediction of PGC Properties

A series of multivariable regression models was established to predict the compressive strength, permeability, and abrasion resistance of pervious geopolymer concrete while reflecting on the effect of different factors. Hence, the binder content (A), dune sand addition (B), AAS/B (C), and SH solution molarity (D) were employed at various levels for the proposed models. The levels of factors ranged from 400 to 500 kg/m^3 for the binder content, 0 to 20% for the dune sand addition, 0.50 to 0.60 for AAS/B, and 8 to 12 M for SH solution molarity. The form of the proposed quadratic model is given in Equation (12):

Property = α_0(A) + α_1(B) + α_2(C) + α_3(D) + α_4(A^2) + α_5(B^2) + α_6(C^2) + α_7(D^2)

Table 8 lists the coefficients of the established models for each quality criterion. The R^2 and root-mean-square-error (RMSE) values were in the respective ranges of 0.98–0.99 and 0.37–1.20. Accordingly, these models could be employed in predicting the properties of PGC with a high accuracy. Additionally, the properties of the optimum mixes (based on the TOPSIS analysis) could be estimated using the newly-developed regression models. Indeed, mixes A3B3C3D3, A1B1C3D3 and A1B2C3D3, i.e., the optimum mixes for scenarios 1/3, 2, and 4, were characterized by a compressive strength of 39.1, 10.5, and 15.6 MPa, respectively. Their corresponding permeability was 3.4, 7.7, and 6.9 mm/s, while their respective abrasion resistances were 52.8, 12.8, and 20.9%.

4. Conclusions

This study evaluated the effect of various mix design parameters on the compressive strength, permeability, and abrasion resistance of pervious geopolymer concrete (PGC). The Taguchi and TOPSIS methods were used to optimize the mixture proportions for superior performance.
Based on the experimental results and findings, the conclusions are as follows: • A compressive strength and abrasion resistance higher than 30 MPa and 40%, respectively, could be achieved for PGC mixtures made with 450–500 kg/m^3 of binder content, a 10–20% dune sand addition, 0.50–0.55 of AAS/B, and 8–12 M of SH molarity. The abrasion resistance could be accurately predicted from the compressive strength using a newly-developed regression model with a high coefficient of determination of R^2 = 0.90. • High permeability values, exceeding 6 mm/s, were obtained in PGC mixes made with a binder content, dune sand addition, AAS/B, and SH solution of 400–450 kg/m^3, 0–10%, 0.50–0.55, and 8–10 M, respectively. An analytical regression model was established to predict the permeability of the PGC from the compressive strength with a high accuracy (R^2 = 0.99). • Using the Taguchi method, the optimum mixes for superior compressive strength and abrasion resistance were made with a binder content, dune sand addition, AAS/B, and SH molarity of 500 kg/m^3, 20%, 0.50, and 12 M, respectively. Contrarily, the optimized mix design for superior permeability was made with a binder content of 400 kg/m^3, dune sand addition of 0%, AAS/B of 0.6, and SH molarity of 8 M. • An ANOVA revealed that the binder content and dune sand addition had the highest contributions to the compressive strength, permeability and abrasion resistance, while the AAS/B and SH solution molarity had lower contributions toward the performance of the PGC. • A TOPSIS-based Taguchi method was employed in optimizing the mixes in accordance with four optimization scenarios. For the scenarios where the compressive strength and abrasion resistance were more important to the user, the optimum mix comprised a binder content of 500 kg/m^3, dune sand addition of 20%, AAS/B of 0.60, and SH molarity of 12 M. As for the permeability-dominant scenario, the optimum mix had a binder content of 400 kg/m^3, dune sand addition of 0%, AAS/B of 0.60, and SH Molarity of 12 M. Meanwhile, the balanced performance scenario, i.e., equal weights for the three criteria, had an optimum mix comprised of a binder content of 400 kg/m^3, dune sand addition of 10%, AAS/B of 0.60, and SH Molarity of 12 M. • Multivariable regression models were established to predict the compressive strength, permeability, and abrasion resistance from the binder content, dune sand addition, AAS/B, and SH solution molarity with a high accuracy. The R^2 and RMSE values ranged from 0.98 to 0.99 and 0.37 to 1.20, respectively. The optimum mixes, namely, A3B3C3D3, A1B1C3D3, and A2B2C3D3, had compressive strengths of 39.1, 10.5, and 15.6 MPa, permeability of 3.4, 7.7, and 6.9 mm/s, and abrasion resistance of 52.8, 12.8, and 20.9%, respectively. The experimental results and findings highlight the feasibility of producing a sustainable cement-free pervious geopolymer concrete derived from waste materials with adequate compressive strength, permeability, and abrasion resistance for use as a sustainable urban drainage system (SUDS). Author Contributions Conceptualization, H.E.-H. and M.H.; methodology, F.H.A., H.E.-H., S.M., M.H. and K.H.M.; software, F.H.A. and A.E.-M.; validation, H.E.-H., M.H. and K.H.M.; formal analysis, F.H.A., H.E.-H. and A.E.-M.; investigation, F.H.A., H.E.-H., A.E.-M., S.M., M.H. and K.H.M.; resources, H.E.-H., M.H. and K.H.M.; data curation, F.H.A. and A.E.-M.; writing—original draft preparation, F.H.A. and A.E.-M.; writing—review and editing, H.E.-H., S.M., M.H. 
and K.H.M.; visualization, H.E.-H. and M.H.; supervision, H.E.-H. and M.H.; project administration, H.E.-H. and M.H.; funding acquisition, H.E.-H., M.H. and K.H.M. All authors have read and agreed to the published version of the manuscript. This research was funded by ASPIRE and UAE University, grant numbers 21N235 and 31R277. Institutional Review Board Statement Not applicable. Informed Consent Statement Not applicable. Data Availability Statement Data is available upon request from the corresponding author. The authors acknowledge the support of the UAE University staff and lab engineers. Conflicts of Interest The authors declare no conflict of interest. 1. Díaz, E.E.S.; Barrios, V.A.E. Development and use of geopolymers for energy conversion: An overview. Constr. Build. Mater. 2021, 315, 125774. [Google Scholar] [CrossRef] 2. Shah, K.W.; Huseien, G.F.; Xiong, T. Functional nanomaterials and their applications toward smart and green buildings. In New Materials in Civil Engineering; Butterworth-Heinemann: Oxford, UK, 2020; pp. 395–433. [Google Scholar] 3. Lo, F.-C.; Lee, M.-G.; Lo, S.-L. Effect of coal ash and rice husk ash partial replacement in ordinary Portland cement on pervious concrete. Constr. Build. Mater. 2021, 286, 122947. [Google Scholar] [CrossRef] 4. Elizondo-Martínez, E.-J.; Andrés-Valeri, V.-C.; Jato-Espino, D.; Rodriguez-Hernandez, J. Review of porous concrete as multifunctional and sustainable pavement. J. Build. Eng. 2019, 27, 100967. [ Google Scholar] [CrossRef] 5. Chandrappa, A.K.; Biligiri, K.P. Pervious concrete as a sustainable pavement material—Research findings and future prospects: A state-of-the-art review. Constr. Build. Mater. 2016, 111, 262–274. [Google Scholar] [CrossRef] 6. Hwang, S.S.; Cortés, C.M.M. Properties of mortar and pervious concrete with co-utilization of coal fly ash and waste glass powder as partial cement replacements. Constr. Build. Mater. 2020, 270, 121415. [Google Scholar] [CrossRef] 7. Zhong, R.; Leng, Z.; Poon, C.-S. Research and application of pervious concrete as a sustainable pavement material: A state-of-the-art and state-of-the-practice review. Constr. Build. Mater. 2018, 183, 544–553. [Google Scholar] [CrossRef] 8. Joshaghani, A.; Ramezanianpour, A.A.; Ataei, O.; Golroo, A. Optimizing pervious concrete pavement mixture design by using the Taguchi method. Constr. Build. Mater. 2015, 101, 317–325. [Google Scholar] [CrossRef] 9. Khankhaje, E.; Rafieizonooz, M.; Salim, M.R.; Khan, R.; Mirza, J.; Siong, H.C.; Salmiati, S. Sustainable clean pervious concrete pavement production incorporating palm oil fuel ash as cement replacement. J. Clean. Prod. 2018, 172, 1476–1485. [Google Scholar] [CrossRef] 10. Bilal, H.; Chen, T.; Ren, M.; Gao, X.; Su, A. Influence of silica fume, metakaolin & SBR latex on strength and durability performance of pervious concrete. Constr. Build. Mater. 2021, 275, 122124. [Google Scholar] 11. Anwar, F.H.; El-Hassan, H.; Hamouda, M.A. A Meta-Analysis on the Performance of Pervious Concrete with Partial Cement Replacement by Supplementary Cementitious Materials; ZEMCH: Dubai, United Arab Emirates, 2021. [Google Scholar] 12. Chen, X.; Wang, H.; Najm, H.; Venkiteela, G.; Hencken, J. Evaluating engineering properties and environmental impact of pervious concrete with fly ash and slag. J. Clean. Prod. 2019, 237, 117714. [Google Scholar] [CrossRef] 13. Presuel-Moreno, F.J.; Tang, F. 
Corrosion Propagation of Rebar Embedded in Low w/c Binary Concrete Blends Exposed to Seawater; CORROSION: Phoenix, AZ, USA, 2018; Available online: https:// onepetro.org/NACECORR/proceedings-abstract/CORR18/All-CORR18/NACE-2018-11167/126508 (accessed on 10 July 2022). 14. Elango, K.; Vivek, D.; Prakash, G.K.; Paranidharan, M.; Pradeep, S.; Prabhukesavaraj, M. Strength and permeability studies on PPC binder pervious concrete using palm jaggery as an admixture. Mater. Today Proc. 2020, 37, 2329–2333. [Google Scholar] [CrossRef] 15. Adil, G.; Kevern, J.T.; Mann, D. Influence of silica fume on mechanical and durability of pervious concrete. Constr. Build. Mater. 2020, 247, 118453. [Google Scholar] [CrossRef] 16. Zhang, H.; Hadi, M.N.S. Geogrid-confined pervious geopolymer concrete piles with FRP-PVCconfined concrete core: Concept and behaviour. Constr. Build. Mater. 2019, 211, 12–25. [Google Scholar] [ 17. Hadi, M.N.S.; Farhan, N.A.; Sheikh, M.N. Design of geopolymer concrete with GGBFS at ambient curing condition using Taguchi method. Constr. Build. Mater. 2017, 140, 424–431. [Google Scholar] [ CrossRef] [Green Version] 18. Anwar, F.H.; Malami, S.I.; Baba, Z.B.; Farouq, M.M.; Labbo, M.S.; Aliyu, D.S.; Umar, A.B. Compressive Strength of Lightweight Concrete and Benefit of Partially Replacing Cement by Animal Bone Aah (ABA). J. Emerg. Technol. Innov. Res. 2019, 6, 554–560. [Google Scholar] 19. Panagiotopoulou, C.; Tsivilis, S.; Kakali, G. Application of the Taguchi approach for the composition optimization of alkali activated fly ash binders. Constr. Build. Mater. 2015, 91, 17–22. [ Google Scholar] [CrossRef] 20. Garces, J.I.T.; Dollente, I.J.; Beltran, A.B.; Tan, R.R.; Promentilla, M.A.B. Life cycle assessment of self-healing geopolymer concrete. Clean. Eng. Technol. 2021, 4, 100147. [Google Scholar] [ 21. Sun, Z.; Lin, X.; Vollpracht, A. Pervious concrete made of alkali activated slag and geopolymers. Constr. Build. Mater. 2018, 189, 797–803. [Google Scholar] [CrossRef] 22. Zaetang, Y.; Wongsa, A.; Sata, V.; Chindaprasirt, P. Use of coal ash as geopolymer binder and coarse aggregate in pervious concrete. Constr. Build. Mater. 2015, 96, 289–295. [Google Scholar] [ 23. Ganesh, A.C.; Deepak, N.; Deepak, V.; Ajay, S.; Pandian, A. Utilization of PET bottles and plastic granules in geopolymer concrete. Mater. Today Proc. 2020, 42, 444–449. [Google Scholar] [ 24. Chen, X.; Guo, Y.; Ding, S.; Zhang, H.Y.; Xia, F.Y.; Wang, J.; Zhou, M. Utilization of red mud in geopolymer-based pervious concrete with function of adsorption of heavy metal ions. J. Clean. Prod. 2018, 207, 789–800. [Google Scholar] [CrossRef] 25. Ganesh, A.C.; Kumar, M.V.; Devi, R.K.; Srikar, P.; Prasad, S.; Sarath, R. Pervious Geopolymer Concrete under Ambient Curing. Mater. Today Proc. 2021, 46, 2737–2741. [Google Scholar] [CrossRef] 26. Rahman, S.S.; Khattak, M.J. Roller compacted geopolymer concrete using recycled concrete aggregate. Constr. Build. Mater. 2021, 283, 122624. [Google Scholar] [CrossRef] 27. Soundararajan, E.K.; Vaiyapuri, R. Geopolymer binder for pervious concrete. J. Croat. Assoc. Civ. Eng. 2021, 73, 209–218. [Google Scholar] 28. El-Hassan, H.; Ismail, N. Effect of process parameters on the performance of fly ash/GGBS blended geopolymer composites. J. Sustain. Cem. Mater. 2017, 7, 122–140. [Google Scholar] [CrossRef] 29. Malayali, A.B.; Chokkalingam, R.B. Mechanical properties of geopolymer pervious concrete. Int. J. Civ. Eng. Technol. 2018, 9, 2394–2400. [Google Scholar] 30. 
Taiwo, A.E.; Madzimbamuto, T.N.; Ojumu, T.V. Optimization of process variables for acetoin production in a bioreactor using Taguchi orthogonal array design. Heliyon 2020, 6, e05103. [Google Scholar] [CrossRef] 31. Ahmad, S.; Alghamdi, S.A. A Statistical Approach to Optimizing Concrete Mixture Design. Sci. World J. 2014, 2014, 7. [Google Scholar] [CrossRef] [Green Version] 32. El-Mir, A.; El-Hassan, H.; El-Dieb, A.; Alsallamin, A. Development and Optimization of Geopolymers Made with Desert Dune Sand and Blast Furnace Slag. Sustainability 2022, 14, 7845. [Google Scholar] [CrossRef] 33. Şimşek, B.; Uygunoğlu, T. Multi-response optimization of polymer blended concrete: A TOPSIS based Taguchi application. Constr. Build. Mater. 2016, 117, 251–262. [Google Scholar] [CrossRef] 34. Sharifi, E.; Sadjadi, S.J.; Aliha, M.; Moniri, A. Optimization of high-strength self-consolidating concrete mix design using an improved Taguchi optimization method. Constr. Build. Mater. 2019, 236, 117547. [Google Scholar] [CrossRef] 35. ASTM C618; Standard Specification for Coal Fly Ash and Raw or Calcined Natural Pozzolan for Use in Concrete. ASTM International: West Conshohocken, PA, USA, 2014. 36. El-Hassan, H.; Elkholy, S. Enhancing the performance of Alkali-Activated Slag-Fly ash blended concrete through hybrid steel fiber reinforcement. Constr. Build. Mater. 2021, 311, 125313. [Google Scholar] [CrossRef] 37. ASTM C33; Standard Specification for Concrete Aggregates. ASTM International: West Conshohocken, PA, USA, 2018. 38. Dhemla, P.; Somani, P.; Swami, B.; Gaur, A. Optimizing the design of sintered fly ash light weight concrete by Taguchi and ANOVA analysis. Mater. Today Proc. 2022, 62, 495–503. [Google Scholar] [ 39. Ismail, N.; El-Hassan, H. Development and Characterization of Fly Ash–Slag Blended Geopolymer Mortar and Lightweight Concrete. J. Mater. Civ. Eng. 2018, 30, 04018029. [Google Scholar] [CrossRef] 40. El-Hassan, H.; Elkholy, S. Performance Evaluation and Microstructure Characterization of Steel Fiber–Reinforced Alkali-Activated Slag Concrete Incorporating Fly Ash. J. Mater. Civ. Eng. 2019, 31, 04019223. [Google Scholar] [CrossRef] 41. El-Hassan, H.; Shehab, E.; Al-Sallamin, A. Effect of curing regime on the performance and microstructure characteristics of alkali-activated slag-fly ash blended concrete. J. Sustain. Cem. Mater. 2021, 10, 289–317. [Google Scholar] [CrossRef] 42. Ling, Y.; Wang, K.; Li, W.; Shi, G.; Lu, P. Effect of slag on the mechanical properties and bond strength of fly ash-based engineered geopolymer composites. Compos. Part B Eng. 2019, 164, 747–757. [Google Scholar] [CrossRef] 43. Younis, K.H. Influence of sodium hydroxide (NaOH) molarity on fresh properties of self-compacting slag-based geopolymer concrete containing recycled aggregate. Mater. Today Proc. 2021, 56, 1733–1737. [Google Scholar] [CrossRef] 44. Ekmen, S.; Mermerdaş, K.; Algın, Z. Effect of oxide composition and ingredient proportions on the rheological and mechanical properties of geopolymer mortar incorporating pumice aggregate. J. Build. Eng. 2020, 34, 101893. [Google Scholar] [CrossRef] 45. ACI Committee 522. 522R-10: Report on Pervious Concrete; Technical Documents; American Concrete Institute: Farmington Hills, MI, USA, 2010; p. 38. [Google Scholar] 46. BS EN 12390-3:2009; British Standard. Testing Hardened Concrete. Part 3: Compressive Strength of Test Specimens. British Standards Institution: London, UK, 2011. 47. Najm, O.; El-Hassan, H.; El-Dieb, A. 
Optimization of alkali-activated ladle slag composites mix design using taguchi-based TOPSIS method. Constr. Build. Mater. 2022, 327, 126946. [Google Scholar] 48. Hardjito, D.; Wallah, S.E.; Sumajouw, D.M.J. On the development of fly ash-based geopolymer concrete. ACI Mater. J. 2004, 101, 467–472. [Google Scholar] 49. Joseph, B.; Mathew, G. Influence of aggregate content on the behavior of fly ash based geopolymer concrete. Sci. Iran. 2012, 19, 1188–1194. [Google Scholar] [CrossRef] [Green Version] 50. El-Hassan, H.; Kianmehr, P. Pervious concrete pavement incorporating GGBS to alleviate pavement runoff and improve urban sustainability. Road Mater. Pavement Des. 2016, 19, 167–181. [Google Scholar] [CrossRef] 51. ASTM C131; Standard Test Method for Resistance to Degradation of Small-Size Coarse Aggregate by Abrasion and Impact in the Los Angeles Machine. ASTM International: West Conshohocken, PA, USA, 52. Phadke, M.S. Quality Engineering Using Robust Design, 1st ed.; Prentice-Hall: Englewood Cliffs, NJ, USA, 1989. [Google Scholar] 53. Dedania, H.V.; Shah, V.R.; Sanghvi, R.C. Portfolio Management: Stock Ranking by Multiple Attribute Decision Making Methods. Technol. Invest. 2015, 6, 141–150. [Google Scholar] [CrossRef] [Green 54. Kabir, S.M.A.; Alengaram, U.J.; Jumaat, M.Z.; Sharmin, A.; Islam, A. Influence of Molarity and Chemical Composition on the Development of Compressive Strength in POFA Based Geopolymer Mortar. Adv. Mater. Sci. Eng. 2015, 2015, 15. [Google Scholar] [CrossRef] [Green Version] 55. Fang, G.; Ho, W.K.; Tu, W.; Zhang, M. Workability and mechanical properties of alkali-activated fly ash-slag concrete cured at ambient temperature. Constr. Build. Mater. 2018, 172, 476–487. [ Google Scholar] [CrossRef] 56. El-Hassan, H.; Hussein, A.; Medljy, J.; El-Maaddawy, T. Performance of Steel Fiber-Reinforced Alkali-Activated Slag-Fly Ash Blended Concrete Incorporating Recycled Concrete Aggregates and Dune Sand. Buildings 2021, 11, 327. [Google Scholar] [CrossRef] 57. El-Hassan, H.; Medljy, J.; El-Maaddawy, T. Properties of Steel Fiber-Reinforced Alkali-Activated Slag Concrete Made with Recycled Concrete Aggregates and Dune Sand. Sustainability 2021, 13, 8017. [Google Scholar] [CrossRef] 58. El-Hassan, H.; Kianmehr, P.; Zouaoui, S. Properties of pervious concrete incorporating recycled concrete aggregates and slag. Constr. Build. Mater. 2019, 212, 164–175. [Google Scholar] [CrossRef] 59. Akkaya, A.; Çağatay, I.H. Investigation of the density, porosity, and permeability properties of pervious concrete with different methods. Constr. Build. Mater. 2021, 294, 123539. [Google Scholar ] [CrossRef] 60. Aliabdo, A.A.; Elmoaty, A.E.M.A.; Fawzy, A.M. Experimental investigation on permeability indices and strength of modified pervious concrete with recycled concrete aggregate. Constr. Build. Mater. 2018, 193, 105–127. [Google Scholar] [CrossRef] 61. Anwar, F.H.; El-Hassan, H.; Hamouda, M.; Hinge, G.; Mo, K.H. Meta-Analysis of the Performance of Pervious Concrete with Cement and Aggregate Replacements. Buildings 2022, 12, 461. [Google Scholar ] [CrossRef] 62. Chuah, S.; Duan, W.; Pan, Z.; Hunter, E.; Korayem, A.; Zhao, X.; Collins, F.; Sanjayan, J. The properties of fly ash based geopolymer mortars made with dune sand. Mater. Des. 2016, 92, 571–578. [ Google Scholar] [CrossRef] Figure 1. Images of the ( ) representative cubic sample, ( ) compression machine, ( ) permeability setup [ ], and ( ) abrasion machine. Figure 8. 
S/N ratios of Taguchi analysis for (a) compressive strength, (b) permeability, and (c) abrasion resistance. Figure 9. Contribution of factors toward Taguchi optimization of the mix for superior (a) compressive strength, (b) permeability, and (c) abrasion resistance. Component, Unit FA GBFS Dune Sand SiO[2], (wt.%) 48.0 27.8 64.9 CaO, (wt.%) 3.3 58.6 14.1 Al[2]O[3], (wt.%) 23.1 8.1 3.0 Fe[2]O[3], (wt.%) 12.5 1.3 0.7 MgO, (wt.%) 1.5 6.0 1.3 Na[2]O, (wt.%) 0.0 0.2 0.4 Others, (wt.%) 10.5 0.3 15.5 LOI, (wt.%) 1.1 0.9 0.0 Specific gravity 2.32 2.70 2.77 Unit weight, kg/m^3 1262 1209 1660 Factor Level 1 Level 2 Level 3 Binder content (kg/m^3) 400 450 500 Dune sand addition (wt.%) 0 10 20 AAS/B ratio 0.50 0.55 0.60 SH Molarity (M) 8 10 12 Mix No. Binder Content (kg/m^3) Dune Sand AAS/B Ratio SH Addition (wt.%) Molarity (M) 1 400 0 0.50 8 2 400 10 0.55 10 3 400 20 0.60 12 4 450 0 0.55 12 5 450 10 0.60 8 6 450 20 0.50 10 7 500 0 0.60 10 8 500 10 0.50 12 9 500 20 0.55 8 Mix Dune Sand AAS/B SH No. Binder Content (kg/m^3) Addition (wt.%) Ratio Molarity Compressive Strength (MPa) Permeability (mm/s) Abrasion Resistance (%) 1 400 0 0.50 8 12.7 7.23 14.4 2 400 10 0.55 10 16.5 6.62 21.2 3 400 20 0.60 12 26.4 5.15 35.3 4 450 0 0.55 12 22.1 5.63 28.1 5 450 10 0.60 8 22.8 5.41 29.6 6 450 20 0.50 10 40.7 3.02 55.0 7 500 0 0.60 10 21.4 5.76 27.5 8 500 10 0.50 12 32.2 4.25 43.5 9 500 20 0.55 8 38.6 3.23 50.0 Response Criterion Normalized Weights for Each Criterion Target Values Scenario 1 Scenario 2 Scenario 3 Scenario 4 Compressive Strength Larger is better 0.80 0.10 0.10 0.33 Permeability Larger is better 0.10 0.80 0.10 0.33 AbrasionResistance Larger is better 0.10 0.10 0.80 0.33 Mix No. S/N 1 S/N 2 S/N 3 (Compressive Strength) (Permeability) (Abrasion Resistance) 1 22.08 17.18 23.17 2 24.35 16.42 26.53 3 28.43 14.24 30.96 4 26.89 15.01 28.97 5 27.16 14.66 29.43 6 32.19 9.60 34.81 7 26.61 15.21 28.79 8 30.16 12.57 32.77 9 31.73 10.18 33.98 Mix No. Scenario 1 Scenario 2 Scenario 3 Scenario 4 1 0.1552 0.8903 0.1472 0.5035 2 0.2654 0.8690 0.3147 0.5589 3 0.6282 0.6127 0.6670 0.6305 4 0.4840 0.7096 0.5042 0.5971 5 0.5084 0.6651 0.5410 0.5933 6 0.8448 0.1097 0.8528 0.4965 7 0.4586 0.7347 0.4899 0.5976 8 0.7778 0.3992 0.8017 0.5839 9 0.8444 0.1308 0.8419 0.5015 Compressive Strength Permeability Abrasion Resistance α[0](A) 1.122 −0.179 1.687 α[1](B) 0.194 −0.035 0.447 α[2](C) −921.000 176.000 −1457.000 α[3](D) 0.800 0.220 4.400 α[4](A^2) −0.001 0.001 −0.002 α[5](B^2) 0.032 −0.004 0.036 α[6](C^2) 791.000 −154.000 1261.000 α[7](D^2) −0.015 −0.014 −0.168 R^2 0.99 0.98 0.99 RMSE 1.04 0.37 1.20 Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https: Share and Cite MDPI and ACS Style Anwar, F.H.; El-Hassan, H.; Hamouda, M.; El-Mir, A.; Mohammed, S.; Mo, K.H. Optimization of Pervious Geopolymer Concrete Using TOPSIS-Based Taguchi Method. Sustainability 2022, 14, 8767. https:// AMA Style Anwar FH, El-Hassan H, Hamouda M, El-Mir A, Mohammed S, Mo KH. Optimization of Pervious Geopolymer Concrete Using TOPSIS-Based Taguchi Method. Sustainability. 2022; 14(14):8767. https://doi.org/ Chicago/Turabian Style Anwar, Faiz Habib, Hilal El-Hassan, Mohamed Hamouda, Abdulkader El-Mir, Safa Mohammed, and Kim Hung Mo. 2022. 
"Optimization of Pervious Geopolymer Concrete Using TOPSIS-Based Taguchi Method" Sustainability 14, no. 14: 8767. https://doi.org/10.3390/su14148767 Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details Article Metrics
{"url":"https://www.mdpi.com/2071-1050/14/14/8767","timestamp":"2024-11-03T10:20:13Z","content_type":"text/html","content_length":"525256","record_id":"<urn:uuid:74fb3490-f86a-4aec-b192-5e59ea4bf184>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00161.warc.gz"}
Finding Patterns in Arrays Recently, my colleague Jeff asked me if I would look at some code he wrote to find a pattern of numbers in a larger array. Without looking at his code, I asked if he had tried using strfind, despite his data not being strings. He found that it solved his problem and was faster than his M-file. In the meantime, I wanted to see what it took for me to write a simple algorithm I was thinking of in an M-file. I show and discuss the results here. Simple Test Data Let me start off with really simple test data to be sure all algorithms are getting the right answers. a = [0 1 4 9 16 4 9]; b = double('The year is 2003.'); First Algorithm : findpattern Here's the first findpattern algorithm. type findpattern function idx = findpattern(in_array, pattern) %FINDPATTERN Find a pattern in an array. % K = FINDPATTERN(ARRAY, PATTERN) returns the starting indices % of any occurences of the PATTERN in the ARRAY. ARRAY and PATTERN % can be any mixture of character and numeric types. % Examples: % a = [0 1 4 9 16 4 9]; % b = double('The year is 2003.'); % findpattern(a, [4 9]) returns [3 6] % findpattern(a, [9 4]) returns [] % findpattern(b, '2003') returns 13 % findpattern(b, uint8('2003')) returns 13 % See also STRFIND, STRCMP, STRNCMP, STRMATCH. % Algorithm: % Find all of the occurrences of each number of the pattern. % For an n element pattern, the result is an n element cell array. The % i-th cell contains the positions in the input array that match the i-th % element of the pattern. % When the pattern exists in the input stream, there will be a sequence % of consecutive integers in consecutive cells. % As currently implemented, this routine has poor performance for patterns % with more than half a dozen elements where the first element in the % pattern matches many positions in the array. locations = cell(1, numel(pattern)); for p = 1:(numel(pattern)) locations{p} = find(in_array == pattern(p)); % Find instances of the pattern in the array. idx = []; for p = 1:numel(locations{1}) % Look for a consecutive progression of locations. start_value = locations{1}(p); for q = 2:numel(locations) found = true; if (~any((start_value + q - 1) == locations{q})) found = false; if (found) idx(end + 1) = locations{1}(p); You'll notice that Jeff chooses to store derived information on the pattern being present in a cell array, and then looks for consecutive locations. Here are some results using findpattern. First I set f to be a function handle to the function in question. Then I can reuse the same code for the other cases simply by redefining the function. f = @findpattern t(4) = false; t(1) = isequal(f(a, [4 9]), [3 6]); t(2) = isempty(f(a, [9 4])); t(3) = isequal(f(b, '2003'),13); t(4) = isequal(f(b, uint8('2003')),13); AOK = all(t==true) f = AOK = My Homebrew Algorithm : findPattern2 Here's my own algorithm. The idea here is to find possible pattern locations first, and winnow them out, marching through the pattern, which I am assuming is generally smaller, and often a lot smaller, than the data. type findPattern2 function start = findPattern2(array, pattern) %findPattern2 Locate a pattern in an array. % indices = findPattern2(array, pattern) finds the starting indices of % pattern within array. % Example: % a = [0 1 4 9 16 4 9]; % patt = [4 9]; % indices = findPattern2(a,patt) % indices = % 3 6 % Let's assume for now that both the pattern and the array are non-empty % VECTORS, but there's no checking for this. % For this algorithm, I loop over the pattern elements. 
len = length(pattern); % First, find candidate locations; i.e., match the first element in the % pattern. start = find(array==pattern(1)); % Next remove start values that are too close to the end to possibly match % the pattern. endVals = start+len-1; start(endVals>length(array)) = []; % Next, loop over elements of pattern, usually much shorter than length of % array, to check which possible locations are valid still. for pattval = 2:len % check viable locations in array locs = pattern(pattval) == array(start+pattval-1); % delete false ones from indices start(~locs) = []; Get results and time it. f = @findPattern2 t(1) = isequal(f(a, [4 9]), [3 6]); t(2) = isempty(f(a, [9 4])); t(3) = isequal(f(b, '2003'),13); t(4) = isequal(f(b, uint8('2003')),13); AOK = all(t==true) f = AOK = Using strfind Next I test using the same data with strfind. Despite its name, strfind can happily handle non-character datatypes, and particularly doubles and integers. f = @strfind t(1) = isequal(f(a, [4 9]), [3 6]); t(2) = isempty(f(a, [9 4])); t(3) = isequal(f(b, '2003'),13); t(4) = isequal(f(b, uint8('2003')),13); AOK = all(t==true) f = AOK = Use Case and Performance Jeff described the problem he was solving in more detail. He has a file with multiple images in it, with data stored as uint8. The images are separated by a particular bit pattern. Let me show you one of the images in the sequence, after processing and extracting the frames. load forloren image(X(:,:,:,17)), axis off whos X Name Size Bytes Class Attributes X 4-D 26726400 uint8 Now let me show and time finding the pattern in the raw data. The data contain 29 images. load imrawdata pattern = [254 255 0 224]; f = @()findpattern(rawdata, pattern); tfind = timeit(f); f = @()findPattern2(rawdata, pattern); tfind(2) = timeit(f); f = @()strfind(rawdata, pattern); tfind(3) = timeit(f) Name Size Bytes Class Attributes rawdata 1x1259716 1259716 uint8 tfind = 0.80941 0.011273 0.019194 Puzzle and Next Steps In the case of the larger dataset, strfind is not the fastest algorithm, though I found with much smaller data, strfind outperformed findPattern2. Some possible reasons why findPattern2 is the fastest of the three algorithms: it is not as general purpose and does no error checking, it was only written to work on vectors, it does nothing to handle cases where |NaN|s might be involved. If I find out why, I will let you know. In the meantime, if you have any thoughts to add, please comment here. To leave a comment, please click here to sign in to your MathWorks Account or create a new one.
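As a closing illustration (not part of the original post), here is the same candidate-filtering idea as a rough Python/NumPy sketch; the function name and the 0-based indices are mine, and it assumes 1-D numeric inputs with no NaN handling:

import numpy as np

def find_pattern(array, pattern):
    # Return the 0-based start indices where pattern occurs in array,
    # mirroring findPattern2: keep candidate starts that match the first
    # pattern element, then winnow them one pattern element at a time.
    array = np.asarray(array)
    pattern = np.asarray(pattern)
    n, m = array.size, pattern.size
    if m == 0 or m > n:
        return np.array([], dtype=int)
    start = np.flatnonzero(array[:n - m + 1] == pattern[0])
    for k in range(1, m):
        start = start[array[start + k] == pattern[k]]
        if start.size == 0:
            break
    return start

a = np.array([0, 1, 4, 9, 16, 4, 9])
print(find_pattern(a, [4, 9]))   # [2 5], i.e. MATLAB positions 3 and 6

Like the M-file above, the cost is driven by how many positions match the first pattern element, so relative timings against a library routine such as strfind will depend on the data.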
{"url":"https://blogs.mathworks.com/loren/2008/09/08/finding-patterns-in-arrays/?s_tid=blogs_rc_1","timestamp":"2024-11-14T08:46:46Z","content_type":"text/html","content_length":"175269","record_id":"<urn:uuid:0eb8ee51-bcc6-485f-9c66-54e7be3c43ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00039.warc.gz"}
Quick Modular Calculations (Part 3) This article concludes the 3-part series. Cassio Neri presents a new algorithm that also works for 64-bit operands. The first two instalments of this series [ Neri19 ] and [ Neri20 ] showed three algorithms, minverse , mshift and mcomp , for evaluating expressions of the form n % d ⋚ r , where d is known by the compiler and ⋚ denotes any of == , != , < , <= , > or >= . While minverse is restricted to expressions where ⋚ is either == or != , mshift and mcomp are not. However, the last two must perform intermediate calculations in domains larger than their input. Specifically, for 32-bit data, computations are done in 64-bit registers. What if the input is already large? Will we still need it when it is 64? Yes, but these algorithms will not send you a Valentine, birthday greetings or bottle of wine. This article presents another algorithm that overcomes this limitation. I shall refer to it as new_algo since, to the best of my knowledge, it is original. ^ 1 All algorithms, including new_algo , are implemented in [ qmodular ]. Recall once again that the intention is not to ‘beat’ the compiler but, on the contrary, to help it. The hope is that compiler writers will consider incorporating these algorithms into their products for the benefit of all programmers. As I reported in [ Neri19 ], minverse has been implemented by GCC since version 9.1 but the implementation falls back to a less efficient algorithm for certain values of r . Clang 9.0 also uses minverse (only when r == 0 but its trunk version extends the usage for all other remainders). (See [ Godbolt ].) Other major compilers do not implement minverse and none implements any of the other algorithms presented in this series. Recall and warm up Figure 1 ^ 2 graphs the time taken by different algorithms to check whether each element of an array of 65,536 uniformly distributed unsigned 64-bit dividends in the interval [0, 10 ^ 6 ] leaves a remainder less than 5 when divided by 7. As usual, built_in corresponds to n % 7 < 5 as emitted by the compiler. (As a motivational note, when n is the number of days since a certain Monday, n % 7 < 5 is a check for weekdays.) Bars labelled mshift and mcomp correspond to algorithms covered in [ Neri20 ]. Finally, new_algo is the subject of this article. Observe that some previously seen algorithms are absent from Figure 1. As explained, minverse cannot be used with < and efficient implementations of mshift_promoted and mcomp_promoted for 64-bit inputs require full hardware support for 128-bit calculations, which is not provided by x86_64 CPUs. Recall that mshift and mcomp have preconditions and yield wrong results when n is above a certain threshold. Therefore, although very fast, the lack of generality forces the compiler to discard them. The time taken to scan the array of dividends is used as unit of time. All measurements encompass this time. They are ^ 3 3.70 for built_in , 1.78 for mshift , 1.66 for mcomp and 2.46 for new_algo . Subtracting the scanning time and taking results relatively to built_in ’s yields 0.78 / 2.70 ≈ 0.29 for mshift, 0.66 / 2.70 ≈ 0.24 for mcomp and 1.46 / 2.70 ≈ 0.54 for new_algo . These numbers, however, depend on the divisor. Listing 1 contrasts the code generated by GCC 8.2.1 with -O3 for built_in and new_algo . Finally, recall that we are interested in modular expressions where the divisor is a compile time constant and the dividend is a runtime variable. The value compared to the remainder can be either. 
They all have the same unsigned integer type which implements modulus 2 w arithmetic . (Typically, w = 32 or w = 64.) We focus on GCC 8.2.1 for the x86_64 target but some ideas might also apply to other platforms. The new_algo The fractional part of n / d corresponds to the remainder of the division. Indeed, Euclidean division states that any integer n can be uniquely written as n = q ∙ d + r , where q and r are integers with 0 ≤ r < d . Dividing this equality by d gives that q and r / d are, respectively, the integer and fractional parts of n / d . Hence, knowing the fractional part of n / d , or an approximation of it, is enough to identify r . Since d is known by the compiler, an approximation M of 1 / d is precomputed at compile time and only the cheaper multiplication n ∙ M is performed at runtime. The multiplication has the effect of increasing the error and when n is large enough the result is unreliable to allow the identification of r . The last paragraph’s arguments supported the works of mshift and mcomp [ Neri20 ] and they equally support new_algo . The novelty is that this algorithm, rather than accepting the approximation error until the result becomes unreliable, takes steps to reduce the error. As a consequence, the algorithm’s applicability is extended. As usual in this series, we shall present new_algo ’s main ideas by means of examples. (Deeper mathematical proofs of correctness can be seen in [ qmodular ] and references therein.) Although a rigorous proof is out of scope, the fundamental idea behind new_algo ’s error reduction has elementary school level ^ 4 : the periodicity of decimal expansions of rational numbers. For example, 1 / 3 = 0.333… and the sequence of 3s goes on indefinitely. Also, 1 / 7 = 0.142857… and 142857 repeats over and over. Some readers might object and point to terminating expansions like 1 / 2 = 0.5 or, even more obvious, 1 / 1 = 1. Nevertheless, a terminating expansion can be identified with a periodic one by appending an infinity of trailing 0s. For instance, 0.5 = 0.5000… and 1 = 1.000…. Furthermore, a terminating expansion is also identified with yet another periodic representation ending in 9s. Indeed, recall (or try to convince yourself) that 0.5 = 0.4999… and 1 = 0.999…. More generally, periodicity occurs for any base and, in particular, in binary expansions. For instance, 1 / 7 = (0.001001...) [ 2 ] with repeating 001 bits. Reality kicks in again to remind us that CPUs have finite precision. In practice n , d and r are 32 or 64 bits long but, for ease of exposition, we assume the number of bits is w = 10. Hence, truncation at the 10 ^ th bit after the binary point yields 1 / 7 ≈ (0.0010010010) [ 2 ] . Keeping the example of the previous section in mind, we set d = 7 and the approximation M = (0.0010010010) [ 2 ] of 1 / 7. Table 1 contrasts, for all n ∈ {0, …, 8}, the binary expansions of n / 7 and n ∙ M . Bits are grouped in triples to highlight the period. Observe that for n ≤ 7, multiplication by 1 / 7 and by M can be done separately on each triple, since the result of one group does not spill to its left. (Take notice that the 2 ^ nd column shows n / 7 = (0.111111…) [ 2 ] which is the binary analogon of 1 = 0.999…. This exemplifies the relevance for new_algo of the periodic representation of terminating expansions.) n (n/7) [2 ] (n ∙ M) [2 ] 0 0. 000 000 000 000 ... 0. 000 000 000 0 ... 1 0. 001 001 001 001 ... 0. 001 001 001 0 ... 2 0. 010 010 010 010 ... 0. 010 010 010 0 ... 3 0. 011 011 011 011 ... 0. 011 011 011 0 ... 4 0. 
100 100 100 100 ... 0. 100 100 100 0 ... 5 0. 101 101 101 101 ... 0. 101 101 101 0 ... 6 0. 110 110 110 110 ... 0. 110 110 110 0 ... 7 0. 111 111 111 111 ... 0. 111 111 111 0 ... 8 1. 001 001 001 001 ... 1. 001 001 000 0 ... Table 1 The row for n = 8 is the first where the fractional parts of n / 7 and n ∙ M , up to the 9 ^ th bit, differ. (The relevant triple of bits is emphasised.) To understand the origin of this difference, we observe that this row can be obtained from the one for n = 1 by multiplication by 8 or, equivalently, by left shift by 3. The 2 ^ nd column illustrates infinite precision and the periodicity ensures that any triple of bits after the binary point has a replica on its right which is also left-shifted. This contrasts to the 3 ^ rd column, where the bits feeding the left shift at the rightmost position are 0s. Having realised that there is an error coming from the right, we shall see how new_algo reduces it. The previous paragraph pointed out a discrepancy between the fractional parts of n / 7 and n ∙ M for n = 8. Observe now the disparity between the integral parts. It turns out the two divergences compensate each other and by uniting the two parts we can correct the error of division. Figure 2 illustrates the steps of the process (grey 0-bits are included for clarity) as applied to n = 8: right shift the integer part of n ∙ M by 9 bits and add the result to n ∙ M . Comparing the outcome with 8 / 7 (shown in Table 1) we realise it is much closer than the original value 8 ∙ M is. The procedure is very effective in making the fractional parts of the result for n = 8 identical to that for n = 1, that is, (0.001 001 001 0) [ 2 ] . The fact that multiplication by n = 8 is equivalent to left shift by 3 bits makes clear that the quantity on the left of the binary point is the exact amount required to correct the error on the right. For other values of n , this property might be more difficult to see but it still holds. For instance, consider n = 15, which is the next dividend with remainder 1. Then n ∙ M = (10. 001 000 111 0) [ 2 ] and the correction process in shown in Figure 3. Again, the result’s fractional part matches the one obtained for n = 8. Therefore, the outcomes of the correction for n = 1, n = 8 and n = 15 have all the same fractional part. Unfortunately, the process is not so good for all n . Indeed, for n = 519 we have n ∙ M = (1 001 001. 111 111 111 0) [ 2 ] and Figure 4 shows that the fractional part of the outcome is off by deficiency when compared to the one obtained for n = 1. The disparity appears at the 9 ^ th bit (emphasised) and thus, the error is still small. By proximity, the result suggests that the remainder is 1, which is correct. It is worth noticing that n = 519 is not the smallest value for which the correction attempt does not zero the error out. Indeed, the row for n = 7 of Table 1 shows that the integer part of n ∙ M is 0 and thus, the correction attempt does not provoke any change to the fractional part of n ∙ M = (0. 111 111 111 0) [ 2 ] . Moreover, the outcome is quite far from the one for n = 0. This sounds like a showstopper, given that proximity is the key to recognise remainders and it has failed to hold here. Fortunately, this is more an annoyance than a real issue. Important points to retain follow. As n takes increasing values with the same remainder r > 0, the fractional part of the outcome f ( n ) starts, for n = r , at f ( r ) = r ∙ M and, at each stage, it either stays the same or decreases by a tiny amount. 
As long as f ( n ) does not fall enough to reach f ( r - 1), we are sure the remainder is r . Furthermore, when r is large enough, f ( n ) does not change at all, that is, f ( n ) = r ∙ M for all n with remainder r in the range of interest. Therefore, for n in a certain range, the remainder of n divided by d is r if, and only if, ( r - 1) ∙ M < f ( n ) ≤ r ∙ M or, equivalently, r ∙ M < f ( n ) + M ≤ ( r + 1) ∙ M . The analysis of the case r = 0 is a bit trickier but the same result holds. It also follows that the remainder of n divided by d is less than r if, and only if, 0 < f ( n ) + M ≤ r ∙ M . To finish this section, a very important limitation of new_algo must be mentioned: it is not available for all divisors. Indeed, it is easy to see that, for the correction to work, at least one full period must fit in 10 bits but, as it turns out, the period of 1 / 13 in binary has length 12. Therefore, new_algo cannot be used for d = 13 in our idealised CPU. In a real 64-bit machine the smallest divisor with this issue is d = 67. (The period of 1 / 67 has length 66.) Towards an implementation The presentation so far has evolved around the idea of splitting numbers into their integer and fractional parts. We shall see now how to turn this idea into a working implementation based on unsigned integers values only. Again, for ease of exposition, we assume that these numbers and CPU registers are 10-bits long. The algorithm’s first step is calculating n ∙ M where n is an integer and M has 10 bits after the binary point. To bring the product to the realm of integers, the multiplicand M is substituted by M ∙ 2 ^ 10 . To keep the notation simple, the latter quantity is still denoted M . Hence, in our example we set M = (0010010010) [ 2 ] . Another practical issue remains. Now n and M are 10 bits long and thus, the product n ∙ M has up to 20 bits. How can a 10-bit CPU calculate such number? In the real world, the question is how can a x86_64 CPU compute the 128-bit product of two 64-bit operands? The mul instruction (see Listing 1) does exactly that, by splitting the 128-bit product into its 64-bit higher and lower parts and storing them in registers rdx and rax , respectively. Coming back to our exposition, we assume that our imaginary 10-bit CPU provides a similar mul instruction. 0: movabs $0x2492492492492493,%rdx a: mov %rdi,%rax d: mul %rdx 10: mov %rdi,%rax 13: sub %rdx,%rax 16: shr %rax 19: add %rax,%rdx 1c: shr $0x2,%rdx 20: lea 0x0(,%rdx,8),%rax 28: sub %rdx,%rax 2b: sub %rax,%rdi 2e: cmp $0x4,%rdi 32: setbe %al 35: retq 0: movabs $0x2492492492492492,%rcx a: mov %rdi,%rax d: mul %rcx 10: add %rcx,%rax 13: lea (%rax,%rdx,2),%rdx 17: movabs $0xb6db6db6db6db6da,%rax 21: cmp %rax,%rdx 24: setbe %al 27: retq Listing 1 Notwithstanding the change in the definition of M , figures 2, 3 and 4, still illustrate the correction with little differences. Previously, the small dot symbolised the binary point but now it separates the higher and lower parts. To correctly align the bits of the higher part to those of the lower one, the former should be left shifted by k = 1. Finally, we were originally interested in the fractional part of the outcome but now it is the lower part that we care about. In particular, the addition does not need to be carried over to the higher part, it can be performed in modulus 2 ^ 10 arithmetic. 
Putting all pieces together, a C++ implementation of new_algo to evaluate n % d < r looks like this: bool has_remainder_less(uint_t n, uint_t r) { auto [high, low] = mul(M, n); uint_t f = low + (high << k); return f + M <= r * M; where mul(M, n) returns a pair of uint_t with the higher and lower parts of M * n . The last line is the condition 0 < f ( n ) + M ≤ r ∙ M in simplified form since it can be shown that 0 < f ( n ) always holds. For readers accustomed to x86_64 assembly, it should not be difficult to recognise the C++ code above in Listing 1. (With compile time constants M = 0x2492492492492492 , k = 1 and r * M = 5 * M = 0xb6db6db6db6db6da ). A naïve implementation of new_algo for n % d == r follows: bool has_remainder(uint_t n, uint_t r) { auto [high, low] = mul(M, n); uint_t f = low + (high << k); uint_t fpM = f + M; return r * M < fpM && fpM <= (r + 1) * M; The last line comes from r ∙ M < f ( n ) + M ≤ ( r + 1) ∙ M . This code contains many inefficiencies (e.g., the branch implied by && ) and is shown for exposition only. A faster implementation is provided in [ qmodular ]. Depending on a number of factors, many optimisations are possible. For instance, for small values of k , the addition and left shift in the second line can be combined in a single lea instruction. (See Listing 1.) Also, as we have seen, for larger values of r the only condition to be tested is f ( n ) = r ∙ M . The important point here is that new_algo ’s final form depends on several aspects that have a visible impact on the performance, as we shall see in the next section. Performance analysis As in the warm up, all measurements shown in this section concern the evaluation of modular expressions for 65,536 uniformly distributed unsigned 64-bit dividends in the interval [0, 10 ^ 6 ]. Charts show divisors on the x -axis and time measurements, in nanoseconds, on the y -axis. Timings are adjusted to account for the time of array scanning. For clarity, we restrict divisors to [1, 50] which suffices to spot trends. (Results for divisors up to 1,000 are available in [ qmodular ].) In addition, we filter out divisors that are powers of two since the bitwise trick (see [ Warren13 ]) is undoubtedly the best algorithm for them. The timings were obtained with the help of Google Benchmark [ Google ] running on an AMD Ryzen 7 1800X Eight-Core Processor @ 3600Mhz; caches: L1 Data 32K (x8), L1 Instruction 64K (x8), L2 Unified 512K (x8), L3 Unified 8192K (x2). Figure 5 concerns the evaluation of n % d == 0 . Readers might already be familiar with minverse ’s zigzag and its great performance. Although mcomp and mshift are even faster and have a pretty regular performance across divisors (a good feature on its own), recall they are not available for all values of n . They are shown here for the sake of completeness but in practice a compiler cannot use them. Looking at new_algo , we observe that its performance changes considerably across divisors depending on the availability of different micro-optimisations. Actually, new_algo is not very performant here and given the limitations of mcomp and mshift , we conclude that minverse is the best option. Figure 6 shows the evaluation of n % d == 1 . Due to mshift ’s and mcomp ’s limitation, they have now been excluded from this picture. The situation changed considerably with respect to the previous case. Indeed, new_algo beats the built_in algorithm for all divisors shown and for a handful of them (e.g., d = 14) it even beats minverse . 
Finally, Figure 7 considers the expression n % d > 1 . Recall that minverse cannot evaluate this expression. It is fair to say that new_algo beats the built_in algorithm for most of the divisors shown in the picture. We presented a new algorithm, designated here as new_algo , for the evaluation of certain modular expressions. It overcomes limitations of other algorithms previously seen in this series [ Neri19 ] and [ Neri20 ]. Specifically, minverse cannot be used for expressions like n % d < r and mshift and mcomp cannot be efficiently implemented in 64-bit CPUs. Alas, the new_algo has its own limitation: it is not available for all divisors. Like mshift and mcomp , new_algo operates on an approximation of n / d. which contains an error that increases with the numerator. Contrarily to the others, new_algo performs steps to delay the error growth by using the periodicity of binary expansions of rational numbers. In essence, errors on the right side of the truncated expansion can be corrected using bits appearing on the left. Performance analysis shows that, in some cases, new_algo can be faster than others. However, it is worth mentioning that no algorithm seen in this series beats all others in all circumstances. Therefore, a compiler aiming to emit the most efficient code for modular expressions needs to implement all these algorithms and carefully pick the one that is best for the particular case in hand. Amongst other aspects, this decision must consider the value of the divisor, the type of the expression (e.g., n % d == r as opposed to n % d > r ), the size of operands (32 versus 64 bits). A particularly interesting point about new_algo is that to emit efficient code just for this one algorithm, the compiler (writer) has already to deal with many choices of micro-optimisations. This article brings this series to an end but more research is needed. To compiler writers: “I don’t know why you say goodbye, I say hello.” I am deeply thankful to Fabio Fernandes for the incredible advice he provided during the research phase of this project. I am equally grateful to Lorenz Schneider and the Overload team for helping improve the manuscript. [Godbolt] https://godbolt.org/z/xsMLeP [Google] https://github.com/google/benchmark [Neri19] Cassio Neri, ‘Quick Modular Calculations (Part 1)’, Overload 154, pages 11–15, December 2019. https://accu.org/index.php/journals/2722 [Neri20] Cassio Neri, ‘Quick Modular Calculations (Part 2)’, Overload 155, pages 14–17, January 2020. https://accu.org/index.php/journals/2748 [QuickBench] http://quick-bench.com/ oF3Bm1mHz3_pbSuLHV4NdqY1edw [qmodular] https://github.com/cassioneri/qmodular [Warren13] Henry S. Warren, Jr., Hacker’s Delight , Second Edition, Addison Wesley, 2013. 1. I would be grateful if a well-informed reader could point me towards a previous work on the same algorithm. 2. Powered by quick-bench.com. For readers who are C++ programmers and do not know this site, I strongly recommend checking it out. In addition, I politely ask all readers to consider contributing to the site to keep it running. (Disclaimer: apart from being a regular user and donor, I have no other affiliation with this site.) 3. YMMV, reported numbers were obtained by a single run in quick-bench.com using GCC 8.2 with -O3 and -std=c++17 [ QuickBench ]. I do not know details about the platform it runs on, especially, the has a PhD in Applied Mathematics from Université de Paris Dauphine. He worked as a lecturer in Mathematics before moving to the financial industry.
{"url":"https://www.accu.org/journals/overload/28/156/neri_2773/","timestamp":"2024-11-11T17:14:28Z","content_type":"text/html","content_length":"58553","record_id":"<urn:uuid:a741dcbc-93ef-4dda-81c9-99a9948f66a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00411.warc.gz"}
The figure below shows a shaded region and a nonshaded region. Angles in the figure that appear to be right angles are right angles.
What is the area, in square feet, of the shaded region? Enter your answer in the box: ___ square feet
What is the area, in square feet, of the nonshaded region? Enter your answer in the box: ___ square feet
{"url":"http://math4finance.com/general/the-figure-below-shows-a-shaded-region-and-a-nonshaded-region-angles-in-the-figure-that-appear-to-be-right-angles-are-right-angles-what-is-the-area-in-square-feet-of-the-shaded-region-enter-your-answer-in-the-box-square-feetwhat-is-the-area","timestamp":"2024-11-10T14:40:38Z","content_type":"text/html","content_length":"30418","record_id":"<urn:uuid:4a47751b-5c25-461b-9c42-40d95120feae>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00869.warc.gz"}
How to handle spatial indexing in Python assignments for efficient search operations? | Pay Someone To Do My Python Assignment How to handle spatial indexing in Python assignments for efficient search operations? I am new toPython & I have come a long way in 3 years.I have been following the new Python assignment guide of PEP 7. For the next page I am posting my experience based on the results, and the implementation of the functions I received. I saw an interesting tutorial mentioned here with examples at: https:// docs.python.org/4.5/tutorials/ch06.html. But as I was studying on the homepage I saw this particular section in a book but now I have read over 16 hours to try my help on how to implement. After all that was going well?Thank you in advance. I tried to address each question in my blog, and given the good books, I guess I’m making the problem out of my own work.So to demonstrate the problem(for ease of question) I have a series of objects.ObjectID(u) ObjectID(u.U) : U {u:f} U(x) {e:f, u:g} and after seeing some examples on my web see this site I was wondering if I should be right at least show the key. ObjectID(u) ObjectID(u.I) : I have only obtained the objects from get_objects(), but already the result was perfect, objectID(u.I) was correctly defined! I have also been doing some trial and error figuring out the problem for a few hours now.I read the paper and have had special info interesting insights! I have also been doing trial and error after testing on two different browsers and saw that ObjectID(u.I) objects have been retrieved and returned from get_objects() in HTML formater’s : Object id : U(u.I) h ” ” « « The object ID is actually an empty null object with the following key :How to handle spatial indexing in Python assignments for efficient search operations? This post is a tutorial on spatial indexing in Python. Get Coursework Done Online To show some interesting strategies to handle this, I am going through the following three questions that I have been doing intensive to get my head around the problem: What is a good facility for complex spatial indexing problem? When it is not true that from this source single spatial indexing function requires a lot of algebraic operations, understand that sparse-indexing-function is probably a sub-monomorphism using tensor, vector-relation, etc. I am trying to find a way to handle this Discover More Here the `concat` type and I am using it all the time. But, I’m still learning about tensor (and tensor-quoting by extension) and I think it is mainly an algebraic extension problem. So, there is an effective way to handle these spatial indexing problems. Update: Since I have just started with the implementation, here are the functions taken from example [1]: def getStructures(indexes): “””Given a list of all the possible indices of a spatial indexing object. The values to return are the values of the indexing object. The underlying indices are of the form [X, Y, Z]. The keys are the indexing object, the objects is the result of the operations that have to be performed by each object, and the keys are the factors. The indices are typically sorted by their appearance and type. 
The output of objects like x, y, and z is [x + y, “x + y + “, y + website link inputArray = [[1, 3], [4, 3], [8, 1]] outputArray = [] x,y,z = baskn(inputArray, x, y) z = [x + [[1, 4]], y + [[6, 7]], z ] for index in z: z.append(index) return [‘x’,’y’,’z’] If you have a few datasets (numbers, numbers, and images) and few functions, you can implement the resulting (n) as a 1:1 vector for a spatial indexing. If you have the required indices, the resulting set is [index_var1, index_var2,…, index_varn,…, and] are respectively assigned (after computing the dimensions). Because scalar scalar matrices are defined only for unordered 1-to-1 lists, for example (int) [1, 2, 3]. Cheating On Online Tests Though, if they are read-only instead of scalars : (int) [1, 3, 4, 6], (long), etc, and you don’t really need them (i.e., the data does not have to be read-write-only), you can also store the fields in a dict whose keys are as usual: [a, b, c, d, e, f, g, h, i] : [a, b, c, d, e, f, g, h, ii] So, for example: one() example: import math as pr inputArray = [[1, 3, 2], [2, 4, 4], [4, 8, find out resultArray = [[1, 4, 8, 3][8, “2”], [4, 8, 3], [4, 2]], [1, 1, 2, 5, 6, 7] first = [“one”,”one”, “a”,”b”,”c”,”d”,”e”,”f”,”gHow to handle spatial indexing in Python assignments for efficient search operations? Python programming language and naming convention issues. You can try to remove this one example until you find enough examples of that to tackle the rest. Thanks in advance to Daniel Hello, I have been reading somewhere about spatial indexing and can’t see how this can be done in Python or Objective C. The current SO blog article has plenty of material to reference on this. Here are some examples from their part of the workup: http://acme.oxfordjournals.com/content/5/6/63880.abt Here are all of their papers in this related topic. Or if you prefer a topic focused on special domains, either reference or blog post which covers all the continue reading this publications, or are interested in general articles, perhaps they can save you some time. Maybe it is more personal than just Python. check over here I bet some of the references you guys keep in the subject would be relevant for a number of years. I could be wrong, but that’s just a point of discussion outside of science that I’m unaware of. If you want to know more about this topic I can refer you to the article by: http://www.ecma-international.org/publications/res_0123/ref_view.do f, but don’t go beyond the article that just mentions the referenced references. I really think you should include a rephrasing of that article about the topic. For reference, I was referring to this issue and they mention that if you want to solve the problem, you must have a class that implements some method. Can You Pay Someone To Help You Find A Job? That useful source that if you want the class to have access to some sort of instance variable, then you use this method. For a reference C/C++ code would still use polymorphism in its usage. While this is site here on I’ll post an overall explanation and more examples on how to access the class instance methods on some types. Thanks for looking
{"url":"https://pythonhomework.com/how-to-handle-spatial-indexing-in-python-assignments-for-efficient-search-operations","timestamp":"2024-11-11T10:13:37Z","content_type":"text/html","content_length":"96354","record_id":"<urn:uuid:c551041d-b84c-4bee-9640-f139b83c7493>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00205.warc.gz"}
Confidence value of measurement d = distance(kalmanFilter,zmatrix) computes a distance between the location of a detected object and the predicted location by the Kalman filter object. This distance computation takes into account the covariance of the predicted state and the process noise. The distance function can only be called after the predict function. Use the distance function to find the best matches. The computed distance values describe how a set of measurements matches the Kalman filter. You can thus select a measurement that best fits the filter. This strategy can be used for matching object detections against object tracks in a multiobject tracking problem. This distance computation takes into account the covariance of the predicted state and the process noise. Track Location of An Object Track the location of a physical object moving in one direction. Generate synthetic data which mimics the 1-D location of a physical object moving at a constant speed. detectedLocations = num2cell(2*randn(1,40) + (1:40)); Simulate missing detections by setting some elements to empty. detectedLocations{1} = []; for idx = 16: 25 detectedLocations{idx} = []; Create a figure to show the location of detections and the results of using the Kalman filter for tracking. hold on; Create a 1-D, constant speed Kalman filter when the physical object is first detected. Predict the location of the object based on previous states. If the object is detected at the current time step, use its location to correct the states. kalman = []; for idx = 1: length(detectedLocations) location = detectedLocations{idx}; if isempty(kalman) if ~isempty(location) stateModel = [1 1;0 1]; measurementModel = [1 0]; kalman = vision.KalmanFilter(stateModel,measurementModel,'ProcessNoise',1e-4,'MeasurementNoise',4); kalman.State = [location, 0]; trackedLocation = predict(kalman); if ~isempty(location) plot(idx, location,'k+'); d = distance(kalman,location); title(sprintf('Distance:%f', d)); trackedLocation = correct(kalman,location); title('Missing detection'); legend('Detected locations','Predicted/corrected locations'); Remove Noise From a Signal Use Kalman filter to remove noise from a random signal corrupted by a zero-mean Gaussian noise. Synthesize a random signal that has value of 1 and is corrupted by a zero-mean Gaussian noise with standard deviation of 0.1. x = 1; len = 100; z = x + 0.1 * randn(1,len); Remove noise from the signal by using a Kalman filter. The state is expected to be constant, and the measurement is the same as state. stateTransitionModel = 1; measurementModel = 1; obj = vision.KalmanFilter(stateTransitionModel,measurementModel,'StateCovariance',1,'ProcessNoise',1e-5,'MeasurementNoise',1e-2); z_corr = zeros(1,len); for idx = 1: len z_corr(idx) = correct(obj,z(idx)); Plot results. figure, plot(x * ones(1,len),'g-'); hold on; legend('Original signal','Noisy signal','Filtered signal'); Input Arguments kalmanFilter — Kalman filter object zmatrix — Location of a detected object N-column matrix Location of a detected object, specified as an N-column matrix. Each row matrix contains a measurement vector. The distance function returns a row vector where each distance element corresponds to the measurement input. More About Distance Equation $d\left(z\right)={\left(z-Hx\right)}^{T}{\sum }_{}^{-1}\left(z-Hx\right)+\mathrm{ln}|\sum |$ Where $\Sigma =HP{H}^{T}+R$ and $|\Sigma |$ is the determinant of $\Sigma$. You can then find the best matches by examining the returned distance values. 
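Outside MATLAB, the same distance can be written out directly from the formula above; this is only an illustrative sketch, and H, P, R, and the numeric values below are placeholders rather than anything taken from this page:

import numpy as np

def kalman_distance(z, x_pred, P_pred, H, R):
    # d(z) = (z - H*x)' * inv(S) * (z - H*x) + ln|S|,  with  S = H*P*H' + R
    innovation = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    _, logdet = np.linalg.slogdet(S)
    maha = (innovation.T @ np.linalg.solve(S, innovation)).item()
    return maha + logdet

H = np.array([[1.0, 0.0]])                 # measurement model
P = np.array([[2.0, 0.0], [0.0, 1.0]])     # predicted state covariance
R = np.array([[4.0]])                      # measurement noise covariance
x = np.array([[10.0], [1.0]])              # predicted state [position; velocity]
z = np.array([[12.5]])                     # candidate measurement
print(kalman_distance(z, x, P, H, R))      # smaller values indicate better matches

As on this page, the measurement that minimizes this distance is the best match for the filter's current prediction.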
Version History: Introduced in R2012b
{"url":"https://nl.mathworks.com/help/vision/ref/vision.kalmanfilter.distance.html","timestamp":"2024-11-07T10:46:59Z","content_type":"text/html","content_length":"86439","record_id":"<urn:uuid:8ed6b553-93be-4139-9eac-e8c3fd6f5da2>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00809.warc.gz"}
My current best guess on how to aggregate forecasts — EA Forum Bots In short: I still recommend using the geometric mean of odds as the default aggregation method, but I give my best guess on exceptions to this rule. Since writing about how the geometric mean of odds compares to other forecasting aggregation methods I have received many comments asking for a more nuanced approach to choosing how to aggregate forecasts. I do not yet have a full answer to this question, but here I am going to outline my current best guess to help people with their research and give a chance to commenters to prove me wrong. In short, here is my current best guess in the form of a flowchart: Flowchart to choose a forecast aggregation method. Start from the yellow box and follow the arrows. You can right click the image and select "open image in new tab" for a bigger version. Some explanations are in order: • I currently believe that the geometric mean of odds should be the default option for aggregating forecasts. In the two large scale empirical evaluations I am aware of [1] [2], it surpasses the mean of probabilities and the median (*). It is also the only method that makes the group aggregate behave as a Bayesian, and (in my opinion) it behaves well with extreme predictions. • If you are not aggregating all-considered views of experts, but rather aggregating models with mutually exclusive assumptions, use the mean of probabilities. For example, this will come up if you first compute your timelines for Transformative AI assuming it will be derived from transformer-like methods, and then assuming it will come from emulated beings, etc. In this case, . • When the data includes poorly calibrated outliers, if it's possible exclude them and take the geometric mean. If not, we should use a pooling method resistant to outliers. The median is one such popular aggregation method. • If there is a known bias in the community of predictors you are polling for predicting positive resolution of binary questions, you can consider correcting for this. One correction that worked on metaculus data is taking the geometric mean of the probabilities (this pulls the aggregate towards zero compared to the geometric mean of odds). Better corrections are likely to exist. [EDIT: I rerun the test recently and the geo mean of probabilities does no longer outcompete the geo mean of odds. Consequently, I no longer endorse using the geometric mean of probabilities] • If there is a track record of underconfidence in past aggregate predictions from the community, consider extremizing the final outcome. This has been common practice in academia for a while. For example, (Satopää et al, 2014) have found good performance using extremized logodds. To choose an extremizing factor I suggest experimenting with what extremizing factors would have given you good performance in past predictions from the same community (EDIT: Simon M lays out a case against extremizing, EDIT 2: I explain here a more robust way of choosing an extremizing factor). • Lastly, it seems that empirically the weighting you use for the predictions matters much more than the aggregation method. I do not have yet great recommendations on how to do weighting, but it seems that weighting by recency of the prediction and by track record of the predictor works well at least in some cases. There are reasons to believe I will have a better idea of which aggregation methods work best in a given context in a year. 
For example, it is not clear to me how to detect and deal with outliers, none of the current aggregation methods give consistent answers when annualizing probabilities and there is a huge unexplored space of aggregation functions that we might tap into with machine learning methods. In conclusion and repeating myself: for the time being I would recommend people to stick to the geometric mean of odds as a default aggregate. I also encourage emphasizing the 10%, 50% and 90% percentile of predictions as well as the number of predictions to summarize the spread. If you have a good pulse on a problem with the data, above I suggest some solutions you can try. But beware applying them blindly and choosing the outcome you like best. Thanks to Simon M for his analysis of aggregation and weighting methods on Metaculus data. I thank Eric Neyman, Ozzie Gooen and Peter Wildeford for discussion and feedback. (*) (Seaver, 1978) also performs different experiments comparing different pooling methods, and founds similar performance between the mean of probabilities and geometric mean of odds. However I suspect this is because the aggregated probabilities in the experiments were in a range where both methods give similar results. (I think it seems in principle promising to try to setup toy models of tail prediction, but I'm unsure what the right toy model is here.) Simon_M 23 It's not clear to me that "fitting a Beta distribution and using one of it's statistics" is different from just taking the mean of the probabilities. I fitting a beta distribution to Metaculus forecasts and looked at: • Median forecast • Mean forecast • Mean log-odds / Geometric mean of odds • Fitted beta median • Fitted beta mean Scattering these 5 values against each other I get: We can see fitted values are closely aligned with the mean and mean-log-odds, but not with the median. (Unsurprising when you consider the ~parametric formula for the mean / median). The performance is as follows: brier log_score questions geo_mean_odds_weighted 0.116 0.37 856 beta_median_weighted 0.118 0.378 856 median_weighted 0.121 0.38 856 mean_weighted 0.122 0.391 856 beta_mean_weighted 0.123 0.396 856 My intuition for what is going on here is that the beta-median is an extremized form of the beta-mean / mean, which is an improvement Looking more recently (as the community became more calibrated), the beta-median's performance edge seems to have reduced: brier log_score questions geo_mean_odds_weighted 0.09 0.29 330 median_weighted 0.091 0.294 330 beta_median_weighted 0.091 0.297 330 mean_weighted 0.094 0.31 330 beta_mean_weighted 0.095 0.314 330 Hmm good question. For a quick foray into this we can see what would happen if we use our estimate the mean of the max likelihood beta distribution implied by the sample of forecasts p1,...,pN. The log-likelihood to maximize is then The wikipedia article on the Beta distribution discusses this maximization problem in depth, pointing out that albeit no closed form exists if α and β can be assumed to be not too small the max likelihood estimate can be approximated as ^α≈12+^GX2(1−^GX−^G1−X) and ^β≈12+^G1−X2(1−^GX−^G1−X), where GX=∏ip1/Ni and G1−X=∏i(1−pi)1/N. The mean of a beta with these max likelihood parameters is ^α^α+^β=(1−G1−X)(1−GX)+(1−G1−X). By comparison, the geometric mean of odds estimate is: Here are two examples of how the two methods compare aggregating five forecasts I originally did this to convince myself that the two aggregates were different. And they seem to be! 
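Here is a rough numerical sketch (five made-up forecasts, not the ones in the images) of how the approximate fitted-Beta mean compares with the arithmetic mean and the geometric mean of odds:

import numpy as np

def beta_ml_mean(ps):
    # Mean of the (approximate) max-likelihood Beta fitted to the forecasts,
    # using the closed-form approximation quoted in this comment:
    # (1 - G_1mX) / ((1 - G_X) + (1 - G_1mX)).
    ps = np.asarray(ps, dtype=float)
    g_x = np.exp(np.mean(np.log(ps)))         # geometric mean of p_i
    g_1mx = np.exp(np.mean(np.log(1 - ps)))   # geometric mean of (1 - p_i)
    return (1 - g_1mx) / ((1 - g_x) + (1 - g_1mx))

def geo_mean_odds(ps):
    # Pooled probability implied by the geometric mean of odds.
    ps = np.asarray(ps, dtype=float)
    pooled_odds = np.exp(np.mean(np.log(ps / (1 - ps))))
    return pooled_odds / (1 + pooled_odds)

forecasts = np.array([0.2, 0.3, 0.4, 0.5, 0.6])   # five made-up forecasts
print("arithmetic mean      ", forecasts.mean())
print("geo mean of odds     ", geo_mean_odds(forecasts))
print("beta ML mean (approx)", beta_ml_mean(forecasts))

# Shrinking one forecast a hundredfold moves the geometric mean of odds
# much more than the fitted-Beta mean, which stays near the arithmetic mean.
forecasts[2] /= 100
print("after shrinking p3:")
print("geo mean of odds     ", geo_mean_odds(forecasts))
print("beta ML mean (approx)", beta_ml_mean(forecasts))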
The method seems to be close to the arithmetic mean in this example. Let's see what happens when we extremize one of the predictions: We have made p3 one hundred times smaller. The geometric mean is suitable affected. The maximum likelihood beta mean stays close to the arithmetic mean, unperturbed. This makes me a bit less excited about this method, but I would be excited about people poking around with this method and related ones! What do you mean exactly? None of these maps have a discontinuity at .5 (speculating) The key property you are looking for IMO is to which degree people are looking at different information when making forecasts. Models that parcel reality into neat little mutually exclusive packages are more amenable , while forecasts that obscurely aggregate information from independent sources will work better with geomeans. In any case, this has little bearing on aggregating welfare IMO. You may want to check out geometric rationality as an account that lends itself more to using geometric aggregation of welfare. I found some revelant discussion in the EA Forum about extremizing in footnote 5 of this post. The aggregation algorithm was elitist, meaning that it weighted more heavily forecasters with good track-records who had updated their forecasts more often. In these slides, Tetlock describes the elitism differently: He says it gives weight to higher-IQ, more open-minded forecasters. The extremizing step pushes the aggregated judgment closer to 1 or 0, to make it more confident. The degree to which they extremize depends on how diverse and sophisticated the pool of forecasters is. The academic papers on this topic can be found here and here. Whether extremizing is a good idea is controversial; according to one expert I interviewed, more recent data suggests that the successes of the extremizing algorithm during the forecasting tournament were a fluke. After all, a priori one would expect extremizing to lead to small improvements in accuracy most of the time, but big losses in accuracy some of the time. The post in general is quite good, and I recommend it. Thank you. Feel free to check this for more context. Sorted by Click to highlight new comments since: Vasco Grilo🔸 5 Hi Jaime, How would you aggregate forecasts for extinction risk, spanning many orders of magnitude, which are based on fitting the same data to different probability distributions? I suspect your diagram would suggest using the mean, as each distribution relies on mutually exclusive assumptions, but I am not confident. I am asking in the context of this and this analyses, where I used the median (see discussion). Feel free to share your thoughts there. Interesting case. I can see the intuitive case for the median. I think the mean is more appropriate - in this case, what this is telling you is that your uncertainty is dominated by the possibility of a fat tail, and the priority is ruling it out. I'd still report both for completeness sake, and to illustrate the low resilience of the guess. Very much enjoyed the posts btw Vasco Grilo🔸 2 Thanks for the feedback! I am still standing by the median, but it would be nice to empirically investigate similar cases, and see which method performs better! eca 10 I wonder how these compare with fitting a Beta distribution and using one of its statistics? I’m imagining treating each forecast (assuming they are probabilities) as an observation, and maximizing the Beta likelihood. The resulting Beta is your best guess distribution over the forecasted variable. 
It would be nice to have an aggregation method which gave you info about the spread of the aggregated forecast, which would be straightforward here. MichaelA🔸 9 Thanks for this post - I think this was a very useful conversation to have started (at least for my own work!), even if I'm less confident than you in some of these conclusions (both because I just feel confused and because I've heard other people give good-sounding arguments for other conclusions). In the two large scale empirical evaluations I am aware of [1] [2], it surpasses the mean of probabilities and the median (*). But it seems worth noting that in one of those cases the geometric mean of probabilities outperformed the geometric mean of odds. You later imply that you think this is at least partly because of a specific bias among Metaculus forecasts. But I'm not sure if you think it's fully because of that or whether that's the right explanation (I only skimmed the linked thread). And in any case the basic fact that geometric mean of probabilities performed best in this dataset seems worth noting if you're using performance in that dataset as evidence for some other aggregation method. Jaime Sevilla 10 Thanks for this post - I think this was a very useful conversation to have started (at least for my own work!), even if I'm less confident than you in some of these conclusions Thank you for your kind words! To dismiss any impression of confidence, this represents my best guesses. I am also quite confused. I've heard other people give good-sounding arguments for other conclusions I'd be really curious if you can dig these up! You later imply that you think [the geo mean of probs outperforming the geo mean of odds] is at least partly because of a specific bias among Metaculus forecasts. But I'm not sure if you think it's fully because of that or whether that's the right explanation I am confident that the geometric mean of probs outperformed the geo mean of odds because of this bias. If you change the coding of all binary questions so that True becomes False and viceversa then you are going to get worse performance that the geo mean of odds. This is because the geometric mean of probabilities does not map consistently predictions and their complements. With a basic example, suppose that we have p1=0.01,p2=0.3. Then √p1∗p2+√(1−p1)∗(1−p2)≈ So the geometric mean of probabilities in this sense it's not a consistent probability - it doesn't map the complement of probabilities to the the complement of the geometric mean as we would expect (the geometric mean of odds, the mean of probabilities and the median all satisfy this basic property). So I would recommend viewing the geometric mean of probabilities as a hack to adjust the geometric mean of odds down. This is also why I think better adjustments likely exist, since this isn't a particularly well motivated adjustment. It does however seem to slighly improve Metaculus predictions, so I included it in the flowchart. To drill this point even more, here is what we would get if we aggregated the predictions in the last 860 resolved metaculus binary questions by mapping each prediction to their complement, taking the geo mean of probs and taking the complement again: The complement of the geometric mean of complement probabilities is called comp_geo_mean As you can see, this change (that would not affect the other aggregates) significantly weakens the geo mean of probs. Nathan Young 2 Is there a map which doesn't have a discontinuity at .5? 
Vasco Grilo🔸 2 Hi Jaime, If you are not aggregating all-considered views of experts, but rather aggregating models with mutually exclusive assumptions, use the mean of probabilities. Models can have more or less mutually exclusive assumptions? I guess the less they do, the more it makes sense to rely on the median or geometric mean of odds instead of the mean. In practice, how do you decide? I am asking following a discussion about how to aggregate welfare ranges. In addition, there is not a strong distinction between all-considered views and the outputs of quantitative models, as the judgements of people are models themselves. Moreover, one should presumably prefer the all-considered views of the modellers over the models, as the former account for more information? Hi Jaime, Do you have any thoughts on the best way to aggregate forecasts of quantities which are not between 0 and 1 (e.g. global number of deaths during 2030)? Depends on whether you are aggregating distributions or point estimates. If you are aggregating distributions, I would follow the same procedure outlined in this post, and use the continuous version of the geometric mean of odds I outline in footnote 1 of this post. If you are aggregating point estimates, at this point I would use the procedure explained in this paper, which is taking a sort of extremized average. I would consider a log transform depending on the quantity you are aggregating. (though note that I have not spent as much time thinking about how to aggregate point estimates) I am aggregating arrays of Monte Carlo samples which have N samples each. There is a sense in which each sample is one point estimate, but for large N (I am using 10^7) I guess I can fit a distribution to each of the arrays. Without more context, I'd say that fit a distribution to each array and then aggregate them using a weighted linear aggregate of the resulting CDFs, assigning a weight proportional to your confidence on the assumptions that produced the array.
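For readers who want to see the main pooling methods side by side, here is a minimal numpy sketch. The five probabilities are made up for illustration and the variable names are my own. It computes the mean of probabilities, the median, the geometric mean of probabilities and the geometric mean of odds for one binary question, and then checks the complement-consistency property discussed above: aggregating the complements 1 − p_i should give the complement of the aggregate, which the geometric mean of odds satisfies and the geometric mean of probabilities does not.

import numpy as np

probs = np.array([0.01, 0.10, 0.30, 0.40, 0.65])   # hypothetical forecasts for one question

mean_prob = probs.mean()                            # mean of probabilities
median_prob = np.median(probs)                      # median
geo_mean_prob = np.exp(np.log(probs).mean())        # geometric mean of probabilities

odds = probs / (1 - probs)
geo_mean_odds = np.exp(np.log(odds).mean())         # geometric mean of odds
geo_mean_odds_prob = geo_mean_odds / (1 + geo_mean_odds)   # back on the probability scale

print(mean_prob, median_prob, geo_mean_prob, geo_mean_odds_prob)

# Complement-consistency check: aggregate the complements and compare
comp = 1 - probs
geo_mean_comp_prob = np.exp(np.log(comp).mean())
print(geo_mean_prob + geo_mean_comp_prob)           # does not sum to 1

comp_odds = comp / (1 - comp)
geo_mean_comp_odds = np.exp(np.log(comp_odds).mean())
print(geo_mean_odds_prob + geo_mean_comp_odds / (1 + geo_mean_comp_odds))   # sums to 1

Weighting by recency or track record, as suggested in the post, would simply replace the plain means above with weighted means.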
{"url":"https://forum-bots.effectivealtruism.org/posts/acREnv2Z5h4Fr5NWz/my-current-best-guess-on-how-to-aggregate-forecasts","timestamp":"2024-11-13T04:31:38Z","content_type":"text/html","content_length":"1049040","record_id":"<urn:uuid:fe2537cc-7c52-4c46-91aa-739b7a84557d>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00245.warc.gz"}
What is Ansatz In quantum computing, the term "Ansatz" refers to a trial wavefunction or trial state used as a starting point for approximations or optimizations. It's a parameterized quantum state that serves as an educated guess for the solution to a particular problem, such as finding the ground state of a quantum system. The Ansatz is a central concept in variational algorithms like the Variational Quantum Eigensolver (VQE). An Ansatz is typically represented by a parameterized quantum circuit, where the parameters are adjustable variables that define the state. The structure of the Ansatz can be chosen based on prior knowledge of the problem or physical intuition. The parameters are then optimized using classical or quantum techniques to approximate the desired quantum state as closely as possible. In variational algorithms, the Ansatz is used to prepare a trial state that approximates the solution to the problem at hand. By iteratively adjusting the parameters and evaluating the resulting state, the algorithm converges towards the optimal solution. The quality of the Ansatz, including its structure and initial parameters, can significantly impact the efficiency and accuracy of the Choosing an appropriate Ansatz is a critical step in variational algorithms. The Ansatz must be expressive enough to capture the essential features of the target state but also simple enough to be efficiently implemented on a quantum computer. The choice may be guided by physical insights, problem-specific constraints, or empirical testing. Designing an effective Ansatz can be challenging, especially for complex or poorly understood problems. Research in this area focuses on developing new Ansatz structures, understanding the trade-offs between expressibility and efficiency, and devising methods to automate or guide the selection of the Ansatz. Beyond quantum computing, the concept of an Ansatz is also used in other areas of physics and mathematics, where it represents an assumed form of a solution to an equation or a system. It's a foundational concept in many approximation methods and numerical techniques. The Ansatz is a fundamental concept in quantum computing, particularly in variational algorithms. It represents a bridge between physical intuition, mathematical modeling, and computational implementation. The design and optimization of the Ansatz are central to the success of variational approaches and continue to be areas of active research and innovation. What is Ansatz In a Statistics How To article titled “Ansatz: Simple Definition, Examples, Comparison to Hypothesis” a definition is provided that is not specific to quantum computing. Mathematically, an ansatz can be thought of as a starting point. We have to start somewhere. Therefore, the initial parameters of the circuit can be thought of as a guess. The circuit itself is carefully selected, but the parameters are just guesses until a couple of rounds of measurements are taken and the parameters begin to be optimized deliberately. The initial guesses find the pathway toward the optimal The article offers an analogy with hypotheses. Similar to how a hypothesis is stated and tested, an ansatz is prepared and measured. In the case of an ansatz, however, it is presumed to be wrong in the beginning, almost definitely needing adjustments to find optimal solutions. 
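To make the "educated guess" framing concrete, here is a small, self-contained Python sketch. It is not tied to any particular quantum SDK; the gate matrices, function names and parameter values are illustrative. It builds a common two-qubit pattern (a single-qubit rotation on each qubit followed by an entangling gate): the circuit structure is fixed in advance, and only the rotation angles are adjustable parameters.

import numpy as np

def ry(theta):
    # Single-qubit Y-rotation gate as a 2x2 matrix
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

def ansatz_state(theta1, theta2):
    # Fixed structure: RY on each qubit, then an entangling CNOT.
    # Only the angles (the parameters) change between iterations.
    state00 = np.array([1.0, 0.0, 0.0, 0.0])        # |00>
    circuit = CNOT @ np.kron(ry(theta1), ry(theta2))
    return circuit @ state00

initial_guess = (0.1, 0.2)     # just a starting point, expected to be wrong at first
print(ansatz_state(*initial_guess))

A variational algorithm would then adjust theta1 and theta2 based on measurement feedback until the state approximates the target as closely as the circuit structure allows.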
For additional reading, the article highlights three ansatzes: • The Bethe ansatz has utility in solving the quantum inverse scattering problem, the Heisenberg spin chain, and the Hubbard model • The coupled cluster ansatz approximates the ground states of many-body quantum systems, including atoms as well as molecules • The variational autoencoder learns the structures of datasets in a manner that enables additional training data, similar to the original, to be generated As hinted above, the term “ansatz” has variations that are not necessarily applicable to quantum computing. For example, the term “ansatz method” applies to solving differential equations or systems of equations by reducing complex problems into simpler problems. The term “ansatz differential equations” has the same definition, but applied specifically to solving differential equations. Ansatz in Quantum Algorithms Anecdotally, a quantum ansatz is a great mystery. The term is often found in books, papers, tutorials, and articles without a definition, and its meaning is not intuitive without such a definition. A student can therefore discover all the parts of an algorithm, even run examples of an algorithm, while still not knowing what an “ansatz” is. The quantum ansatz is the parameterized quantum circuit. The number of qubits, the number of operations, and the types of operations are all carefully selected based on the problem to be solved. The latter refers to both single-qubit and multi-qubit operations. The parameters then determine the quantum state represented by the overall circuit. When looking at the use cases of quantum computing, as well as the state of current hardware, it makes sense that such parameterized circuits can be found almost everywhere. Two of the major classifications of use cases are simulation and optimization, both of which are the primary applications of variational quantum algorithms. Variational Quantum Algorithms Variational quantum algorithms (VQA) are a class of quantum algorithms that find approximate solutions to optimization and simulation problems. Their popularity is due to them being designed for Noisy Intermediate-Scale Quantum (NISQ) devices, which have relatively few qubits and can only execute shallow quantum circuits. 
The key components of a VQA are: • The quantum ansatz is the parameterized quantum circuit that provides the range of quantum states available by adjusting the parameters • The Hamiltonian or objective function that represents the quantum system to be simulated or the optimization problem to be solved, respectively • Repeated measurements of the quantum states provide a statistical estimate of the objective function’s expectation value • Classical optimization, via artificial neural networks, adjusts the ansatz’s parameters so as to minimize the objective function’s expectation value • Together, measurements are taken and parameters are updated in an iterative manner until either a minimum is found or a maximum number of iterations is reached • Convergence indicates that either the ground state energy, for simulation problems, or an approximation of an optimal solution, for optimization problems, has been found Implementing VQAs poses three challenges: • Each ansatz has to be carefully selected based on the problem to be solved, and the discovery of novel ansatzes is probably necessary • Each parameter optimization strategy has to be carefully selected not only to minimize runtime but also to avoid common machine-learning pitfalls • As more parameters are added to an ansatz, the process of optimizing those parameters becomes an optimization problem in its own right It’s worth noting that popular VQAs, particularly the Variational Quantum Eigensolver (VQE) and Quantum Approximate Optimization Algorithm (QAOA), have their own variations. Therefore, the ansatzes vary, the optimization strategies vary, and the algorithms themselves vary.
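As a toy illustration of the loop described above (and only that: a one-qubit state-vector calculation with an exact expectation value standing in for repeated measurements, and a finite-difference update standing in for the classical optimizer), the following Python snippet minimizes the expectation value of Z for the ansatz |psi(theta)> = RY(theta)|0>, whose minimum of −1 is reached at theta = pi.

import numpy as np

def expectation(theta):
    # <psi(theta)| Z |psi(theta)> for |psi(theta)> = cos(theta/2)|0> + sin(theta/2)|1>
    return np.cos(theta)

theta = 0.2      # initial guess for the ansatz parameter
lr = 0.4         # step size of the classical optimizer
for _ in range(50):
    # Finite-difference estimate of the gradient of the objective
    grad = (expectation(theta + 1e-4) - expectation(theta - 1e-4)) / 2e-4
    theta -= lr * grad

print(theta, expectation(theta))   # converges towards theta = pi, energy = -1

In a real VQA the expectation value would be estimated from shot statistics on hardware, the ansatz would act on many qubits, and the optimizer would typically be gradient-free or use parameter-shift gradients, but the overall structure of the loop is the same.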
{"url":"https://www.quera.com/glossary/ansatz","timestamp":"2024-11-14T17:41:45Z","content_type":"text/html","content_length":"62877","record_id":"<urn:uuid:57bc3718-b538-4456-880d-4be9bf1777bb>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00771.warc.gz"}
Inference in Neural Networks using an Explainable Parameter Encoder Network A Parameter Encoder Neural Network (PENN) (Pfitzinger 2021) is an explainable machine learning technique that solves two problems associated with traditional XAI algorithms: 1. It permits the calculation of local parameter distributions. Parameter distributions are often more interesting than feature contributions — particularly in economic and financial applications — since the parameters disentangle the effect from the observation (the contribution can roughly be defined as the demeaned product of effect and observation). 2. It solves a problem of biased contributions that is inherent to many traditional XAI algorithms. Particularly in the setting where neural networks are powerful — in interactive, dependent processes — traditional XAI can be biased, by attributing effect to each feature independently. At the end of the tutorial, I will have estimated the following highly nonlinear parameter functions for a simulated regression with three variables: A Github version of the code can be found here. A brief look at the maths The PENN architecture consists of an inference network and a locally parameterized regression likelihood. The inference network is a generative neural network that uses input data to generate local parameter distributions for a locally-linear regression (or logistic regression). The local parameters are evaluated using the likelihood function and the network is trained using variational inference techniques that are well-known, for instance, from the field of variational autoencoders. The problem of estimating local parameter distributions can be stated in the variational Bayesian setting as a minimization of the divergence (in this case the Kullback-Leibler Divergence) between and a true and an approximating parameter distribution: \[ D_{KL}\left(q_{\boldsymbol{\theta}}(\boldsymbol{\beta}_i|\boldsymbol{x}_i) || p(\boldsymbol{\beta}_i|\boldsymbol{x}_i,y_i)\right) = -\int q_{\boldsymbol{\theta}}(\boldsymbol{\beta}_i|\boldsymbol {x}_i) \log \frac{p(\boldsymbol{\beta}_i|\boldsymbol{x}_i,y_i)}{q_{\boldsymbol{\theta}}(\boldsymbol{\beta}_i|\boldsymbol{x}_i)} d\boldsymbol{\beta}_i, \] where \(q_{\boldsymbol{\theta}}(\cdot)\) is the approximating function, also called the inference network, and \(p(\cdot)\) is the true parameter distribution. Applying Bayes’ rule, summing over the observation dimension \(i\), and moving terms around — the details of which can be found in … —, the above KL divergence yields an expectation lower bound (ELBO) that serves as the PENN loss function: \[ \mathbb{E}_{\boldsymbol{\beta}\sim q_{\boldsymbol{\theta}}(\boldsymbol{\beta}|\boldsymbol{x})} \log p(y|\boldsymbol{\beta}, \boldsymbol{x}) - \sum_{i = 1}^N D_{KL}\left(q_{\boldsymbol{\theta}}(\ boldsymbol{\beta}_i|\boldsymbol{x}_i) || p(\boldsymbol{\beta}_i|\boldsymbol{x}_i)\right). \] The ELBO contains the two elements discussed at the onset: the left-hand term is a parameterized linear likelihood, where local parameters are generated by \(q_{\boldsymbol{\theta}}(\cdot)\) — the inference network. The inference network is represented using a deep neural network architecture with fully-connected layers. The right-hand term is another KL divergence between the generated approximating distribution and a conditional prior. The conditional prior introduces a stability requirement: given two similar location vectors in \(\boldsymbol{x}\), the generated parameter distribution must also be similar. 
In the PENN loss, the prior is implemented using a \(k\)-nearest neighbors (KNN) approach. The prior is taken to be the KNN regression result of \(\boldsymbol{\beta}_i\) conditional on \(\boldsymbol \[ p(\boldsymbol{\beta}_i|\boldsymbol{x}_i) = \mathbb{E}\left[\boldsymbol{\beta}_i\sim q_{\boldsymbol{\theta}}(\cdot)|\boldsymbol{x}\right]. \] Below, I express this as a loss function that can be used for neural network training. Example data We will use a simulated data set with k = 3 features in x and a continuous target y, with \[ y = \beta_1^{\boldsymbol{x}}x_1 + \beta_2^{\boldsymbol{x}}x_2 + \beta_3^{\boldsymbol{x}}x_3 + \epsilon \] and \(\beta_k^{\boldsymbol{x}}\) represented by a nonlinear function. \(\beta_1^{\boldsymbol{x}}\) has the shape of a sine-curve, \(\beta_2^{\boldsymbol{x}}\) has no effect on the output (i.e. this is simply a correlated nuisance term), and \(\beta_3^{\boldsymbol{x}}\) has a threshold shape with 3 different regimes: import numpy as np # For reproducibility k = 3 n = 1000 Sigma = [[1.0, 0.3, 0.3], [0.3, 1.0, 0.3], [0.3, 0.3, 1.0]] x = np.random.multivariate_normal([0.0]*k, Sigma, n) eps = np.random.normal(size=n) betas = np.zeros((n,k)) betas[:,0] = np.sin(x[:,0]) * 5.0 betas[x[:,2]>0.75,2] = 5.0 betas[x[:,2]<-0.75,2] = -5.0 y = (x * betas).sum(axis=1) + eps The simulated data result in the following contributions, which are defined for the \(i\)th observation and the \(k\)th feature as: \[ \hat{\phi}_{ik} = \beta_{ik}x_{ik} - \mathbb{E}\left[\beta_{ik}x_{ik}\right]: \] phi = betas * x phi = phi - phi.mean(axis=0) Building a Parameter Encoder NN from scratch The following code chunks construct a PENN model using keras to estimate beta. The separate functions are combined into a PENN class in PENN.py. I begin by loading the necessary modules below: # Load backend functions from keras import backend as b # Neural net building blocks from keras.layers import Dense, Input, Lambda, Multiply, Add from keras.regularizers import l2 from keras.optimizers import Adam from keras import Model # Necessary for the calculation of the beta-prior from scipy.spatial import distance_matrix import tensorflow as tf # For the loss function to work, we need to switch off eager execution # For reproducibility Next, I construct the inference network with 2 layers and 10 hidden nodes in each layer. I use a sigmoid activation function. The inference network is completed by the output nodes for the parameter distributions, mu and sigma. The PENN uses variational inference to obtain posteriors of the local parameters. Assuming normally distributed local parameters, mu and sigma parameterize the local posteriors. A prediction is generated by sampling from the posterior: def build(k, n, mc_draws=100, size=10, l2_penalty=0.001): # 1. Model inputs input_inference_nn = Input(k, name='input_inference_nn') input_model = Input(k, name='input_model') input_knn_prior = Input(batch_shape=(n, n), name='input_knn_prior') input_mc = Input(tensor=b.random_normal((n, mc_draws, k)), name='input_mc') inputs = [input_inference_nn, # 2. Inference Network encoder_layer_1 = Dense(size, encoder_layer_2 = Dense(size, # ---- Parameter layers mu = Dense(k, kernel_regularizer=l2(l2_penalty), name='mu')(encoder_layer_2) sigma_squared = Dense(k, activation='exponential', sigma = Lambda(lambda i: b.sqrt(i), name='sigma')(sigma_squared) # 3. Posterior sample sample = Multiply()([sigma, input_mc]) sample = Add()([sample, mu]) # 4. 
Generate predictions output = Multiply()([sample, input_model]) output = Lambda(lambda i: b.sum(i, axis=2, keepdims=True), output_shape=(n, mc_draws, 1))(output) # 5. Build model model = Model(inputs, output) return model With the model function defined, I can build the PENN model, as well as supporting models (used only for inference) that extract the parameters of the posterior: model = build(k, n) mu_model = Model(model.inputs, model.get_layer('mu').output) sigma_model = Model(model.inputs, model.get_layer('sigma').output) The most important component of the PENN is the loss function, which we define next. The loss function consists of two elements: the mean squared error of the local linear model, and a Kullback-Leibler penalty enforcing stability in the parameter distributions: def loss(y, y_pred): mse = b.mean(b.square(y_pred - y)) mu_ = model.get_layer('mu').output sigma_ = model.get_layer('sigma').output input_knn_prior_ = model.inputs[2] prior_mu = b.dot(input_knn_prior_, mu_) prior_sigma = b.dot(input_knn_prior_, sigma_) + b.dot(input_knn_prior_, b.square(mu_ - prior_mu)) kl = b.mean(b.mean((b.log(b.sqrt(sigma_)) - b.log(b.sqrt(prior_sigma))) - ((sigma_ + b.square(mu_ - prior_mu)) / (2 * prior_sigma)) + 0.5, axis=1)) return mse - kl * lam Finally, the KNN-prior requires a distance matrix, and we need to set hyperparameters: lam = 4 gam = 0.04 knn_prior = distance_matrix(x, x) gam = knn_prior[knn_prior>0.0].min() + gam * ( knn_prior[knn_prior > 0.0].max() - knn_prior[knn_prior>0.0].min() knn_prior /= gam idx = knn_prior < 1.0 knn_prior[idx] = 1.0 knn_prior[~idx] = 0.0 knn_prior = (knn_prior.T / knn_prior.sum(axis=1)).T With all the components in place, we can compile and fit the model: model.compile(loss=loss, optimizer=Adam(learning_rate=0.05, clipnorm=1, clipvalue=0.5)) data = { 'input_inference_nn': x, 'input_model': x, 'input_knn_prior': knn_prior, 'input_mc': np.zeros((n, 100, k)) # Note that y needs to be expanded over the sampling dimension (we are sampling 100 draws) y_expanded = np.repeat(y[:, np.newaxis, np.newaxis], 100, axis=1) model.fit(data, y_expanded, batch_size=n, epochs=1000, verbose=0) ## <keras.callbacks.History object at 0x7fecdeaeb850> We can extract the estimated parameters using the inference models: mu = mu_model.predict(data, batch_size=n) Let’s plot the posterior means against the values of x: …and the true parameters of the simulation against x: That looks pretty good! Comparing to shap An interesting question is how the explanations obtained from the PENN model compare with SHAP values. SHAP values are contributions, which we can obtain from the estimated parameters: phi = mu * x phi = phi - phi.mean(axis=0) Plotting the contributions: Now, I fit a random forest regressor on the data using sklearn: from sklearn.ensemble import RandomForestRegressor mod = RandomForestRegressor(n_estimators=100) mod.fit(x, y) In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org. The feature contributions can be obtained using shap.TreeExplainer: import shap expl = shap.TreeExplainer(mod) phi_shap = expl.shap_values(x, y) The plot below displays the Random Forest and PENN contributions, which match closely, as is to be expected in the case of independent features: The results for the two methods are virtually identical. 
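One thing the SHAP comparison does not show is parameter uncertainty. Since the post already defines a sigma_model alongside mu_model, the posterior spread of each local parameter comes out of the same predict call. The snippet below is a small follow-on sketch (the 1.96 factor assumes the normal posterior used throughout the post) that checks how often the true simulated betas fall inside a rough 95% band:

sigma = sigma_model.predict(data, batch_size=n)

# Rough 95% credible band for each local parameter
lower = mu - 1.96 * sigma
upper = mu + 1.96 * sigma

# Fraction of observations whose true simulated parameter lies inside the band
coverage = ((betas >= lower) & (betas <= upper)).mean(axis=0)
print(coverage)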
In a follow-on post, I will show that while this holds for an independent process, in the dependent case, methods such as GAM and SHAP fail. Pfitzinger, Johann. 2021. “An Interpretable Neural Network for Parameter Inference.”
{"url":"https://unchartedml.com/index.php/predictive-analytics/inference-in-neural-networks-using-an-explainable-parameter-encoder-network","timestamp":"2024-11-11T18:01:35Z","content_type":"text/html","content_length":"344210","record_id":"<urn:uuid:db4bfdae-b265-41b8-a8a9-019366a40943>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00533.warc.gz"}
Many natural objects can be modelled by startlingly simple equations. For instance, there is a sextic surface (degree-6 polynomial equation in three variables) which resembles a heart. As such, it has been nicknamed the 'heart surface':

Now that we're approaching Hallowe'en, I decided to do a similar thing to create a pumpkin. Due to its approximate spherical shape, the spherical coordinate system seemed like a good idea. I then used trigonometric functions to create the undulations:

Similarly, a cylindrical coordinate system is perfect for modelling surfaces such as hats. This time, I attempted to recreate an infamous hat worn by certain students at the Part IA lectures in the Cambridge Mathematical Tripos:

If you can decipher the Mathematica code, you'll see that the equation (in cylindrical coordinates) is z = (3/2) Exp[-r^4] + (1/12) r^2 Cos[3 theta]. This was derived by modifying the ordinary Gaussian distribution until it had the correct shape. The trigonometric term causes the rim of the hat to vary sinusoidally. I then coloured the hat depending on radius to give the appearance of the black band. (A rough Python re-creation of this surface is sketched after the comments below.)

Responses to Surfaces

1. Hal Abelson wrote a book called "Turtle Geometry" which focuses on the characterization of shapes from the perspective of a Logo turtle drawing them. Presumably, the pumpkin and to a lesser extent the hat have similar local, generative rules to them – for example, the ripple at the edge of a hat (or some leaves) probably is caused by local patches having "too much area for their circumference". Hats are often made using forms, though, which would argue against a local, generative rule. The form might be turned on a lathe, which would impose some symmetry to it, but the profile could be completely arbitrary. However, the pumpkin's rules for "how to become pumpkin-shaped" are probably even simpler than these equations, though it might need a morphogenesis-specific vocabulary to see the simplicity.

2. Yes, turtle geometry is rather interesting, especially when used in conjunction with L-systems to generate recursive patterns (some simple fractals can be defined in this way). As for specifying a hat based on local curvature, this is indeed possible. I think that crinkly leaves and the brim of the hat are actually hyperbolic, i.e. too much circumference for the area, rather than the other way around. People have crocheted hyperbolic planes in precisely this way; I believe I included a picture of a hyperbolic crocheted coral reef on an early CP4space post. Morphogenesis is very complicated, and it is astonishing that something so complex as a human being, for instance, can be specified by a genome of three billion base pairs (equivalent to 750 megabytes — roughly the contents of a CD).
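Here is the Python re-creation mentioned above. It is not the original Mathematica code: the grid resolution, radial range and colour map are my own guesses, and only the published equation z = (3/2) exp(-r^4) + (1/12) r^2 cos(3 theta) is taken from the post.

import numpy as np
import matplotlib.pyplot as plt

r = np.linspace(0, 2, 200)
theta = np.linspace(0, 2 * np.pi, 200)
R, T = np.meshgrid(r, theta)

# The hat equation in cylindrical coordinates
Z = 1.5 * np.exp(-R**4) + (R**2 / 12) * np.cos(3 * T)

# Convert to Cartesian coordinates for plotting
X, Y = R * np.cos(T), R * np.sin(T)

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.plot_surface(X, Y, Z, cmap='copper')
plt.show()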
{"url":"https://cp4space.hatsya.com/2012/10/17/surfaces/","timestamp":"2024-11-04T09:06:56Z","content_type":"text/html","content_length":"65351","record_id":"<urn:uuid:2201d29a-1ebc-48ec-8be0-ad5f4582bd4b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00765.warc.gz"}
Basic Integral Calculus | Hire Someone To Do Calculus Exam For Me Basic Integral Calculus in An Intensity Calculus Lambert’s Calculus isn’t new. So it might sound a little strange (although maybe you accept it as necessary and optional?), but what is it? In a somewhat strange, perhaps metaphorical way. Is Integral Theory (or, if that’s the right word) a categorical theory? Does the reason for Integral Theory/Computation/Articulation/Calculus, which is where most of what is said about Mathematical Logic deals, relate to Math Cognitive Concepts and the way they actually relate to Analytic Logic? Or is the other way round? Are Mathematicians who first encountered these concepts all wrong, and that we see through other philosophy, and did nothing wrong? Actually, I don’t even know how to link a philosopher of mathematics to a philosopher of philosophy, merely a philosopher of science, and more specifically a philosopher of philosophy. And to recap: we’re talking about mathematicians, not philosophy. Or maybe what you’re going to call the Philosophers of Physics? If so, the philosophy of p– the philosophy of this thing is in this, for sure. If it’s in philosophy, it’s in fact in mathematics. If it’s in mathematics, in fact, it’s in physics. That’s what the Philosophers talk about. Which means… I can’t speak out all of this as I don’t fully understand what they’re talking about, I guarantee. So we have no particular meaning for this discussion, but it may be useful. Let me try to answer some of the questions you pose. Suppose we’re talking about Math Cognitive Concepts. Let’s say, are you really thinking about p, there’s some kind of elementary principle, sort of such as: Take out all the arrows on a line, and whenever you take a single arrow, the rest become countably many arrows. There’s this principle of choosing a line on p’ and choosing a point to take out if any. You think: “Let’s take out all the arrows on the line, and then we take the point $-A$.” Likewise what you think: “Is there a positive number $p(x),$ where $A$ is a piece of a positive real? After all, $0$, $-0,.. Pay Someone To Do University Courses Near Me ,….$?” You think: “Well let’s take $p$ rather than the “A” that’ll work.” You think: “Well there must be a positive number $a$ so far as we see.” You think: “But if we go beyond this point, we will see even three more.” Well, why wouldn’t that be zero? Have you ever seen so much more than this? Sure, what’s the point? What will you need to decide about this? And what do you think this something should do? The Point by Point Principle says lots of things, but it applies to so-called Newtonian Rime or the so-called Newton’s theory of Riemann sums. For the problem is, does the theory of sets contain the classical properties, as that of sets of integer countable numbers, or is the theory of sets containing real numbers, as Newtonian Rime would have us? Could we really say anything about sets containing riemann sums? So we sort of think about the fundamental point of all mathematics, it would be 1, 2, 3, 4 or 5. WhatBasic Integral Calculus “Dahlem” means “corrective” also means “corrective thought in law”. Basically these steps are used to generate a combinatorial problem, related to which should be solved given some conditions of convergence to a function. Note both “corrective” and “corrective thought” should be used and used as in modern mathematics that is often the essence of physics. 
The mathematical tools used to generate combinators are probably of the following types: “Concrete Proofs” that generalize the “corrective” idea; such as the concepts of probability, time, expectation, operator and Hilbert-Alberg, probability, probability, as well as that used on functions. Typically, the “corrective” ideas are for the example of calculus instead of number theory. The idea of “obvious” examples of combinators will be here; “corrective” goes the other way: those that are “obvious” and others that don’t. Biden thinks of the combinatorics as such: “The combinator is not just a form of calculus, it is a fact. There is but one method, more or less. A system of equations is called a proof hypothesis.” The problem most people concerned with mathematical combinatorics should solve is not likely to be solved by the logic of abstraction which is “differentials are an infinite set.. I Need Someone To Do My Homework For Me . and not just a chain with exponential time.” For this reason, many physicists don’t use abstract calculus for their mathematics. Similarly, many mathematicians forget that abstract theory is in fact “a property of mathematics.” Therefore, there are several problems problems that call for simplification: one which make the work of proofs be done less. But the “corrective” is not quite the “solution of the problem.” Instead it is a failure. this works in some way and in some way is true, however no one would expect this to work for us. Perhaps a similar question is about the reflection point of abstract calculus. This is something that some physicists actually solve using ‘obey not-so-obvious’ techniques. But not every mathematician is ready to solve this problem. There may be a time if more study will show the “corrective” (the results not in concept but in theory) methods work. If not, that is hard. Second, the proof problem is really that which generates combinators. The problem can ask about the conditions “must be in most” (even when making them the “corrective”, this is sometimes called “proof given”): if A b b. in D > T, then p(a,b) >0. In some sense we have: [A, B] >0 is a condition that must be any condition that a b b must have. Thus if possible we have: [A, B] =b p(A). And if one uses abstract calculus, one can apply the “obey” or the “rewrite” solutions to the problem. It will have to go through and fix a contradiction, some one or many at a time, before “right, right again” does or should work. Take My Test Online For Me To get some concrete proofs one can take that a proposition that a polynomial function must satisfy is one that is in principle either “a consequence of” a property of what it would be for this function, or something else. Moreover, not all problems are of this type: one has to have both a rigorous proof (such as some precise characterization of a prime) and a proof showing some property that can be done with an approximate proof: and here I’m looking at properties that can be done with an approximation procedure as any approximation procedure can. These problems are what make number theory so valuable for mathematical theory problems. Even for purely abstract things like geometry one has to search for browse this site non-optimal answer. 2nd, the very same idea is that for any function we can define: a power b (abcd ccd which may aswell be a “convex” function). which is its maximum upper bound but here it does not provide any “proof provided by the operator can”. 
Biden thinks of function only as “be the variable b which is the expression c and this expression b is a “Basic Integral Calculus. 1st edition, James Collins McPherson 1985, pp. 79–90, 31, 160–69. Peter Schildhaut’s paper “Integral Calculus” contains $45$ examples, including 4.4 functions, 4.5 functions, and 3.1 functions, in the $(a,b,c)$ plane, and therefore they belong to at most (and only) the same class as the well known integrals in the $(a,b,c)$ plane. See: I. L. Shaver (http://dx.doi.org/10.7517/21a93-0339-14f07-4 ), and see: W. West (http://dx. Does Pcc Have Online Classes? doi.org/10.7517/21a039-0994-8 ), in the case of the first step.1 Suppose the function $f(z)=(a+b+c)z^2=(a+b+c)z^3$. For the second step $(a+b+c)= (a+b+c)^2 + (a+b+c)^3$. If $f$ is a $n$-dimensional integral then $f$ is also a $n$-dimensional integral, but this holds only if or only if $n=p$ and $a+b-c=p$.2 We should mention that if $n>p$, then $f$ may be written as the real polynomial $f(-t)f (1+t)$ or $f(-1+t)f(1+t)$ with a first degree $n$, or in $f(z)$. The set $\{f\}$ is the function order of $z_1z_2$ with $z_1=z_2=0$ and $z_2=0$ and $z_1=z_2=1$ and $z_{n-p+2}=z_n=z_1=z_{p-n-1}+2$. For $n=p$, $\overline{f}\overline{f}=f(-t)f(1+t)$ and $\overline{f}\overline{f}(z_1+z_2)=f(z_1+z_2)$. Suppose that $\{f=z\}$ has no $n$-dimensional integral two-dimensional integral. Then $f$ is a $n$-dimensional integral if and only if $\overline{f}\overline{f}=f$ and $f(z)=f(2z)$. The polynomial classes $\{f=z\}$ are the class known as the Dedekind-type Integral Calculus. It is studied in [@Hofart:Japai:2002] and [@Hofart:Hjering:2003]. One general fact about the Dedekind-type Integral Calculus is that the formula has a formula is almost exact (in that it is not restricted to a rational number) for $j^{|i|+|j|+\frac{1}{2}}$ of order one $|\{x\}|_1$ for $z\in\overline{D}$, the length of the rational point in $D$. According to [@Hofart:Hjering:2003], the discriminant of the form $E(1-\epsilon) = (2\pi)|\{x\}|$ must be one, with the equality $E(1-\epsilon)=0$ for $\epsilon\leq |\{x\}:j^{|x|_1}-z_1|_1$ to be understood.2 One shows that there exists $\varepsilon$ such that $\{|x|_1\}:j^{|x|_1}-z_1|_1=\varepsilon|\{x\}|_1$ is a purely rational number such that $|j^{| i|+|j|+\frac{1}{2}}-z_1|_1$ is $|\
{"url":"https://hirecalculusexam.com/basic-integral-calculus","timestamp":"2024-11-05T00:45:52Z","content_type":"text/html","content_length":"105502","record_id":"<urn:uuid:f847ef66-5da7-4fe0-9646-8149bf2a470c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00885.warc.gz"}
Cauchy principal value This article is about a method for assigning values to improper integrals. For the values of a complex function associated with a single branch, see Principal value . For the negative-power portion of a Laurent series , see Principal part In mathematics, the Cauchy principal value, named after Augustin Louis Cauchy, is a method for assigning values to certain improper integrals which would otherwise be undefined. Depending on the type of singularity in the integrand f, the Cauchy principal value is defined as one of the following: 1) The finite number where b is a point at which the behavior of the function f is such that for any a < b and for any c > b (see plus or minus for precise usage of notations ±, ∓). 2) The infinite number and . In some cases it is necessary to deal simultaneously with singularities both at a finite number b and at infinity. This is usually done by a limit of the form 3) In terms of contour integrals of a complex-valued function f(z); z = x + iy, with a pole on a contour C. Define C(ε) to be the same contour where the portion inside the disk of radius ε around the pole has been removed. Provided the function f(z) is integrable over C(ε) no matter how small ε becomes, then the Cauchy principal value is the limit:^[1] In the case of Lebesgue-integrable functions, that is, functions which are integrable in absolute value, these definitions coincide with the standard definition of the integral. If the function f(z) is meromorphic, the Sokhotski–Plemelj theorem relates the principal value of the integral over C with the mean-value of the integrals with the contour displaced slightly above and below, so that the residue theorem can be applied to those integrals. Principal value integrals play a central role in the discussion of Hilbert transforms.^[2] Distribution theory Let be the set of bump functions, i.e., the space of smooth functions with compact support on the real line. Then the map defined via the Cauchy principal value as is a distribution. The map itself may sometimes be called the principal value (hence the notation p.v.). This distribution appears, for example, in the Fourier transform of the Heaviside step Well-definedness as a distribution To prove the existence of the limit for a Schwartz function, first observe that is continuous on , as and hence since is continuous and LHospitals rule applies. Therefore, exists and by applying the mean value theorem to , we get that As furthermore we note that the map is bounded by the usual seminorms for Schwartz functions. Therefore, this map defines, as it is obviously linear, a continuous functional on the Schwartz space and therefore a tempered distribution. Note that the proof needs merely to be continuously differentiable in a neighbourhood of and to be bounded towards infinity. The principal value therefore is defined on even weaker assumptions such as integrable with compact support and differentiable at 0. More general definitions The principal value is the inverse distribution of the function and is almost the only distribution with this property: where is a constant and the Dirac distribution. In a broader sense, the principal value can be defined for a wide class of singular integral kernels on the Euclidean space . 
If has an isolated singularity at the origin, but is an otherwise "nice" function, then the principal-value distribution is defined on compactly supported smooth functions by Such a limit may not be well defined, or, being well-defined, it may not necessarily define a distribution. It is, however, well-defined if is a continuous homogeneous function of degree whose integral over any sphere centered at the origin vanishes. This is the case, for instance, with the Riesz transforms. Consider the difference in values of two limits: The former is the Cauchy principal value of the otherwise ill-defined expression Similarly, we have The former is the principal value of the otherwise ill-defined expression The Cauchy principal value of a function can take on several nomenclatures, varying for different authors. Among these are: as well as P.V., and V.P. See also 1. ↑ Ram P. Kanwal (1996). Linear Integral Equations: theory and technique (2nd ed.). Boston: Birkhäuser. p. 191. ISBN 0-8176-3940-3. 2. ↑ Frederick W. King (2009). Hilbert Transforms. Cambridge: Cambridge University Press. ISBN 978-0-521-88762-5. This article is issued from - version of the 6/12/2016. The text is available under the Creative Commons Attribution/Share Alike but additional terms may apply for the media files.
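As a quick numerical illustration of the symmetric-excision limit described above (the function and interval are my own choice), the snippet below evaluates the principal value of the integral of dx/x from −1 to 2, whose exact value is ln 2 ≈ 0.693: the two one-sided integrals individually blow up as the excised interval shrinks, but their sum converges.

import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / x

for eps in [1e-1, 1e-2, 1e-3, 1e-4, 1e-5]:
    left, _ = quad(f, -1.0, -eps)    # integral on [-1, -eps]
    right, _ = quad(f, eps, 2.0)     # integral on [eps, 2]
    print(eps, left + right)         # approaches log(2) = 0.6931...

print(np.log(2))

(SciPy's quad also accepts weight='cauchy' together with wvar for principal-value integrals of the form f(x)/(x − c), though the sketch above deliberately avoids relying on that interface.)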
{"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Cauchy_principal_value.html","timestamp":"2024-11-10T18:11:25Z","content_type":"text/html","content_length":"29422","record_id":"<urn:uuid:d8669920-e6d8-42c1-9ef5-7cb269bf691b>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00511.warc.gz"}
HEX'AGON, n. [Gr. six and an angle.] In geometry, a figure of six sides and six angles. If the sides and angles are equal, it is a regular hexagon. The cells of honeycomb are hexagons, and it is remarkable that bees instinctively form their cells of this figure, which fills any given space without any interstice or loss of room.
{"url":"https://1828.mshaffer.com/d/word/hexagon/simple/","timestamp":"2024-11-02T15:20:20Z","content_type":"text/html","content_length":"1683","record_id":"<urn:uuid:46af40a1-fac3-42c9-9879-4991e1d5a1ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00727.warc.gz"}
Scalar radius of the pion and zeros in the form factor
Jose A. Oller; Luis Roca
Date: 31 Mar 2007
Subject: hep-ph (High Energy Physics - Phenomenology)
Abstract: The quadratic pion scalar radius, <r^2>_s^π, plays an important role for present precise determinations of ππ scattering. Recently, it has been the object of controversy between Ynduráin, on the one hand, and Ananthanarayan, Caprini, Colangelo, Gasser and Leutwyler (ACCGL), on the other. While the former, based on an Omnès representation of the scalar form factor, obtains <r^2>_s^π = 0.75 ± 0.07 fm^2, the latter, from a solution of the Muskhelishvili–Omnès equations with ππ and KK̄ as coupled channels, end with <r^2>_s^π = 0.61 ± 0.04 fm^2. A large discrepancy between the two values, given the precision, then results. We reanalyze Ynduráin's method and show that for some S-wave, null isospin (I = 0) T-matrices the scalar form factor has a zero previously overlooked. Once this is accounted for, the resulting <r^2>_s^π is compatible with the value of ACCGL. Following the corrected version of Ynduráin's approach we perform a new determination and obtain <r^2>_s^π = 0.63 ± 0.05 fm^2. The main source of error is present experimental uncertainties in low-energy S-wave I = 0 ππ phase shifts. Another important contribution to the error is the not yet settled asymptotic phase of the scalar form factor from QCD.
Source: arXiv:0704.0039
{"url":"http://science-advisor.net/article/0704.0039","timestamp":"2024-11-03T20:21:34Z","content_type":"text/html","content_length":"22330","record_id":"<urn:uuid:8dd72211-99b7-4c8a-bf76-396b7cc0843b>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00854.warc.gz"}
In the entry we discussed four locative relations—exact, weak, entire, and pervasive location—and we used exact location to define the others. This yields System 1 below. Here we sketch several other systems and discuss some of their consequences. System 1: Primitive Exact Location For convenience, we set out definitions again here: \(x\) is weakly located at \(y\) \(=_{\df}\) \(x\) is exactly located at something that overlaps \(y\). \[\WKL(x, y) =_{\df} \exists z[L(x, z) \amp O(z, y)]\] \(x\) is entirely located at \(y\) \(=_{\df}\) \(x\) is exactly located at some part of \(y\). \[\EL(x, y) =_{\df} \exists z[L(x, z) \amp P(z, y)]\] \(x\) is pervasively located at \(y\) \(=_{\df}\) \(x\) is exactly located at something of which \(y\) is a part. \[PL(x, y) =_{\df} \exists z[L(x, z) \amp P(y, z)]\] One important fact about (DS1.1) is that it makes • Exactness If a thing is weakly located somewhere, then it’s exactly located somewhere. \[\forall x\forall y[\WKL (x, y) \rightarrow \exists zL(x, z)]\] an analytic and hence necessary truth (Parsons 2007). This is important because there are reasons to doubt that Exactness is necessary. At least two exotic cases pose problems for Exactness. The first is • Pointy objects in gunky space (Gilmore 2006: 203; Parsons 2007: 207–9). 1. All regions are extended and gunky and decompose into smaller (but still extended and still gunky) regions, 2. object \(o_p\) is an unextended, point-like, located entity, and 3. nothing is located any non-region. Since \(o_p\) is point-like, it is too small to be exactly located at any extended region, but it should still be weakly located at many regions—in particular, at each in a sequence of nested regions that ‘converge onto’ it. So, if it is possible that (i)–(iii) are all true, then it is possible that, contrary to Exactness, a thing is weakly located somewhere without being exactly located anywhere. A second problem case is • Almond in the void (Kleinschmidt 2016). An almond lies within an extended simple region larger than the almond. There are no regions as small as, or smaller than, the almond. The almond is not located at any non-regions. Since the almond lies within the region, it should count as being weakly located at the region. Since the region is larger than the almond, the almond is not exactly located at the region. Since there are no regions that are the same size as the almond, the almond is not exactly located anywhere. Therefore, we have another apparent case of weak location without exact location, contrary to System 2: Primitive Weak Location, with Parsons-style Definitions A system in which weak location is primitive may fare better with the two cases above. We will consider two such systems, the first of which traces to Parsons (2007). Its core is the following definition of exact location: \(x\) is exactly located at \(y\) \(=_{\df}\) \(x\) is weakly located at all and only those entities that overlap \(y\). \[L(x, y) =_{\df} \forall z[\WKL(x, z) \leftrightarrow O(y, z)]\] The remainder of System 2 results from dropping (DS1.1) and retaining (DS1.2) and (DS1.3). One potential virtue of System 2 is that it does not make Exactness analytic. Having dropped the definition of weak location in terms of exact location, nothing forces us to deny the possibility of something that is weakly located at certain regions while not being exactly located anywhere. This is just what we wanted to say about the pointy object \(o_p\) in gunky space. So here System 2 improves on System 1. 
System 2 does not help, however, with Almond in the void. Since the almond is weakly located at all and only the regions that overlap the extended simple region, (DS2.1) yields the unwanted verdict that the almond is exactly located at the region, and (DS1.3) then yields the unwanted verdict that it is also pervasively located there. The verdicts are unwanted because in both cases the almond is intuitively too small to be exactly and pervasively located at the relevant region. One may suggest to define entire location directly in terms of weak location as ‘\(y\) overlaps all of \(x\)’s weak locations’. According to this definition, the almond is entirely located at the region (the same holds for (DS1.2)). But then it becomes implausible not to define ‘\(x\) is pervasively located at \(y \)’ as ‘\(x\) is weakly located at every region that overlaps \(y\)’, which yields (again) the unwanted verdict that the almond is pervasively located at the region. So, System 2, and minor variants thereof, seem unable to handle Almond in the void. A second problem for System 2 and (DS2.1) arises from the fact that they make • Quasi-functionality Nothing has two different exact locations, unless each of those locations overlaps exactly the same things as the other—i.e., unless they mereologically coincide. \[\forall x\ forall y\forall z[(L(x, y) \amp L(x, z)) \rightarrow CO(y, z)]\] an analytic and hence necessary truth. There are many who would deny Quasi-functionality, and there are others who would deny that it is necessary. (It is worth noting that in the presence of a suitably extensional mereology Quasi-functionality entails full-blown Functionality). For now, we can consider a third exotic problem case: • Time traveling Suzy. As an adult, Suzy travels back in time and visits herself as an infant. Time traveling, adult Suzy stands near the crib in which Baby Suzy sleeps. Adult Suzy is exactly located at a certain adult-sized region, \(r_A\), and Baby Suzy is exactly located at a certain baby-sized region, \(r_B\). The two regions, \(r_A\) and \(r_B\), do not even overlap, much less coincide. And yet one thing, Suzy, is exactly located at each of them. (We borrow the character of Suzy from Vihvelin 1996). As with the two previous cases, not everyone will grant the possibility of Time traveling Suzy. Some will deny the possibility of backward time travel or self-visitation; others will allow it but deny that it involves single thing having two exact locations. However, for those who grant the possibility of the case as described, it generates an argument against System 2. System 3: Primitive Weak Location, with Eagle-style Definitions The fact that System 2 entails Quasi-functionality motivates the following system of definitions due to Eagle (2010a, 2016a,b). To be precise, Eagle starts with a relation he calls “occupation” and stipulates that an entity occupies a region iff the entity can, in whole or in part, be found at that region. We take this relation to be weak location. Indeed, Eagle (2019) considers the general consequences of taking weak location as primitive, independently of particular definitions of other locative notions in terms of it. One possibility is to define Containment \((\CN)\), Filling \((F) \), and Exact Location as follows. (For a thorough assessment of Eagle’s theory of location see Costa and Calosi (2022) and Payton 2023.) \(x\) is contained in \(y =_{\df}\) each part of \(x\) occupies a part of \(y\). 
\[\CN(x, y) =_{\df} \forall w [P(w, x) \rightarrow \exists z [P(z, y) \amp \WKL (w, z)]]\] \(x\) fills \(y =_{\df}\) each part of \(y\) is occupied by \(x\). \[F(x, y) =_{\df} \forall w [P(w, y) \rightarrow \WKL(x, w)]\] \(x\) is exactly located at \(y\) \(=_{\df}\) \(x\) is contained in \(y\), \(x\) fills \(y\), and there are no proper parts of \(y\) that \(x\) is contained in and fills. \[ L(x, y) =_{\df}\ &\CN (x, y) \amp F(x, y) \amp {}\\ &\neg\exists w [PP(w, y) \amp \CN(x, w) \amp F(x, w)] \] System 3 entails neither Exactness nor Quasi-Functionality. Failure of Exactness entails that it can handle pointy objects in gunky space. One might be tempted to run the same argument for Time Travelling Suzy. Things are however a little more nuanced. Suppose that Adult Suzy at \(r_A\) has a part that Baby Suzy at \(r_B\) does not have, and that Baby Suzy at \(r_B\) has a part that Adult Suzy at \(r_A\) does not have. If so, Suzy will be only contained at the sum of \(r_A\) and \(r_B\) (i.e., \(r_A +r_B\)) and will be uniquely exactly located there. Intuitively, this is not the correct result. Even in the absence of mereological change a slightly modified Time Travelling Suzy scenario raises problems of over-generation of exact locations. Suppose Suzy travels back in time to visit herself and is exactly located at two congruent regions \(r_A\) and \(r_B\), as before. We stipulate that \(r_A\) is the sum of two regions \(r_A\)-left and \(r_A\)-right. The same for \(r_B \). Furthermore, Suzy\(_A\) is the sum of Suzy\(_A\)-left and Suzy\(_A\)-right, that are exactly located at \(r_A\)-left and \(r_A\)-right respectively. Now consider the region \(r\) which is the sum of \(r_A\)-left and \(r_B\)-right \((r = r_A\)-left \(+ r_B\)-right). The definitions above entail that Suzy is exactly located at \(r_A\), and at \(r_B\), but also at the disconnected region \(r\). What about the Almond in the void? The almond is contained and fills the (larger) simple region. Hence the system delivers that the almond is exactly located at the region. Finally, there is another case that spells trouble for System 3, namely: • Nested Multilocation (adapted from Kleinschmidt 2011, discussed in the main text). Clifford is a large statue of a dog, made of small statues. Clifford shrinks, travels back in time, and is given the name ‘Odie’. Odie, together with many other small statues, is used to build Clifford. Odie is exactly located at \(r_S\), a small region; Clifford is exactly located at \(r_L\), a large region; and \(r_S\) is a proper part of \(r_L\). If we assume that Odie is identical to Clifford, we get the result that a single thing is exactly located at two different regions, one of which is a proper part of the other. (DS3.3) rules this out, which might strike some readers as a drawback. Interestingly, this system rules out extended simples by definition—Costa and Calosi (2022). System 4: Primitive Entire Location Next, we consider a system of definitions (due to Correia 2022) on which entire location is primitive. \(x\) is exactly located at \(y\) \(=_{\df}\) \(x\) is entirely located at \(y\) but not at any proper part of \(y\) (Correia 2022: 567). \[L(x, y) =_{\df} \EL(x, y) \amp {\sim} \exists z[\PP(z, y) \amp \EL(x, z)]\] \(x\) is weakly located at \(y\) \(=_{\df}\) \(x\) is entirely located at some region \(z\) such that for any \(w\), if \(w\) is a part of \(z\) and \(x\) is entirely located at \(w\), then \(w\) overlaps \(y\) (Correia 2022: 568). 
\[\WKL(x, y) =_{\df} \exists z [\EL(x, z) \amp \forall w [(P(w, z) \amp \EL(x, w)) \rightarrow O(w, y)]]\]

Correia (2022) goes on to define pervasive location in terms of entire location; we leave this out to save space. What is important here is to note that System 4 handles both Time traveling Suzy and Pointy objects in gunky space.

Start with the former. Intuitively, Suzy is entirely located at \(r_A\) but not at any of its proper parts. If that is correct, then (DS4.1) yields the result that Suzy is exactly located at \(r_A\), as desired. Parallel comments go for \(r_B\). So, Suzy has two different, disjoint, exact locations, as desired.

Turning now from (DS4.1) to (DS4.2), one might wonder what could justify adopting the rather complicated definition instead of the simpler definition: '\(x\) is weakly located at \(y\)' as '\(y\) overlaps every region at which \(x\) is entirely located'. Correia notes that the simpler definition would mishandle cases like Time traveling Suzy. Consider some region \(r_C\) that overlaps \(r_A\), the adult-sized region, but not \(r_B\), the baby-sized region. Region \(r_C\) does not overlap every region at which Suzy is entirely located. For example, \(r_C\) does not overlap \(r_B\). So, the simpler definition yields the intuitively incorrect result that Suzy is not weakly located at \(r_C\).

This might suggest that we should define '\(x\) is weakly located at \(y\)' as '\(y\) overlaps some region at which \(x\) is entirely located'. After all, while \(r_C\) does not overlap every entire location of Suzy, it does overlap at least one—for example, \(r_A\). But this would overgenerate cases of weak location. Take some small cubical region 20 km away from Suzy and her crib. Suzy is not weakly located at that cubical region. But according to the latest proposed definition, she is, since the cubical region does overlap some entire location of Suzy—for example, the exact location of the whole Milky Way Galaxy, which includes \(r_A\) and \(r_B\) as proper parts.

Correia's own (DS4.2) yields the correct verdict. According to that definition, \(r_C\)'s overlapping some entire location of Suzy is not sufficient for Suzy to be weakly located at \(r_C\). Nor is it necessary that \(r_C\) overlaps every entire location of Suzy. Instead, what is necessary and sufficient is that there be a region \(z\) at which Suzy is entirely located every part \(w\) of which is such that if Suzy is entirely located at \(w\), then \(w\) overlaps \(r_C\). It is plausible that there are such regions \(z\). Take region \(r_A\). Suzy is entirely located at it but not at any of its proper parts. And \(r_A\) overlaps \(r_C\). So every part of \(r_A\) at which Suzy is entirely located \((r_A\) alone) overlaps \(r_C\). Or consider some proper superregion of \(r_A\)—call it \(r_A^+\)—that does not have \(r_B\) as a part. Again every part of \(r_A^+\) at which Suzy is entirely located (every part of \(r_A^+\) that has \(r_A\) as a part) overlaps \(r_C\).

Now we turn to System 4's treatment of Pointy objects in gunky space. The point-like object \(o_p\) is entirely located at many regions. But—in light of the gunky structure of space in this case—every region at which it is entirely located has other such regions as proper parts. So, by (DS4.1), \(o_p\) is not exactly located anywhere, as desired. (DS4.2) also yields the correct verdict that \(o_p\) is weakly located at many regions, but we leave this for the reader to show.
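To keep the three candidate definitions of weak location in view, here is a compact side-by-side restatement in the entry's notation (the subscripted labels \(\WKL_{\forall}\) and \(\WKL_{\exists}\) are ours, introduced only to refer to the two rejected definitions discussed above):

\[\WKL_{\forall}(x, y) =_{\df} \forall z[\EL(x, z) \rightarrow O(z, y)]\]
\[\WKL_{\exists}(x, y) =_{\df} \exists z[\EL(x, z) \amp O(z, y)]\]

The first is too demanding (it makes Suzy not weakly located at \(r_C\)), the second too permissive (it makes Suzy weakly located at the distant cubical region), while (DS4.2) threads the needle between them.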
Two other cases we considered above might be seen as posing problems for System 4. One is Nested Multilocation. Correia (2022: 567) notes that (DS4.1) rules this out; we leave it for the reader to show.

The second potentially problematic case for System 4 is Almond in the void. As Correia notes, (DS4.1) yields the result that the almond is exactly located at the region. For the almond is entirely located there, and it is not entirely located at any proper part of that region. Correia (2022: 581) embraces this outcome, but some readers may find it implausible.

System 5: Primitive Plural Pervasive Location

A fifth system of definitions may improve on the four considered so far. The fifth system (adapted with modification from Loss 2019 and 2023) is based on a primitive locative relation that we have not yet mentioned: plural pervasive location. The fifth system also crucially relies on the assumption that regions are located at themselves (Casati & Varzi 1999: 121). Here is an informal gloss of the new relation:

• Plural pervasive location: one or more entities \(xx\) are plurally pervasively located at region \(y\) if and only if:
1. \(xx\) collectively completely fill \(y\),
2. each of \(xx\) 'helps' to fill \(y\), that is, each of \(xx\) is at least weakly located at \(y\), and
3. if there is just one of \(xx\), then that thing has a size that is at least as great as the size of \(y\) (Loss 2019, 2023).

In symbols: \(\PPL(xx, y)\).

The four locative relations we have considered so far are all, we assume, singular in both argument places. Plural pervasive location, however, has a plural argument place for occupants. Its first argument place can take either a single thing or more things collectively. For examples, return to Figure 1 in the main text. While neither \(o_1\) alone nor \(o_2\) alone completely fills \(r_3\), taken together \(o_1\) and \(o_2\) do completely fill it, so \(o_1\) and \(o_2\) are plurally pervasively located at \(r_3\). But one should also allow for singular cases of this same relation: one can say that \(o_1\) is plurally pervasively located at \(r_1\). Further, one should allow for intuitively 'overdetermined' cases of plural pervasive location and say that \(o_1\) and \(o_3\) are plurally pervasively located at \(r_5\)—though each one on its own is also plurally pervasively located there. We do not, however, allow for cases in which some objects \(xx\) are plurally pervasively located at \(y\) even though one of \(xx\) is not even weakly located at \(y\). For example, although \(o_1\) and \(o_3\) are plurally pervasively located at \(r_5\), \(o_1\) and \(o_2\) are not, because \(o_2\) is not even weakly located at \(r_5\): it does not help to fill it.

The final clause in our gloss of plural pervasive location is needed to ensure that we are attending to a non-additive plural pervasive location relation. Let object \(o_m\) be a square, one square meter in area. Suppose that \(o_m\) is multilocated: it is exactly located at the square region \(r_7\) and also exactly located at the square region \(r_8\). These regions do not overlap. The fusion of \(r_7\) and \(r_8\) \((r_7 + r_8)\) is a rectangle, two square meters in area. Must \(o_m\) be plurally pervasively located at \(r_7 + r_8\)? There seem to be two relations in the vicinity of plural pervasive location, and the answer to the foregoing question depends on which relation we are asking about. One of them, call it \(\PPL_A\), obeys an additivity principle:

• \(\boldsymbol{\PPL_A}\) Additivity.
For any \(x\), any \(yy\), and any \(z\), if \(z\) is a fusion of \(yy\) and \(x\) is plurally pervasively located\(_A\) at each of \(yy\), then \(x\) is plurally pervasively located\(_A\) at \(z\).

If our question about \(r_7 + r_8\) was about \(\PPL_A\), then the answer is 'Yes'. Object \(o_m\) is exactly located at \(r_7\), so it is plurally pervasively located there. Likewise for \(r_8\). So, given \(\PPL_A\) Additivity, \(o_m\) is plurally pervasively located\(_A\) at their fusion, \(r_7 + r_8\).

However, it seems that we can also grasp a PPL-like relation, call it \(\PPL_N\), that is not additive in this way. If our question is about \(\PPL_N\), then the answer is presumably 'No'. An object bears \(\PPL_N\) only to those regions that are the same size or smaller than the object. When an object is multilocated, it may be exactly located at each of several regions but not at their fusion. Likewise, such an object may be plurally pervasively located\(_N\) at each of several regions but not their fusion. This seems to be the case with \(o_m\). It is one square meter in area: that is its one and only size. It is not, for example, two square meters in area. The region \(r_7 + r_8\), on the other hand, is two square meters in area. Since this is not the same size or smaller than the size of \(o_m\), we should say that \(o_m\) is not plurally pervasively located\(_N\) at \(r_7 + r_8\). Object \(o_m\) is not big enough to be plurally pervasively located\(_N\) at \(r_7 + r_8\).

This completes our preamble. If we invoke the 'is one of' predicate from plural logic, symbolized as '\(\prec\)', then we can state System 5 as follows:

\(x\) is exactly located at \(y\) \(=_{\df}\) \(x\) is plurally pervasively located at \(y\) but not at anything that has \(y\) as a proper part (Loss 2023).
\[L(x, y) =_{\df} \PPL(x, y) \amp \forall z[(\PPL(x, z) \amp P(y, z)) \rightarrow z=y]\]

\(x\) is weakly located at \(y\) \(=_{\df}\) \(x\) is one of some things that are plurally pervasively located at \(y\).
\[\WKL(x, y) =_{\df} \exists xx[x\prec xx \amp \PPL(xx, y)]\]

Notice that both exact and weak location are defined so that both of their argument positions are singular, though they are defined in terms of plural pervasive location. It is also worth noting that while System 5 adopts Loss's definition of exact location, it does not adopt his complex definition of weak location. The definition we consider here is simpler.

Unlike Systems 1–4, System 5 handles Pointy objects in gunky space, Almond in the void, and Time traveling Suzy. It does not, however, help with Nested Multilocation. We will consider these cases one by one.

Consider first an ordinary case of exact location: for example, object \(o_1\) and region \(r_1\), as depicted in Figure 1. Object \(o_1\) does completely fill \(r_1\) all by itself, and it is at least as large as \(r_1\), so we should say that \(o_1\) is plurally pervasively located at \(r_1\). Further, it should be clear that while \(o_1\) is plurally pervasively located at other regions (e.g., \(r_5\)), none of them have \(r_1\) as a proper part. So (DS5.1) counts \(o_1\) as being exactly located at \(r_1\).

Now consider Pointy objects in gunky space. The pointy object \(o_p\) does not completely fill any region on its own. It is too small. So, it is not plurally pervasively located at any region, and hence, according to (DS5.1), it is not exactly located at any region. Is \(o_p\) weakly located at any region?
Well, consider some solid, ball-shaped region \(r^*\) with \(o_p\) intuitively at its center. Although \(o_p\) by itself does not completely fill \(r^*\), \(o_p\) and \(r^*\), collectively, do completely fill \(r^*\), given the assumption that regions are located at themselves (Casati & Varzi 1999: 121). So, we should say that \(o_p\) and \(r^*\) are plurally pervasively located at \(r^*\), hence that \(o_p\) is one of some things that are plurally pervasively located at \(r^*\). In that case, (DS5.2) says that \(o_p\) is weakly located at \(r^*\), as desired. The pointy object is weakly located at regions such as \(r^*\) but not exactly located anywhere.

Almond in the void is handled in a similar fashion. The almond does not completely fill the extended simple region on its own, but the region and the almond, taken together, do fill the region. So, the almond is weakly but not exactly located at the region.

Now consider Time traveling Suzy. We wanted to be able to say that Suzy is exactly located at the adult-sized region \(r_A\) and also at the baby-sized region \(r_B\). Start with \(r_A\). Suzy on her own completely fills \(r_A\), and her size is at least as great as the size of \(r_A\). Parallel remarks go for \(r_B\). So, Suzy is plurally pervasively located at \(r_A\) and also at \(r_B\). Is she plurally pervasively located at anything that has \(r_A\) as a proper part? It is tempting to suggest that Suzy is plurally pervasively located at the fusion of \(r_A\) and \(r_B\), \(r_A + r_B\). In some sense, she does completely fill \(r_A + r_B\). However, she is not big enough to fill that fusion in the relevant sense. To be plurally pervasively located at \(r_A + r_B\), Suzy must have a size that is at least as great as the size of \(r_A + r_B\). Loss would say that Suzy does not have such a size. At most, Suzy has two sizes: the first is her adult volume, \(v_A\), and the second is her baby volume, \(v_B\). Neither of these sizes is as great as the size of \(r_A + r_B\). Crucially, Suzy does not have a third size: that of an adult together with a baby. If this is correct, then we should say that Suzy is not plurally pervasively located at \(r_A + r_B\) or (for parallel reasons) at any other region that has \(r_A\) as a proper part. And in that case, (DS5.1) counts Suzy as being exactly located at \(r_A\). Parallel remarks go for \(r_B\). So System 5 allows us to say that Suzy is exactly located at \(r_A\) and also at the disjoint region \(r_B\).

Finally, consider Nested Multilocation. Here System 5 offers us no help. The desired result was that a single thing, Clifford (which is identical to Odie), is exactly located at two regions, one of which is a proper part of the other. This is immediately ruled out by (DS5.1).

The table below sums up the results.

System                            | Pointy objects in gunky space | Almond in the void | Time traveling Suzy | Nested multilocation
1. Prim. Exact Loc.               | No                            | No                 | Yes                 | Yes
2. Prim. Weak Loc.                | Yes                           | No                 | No                  | No
3. Prim. Weak Loc. (Eagle-style)  | Yes                           | No                 | Problematic         | No
4. Prim. Entire Loc.              | Yes                           | No                 | Yes                 | No
5. Prim. Plural Pervasive Loc.    | Yes                           | Yes                | Yes                 | No
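For completeness, here is the short argument—our own spelling-out of the claim behind the last column of the table, in the entry's notation—that (DS5.1) rules out Nested Multilocation. Suppose \(L(x, r_S)\) and \(L(x, r_L)\) with \(PP(r_S, r_L)\). Then:

\[L(x, r_S) \rightarrow \forall z[(\PPL(x, z) \amp P(r_S, z)) \rightarrow z = r_S]\]
\[L(x, r_L) \rightarrow \PPL(x, r_L)\]

Since \(PP(r_S, r_L)\) entails \(P(r_S, r_L)\), instantiating \(z\) with \(r_L\) in the first line gives \(r_L = r_S\), contradicting the assumption that \(r_S\) is a proper part of \(r_L\).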
{"url":"https://plato.stanford.edu/Entries/location-mereology/systems-of-location.html","timestamp":"2024-11-06T10:47:35Z","content_type":"text/html","content_length":"42051","record_id":"<urn:uuid:5927f8ba-cd4e-474c-b1ec-1115b1d0cdca>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00486.warc.gz"}
SuiteSparse Matrix Collection

Group DIMACS10
10th DIMACS Implementation Challenge
Updated July 2012

As stated on their main website ( http://dimacs.rutgers.edu/Challenges/ ), the "DIMACS Implementation Challenges address questions of determining realistic algorithm performance where worst case analysis is overly pessimistic and probabilistic models are too unrealistic: experimentation can provide guides to realistic algorithm performance where analysis fails."

For the 10th DIMACS Implementation Challenge, the two related problems of graph partitioning and graph clustering were chosen. Graph partitioning and graph clustering are among the aforementioned questions or problem areas where theoretical and practical results deviate significantly from each other, so that experimental outcomes are of particular interest.

Problem Motivation

Graph partitioning and graph clustering are ubiquitous subtasks in many application areas. Generally speaking, both techniques aim at the identification of vertex subsets with many internal and few external edges. To name only a few, problems addressed by graph partitioning and graph clustering algorithms are:

* What are the communities within an (online) social network?
* How do I speed up a numerical simulation by mapping it efficiently onto a parallel computer?
* How must components be organized on a computer chip such that they can communicate efficiently with each other?
* What are the segments of a digital image?
* Which functions are certain genes (most likely) responsible for?

Challenge Goals

* One goal of this Challenge is to create a reproducible picture of the state-of-the-art in the area of graph partitioning (GP) and graph clustering (GC) algorithms. To this end we are identifying a standard set of benchmark instances and generators.
* Moreover, after initiating a discussion with the community, we would like to establish the most appropriate problem formulations and objective functions for a variety of applications.
* Another goal is to enable current researchers to compare their codes with each other, in hopes of identifying the most effective algorithmic innovations that have been proposed.
* The final goal is to publish proceedings containing results presented at the Challenge workshop, and a book containing the best of the proceedings papers.

Problems Addressed

The precise problem formulations need to be established in the course of the Challenge. The descriptions below serve as a starting point.

* Graph partitioning: The most common formulation of the graph partitioning problem for an undirected graph G = (V,E) asks for a division of V into k pairwise disjoint subsets (partitions) such that all partitions are of approximately equal size and the edge-cut, i.e., the total number of edges having their incident nodes in different subdomains, is minimized. The problem is known to be NP-hard.
* Graph clustering: Clustering is an important tool for investigating the structural properties of data. Generally speaking, clustering refers to the grouping of objects such that objects in the same cluster are more similar to each other than to objects of different clusters. The similarity measure depends on the underlying application. Clustering graphs usually refers to the identification of vertex subsets (clusters) that have significantly more internal edges (to vertices of the same cluster) than external ones (to vertices of another cluster).
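As a concrete illustration of the edge-cut objective described above, here is a short MATLAB sketch (our own, not part of the DIMACS10 materials) that computes the cut of a k-way partition, given a symmetric binary adjacency matrix S and a partition vector part, where part(i) is the block number of vertex i:

% edge cut of a k-way partition (illustrative sketch only)
% S    : n-by-n symmetric, binary, zero-diagonal adjacency matrix
% part : n-by-1 vector of block numbers in 1..k
cut = 0 ;
for k = 1:max (part)
    in_k = (part == k) ;
    cut = cut + nnz (S (in_k, ~in_k)) ;   % edges leaving block k
end
cut = cut / 2 ;                           % each cut edge was counted from both sides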
There are 12 data sets in the DIMACS10 collection: clustering: real-world graphs commonly used as benchmarks coauthor: citation and co-author networks Delaunay: Delaunay triangulations of random points in the plane dyn-frames: frames from a 2D dynamic simulation Kronecker: synthetic graphs from the Graph500 benchmark numerical: graphs from numerical simulation random: random geometric graphs (random points in the unit square) streets: real-world street networks Walshaw: Chris Walshaw's graph partitioning archive matrix: graphs from the UF collection (not added here) redistrict: census networks star-mixtures : artificially generated from sets of real graphs Some of the graphs already exist in the UF Collection. In some cases, the original graph is unsymmetric, with values, whereas the DIMACS graph is the symmetrized pattern of A+A'. Rather than add duplicate patterns to the UF Collection, a MATLAB script is provided at http://www.cise.ufl.edu/research/sparse/dimacs10 which downloads each matrix from the UF Collection via UFget, and then performs whatever operation is required to convert the matrix to the DIMACS graph problem. Also posted at that page is a MATLAB code (metis_graph) for reading the DIMACS *.graph files into MATLAB. clustering: Clustering Benchmarks These real-world graphs are often used as benchmarks in the graph clustering and community detection communities. All but 4 of the 27 graphs already appear in the UF collection in other groups. The DIMACS10 version is always symmetric, binary, and with zero-free diagonal. The version in the UF collection may not have those properties, but in those cases, if the pattern of the UF matrix is symmetrized and the diagonal removed, the result is the DIMACS10 DIMACS10 graph: new? UF matrix: --------------- ---- ------------- clustering/adjnoun Newman/adjoun clustering/as-22july06 Newman/as-22july06 clustering/astro-ph Newman/astro-ph clustering/caidaRouterLevel * DIMACS10/caidaRouterLevel clustering/celegans_metabolic Arenas/celegans_metabolic clustering/celegansneural Newman/celegansneural clustering/chesapeake * DIMACS10/chesapeake clustering/cnr-2000 LAW/cnr-2000 clustering/cond-mat-2003 Newman/cond-mat-2003 clustering/cond-mat-2005 Newman/cond-mat-2005 clustering/cond-mat Newman/cond-mat clustering/dolphins Newman/dolphins clustering/email Arenas/email clustering/eu-2005 LAW/eu-2005 clustering/football Newman/football clustering/hep-th Newman/hep-th clustering/in-2004 LAW/in-2004 clustering/jazz Arenas/jazz clustering/karate Arenas/karate clustering/lesmis Newman/lesmis clustering/netscience Newman/netscience clustering/PGPgiantcompo Arenas/PGPgiantcompo clustering/polblogs Newman/polblogs clustering/polbooks Newman/polbooks clustering/power Newman/power clustering/road_central * DIMACS10/road_central clustering/road_usa * DIMACS10/road_usa the following graphs were added on July 2012: uk-2002 was 'added' on July 2012 to the dimacs10 MATLAB interface, but it already appears as the LAW/uk-2002 matrix. uk-2007-05 is in the DIMACS10 collection but is not yet added here, because it's too large for the file format of the UF collection. coauthor: Citation Networks These graphs are examples of social networks used in R. Geisberger, P. Sanders, and D. Schultes. Better approximation of betweenness centrality. In 10th Workshop on Algorithm Engineering and Experimentation, pages 90-108, San Francisco, 2008. SIAM. Delaunay: Delaunay Graphs These files have been generated as Delaunay triangulations of random points in the unit square. 
Engineering a scalable high quality graph partitioner, M. Holtgrewe, P. Sanders, C. Schulz, IPDPS 2010

dyn-frames: Frames from 2D Dynamic Simulations

These files have been created with the generator described in Oliver Marquardt, Stefan Schamberger: Open Benchmarks for Load Balancing Heuristics in Parallel Adaptive Finite Element Computations. In Proc. International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA 2005), Volume 2, pp. 685-691. CSREA Press 2005, ISBN 1-932415-59-9. The graphs are meshes taken from individual frames of a dynamic sequence that resembles two-dimensional adaptive numerical simulations. Smaller versions of these files (and their dynamic sequences as videos) can be found on Stefan Schamberger's website ( http://www.upb.de/cs/schaum/benchmark.html ) dedicated to these benchmarks. The files presented here are the frames 0, 10, and 20 of the sequences, respectively.

kronecker: Kronecker Generator Graphs

The original Kronecker files contain self-loops and multiple edges. These properties are also present in real-world data sets. However, some tools cannot handle these "artifacts" at the moment. That is why we present "cleansed" versions of the data sets as well. For the Challenge you should expect to be confronted with the original data with self-loops and multiple edges. However, the final decision on this issue will be made based on participant feedback. All files have been generated with the R-MAT parameters A=0.57, B=0.19, C=0.19, and D=1-(A+B+C)=0.05 and edgefactor=48, i.e., the number of edges equals 48*n, where n is the number of vertices. Details about the generator and the parameter meanings can be found on the Graph500 website. ( http://www.graph500.org/Specifications.html )

There are 12 graphs in the DIMACS10 test set at http://www.cc.gatech.edu/dimacs10/index.shtml . They come in 6 pairs. One graph in each pair is a multigraph, with self-edges. The other graph is the nonzero pattern of the first (binary), with self-edges removed. MATLAB cannot directly represent a multigraph, so in the UF Collection the unweighted multigraph G is represented as a matrix A where A(i,j) is an integer equal to the number of edges (i,j) in G. The binary graphs include the word 'simple' in their names.

In the UF Collection, only the multigraph is included, since the simple graph can be constructed from the multigraph. If A is the multigraph, the simple graph S can be computed as:

S = spones (tril (A,-1)) + spones (triu (A,1)) ;

DIMACS10 graph: UF matrix:
--------------- -------------
kronecker/kron_g500-logn16 DIMACS10/kron_g500-logn16
kronecker/kron_g500-logn17 DIMACS10/kron_g500-logn17
kronecker/kron_g500-logn18 DIMACS10/kron_g500-logn18
kronecker/kron_g500-logn19 DIMACS10/kron_g500-logn19
kronecker/kron_g500-logn20 DIMACS10/kron_g500-logn20
kronecker/kron_g500-logn21 DIMACS10/kron_g500-logn21

References:
"Introducing the Graph 500," Richard C. Murphy, Kyle B. Wheeler, Brian W. Barrett, James A. Ang, Cray User's Group (CUG), May 5, 2010.
D.A. Bader, J. Feo, J. Gilbert, J. Kepner, D. Koester, E. Loh, K. Madduri, W. Mann, Theresa Meuse, HPCS Scalable Synthetic Compact Applications #2 Graph Analysis (SSCA#2 v2.2 Specification), 5 September 2007.
D. Chakrabarti, Y. Zhan, and C. Faloutsos, R-MAT: A recursive model for graph mining, SIAM Data Mining 2004.
Section 17.6, Algorithms in C (third edition). Part 5 Graph Algorithms, Robert Sedgewick (Programs 17.7 and 17.8)
P.
Sanders, Random Permutations on Distributed, External and Hierarchical Memory, Information Processing Letters 67 (1988) pp "DFS: A Simple to Write Yet Difficult to Execute Benchmark," Richard C. Murphy, Jonathan Berry, William McLendon, Bruce Hendrickson, Douglas Gregor, Andrew Lumsdaine, IEEE International Symposium on Workload Characterizations 2006 (IISWC06), San Jose, CA, 25-27 October 2006. ---- sample code for generating these matrices: function ij = kronecker_generator (SCALE, edgefactor) %% Generate an edgelist according to the Graph500 %% parameters. In this sample, the edge list is %% returned in an array with two rows, where StartVertex %% is first row and EndVertex is the second. The vertex %% labels start at zero. %% Example, creating a sparse matrix for viewing: %% ij = kronecker_generator (10, 16); %% G = sparse (ij(1,:)+1, ij(2,:)+1, ones (1, size (ij, 2))); %% spy (G); %% The spy plot should appear fairly dense. Any locality %% is removed by the final permutations. %% Set number of vertices. N = 2^SCALE; %% Set number of edges. M = edgefactor * N; %% Set initiator probabilities. [A, B, C] = deal (0.57, 0.19, 0.19); %% Create index arrays. ij = ones (2, M); %% Loop over each order of bit. ab = A + B; c_norm = C/(1 - (A + B)); a_norm = A/(A + B); for ib = 1:SCALE, %% Compare with probabilities and set bits of indices. ii_bit = rand (1, M) > ab; jj_bit = rand (1, M) > ( c_norm * ii_bit + a_norm * not (ii_bit) ); ij = ij + 2^(ib-1) * [ii_bit; jj_bit]; %% Permute vertex labels p = randperm (N); ij = p(ij); %% Permute the edge list p = randperm (M); ij = ij(:, p); %% Adjust to zero-based labels. ij = ij - 1; function G = kernel_1 (ij) %% Compute a sparse adjacency matrix representation %% of the graph with edges from ij. %% Remove self-edges. ij(:, ij(1,:) == ij(2,:)) = []; %% Adjust away from zero labels. ij = ij + 1; %% Find the maximum label for sizing. N = max (max (ij)); %% Create the matrix, ensuring it is square. G = sparse (ij(1,:), ij(2,:), ones (1, size (ij, 2)), N, N); %% Symmetrize to model an undirected graph. G = spones (G + G.'); numerical: graphs from numerical simulations For the graphs adaptive and venturiLevel3, please refer to the preprint: Hartwig Anzt, Werner Augustin, Martin Baumann, Hendryk Bockelmann, Thomas Gengenbach, Tobias Hahn, Vincent Heuveline, Eva Ketelaer, Dimitar Lukarski, Andrea Otzen, Sebastian Ritterbusch, Bjo"rn Rocker, Staffan RonnĂ¥s, Michael Schick, Chandramowli Subramanian, Jan-Philipp Weiss, and Florian Wilhelm. Hiflow3 - a flexible and hardware-aware parallel Finite element package. In Parallel/High-Performance Object- Oriented Scientific Computing (POOSC'10). For the graphs channel-500x100x100-b050 and packing-500x100x100-b050, please refer to: Markus Wittmann, Thomas Zeiser. Technical Note: Data Structures of ILBDC Lattice Boltzmann Solver. The instances NACA0015, M6, 333SP, AS365, and NLR are 2-dimensional FE triangular meshes with coordinate information. 333SP and AS365 are actually converted from existing 3-dimensional models to 2D places, while the rest are created from geometry. The corresponding coordinate files have been assembled in one archive. They have been created and contributed by Chan Siew Yin with the help of Jian Tao Zhang, Department of Mechanical Engineering, University of New Brunswick, Fredericton, Canada. random: Random Geometric Graphs rggX is a random geometric graph with 2^X vertices. Each vertex is a random point in the unit square and edges connect vertices whose Euclidean distance is below 0.55 (ln n)/n. 
This threshold was choosen in order to ensure that the graph is almost connected. Note: the UF Collection is a collection of matrices primarily from real applications. The only random matrices I add to the collection are those used in established benchmarks (such as DIMACS10). Engineering a scalable high quality graph partitioner, M. Holtgrewe, P. Sanders, C. Schulz, IPDPS 2010. steets: Street Networks The graphs Asia, Belgium, Europe, Germany, Great-Britain, Italy, Luxemburg and Netherlands are (roughly speaking) undirected and unweighted versions of the largest strongly connected component of the corresponding Open Street Map road networks. The Open Street Map road networks have been taken from http://download.geofabrik.de and have been converted for DIMACS10 by Moritz Kobitzsch (kobitzsch at kit.edu) as follows: First, we took the corresponding graph and extracted all routeable streets. Routable streets are objects in this file that are tagged using one of the following tags: motorway, motorway_link, trunk trunk_link, primary, primary_link, secondary, secondary_link, tertiary, tertiary_link, residential, unclassified, road, living_street, and service. Next, we now compute the largest strongly connected component of this extracted open street map graph. Self-edges and parallel edges are removed afterwards. The DIMACS 10 graph is now the undirected and unweighted version of the constructed graph, i.e. if there is an edge (u,v) but the reverse edge (v,u) is missing, we insert the reverse edge into the graph. The XY coordinates of each node in the graph are preserved. Walshaw: Chris Walshaw's graph partitioning archive Chris Walshaw's graph partitioning archive contains 34 graphs that have been very popular as benchmarks for graph partitioning algorithms ( http://staffweb.cms.gre.ac.uk/~wc06/partition/ ). 17 of them are already in the UF Collection. Only the 17 new graphs not yet in the collection are added here in the DIMACS10 set. DIMACS10 graph: new? UF matrix: --------------- ---- ------------- walshaw/144 * DIMACS10/144 walshaw/3elt AG-Monien/3elt walshaw/4elt Pothen/barth5 walshaw/598a * DIMACS10/598a walshaw/add20 Hamm/add20 walshaw/add32 Hamm/add32 walshaw/auto * DIMACS10/auto walshaw/bcsstk29 HB/bcsstk29 walshaw/bcsstk30 HB/bcsstk30 walshaw/bcsstk31 HB/bcsstk31 walshaw/bcsstk32 HB/bcsstk32 walshaw/bcsstk33 HB/bcsstk33 walshaw/brack2 AG-Monien/brack2 walshaw/crack AG-Monient/crack walshaw/cs4 * DIMACS10/cs4 walshaw/cti * DIMACS10/cti walshaw/data * DIMACS10/data walshaw/fe_4elt2 * DIMACS10/fe_4elt2 walshaw/fe_body * DIMACS10/fe_body walshaw/fe_ocean * DIMACS10/fe_ocean walshaw/fe_pwt Pothen/pwt walshaw/fe_rotor * DIMACS10/fe_rotor walshaw/fe_sphere * DIMACS10/fe_sphere walshaw/fe_tooth * DIMACS10/fe_tooth walshaw/finan512 Mulvey/finan512 walshaw/m14b * DIMACS10/m14b walshaw/memplus Hamm/memplus walshaw/t60k * DIMACS10/t60k walshaw/uk * DIMACS10/uk walshaw/vibrobox Cote/vibrobox walshaw/wave AG-Monien/wave walshaw/whitaker3 AG-Monien/whitaker3 walshaw/wing * DIMACS10/wing walshaw/wing_nodal * DIMACS10/wing_nodal redistrict: census networks These graphs represent US states. They are used for solving the redistricting problem. All data have been provided by Will Zhao. As stated on the project website, The nodes are Census2010 blocks. Two nodes have an edge linking them if they share a line segment on their borders, i.e. rook-style neighboring. The nodes weights are the POP100 variable of Census2010-PL94, and the area of eache district. 
star-mixtures : artificially generated from sets of real graphs

Each graph in this benchmark represents a star-like structure of different graphs S0, ..., St. Graphs S1, ..., St are weakly connected to the center S0 by random edges. The total number of edges between each Si and S0 was less than 3% out of the total number of edges in Si. The graphs are mixtures of the following structures: social networks, finite-element graphs, VLSI chips, peer-to-peer networks, and matrices from optimization. More info can be found in the paper I. Safro, P. Sanders, C. Schulz: Advanced Coarsening Schemes for Graph Partitioning, SEA 2012. Communicated by Christian Schulz, uploaded on March 30, 2012.

Displaying collection matrices 1 - 20 of 151 in total

Id    Name               Group     Rows        Cols        Nonzeros     Kind              Date
2456  caidaRouterLevel   DIMACS10  192,244     192,244     1,218,132    Undirected Graph  2011
2457  chesapeake         DIMACS10  39          39          340          Undirected Graph  2011
2458  road_central       DIMACS10  14,081,816  14,081,816  33,866,826   Undirected Graph  2011
2459  road_usa           DIMACS10  23,947,347  23,947,347  57,708,624   Undirected Graph  2011
2460  citationCiteseer   DIMACS10  268,495     268,495     2,313,294    Undirected Graph  2008
2461  coAuthorsCiteseer  DIMACS10  227,320     227,320     1,628,268    Undirected Graph  2008
2462  coAuthorsDBLP      DIMACS10  299,067     299,067     1,955,352    Undirected Graph  2008
2463  coPapersCiteseer   DIMACS10  434,102     434,102     32,073,440   Undirected Graph  2008
2464  coPapersDBLP       DIMACS10  540,486     540,486     30,491,458   Undirected Graph  2008
2465  delaunay_n10       DIMACS10  1,024       1,024       6,112        Undirected Graph  2011
2466  delaunay_n11       DIMACS10  2,048       2,048       12,254       Undirected Graph  2011
2467  delaunay_n12       DIMACS10  4,096       4,096       24,528       Undirected Graph  2011
2468  delaunay_n13       DIMACS10  8,192       8,192       49,094       Undirected Graph  2011
2469  delaunay_n14       DIMACS10  16,384      16,384      98,244       Undirected Graph  2011
2470  delaunay_n15       DIMACS10  32,768      32,768      196,548      Undirected Graph  2011
2471  delaunay_n16       DIMACS10  65,536      65,536      393,150      Undirected Graph  2011
2472  delaunay_n17       DIMACS10  131,072     131,072     786,352      Undirected Graph  2011
2473  delaunay_n18       DIMACS10  262,144     262,144     1,572,792    Undirected Graph  2011
2474  delaunay_n19       DIMACS10  524,288     524,288     3,145,646    Undirected Graph  2011
2475  delaunay_n20       DIMACS10  1,048,576   1,048,576   6,291,372    Undirected Graph  2011
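A quick way to spot-check any of the matrices listed above from MATLAB (our own example, using the UFget interface mentioned earlier; the chosen matrix name is arbitrary):

Prob = UFget ('DIMACS10/delaunay_n10') ;   % fetch one of the graphs listed above
A = Prob.A ;
assert (isequal (A, A')) ;                 % pattern is symmetric (undirected graph)
assert (nnz (diag (A)) == 0) ;             % zero-free diagonal (no self-loops)
assert (all (nonzeros (A) == 1)) ;         % binary (unweighted)
fprintf ('%d vertices, %d edges\n', size (A,1), nnz (A)/2) ;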
{"url":"http://sparse.tamu.edu/DIMACS10?page=1","timestamp":"2024-11-10T20:45:57Z","content_type":"text/html","content_length":"58767","record_id":"<urn:uuid:e962093a-3619-4c88-8a14-ec5fd0b2c0f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00017.warc.gz"}
Identifying Exponential Functions Worksheet

Identifying Exponential Functions Worksheet – You've come to the right place if you're looking for a math activity to help your child practice exponential functions. An exponential function is a mathematical expression whose value changes by a constant factor each time the input increases by one step, with a fixed base raised to a variable exponent. Exponential growth is the prime example of this kind of behavior, and a few cases are given below. You can learn more about the underlying formula by looking through our free worksheets; this article also gives a short explanation of exponential growth.

Examples of exponential functions

Exponents, also called powers, describe how many times a base is multiplied by itself, and they determine how quickly an exponential function changes. These functions are not all identical in form: they can be modified by multiplying or adding constants to obtain the desired result. Practicing with exponential functions on a worksheet helps in many situations, such as studying quantities that grow or shrink from day to day. Here is an example. Suppose the population of a town is currently 20,000 and is growing by 15% per year. The graph of an exponential function can then be used to predict the town's population in 10 years. At 15% growth per year the population doubles roughly every five years, so over that ten-year period it will have grown to about four times its current size. Working exponential functions into your own calculations shows how they apply to real-world situations.

Explanation of exponential functions

What is the difference between the two basic kinds of exponential behavior? The difference lies in how the variable changes. A quantity is growing or decaying exponentially when it is multiplied by the same factor over each equal step of the input; that is, the value of y changes by a fixed percentage as x increases. A horizontal shift of the graph, by contrast, does not change the position of the horizontal asymptote. This is the basic picture behind exponential functions, and once you know these distinctions you can solve problems that involve them.

Exponential functions are a very common family of mathematical functions that describe the relationship between two variables. If the base is greater than one and the starting value is positive, the dependent variable grows exponentially. This property of exponential functions is used in many disciplines, including the natural sciences and the social sciences. Think of a self-reproducing population, compound interest in an investment fund, or growing manufacturing capacity. Exponentiation has many uses and is important to understand in the context of real-life situations.

Graphs of exponential functions

The shape of an exponential graph is a simple one. Whatever function it describes, the graph has the same general form: it is either a steadily increasing or a steadily decreasing curve. The basic form of an exponential function approaches the x-axis, coming close to zero on one side; the main difference between examples is how steeply the curve rises or falls as the input moves away from zero.
To understand an exponential function, we need to understand its graph. The figure on the first page of the book shows the basic shape of an exponential curve. Generally, the graph approaches a horizontal asymptote: when the base is greater than one, the curve flattens out toward the x-axis on one side and rises steeply on the other, and it may have no x-intercept at all. It is sometimes hard to plot an exponential function by hand, so it is essential to know the math behind it and how to read the results.

Explanation of exponential growth

The term "exponential growth" refers to a quantity that increases by a constant percentage over each unit of time, which produces an upward-trending curve on a graph. Growth of this kind compounds on itself: like the familiar story of rice grains doubling on a chessboard, a credit-card balance that accrues 15 percent interest will double in roughly five years if nothing is paid off. In itself this is neither good nor bad. The difference between exponential growth and linear growth is so stark that the idea featured prominently in public reporting on the COVID-19 pandemic. Exponential processes are harder for people to grasp than linear ones, which is why students often overgeneralize from linear processes to nonlinear ones. In this post, we'll look at how exponential growth works and how it applies to everyday life. I hope you find the following explanation of exponential growth helpful!

Explanation of exponential decay

In mathematics, a description of exponential decay is an effective way to help students understand the nature of the process. It explains how a quantity shrinks as time goes by. The formula used to describe exponential decay is y = a(1 - b)^x, where a is the original value, b is the decay factor, and x is the amount of time that has passed. In contrast to a linear process, exponential decay reduces the quantity by a fixed percentage of whatever remains rather than by a fixed amount, so each step removes a proportion of the current value. If an object's temperature gradually decreases over time, the object follows an exponential decay curve. The same principle applies to radioactive decay and to the decline of populations. For the process to count as exponential, the rate of decay must keep shrinking as the quantity approaches a horizontal asymptote. Examples include the decay of radioactive particles as time goes by, a hot object cooling to a constant ambient temperature, and the discharge of a capacitor through a resistor.
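A short numerical illustration of the growth and decay formulas above (a sketch in Python; the specific numbers are the ones used in this article, and the function names are ours):

# Exponential growth: a town of 20,000 people growing 15% per year
def grow(initial, rate, years):
    return initial * (1 + rate) ** years

print(round(grow(20000, 0.15, 10)))    # about 80,911 after 10 years (roughly 4x)

# Exponential decay: y = a * (1 - b)**x
def decay(a, b, x):
    return a * (1 - b) ** x

print(round(decay(100, 0.25, 3), 2))   # 100 units losing 25% per step: 42.19 after 3 steps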
{"url":"https://www.functionworksheets.com/identifying-exponential-functions-worksheet/","timestamp":"2024-11-06T20:13:53Z","content_type":"text/html","content_length":"66051","record_id":"<urn:uuid:3fdcdc18-8a30-4fc5-83a7-e0867764d637>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00330.warc.gz"}
Understanding Current

You don't have to browse YouTube reviews or audio forums for long before you encounter misconceptions about current. Many consider current to be the measure of an amp's quality. We seldom see current in a set of amplifier specifications, but we often see marketing blurbs that mention current. There seems to be a general consensus among audiophiles that more current—even if it's just claimed in marketing copy rather than directly specified—equals better sound. We sometimes even see current mentioned in headphone amps, even though headphones that actually demand lots of current (relatively speaking, of course) are rare. Because audio reviews and forums tend to treat current as a mystical and nebulous (but good) thing rather than as a clearly understood physical concept, I thought it would be appropriate to discuss the subject here—and to explain how the concerns are different in headphone amps.

Honestly, I'm probably as much to blame as anyone for any misunderstandings about the subject. I'll confess to current-related crimes in my past gigs as a marketing writer for audio companies. At the instruction of my clients (makers of amps for super-high-end consumer and low-cost commercial applications, neither of which I review), I've written about numerous amps as being "high-current" designs. Usually, the justification was that they were conventional class-AB amps, rather than high-efficiency class-D, class-G, or class-H amps. There's no rule that says high-efficiency amps can't deliver as much current as class-AB amps, or more—but my clients didn't mind implying that there was.

Learning the laws

Current (sometimes referred to as amperage) is the total flow of electricity; you can think of it as the amount of water flowing through a pipe. Electromotive force, or voltage, is like the pressure of the water inside the pipe. Resistance (with DC) and impedance (with AC, and all audio signals) are like the amount of constriction, or resistance to flow, that the pipe presents. So just as the current flow through the pipe is the pressure divided by the resistance to flow, the current through a wire is the voltage divided by the resistance or impedance.

Practically everything you need to know about current is right there in Ohm's law and the power equation, although it usually takes years to grasp the implications of these formulas. Most audio enthusiasts know Ohm's law: electromotive force (in volts, or V) equals current (in amps, or I) times resistance (or impedance, in ohms or R), or V=I*R. The power equation is as follows: power (in watts, or P) equals current times voltage, or P=I*V. There's another important equation, which is a derivation of these: voltage squared divided by resistance/impedance equals power, or V^2/R=P.

Let's think about this for a second. Assume a typical speaker has 8 ohms impedance at 1kHz. If you're listening at a fairly normal level, you're using between 0.5W and 10W of power. The third equation tells you that your amp is putting out 8.9V at 10W. Using the power equation, you can determine that your amp is putting out 1.1A of current. Notice what we didn't talk about here? What amp you used. That's because with any amp that's capable of putting out 10W into 8 ohms at 1kHz, you'll get 1.1A of current under those conditions. That's true whether the amplifier is a $34.98 Lepai LP-2020TI or a $295,000 pair of D'Agostino Master Audio Systems Relentless monoblocks.
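To make the arithmetic above easy to replay, here is a tiny Python sketch (ours, not the author's) that applies V = sqrt(P*R) and I = V/R at a few listening levels into an 8-ohm speaker:

import math

def volts_amps(power_w, impedance_ohm):
    v = math.sqrt(power_w * impedance_ohm)   # from V^2/R = P
    return v, v / impedance_ohm              # from V = I*R

for p in (0.5, 1, 10):
    v, i = volts_amps(p, 8)
    print(f"{p} W into 8 ohms: {v:.1f} V, {i:.2f} A")
# 10 W into 8 ohms works out to about 8.9 V and 1.1 A, as in the text.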
Both of these components—no matter how much potential current they can produce—put out exactly the same amount of current under normal listening conditions (i.e., a reasonably sensitive speaker that runs somewhere around 8 ohms through most of the audio range, played at typical listening levels). As long as the amp is used within its capabilities, the current is entirely dependent on the load (i.e., the speaker) and the volume setting—regardless of any claims the manufacturer might make about the amp's ability to deliver large amounts of current.

Current becomes more important when you have low-impedance speakers, with a nominal impedance of 4 ohms or less, or an impedance that dips down to, say, 2 ohms in the bass. Some amps—most notably, the ones built into many inexpensive A/V receivers—will shut down if they're played loud when connected to a speaker of nominal 4 ohms impedance. Some amps might shut down if they're merely connected to a 2-ohm load, even with no music playing. They consider it to be a short circuit.

Let's say you're listening at loud levels and hitting 100W peaks, and you have 8-ohm and 4-ohm speakers. Using the equations above, you can calculate that the 8-ohm speaker will demand 3.5A of peak current, while the 4-ohm speaker will demand 5A of current. That might be more current than the amp's power supply can deliver, or that its output devices can handle. So two amps that can reach 100W output with an 8-ohm speaker will put out the same current into that load. Switching to a 4-ohm load should, given V^2/R=P, practically double the power output of the amp with the same input voltage. (The output would double only if the amp had a zero output impedance, which is impossible, but many amps come close.)

Now here's where high current comes in. Although there's no industry definition for what "high current" means, an amp that produces twice as much power when switching from 8-ohm to 4-ohm loads can, in my opinion, legitimately be considered "high current." An amp with less current—maybe producing just 160W into 4 ohms—might be labeled "high current," though. (By the way, of the amp brands I was asked to describe as "high current," two could legitimately, in my opinion, deserve that title. The other—a modestly priced and very functional but generic product—put out only about 50% more power when I switched from an 8-ohm load to a 4-ohm load.)

What does this have to do with headphones?

With headphones, you usually need less than 1/10,000th as much power to produce a loud volume level. A typical speaker might need about 30W of power to reach 100dB peaks at a normal listening distance, while a typical headphone can deliver the same volume with just 1mW of power or less. One might thus intuitively, and reasonably, conclude that current isn't such a big deal with headphones.

While speakers and headphones with low sensitivity may be considered "hard to drive," usually when we use that phrase, we're talking about speakers with low impedance—but headphones with high impedance. And here, voltage, not current, becomes the big concern. Consider SoundStage! contributor Dennis Burger's favorite DAC-headphone amp, the Schiit Audio Hel. It's rated at 1350mW into 16 ohms, in which case it's putting out 4.65V at 0.29A. It's also rated at 200mW into 300 ohms, in which case it's putting out 7.75V at just 0.026A. In an amplifier designed for home use, connected to a 120V or 240V AC line, having power supply rails that can deliver 7.75Vrms is no big deal. But for a headphone amp it's not so easy.
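The same two formulas reproduce the Schiit Hel figures quoted above (the ratings come from the text; the little helper below is just our sketch):

import math

def v_and_i(power_mw, impedance_ohm):
    p = power_mw / 1000.0
    v = math.sqrt(p * impedance_ohm)   # V = sqrt(P*R)
    return v, v / impedance_ohm        # I = V/R

for mw, ohms in ((1350, 16), (200, 300)):
    v, i = v_and_i(mw, ohms)
    print(f"{mw} mW into {ohms} ohms: {v:.2f} V, {i * 1000:.0f} mA")
# roughly 4.65 V / 290 mA and 7.75 V / 26 mA: voltage, not current,
# is what the high-impedance load demands.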
The headphone amp might be built into a battery-powered device, or powered by a +5VDC USB connection (which provides at most 1.77Vrms—half the DC voltage times 0.707), in which case you'd need to add a DC-to-DC converter to kick the voltage up—and the form factor of many portable audio products doesn't provide sufficient space or cooling capacity for that. The Hel achieves its impressive power through the use of a separate ±12V power supply, and the Hel's relatively large, well-ventilated chassis (about the size of two packs of playing cards side-by-side) provides ample cooling.

I hope this has shed some light on the topic of current—but what I really hope is that it'll inspire some readers to question the statements of a writer, podcaster, YouTuber, or manufacturer who implies that high-current amps necessarily deliver better sound quality.

. . . Brent Butterworth
{"url":"https://www.soundstagesolo.com/index.php/features/342-understanding-current","timestamp":"2024-11-05T22:14:15Z","content_type":"text/html","content_length":"110145","record_id":"<urn:uuid:50c36e92-4bc9-410b-b04e-017998586d60>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00720.warc.gz"}
Retro Programming While attempting to write a game in 256 bytes I needed a routine to draw lines, but Bresenham's line algorithm weighs in at approx ~120 bytes. The only suitable alternative I'm aware of is recursive divide and conquer: divide a line into two smaller lines and call the draw routine with each in turn: /* Draw a line from (ax,ay) to (bx,by) */ int draw ( ax, ay, bx, by ) int midx, midy; midx = ( ax+bx ) / 2; midy = ( ay+by ) / 2; if ( midx != ax && midy != ay ) draw( midx, midy, ax, ay ); draw( bx, by, midx, midy ); plot( midx, midy ); This is significantly smaller thank Bresenham's, 32 byte of Z80. However, there are a couple of compromises: it's slower and the lines aren't perfect because the rounding errors accumulate. ; draw lines using recursive divide and conquer ; from de = end1 (d = x-axis, e = y-axis) ; to hl = end2 (h = x-axis, l = y-axis) call PLOT push hl ; calculate hl = centre pixel ld a,l add a,e ld l,a ld a,h add a,d ld h,a ; if de (end1) = hl (centre) then we're done or a sbc hl,de jr z,EXIT add hl,de ex de,hl call DRAW ; de = centre, hl = end1 ex (sp),hl ex de,hl call DRAW ; de = end2, hl = centre ex de,hl pop de pop hl ; --------------------------- ; plot d = x-axis, e = y-axis push hl ld a,d and 7 ld b,a inc b ld a,e or a ld l,a xor e and 248 xor e ld h,a ld a,l xor d and 7 xor d ld l,a ld a,1 djnz PLOTBIT or (hl) ld (hl),a pop hl Alternatively the de(end1) = hl(centre) test can be replaced with a recursion depth count to create an even slower 28 byte routine: ; draw lines using recursive divide and conquer ; from de = end1 (d = x-axis, e = y-axis) ; to hl = end2 (h = x-axis, l = y-axis) ld c,8 dec c jr z,EXIT push de ; calculate de = centre pixel ld a,l add a,e ld e,a ld a,h add a,d ld d,a call DRAW2 ; de = centre, hl = end1 ex (sp),hl call DRAW2 ; de = centre, hl = end2 call PLOT ex de,hl pop hl inc c Langton's Ant is an automata which creates a complex pattern by following a couple of simple rules: • If the ant is on an empty pixel, turn 90° right, set the pixel then move forward • If the ant is on a set pixel, turn 90° left, reset the pixel then move forward The ant's path appears chaotic at first before falling into a repetitive “highway” pattern, moving 2 pixels diagonally every 104 cycles. Here's the code to display Langton's Ant on the ZX Spectrum in 61 bytes. It runs in just over a second so you might want to add a halt to slow things down: org 65472 ld de,128*256+96 ; halt ld a,c ; check direction and 3 add a,a dec a jr nc,XMOVE add a,e ; adjust y position +/-1 ld e,a cp 192 ret nc xor a add a,d ; adjust x position +/-1 ld d,a ; ---------- and 7 ; calculate screen address ld b,a inc b ld a,e or a ld l,a xor e and 248 xor e ld h,a ld a,d xor l and 7 xor d ld l,a ld a,1 djnz PLOTBIT ; ---------- ld b,a ; test pixel and (hl) jr nz,LEFT ; turn left/right inc c inc c dec c ld a,b ; flip pixel xor (hl) ld (hl),a jr ANT
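A rough C model of the two ant rules can be handy for cross-checking the Z80 routine's behaviour before burning cycles on real hardware. This sketch is ours, not part of the original post; the grid matches the Spectrum's 256x192 pixel screen, and everything else (start position, step count, output) is arbitrary:

/* Quick-and-dirty Langton's Ant model for sanity-checking the Z80 version. */
#include <stdio.h>

#define W 256
#define H 192

static unsigned char grid[H][W];          /* 0 = empty pixel, 1 = set pixel */

int main(void)
{
    int x = W / 2, y = H / 2;             /* ant starts mid-screen */
    int dir = 0;                          /* 0=up, 1=right, 2=down, 3=left */
    long steps;

    for (steps = 0; steps < 12000; steps++) {
        if (grid[y][x] == 0) {            /* empty: turn right, set, move */
            dir = (dir + 1) & 3;
            grid[y][x] = 1;
        } else {                          /* set: turn left, reset, move */
            dir = (dir + 3) & 3;
            grid[y][x] = 0;
        }
        if (dir == 0) y--;
        else if (dir == 1) x++;
        else if (dir == 2) y++;
        else x--;
        if (x < 0 || x >= W || y < 0 || y >= H)
            break;                        /* ant walked off the screen */
    }
    printf("stopped after %ld steps at (%d,%d)\n", steps, x, y);
    return 0;
}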
{"url":"http://www.retroprogramming.com/2016/","timestamp":"2024-11-03T12:18:20Z","content_type":"application/xhtml+xml","content_length":"65500","record_id":"<urn:uuid:96bca1fb-646e-4efa-81a7-578a508e8a4e>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00027.warc.gz"}
Cats and Dogs Getting Along Today's NY Times editorial, Teaching Math, Singapore Style , is a rare bird. I don't completely disagree with all of it. Clearly, it deserves a full analysis. The countries that outperform the United States in math and science education have some things in common. They set national priorities for what public school children should learn and when. They also spend a lot of energy ensuring that every school has a high-quality curriculum that is harnessed to clearly articulated national goals. This country, by contrast, has a wildly uneven system of standards and tests that varies from place to place. We are also notoriously susceptible to educational fads. Is there any problem that the NYT editors don't think can be solved by nationalizing it? If there is, I haven't seen it. We call that "wildly uneven system of standards and tests that varies from place to place" federalism and it's proven to be, by far, a more successful system than centralized top-down systems beloved by NYT types. While it is true that "[w]e are also notoriously susceptible to educational fads" it is only because we are burdened with idiotic educators. Who do you think will be influencing those national standards? Groups like the National Council of Teachers of Mathematics (NCTM), of course. Their grimy paws will be all over any standards. Set the wayback machine to 1989: One of the most infamous fads took root in the late 1980’s, when many schools moved away from traditional mathematics instruction, which required drills and problem solving. The new system, sometimes derided as “fuzzy math,’’ allowed children to wander through problems in a random way without ever learning basic multiplication or division. As a result, mastery of high-level math and science was unlikely. The new math curriculum was a mile wide and an inch deep, as the saying goes, touching on dozens of topics each year. Many people trace this unfortunate development to a 1989 report by an influential group, the National Council of Teachers of Mathematics. Now imagine if that "unfortunate development" were enshrined into national law. It only took nearly a generation of mathematically crippled kids for the NCTM to come to their senses. And, even then, if you listen to their current rhetoric, they're still claiming nothing has changed. School districts read its recommendations as a call to reject rote learning. Last week the council reversed itself, laying out new recommendations that will focus on a few basic skills at each grade level. That's because that's exactly what their recommendations said. By the way, what the NCTM calls "rote learning" is anything but. A lesson they still haven't taken to heart yet. Under the new (old) plan, students will once again move through the basics — addition, subtraction, multiplication, division and so on — building the skills that are meant to prepare them for algebra by seventh grade. This new approach is being seen as an attempt to emulate countries like Singapore, which ranks at the top internationally in math. All these references to Singapore are encouraging, given this country’s longstanding resistance to the idea of importing superior teaching strategies from abroad. But a few things need to happen before this approach can succeed. Quite a few things, in fact. Almost all the textbooks have to be rewritten for starters. Then teachers will have to learn how to use those rewritten textbooks properly, something that Ed schools have failed to do. 
And, then: First of all, the United States will need to abandon its destructive practice of having so many math and science courses taught by people who have not majored in the subjects — or even studied them seriously.
And, then: We also need to fix the current patchwork system of standards and measurement for academic achievement, and make sure that students everywhere have access to both high-quality teachers and high-quality math and science curriculums that aspire to clearly articulated goals.
So, they got half of it right. That's good for the NYT.
Until we bite the bullet on those basic, critical reforms, we will continue to lose ground to the countries with which we must compete in the global information economy.
Close enough.
14 comments:
Okay, it's been over three decades since I was in high school, but one thing I am curious about is when, and why, did the curriculum move back? In my high school (grades 9-12), the college prep math track was Algebra I, Geometry, Algebra II, Introductory Calculus, with trig spread across the junior and senior years. I suppose a lot of what we did in junior high could be called algebra, at least pre-algebra math. I was, btw, the last class in our high school who were able to take two years of Latin (they were just phasing Latin out nationally).
"Almost all the textbooks have to be rewritten for starters" No, they don't. In industry this is called a "buy versus build" decision. We *could* just start purchasing textbooks from Singapore. I bet we could get quite a discount, too, given the volumes. We won't, of course ... -Mark Roulo
KDeRosa said... Unfortunately, our math teachers don't know enough math to teach out of the Singapore math books. There's always Saxon and Connecting Math Concepts. But the chance of either of those two curricula catching on is also slim, though they are both certainly aligned with the NCTM's new focal points.
Why rewrite the textbooks? We have perfectly fine textbooks -- they've only been out of print for years. There's nothing to update -- 2 is 2, whether it's 1976 or 2006.
KDeRosa said... when, and why, did the curriculum move back I believe the answer is the early 90s in response to the NCTM's original math standards. Though certainly trouble was afoot before that. I went to a Catholic high school in the early 80s which offered two years of Latin and a year of Greek. Latin was clearly on the wane, though; only six students were in my Latin II class. I transferred in my mid-Junior year to another Catholic high school that did not offer Latin -- so I only wound up with a year and a quarter under my belt.
TurbineGuy said... I think I am going to scan a few pages of my kids' 3rd grade math textbooks next time they bring them home. It's all bright and cheerful... has lots of fun little facts... and not bogged down with all those bothersome numbers and problems. One question. I was under the impression that Singapore math was mostly concentrated at the elementary and middle school levels? At least in elementary school, surely even the typical liberal arts degree teacher should be able to teach the level of math required at this age. Middle schools and high schools would be the ones that would really need the qualified teachers. Unfortunately, teachers are paid well compared to college graduates with liberal arts degrees, but are poorly paid compared to graduates with engineering, science and math degrees. There needs to be more flexibility to pay people commensurate with their skills, instead of the one pay scale fits all.
I would love to be a math teacher, but the average starting salary here for a new teacher with a math degree is around $30,000. If I was willing to accept that amount of money, I might as well just get a degree in basket weaving.
KDeRosa said... The problem with the traditional textbooks is that too many students were not mastering math. Sure, they were great for that top 20% of the class. That group learns using pretty much anything. The bar is set low. I think, however, that these traditional books were not written with the average math student in mind, not to mention the lower performers. These kids need the concepts broken down a bit more finely and they need much more practice than the traditional textbooks provide before they will understand and retain the material. We've learned a lot since the old traditional textbooks were in common usage. Singapore Math, Saxon, and CMC all represent improvements on traditional math.
KDeRosa said... Do you know the math curriculum they use, Rory? Singapore's math curriculum is called Primary Math and goes from grades K to 6. Every student regardless of ability goes through the same sequence. Then it's off to algebra at which time the kids get tracked. Nonetheless, they all learn algebra, some just go a bit deeper than others. Here are the placement exams. They are no walk in the park. You really shouldn't need a math degree to teach elementary math. But when schools fail to teach math properly at the K-12 level anyone without a math degree tends not to know enough math to be able to teach basic math effectively. The Chinese teachers certainly don't have math degrees. But I admit I was shocked to find that college algebra was not required to get a teaching degree. That seems pretty basic. Even if you're a third grade teacher, you need to know where they're headed.
TurbineGuy said... This comment has been removed by a blog administrator.
TurbineGuy said... My book is Mathematics: The Path To Math Success! A review of the 2nd grade version is over at www.mathematicallycorrect.com Overall Evaluation [3.4] Students using this program have a reasonable chance of moderate achievement levels. On the other hand, this program is not seen as supporting high achievement levels. It is possible that a skillful teacher could overcome some of the limitations of this program and use it more effectively. The heavy reliance on models and the potential confusion in the treatment of perimeter are examples of areas where an effective teacher could improve upon the student learning supported by this program.
KDeRosa said... It could be worse, I suppose. The best way to deal with a crappy program like that is to preteach your kids basics so they can use the dopey problem solving exercises as additional practice instead of for "discovering" math.
Have you read KNOWLEDGE DEFICIT yet? I'm persuaded we need national curriculum standards. Hirsch says all you need is 40% to 60% of the school day devoted to everybody learning the same core stuff. (I think he starts at 40%.) The problem for American students is huge mobility, including schools with greater than 100% student population mobility per year. I don't see it happening federally, but I can imagine it happening via NCTM & the like - via NGOs that appoint themselves guardian of the standards. I keep hoping Courant will produce a set of grade by grade standards.
KDeRosa said... No, I haven't. Is there anything more in it than what's available in other papers he's written?
{"url":"https://d-edreckoning.blogspot.com/2006/09/cats-and-dogs-getting-along.html?showComment=1158630540000","timestamp":"2024-11-04T04:46:23Z","content_type":"text/html","content_length":"99068","record_id":"<urn:uuid:5cf5e13f-fb70-487c-b1c2-158cb91b1e9a>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00335.warc.gz"}
Szegő Kernel Asymptotics for High Power of CR Line Bundles and Kodaira Embedding Theorems on CR Manifolds
eBook ISBN: 978-1-4704-4750-2 Product Code: MEMO/254/1217.E List Price: $78.00 MAA Member Price: $70.20 AMS Member Price: $46.80
• Memoirs of the American Mathematical Society Volume: 254; 2018; 140 pp MSC: Primary 32
Let \(X\) be an abstract, not necessarily compact, orientable CR manifold of dimension \(2n-1\), \(n\geqslant 2\), and let \(L^k\) be the \(k\)-th tensor power of a CR complex line bundle \(L\) over \(X\). Given \(q\in \{0,1,\ldots ,n-1\}\), let \(\Box ^{(q)}_{b,k}\) be the Gaffney extension of the Kohn Laplacian for \((0,q)\) forms with values in \(L^k\). For \(\lambda \geq 0\), let \(\Pi ^{(q)}_{k,\leq \lambda} :=E((-\infty ,\lambda ])\), where \(E\) denotes the spectral measure of \(\Box ^{(q)}_{b,k}\). In this work, the author proves that \(\Pi ^{(q)}_{k,\leq k^{-N_0}}F^*_k\) and \(F_k\Pi ^{(q)}_{k,\leq k^{-N_0}}F^*_k\), \(N_0\geq 1\), admit asymptotic expansions with respect to \(k\) on the non-degenerate part of the characteristic manifold of \(\Box ^{(q)}_{b,k}\), where \(F_k\) is a kind of microlocal cut-off function. Moreover, \(F_k\Pi ^{(q)}_{k,\leq 0}F^*_k\) admits a full asymptotic expansion with respect to \(k\) if \(\Box ^{(q)}_{b,k}\) has the small spectral gap property with respect to \(F_k\) and \(\Pi^{(q)}_{k,\leq 0}\) is \(k\)-negligible away from the diagonal with respect to \(F_k\). By using these asymptotics, the author establishes almost Kodaira embedding theorems on CR manifolds and Kodaira embedding theorems on CR manifolds with transversal CR \(S^1\) action.
□ Chapters
□ 1. Introduction and statement of the main results
□ 2. More properties of the phase $\varphi (x,y,s)$
□ 3. Preliminaries
□ 4. Semi-classical $\Box ^{(q)}_{b,k}$ and the characteristic manifold for $\Box ^{(q)}_{b,k}$
□ 5. The heat equation for the local operator $\Box ^{(q)}_s$
□ 6. Semi-classical Hodge decomposition theorems for $\Box ^{(q)}_{s,k}$ in some non-degenerate part of $\Sigma $
□ 7. Szegő kernel asymptotics for lower energy forms
□ 8. Almost Kodaira embedding theorems on CR manifolds
□ 9. Asymptotic expansion of the Szegő kernel
□ 10. Szegő kernel asymptotics and Kodaira embedding theorems on CR manifolds with transversal CR $S^1$ actions
□ 11. Szegő kernel asymptotics on some non-compact CR manifolds
□ 12. The proof of Theorem
{"url":"https://bookstore.ams.org/MEMO/254/1217","timestamp":"2024-11-09T06:32:29Z","content_type":"text/html","content_length":"75222","record_id":"<urn:uuid:76442e12-73c6-48a5-bdec-38485ded388b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00034.warc.gz"}
Balancing of Rotating Masses Calculators | List of Balancing of Rotating Masses Calculators
List of Balancing of Rotating Masses Calculators
This page lists online Balancing of Rotating Masses calculators — tools to perform calculations on the concepts and applications of balancing of rotating masses. These calculators are useful for everyone and save time on the complex procedures involved in obtaining results. You can also download, share, and print the list of Balancing of Rotating Masses calculators with all the formulas.
{"url":"https://www.calculatoratoz.com/en/balancing-of-rotating-masses-Calculators/CalcList-676","timestamp":"2024-11-07T07:29:18Z","content_type":"application/xhtml+xml","content_length":"95667","record_id":"<urn:uuid:7fcc979d-dc85-4bc4-8242-a18fda85f30b>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00636.warc.gz"}
Area of a triangle | HireQuotient Area of a triangle Published on July 8th, 2024 Understanding the area of a triangle is essential for various applications in mathematics, engineering, architecture, and everyday problem-solving. Calculating the area of a triangle can help in determining the amount of space within a triangular boundary, which is crucial for tasks such as land measurement, construction, and even art projects. There are several methods to find the area of a triangle, each suited for different types of triangles and available measurements. The most common formula, known to many, is the basic formula where the area is calculated as half the product of the base and the height of the triangle. This method is straightforward and widely applicable, particularly for right triangles, where the height is easily identifiable. However, not all triangles conveniently provide a clear base and height, especially when dealing with scalene triangles or those without a right angle. In such cases, more advanced methods like Heron's formula come into play. Heron's formula allows the calculation of the area when only the side lengths are known, making it extremely useful for a broader range of problems. For triangles where angles are involved, the trigonometric formula offers a precise way to find the area using the lengths of two sides and the included angle. This method is particularly useful in trigonometry and applications requiring angular measurements, such as navigation and physics. To ensure accuracy and efficiency in these calculations, using an area of a triangle calculator can be highly beneficial. These tools not only simplify the process but also help in visualizing the problem, ensuring no detail is overlooked. In this guide, we will explore the different formulas for calculating the area of a triangle, including the basic formula, Heron's formula, and the trigonometric formula. We will also provide step-by-step instructions and examples to illustrate each method, making it easy for you to understand and apply these techniques. By the end of this guide, you will be equipped with the knowledge to calculate the area of any triangle, no matter its type or the measurements available. Why Knowing the Area of a Triangle is Important Understanding how to find the area of a triangle is not just a mathematical exercise; it has practical implications in various fields. For example, in the construction industry, precise measurements are crucial for creating accurate building plans and ensuring structural integrity. In land surveying, calculating the area helps in determining property boundaries and land use. Even in everyday life, knowing how to find the area of a triangle can assist in tasks like gardening or interior design, where space optimization is key. Connecting Theory to Practice By learning these methods, you can tackle a wide range of real-world problems. Whether you are a student aiming to excel in geometry, a professional needing precise measurements, or simply someone interested in math, mastering the area of a triangle is a valuable skill. This guide will serve as a comprehensive resource, helping you bridge the gap between theoretical knowledge and practical What is a Triangle? A triangle is one of the most fundamental shapes in geometry, characterized by three sides, three vertices, and three angles. This simple yet versatile shape is not only a cornerstone in mathematical studies but also appears frequently in various fields, from architecture to engineering. 
Definition and Basic Properties
A triangle is a polygon with three edges and three vertices. It is the simplest form of a polygon, and its primary property is that the sum of its interior angles is always 180 degrees. This consistent property makes triangles a reliable shape for numerous geometric and trigonometric calculations.
Types of Triangles
Triangles can be categorized based on the lengths of their sides or the measures of their angles. Understanding these types is essential for applying the correct methods to calculate the area of a triangle.
Equilateral Triangle
• Definition: An equilateral triangle has all three sides of equal length, and all three interior angles are equal, each measuring 60 degrees.
• Properties: Due to its symmetry, the formula to find the area of an equilateral triangle is simplified to Area = (√3/4) × a², where a is the length of a side.
Isosceles Triangle
• Definition: An isosceles triangle has at least two sides of equal length. The angles opposite these sides are also equal.
• Properties: The height can be found using the Pythagorean theorem, which then allows for the use of the basic area formula Area = 1/2 × base × height.
Scalene Triangle
• Definition: A scalene triangle has all sides of different lengths, and all interior angles are different.
• Properties: For scalene triangles, Heron's formula is particularly useful for finding the area when only the side lengths are known.
Right Triangle
• Definition: A right triangle has one angle that is exactly 90 degrees.
• Properties: The area of a right triangle can be easily calculated using the basic formula Area = 1/2 × b × h, where the base and height are the two legs of the triangle.
Importance of Triangles in Geometry
Triangles are crucial in geometry because they form the basis for more complex shapes and structures. By understanding the properties and types of triangles, one can solve a wide range of problems involving angles, lengths, and areas. For instance, in trigonometry, triangles are used to define the trigonometric functions sine, cosine, and tangent, which are fundamental in analyzing periodic phenomena and solving real-world problems.
Triangles also play a significant role in other areas, such as:
• Construction and Architecture: Triangular shapes are used in trusses and frameworks due to their inherent stability and strength.
• Art and Design: Triangles contribute to aesthetic compositions and design elements.
• Navigation and Mapping: Triangles help in triangulation methods for determining distances and positions.
By mastering the basics of triangles, including their types and properties, you are well-equipped to delve deeper into calculating the area of a triangle using various formulas. This foundational knowledge is critical as we move forward to explore the different methods of determining the area based on the type and measurements of the triangle.
Types of Triangles
Understanding the various types of triangles is crucial for determining the appropriate method to calculate their area. Each type of triangle has unique properties that influence how you approach finding its area. Let's explore the different types of triangles, including right-angled, isosceles, and scalene triangles.
Right-Angled Triangle
A right-angled triangle, or right triangle, is characterized by having one of its angles exactly 90 degrees. This distinctive feature simplifies many geometric calculations, particularly when finding the area.
• One angle is always 90 degrees.
• The side opposite the right angle is called the hypotenuse, which is the longest side.
• The other two sides are referred to as the legs of the triangle.
Area Calculation: The area of a right-angled triangle can be easily determined using the basic formula Area = 1/2 × base × height. In a right-angled triangle, the base and the height are the two legs.
Example: If one leg of a right triangle is 3 units and the other leg is 4 units, the area would be: Area = 1/2 × 3 × 4 = 6 square units
Isosceles Triangle
An isosceles triangle has at least two sides of equal length. This symmetry affects both its internal angles and the methods used to calculate its area.
• At least two sides are of equal length.
• The angles opposite the equal sides are also equal.
Area Calculation: For an isosceles triangle, the area can be calculated using the base and height, or by splitting the triangle into two right-angled triangles if the height is not directly given. The basic formula is Area = 1/2 × base × height. If only the sides are known, the height can be determined using the Pythagorean theorem.
Example: Consider an isosceles triangle with two sides of 5 units each and a base of 6 units. First, find the height:
• height = √(5² − 3²) = √(25 − 9) = √16 = 4 units
Then, calculate the area:
• Area = 1/2 × 6 × 4 = 12 square units
Scalene Triangle
A scalene triangle has all sides of different lengths and all interior angles different. This lack of symmetry requires a more flexible approach to find the area.
• All three sides have different lengths.
• All interior angles are different.
Area Calculation: Heron's formula is particularly useful for scalene triangles when only the side lengths are known. Heron's formula states: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter: s = (a + b + c)/2. Here, a, b, and c are the lengths of the sides.
Example: For a scalene triangle with sides 7 units, 8 units, and 9 units: s = (7 + 8 + 9)/2 = 12 units. Then, apply Heron's formula: Area = √(12 × 5 × 4 × 3) = √720 ≈ 26.83 square units
By understanding these types of triangles and their properties, you can apply the most suitable formula to calculate the area accurately. This foundational knowledge is essential as we progress to more advanced methods, ensuring you can handle any triangle you encounter.
Basic Formula for Area of a Triangle
Calculating the area of a triangle using the basic formula is straightforward and widely applicable, especially for right-angled and isosceles triangles. This method relies on knowing the base and height of the triangle, making it one of the most commonly used techniques in geometry.
Explanation of Terms
To effectively use the basic formula, it's essential to understand the key terms involved: "base" and "height."
Base: The base of a triangle is any one of its sides, typically the one chosen to be horizontal. It serves as the reference side for measuring the height. When the triangle is positioned with one side along a horizontal plane, that side is considered the base.
For convenience, it's often easiest to select the horizontal side.
Measure the Height: Measure the perpendicular distance from the base to the opposite vertex. This distance should form a right angle with the base.
Apply the Formula: Multiply the base by the height, then divide the result by two to get the area of the triangle.
Example 1: Right-Angled Triangle
Consider a right-angled triangle with a base of 5 units and a height of 12 units: Area = 1/2 × 5 × 12 = 30 square units. This calculation is straightforward as the legs of the right triangle directly provide the base and height.
Example 2: Isosceles Triangle
An isosceles triangle with equal sides of 10 units and a base of 8 units can have its height calculated using the Pythagorean theorem. The height divides the base into two equal segments of 4 units each. Then, the height h can be calculated as: h = √(10² − 4²) = √84 ≈ 9.17 units. Now, apply the basic formula: Area = 1/2 × 8 × 9.17 ≈ 36.68 square units
Example 3: Scalene Triangle
For a scalene triangle with a base of 7 units and a height of 4 units: Area = 1/2 × 7 × 4 = 14 square units. The height in a scalene triangle can often be measured using geometric tools or derived from other known dimensions using trigonometric methods.
Understanding and applying the basic formula for the area of a triangle is fundamental for solving various geometric problems. This method's simplicity and versatility make it a vital tool in mathematics, applicable to different types of triangles under various conditions.
Basic Formula for Area of a Triangle
Calculating the area of a triangle using the basic formula is an essential skill in geometry. This method involves identifying the base and height of the triangle and applying a straightforward formula. Here's a detailed, step-by-step guide to help you master this fundamental technique.
Step-by-Step Calculation
Identify the Base: The first step in finding the area of a triangle is to identify the base. The base is any one of the triangle's sides, but it is typically the side on which the triangle is imagined to be resting. For right-angled triangles, one of the legs is usually chosen as the base.
Using the Pythagorean theorem, the height h is: h = √(10² − 6²) = √64 = 8 units. Now, applying the formula: Area = 1/2 × 12 × 8 = 48 square units
Example 3: Scalene Triangle
Consider a scalene triangle with a base of 9 units and a height of 5 units: Area = 1/2 × 9 × 5 = 22.5 square units. In a scalene triangle, where all sides and angles are different, the height is often measured using geometric tools.
Understanding and applying this basic formula is fundamental for solving various problems involving the area of a triangle. This straightforward method is applicable to different types of triangles, making it a versatile tool in geometry. By mastering these steps, you can confidently calculate the area of any triangle, whether it's a right-angled, isosceles, or scalene triangle. This foundational knowledge prepares you for more complex calculations, such as those involving Heron's formula or trigonometric methods, which we will explore in the following sections.
Heron's Formula
In geometry, there are times when you need to calculate the area of a triangle but only have the lengths of its sides. In such cases, Heron's formula becomes an invaluable tool. This formula allows you to find the area of any triangle when the lengths of all three sides are known, making it especially useful for scalene triangles, where no sides are equal.
Introduction to Heron's Formula
Heron's formula is named after Hero of Alexandria, an ancient Greek engineer and mathematician who derived this formula. It is particularly useful when you cannot easily measure the height of a triangle. This method is applicable to all types of triangles: equilateral, isosceles, and scalene. Heron's formula provides a way to calculate the area based solely on the side lengths, without needing to find the height first. This makes it a versatile tool for various mathematical and practical applications.
Formula and Explanation
Heron's formula for the area of a triangle is given by: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter of the triangle, and a, b, and c are the lengths of the sides. The semi-perimeter s is calculated as: s = (a + b + c)/2. This formula works by first finding the semi-perimeter, which is half the perimeter of the triangle, and then using it in conjunction with the side lengths to determine the area.
Step-by-Step Calculation
Measure the Sides of the Triangle: Begin by measuring the lengths of all three sides of the triangle. Let's denote these sides as a, b, and c.
Compute the Semi-Perimeter: Calculate the semi-perimeter s of the triangle using the formula: s = (a + b + c)/2
Apply Heron's Formula: Use the semi-perimeter and the side lengths in Heron's formula to find the area: Area = √(s(s − a)(s − b)(s − c))
Example Calculation
Example: Consider a triangle with side lengths of 7 units, 8 units, and 9 units.
Step 1: Measure the sides: a = 7 units, b = 8 units, c = 9 units
Step 2: Compute the semi-perimeter: s = (7 + 8 + 9)/2 = 12 units
Step 3: Apply Heron's formula: Area = √(12 × 5 × 4 × 3) = √720 ≈ 26.83 square units
Heron's formula simplifies the process of finding the area of a triangle when the height is not easily accessible, and it's particularly useful for scalene triangles. By understanding and applying this formula, you can tackle more complex geometric problems with confidence.
Trigonometric Formula
When dealing with triangles, there are scenarios where you know the lengths of two sides and the measure of the included angle.
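Before developing the trigonometric method, the Heron's-formula procedure just described can be captured in a few lines of code. The snippet below is only a minimal illustrative sketch of the formula — the function name herons_area and the example values are chosen for this sketch, and this is not the HireQuotient calculator's own implementation:

```python
import math

def herons_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths, via Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# The worked example above: sides of 7, 8, and 9 units.
print(round(herons_area(7, 8, 9), 2))  # 26.83 square units
```

A fuller implementation would also check that the three lengths can actually form a triangle (each side shorter than the sum of the other two) before taking the square root.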
The trigonometric formula leverages trigonometric functions to provide an accurate calculation of the area, making it an essential tool in trigonometry and practical applications like engineering and physics.
Introduction to Trigonometric Formula
The trigonometric formula is used to find the area of a triangle when you have the lengths of two sides and the included angle between them. This method is especially beneficial for non-right-angled triangles where traditional methods might not be straightforward. It is commonly applied in problems involving navigation, astronomy, and any field requiring precise measurements of angles and distances.
Formula and Explanation
The trigonometric formula for the area of a triangle is expressed as: Area = 1/2 × a × b × sin(C), where:
• a and b are the lengths of the two sides.
• C is the measure of the included angle between these sides.
• sin(C) is the sine of the included angle.
This formula calculates the area by taking half of the product of the two sides and the sine of the included angle. The sine function helps to account for the angle's impact on the overall area, providing an accurate measurement.
Step-by-Step Calculation
Measure the Sides and the Included Angle: Begin by measuring the lengths of the two sides a and b of the triangle, as well as the included angle C. Ensure that the angle is measured in degrees or radians, depending on the sine function used.
Apply the Formula: Use the measured values in the trigonometric formula to find the area: Area = 1/2 × a × b × sin(C)
Example Calculation
Example: Consider a triangle with sides a = 7 units, b = 9 units, and an included angle C = 60°.
Step 1: Measure the sides and the included angle: a = 7 units, b = 9 units, C = 60°
Step 2: Apply the formula: Area = 1/2 × 7 × 9 × sin(60°) ≈ 31.5 × 0.866 ≈ 27.28 square units
The trigonometric formula is particularly useful for triangles where it is difficult to measure the height directly. It provides a precise calculation based on the side lengths and the angle, making it a versatile tool in various scientific and engineering applications.
How to Use Hirequotient Area of Triangle Calculator
Using the Hirequotient Area of Triangle calculator simplifies the process of finding the area of a triangle, whether you have the base and height, the side lengths, or the included angle between two sides. This tool is designed to be user-friendly and efficient, providing quick and accurate results for various types of triangles. Here's a step-by-step guide on how to use the calculator.
Step 1: Select the Formula
The first step is to choose the appropriate formula based on the information you have about the triangle. The calculator offers multiple options, including:
• Basic Formula: For triangles where the base and height are known.
• Heron's Formula: For triangles where the lengths of all three sides are known.
• Trigonometric Formula: For triangles where two sides and the included angle are known.
Step 2: Enter the Sides
Once you have selected the appropriate formula, enter the required dimensions:
• For the Basic Formula: Input the base and height of the triangle.
• For Heron's Formula: Input the lengths of all three sides of the triangle.
• For the Trigonometric Formula: Input the lengths of the two sides and the measure of the included angle.
The calculator interface is designed to be intuitive, with clear fields for each required measurement.
Step 3: Calculate
After entering the necessary values, simply click the "Calculate" button. The calculator will process the inputs and provide the area of the triangle.
This instant calculation saves time and ensures accuracy, making it an invaluable tool for students, educators, and professionals alike.
Example Calculations
Using the Basic Formula:
• Input: Base = 5 units, Height = 10 units
• Process: Select the basic formula and enter the base and height.
• Result: Area = 1/2 × 5 × 10 = 25 square units
Using Heron's Formula:
• Input: Side lengths = 6 units, 8 units, and 10 units
• Process: Select Heron's formula and enter the side lengths.
• Result: With s = (6 + 8 + 10)/2 = 12, Area = √(12 × 6 × 4 × 2) = √576 = 24 square units
Using the Trigonometric Formula:
• Input: Sides = 7 units and 9 units, Included Angle = 45°
• Process: Select the trigonometric formula and enter the sides and angle.
• Result: Area = 1/2 × 7 × 9 × sin(45°) ≈ 22.27 square units
The Hirequotient Area of Triangle calculator is designed to handle various scenarios, ensuring that you can find the area accurately regardless of the triangle's type. This versatility makes it a powerful tool for educational purposes, professional use, and everyday problem-solving.
Benefits of Using Our Area of Triangle Calculator
Using the Hirequotient Area of Triangle calculator offers numerous advantages, making it an invaluable tool for anyone needing to calculate the area of a triangle accurately and efficiently. This section highlights the key benefits of the calculator, focusing on its accuracy, ease of use, efficiency, versatility, and educational value.
Accuracy
One of the primary benefits of the Hirequotient Area of Triangle calculator is its high accuracy. Calculating the area of a triangle manually can sometimes lead to errors, especially with complex formulas like Heron's formula or trigonometric calculations. The calculator ensures that each step is performed correctly, providing precise results every time. This accuracy is essential for academic purposes, professional applications, and any scenario where exact measurements are crucial.
Ease of Use
The user-friendly interface of the Hirequotient calculator makes it accessible to everyone, from students to professionals. The clear, straightforward design means that even those with limited mathematical knowledge can input values and get results effortlessly. The calculator guides users through each step, ensuring that all necessary information is provided for accurate calculations.
Efficiency
Time is often of the essence, and the Hirequotient Area of Triangle calculator excels in providing quick results. Manual calculations can be time-consuming, particularly when dealing with more complex triangles. By automating the process, the calculator saves valuable time, allowing users to focus on interpreting the results rather than performing the calculations. This efficiency is particularly beneficial in educational settings, where time can be better spent on understanding concepts rather than performing repetitive calculations.
Versatility
The calculator's versatility is another significant advantage. It supports various methods for finding the area of a triangle, including the basic formula, Heron's formula, and the trigonometric formula. This versatility means that users can calculate the area of any triangle type, whether it is equilateral, isosceles, scalene, or right-angled. This adaptability ensures that the calculator can handle a wide range of geometric problems, making it a reliable tool for different scenarios.
Educational Tool
Beyond its practical applications, the Hirequotient Area of Triangle calculator is an excellent educational tool.
By using the calculator, students can gain a deeper understanding of geometric principles and the different methods for calculating the area of a triangle. The step-by-step guidance provided by the calculator helps reinforce learning, making it easier for students to grasp complex concepts. Additionally, by seeing how different formulas are applied, students can develop a stronger foundation in geometry.
Statistics and Real-World Applications
In real-world applications, accurate area calculations are vital. For example, in construction, precise measurements are necessary to ensure structural integrity and optimal use of materials. Using a reliable tool like the Hirequotient Area of Triangle calculator can help avoid costly measurement errors by providing accurate results, thus contributing to more efficient and cost-effective project outcomes.
The Hirequotient Area of Triangle calculator stands out for its accuracy, ease of use, efficiency, versatility, and educational value. Whether you are a student, educator, professional, or hobbyist, this tool will enhance your ability to calculate the area of a triangle quickly and accurately. By incorporating this calculator into your toolkit, you can ensure that your geometric calculations are always precise and reliable.
Example Calculations
To fully understand how to calculate the area of a triangle, it is helpful to look at detailed examples for each method: the basic formula, Heron's formula, and the trigonometric formula. These examples will illustrate the step-by-step process and demonstrate the versatility of each method.
Basic Formula Example
The basic formula for the area of a triangle is one of the simplest and most commonly used methods. It is particularly useful for right-angled and isosceles triangles where the base and height are easily identifiable.
Example: Consider a right-angled triangle with a base of 6 units and a height of 8 units. To find the area:
Identify the Base and Height:
• Base (b) = 6 units
• Height (h) = 8 units
Apply the Formula: Area = 1/2 × 6 × 8 = 24 square units
This straightforward calculation demonstrates the efficiency and simplicity of using the basic formula to find the area of a right-angled triangle.
Heron's Formula Example
Heron's formula is ideal for finding the area of a triangle when only the side lengths are known. This method is particularly useful for scalene triangles, where the sides are of different lengths.
Example: Consider a triangle with side lengths of 7 units, 8 units, and 9 units. To find the area:
Measure the Sides:
• Side a = 7 units
• Side b = 8 units
• Side c = 9 units
Compute the Semi-Perimeter: s = (a + b + c)/2 = (7 + 8 + 9)/2 = 12 units
Apply Heron's Formula: Area = √(12 × 5 × 4 × 3) = √720 ≈ 26.83 square units
This example highlights how Heron's formula can be used to accurately determine the area of a triangle based solely on its side lengths.
Trigonometric Formula Example
The trigonometric formula is useful when you know the lengths of two sides and the measure of the included angle. This method is applicable to various types of triangles and is particularly helpful in trigonometry.
Example: Consider a triangle with sides a = 7 units and b = 9 units, and an included angle C = 45°. To find the area:
Measure the Sides and the Included Angle:
• Side a = 7 units
• Side b = 9 units
• Included angle C = 45°
Apply the Trigonometric Formula: Area = 1/2 × 7 × 9 × sin(45°) ≈ 22.27 square units
This example shows how the trigonometric formula can be applied to find the area of a triangle when the side lengths and included angle are known, offering a precise calculation method.
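To tie the three worked examples together, here is a short Python sketch that reproduces each result with the corresponding formula. It is an illustrative sketch only — the function names are arbitrary and this is not the calculator's own code:

```python
import math

def area_base_height(base: float, height: float) -> float:
    """Basic formula: half the product of base and height."""
    return 0.5 * base * height

def area_three_sides(a: float, b: float, c: float) -> float:
    """Heron's formula: area from three side lengths."""
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

def area_two_sides_angle(a: float, b: float, angle_degrees: float) -> float:
    """Trigonometric formula: (1/2) * a * b * sin(C), with C given in degrees."""
    return 0.5 * a * b * math.sin(math.radians(angle_degrees))

# The three examples above:
print(area_base_height(6, 8))                    # 24.0 square units (basic formula)
print(round(area_three_sides(7, 8, 9), 2))       # 26.83 square units (Heron's formula)
print(round(area_two_sides_angle(7, 9, 45), 2))  # 22.27 square units (trigonometric formula)
```

Note that the trigonometric helper converts the angle from degrees to radians before calling the sine function, matching the convention used in the worked examples.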
Special Cases
While the general methods for calculating the area of a triangle are versatile, certain types of triangles have unique properties that allow for simplified calculations. Here, we will explore special cases, including equilateral triangles, isosceles triangles, and right triangles. Understanding these specific scenarios will further enhance your ability to calculate the area of any triangle accurately and efficiently.
Equilateral Triangle
An equilateral triangle has all three sides of equal length and all three angles measuring 60 degrees. This symmetry allows for a simplified formula to calculate the area.
Simplified Formula: The area of an equilateral triangle can be calculated using the formula Area = (√3/4) × a², where a is the side length. This simplified formula makes it easy to calculate the area of an equilateral triangle without needing to measure the height separately.
Isosceles Triangle
An isosceles triangle has at least two sides of equal length, and the angles opposite these sides are also equal. This property can be used to simplify the calculation of the area.
Calculating the Area: To find the area of an isosceles triangle, you can use the basic formula by first determining the height. The height can be calculated using the Pythagorean theorem if the base and the equal sides are known.
Example: Consider an isosceles triangle with two sides of 5 units each and a base of 6 units. First, calculate the height: height = √(5² − 3²) = √16 = 4 units, so Area = 1/2 × 6 × 4 = 12 square units. By leveraging the properties of isosceles triangles, you can simplify the process of finding their area.
Right Triangle
A right triangle has one angle that is exactly 90 degrees. The legs of the triangle can be directly used as the base and height, making the area calculation straightforward.
Straightforward Calculation: The area of a right triangle is found using the basic formula, where the legs of the triangle serve as the base and height: Area = 1/2 × base × height.
Example: For a right triangle with legs of 3 units and 4 units: Area = 1/2 × 3 × 4 = 6 square units
The simplicity of this calculation makes right triangles one of the easiest types to work with in geometry.
In conclusion, understanding how to find the area of a triangle is a fundamental skill in geometry that has practical applications across various fields. Whether you are dealing with an equilateral, isosceles, or right triangle, knowing the appropriate formulas and methods ensures accurate and efficient calculations.
FAQs: area of a triangle
How to find the area of a triangle? To find the area of a triangle, use the formula: Area = 1/2 × base × height. Measure the base and the height of the triangle, then multiply them and divide by two.
How to find area of a triangle? To find the area of a triangle, you can use the basic formula, Heron's formula, or the trigonometric formula, depending on the given information. The basic formula is the most commonly used and straightforward method.
How do you find the area of a triangle? You find the area of a triangle by using the formula Area = 1/2 × base × height. This requires knowing the measurements of the base and the height of the triangle.
What is the area of a triangle? The area of a triangle is the measure of the region enclosed by the triangle. It can be calculated using various formulas, with the most common being 1/2 × base × height.
How to find the area of a right triangle? To find the area of a right triangle, use the formula: Area = 1/2 × base × height, where the base and height are the two legs of the triangle.
How to find area of a right triangle?
Finding the area of a right triangle involves using the base and height, which are the two perpendicular sides. The formula is 1/2 × base × height.
How to calculate the area of a triangle? To calculate the area of a triangle, measure the base and height, then apply the formula: Area = 1/2 × base × height.
How to find an area of a triangle? To find an area of a triangle, use the basic formula Area = 1/2 × base × height if you know the base and height, or use Heron's formula if you know all three sides.
How do you find area of a triangle? You find the area of a triangle by multiplying the base by the height and then dividing by two, using the formula 1/2 × base × height.
What is the formula for the area of a triangle? The formula for the area of a triangle is Area = 1/2 × base × height.
How to calculate area of a triangle? To calculate the area of a triangle, you need to know the base and height. Use the formula: Area = 1/2 × base × height.
How do I find the area of a triangle? To find the area of a triangle, measure the base and height, then use the formula: Area = 1/2 × base × height.
Which figure best demonstrates the setup for the box method of finding the area of a triangle? A rectangle divided into two equal right triangles best demonstrates the setup for the box method of finding the area of a triangle.
How to get the area of a triangle? To get the area of a triangle, measure the base and height, then apply the formula Area = 1/2 × base × height.
How do you find the area of a right triangle? To find the area of a right triangle, use the formula Area = 1/2 × base × height, where the base and height are the two legs of the triangle.
What is the formula for area of a triangle? The formula for the area of a triangle is Area = 1/2 × base × height.
How to find the surface area of a triangle? To find the surface area of a triangle, you typically refer to its area, using the formula Area = 1/2 × base × height.
How do u find the area of a triangle? You find the area of a triangle by measuring the base and height, then applying the formula 1/2 × base × height.
How to find the area of a triangle with 3 sides? To find the area of a triangle with 3 sides, use Heron's formula: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter.
What is the formula to find the area of a triangle? The formula to find the area of a triangle is 1/2 × base × height.
How to find surface area of a triangle? To find the surface area of a triangle, use the basic area formula 1/2 × base × height for its two-dimensional space.
How to do area of a triangle?
To do the area of a triangle, use the formula 1/2 × base × height and input the measurements for the base and height.
What is the area of a right triangle? The area of a right triangle is calculated using the formula 1/2 × base × height, where the base and height are the two perpendicular sides.
What is the formula for the area of a triangle? The formula for the area of a triangle is 1/2 × base × height.
What is the formula for finding the area of a triangle? The formula for finding the area of a triangle is 1/2 × base × height.
What is area of a triangle? The area of a triangle is the space enclosed by its three sides, calculated using the formula 1/2 × base × height.
How to solve area of a triangle? To solve the area of a triangle, measure the base and height, then apply the formula 1/2 × base × height.
How to find the area of an equilateral triangle? To find the area of an equilateral triangle, use the formula Area = (√3/4) × a², where a is the side length.
How to get area of a triangle? To get the area of a triangle, measure the base and height, then use the formula 1/2 × base × height.
How to solve the area of a triangle? To solve the area of a triangle, measure the base and height, and then apply the formula 1/2 × base × height.
How to find area of a non-right triangle? To find the area of a non-right triangle, use Heron's formula if you know all three sides, or the trigonometric formula if you know two sides and the included angle.
How to find area and perimeter of a triangle? To find the area, use 1/2 × base × height. For the perimeter, add up the lengths of all three sides.
What is the area of a triangle formula? The area of a triangle formula is 1/2 × base × height.
What is an area of a triangle? An area of a triangle is the region enclosed by the three sides, calculated using 1/2 × base × height.
How to find the area of a triangle with coordinates? To find the area of a triangle with coordinates, use the formula: Area = 1/2 × |x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|.
How do you find an area of a triangle? You find an area of a triangle by using the formula 1/2 × base × height.
How to find the area of a triangle formula? To find the area of a triangle formula, understand that it involves multiplying the base by the height and then dividing by two. The formula is 1/2 × base × height.
How to calculate the area of a right triangle? To calculate the area of a right triangle, use the formula 1/2 × base × height where the base and height are the two legs forming the right angle.
How to find the area and perimeter of a triangle? To find the area, use the formula 1/2 × base × height. For the perimeter, add up the lengths of all three sides: Perimeter = a + b + c.
How to find the height of a triangle without the area? To find the height of a triangle without knowing the area, you can use trigonometric ratios or the Pythagorean theorem if the triangle is right-angled, or rearrange the area formula if you know the area: height = (2 × Area)/base.
How to calculate area of a right triangle? To calculate the area of a right triangle, apply the formula 1/2 × base × height. The base and height are the two perpendicular sides.
How to figure out the area of a triangle? To figure out the area of a triangle, measure the base and the height, then use the formula 1/2 × base × height.
How to find the surface area of a triangle prism?
To find the surface area of a triangular prism, calculate the area of the two triangular bases and the areas of the three rectangular sides, then sum them up: Surface Area = 2 × Area of Triangle + Perimeter of Triangle × Length of Prism.
What is the area of the equilateral triangle with the length of each side equal to a? The area of an equilateral triangle with side length a is given by the formula (√3/4) × a².
How to find area of a triangle without height? To find the area of a triangle without the height, use Heron's formula: Area = √(s(s − a)(s − b)(s − c)), where s = (a + b + c)/2.
How to find the area of an isosceles triangle? To find the area of an isosceles triangle, use the formula 1/2 × base × height. If the height is unknown, you can calculate it using the Pythagorean theorem.
A triangle base is 6 inches long and its height is 5 inches. What is the area of the triangle? For a triangle with a base of 6 inches and a height of 5 inches, the area is calculated as: Area = 1/2 × 6 × 5 = 15 square inches.
How do you find the area of a triangle? To find the area of a triangle, measure the base and height, then apply the formula 1/2 × base × height.
How to find the surface area of a triangle prism? To find the surface area of a triangular prism, calculate the area of the two triangular bases and add the areas of the three rectangular sides. The total surface area is: Surface Area = 2 × Area of Triangle + Perimeter of Triangle × Length of Prism.
How do you get the area of a triangle? You get the area of a triangle by using the formula 1/2 × base × height.
How to find the area of a non-right triangle? For a non-right triangle, you can use Heron's formula if you know all three sides: Area = √(s(s − a)(s − b)(s − c)).
How to find area of an equilateral triangle? To find the area of an equilateral triangle, use the formula (√3/4) × a², where a is the side length.
What is the formula for area of a triangle? The formula for the area of a triangle is 1/2 × base × height.
How to find an area of a right triangle? To find the area of a right triangle, use the formula 1/2 × base × height where the base and height are the two perpendicular sides.
How to find the area of a triangle prism? To find the area of a triangle prism, you need to calculate the area of the triangular base using 1/2 × base × height and then use this to find the volume and surface area of the prism.
How do I find the area of a right triangle? To find the area of a right triangle, measure the base and height, then use the formula 1/2 × base × height.
How do you calculate the area of a triangle? You calculate the area of a triangle by using the formula 1/2 × base × height.
How to find area of a triangle with 3 sides? To find the area of a triangle with 3 sides, use Heron's formula: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter of the triangle.
What is the area of a triangle? The area of a triangle is the measure of the space enclosed by its three sides, typically calculated using the formula 1/2 × base × height.
How to find the area of a triangle calculator? To find the area of a triangle using a calculator, input the base and height values into the area formula 1/2 × base × height, or use a specific triangle area calculator that can handle different types of input.
How to get the area of a right triangle? To get the area of a right triangle, apply the formula 1/2 × base × height.
How to solve for area of a triangle? To solve for the area of a triangle, measure the base and height, and use the formula 1/2 × base × height.
How do you find the surface area of a triangle? The surface area of a triangle is the same as its area and is calculated using 1/2 × base × height.
How to find the area of a triangle without the height?
If the height is unknown, use Heron's formula for the area if the side lengths are known: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter.
How do you find area of a right triangle? You find the area of a right triangle by using the formula 1/2 × base × height.
What is the formula for the area of a right triangle? The formula for the area of a right triangle is 1/2 × base × height.
How to figure area of a triangle? To figure the area of a triangle, use the formula 1/2 × base × height.
How do u find area of a triangle? You find the area of a triangle by measuring the base and height, then using the formula 1/2 × base × height.
How to find the perimeter and area of a triangle? To find the area, use 1/2 × base × height. For the perimeter, add up the lengths of all three sides: Perimeter = a + b + c.
How to find the area of a triangle on a graph? To find the area of a triangle on a graph, you can use the coordinates of the vertices and apply the formula: Area = 1/2 × |x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|. This method is useful for triangles plotted in a coordinate plane.
What is the area of this triangle? a. b. c. d. e. To determine the area of a triangle given multiple choice options (a, b, c, d, e), identify the base and height from the provided information, then use the formula 1/2 × base × height.
How to figure the area of a triangle? To figure the area of a triangle, measure the base and height, then use the formula 1/2 × base × height.
How to find the area of a triangle with 2 sides and an angle? To find the area of a triangle with 2 sides and an included angle, use the trigonometric formula: Area = 1/2 × a × b × sin(C), where a and b are the sides, and C is the included angle.
How to find the area of a scalene triangle? For a scalene triangle, use Heron's formula if you know the lengths of all three sides: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter.
Given a triangle with a base of 8 units and a height of 6 units, what is its area? The area of a triangle with a base of 8 units and a height of 6 units is: Area = 1/2 × 8 × 6 = 24 square units.
How to find the area of an obtuse triangle? To find the area of an obtuse triangle, use the basic formula 1/2 × base × height or Heron's formula if the side lengths are known.
How to find perimeter and area of a triangle? To find the area, use 1/2 × base × height. For the perimeter, sum the lengths of all three sides: Perimeter = a + b + c.
How find the area of a triangle? To find the area of a triangle, measure the base and height, then use the formula 1/2 × base × height.
How to do the area of a triangle? To find the area of a triangle, measure the base and height, and then apply the formula 1/2 × base × height.
How to figure out area of a triangle? To figure out the area of a triangle, measure the base and height, and then use the formula 1/2 × base × height.
The formula for the area of a triangle is A = 1/2 × b × h, where b is the length of the base and h is the height.
How do you find the area of a non-right triangle? To find the area of a non-right triangle, use Heron's formula if you know all three sides, or the trigonometric formula if you know two sides and the included angle.
How to find the area of a triangle? To find the area of a triangle, use the formula 1/2 × base × height if you know the base and height, or use Heron's formula if you know the side lengths.
How to find the area of a triangle that is not a right triangle? For a triangle that is not a right triangle, use Heron's formula: Area = √(s(s − a)(s − b)(s − c)), where s is the semi-perimeter.
What is the area of triangle PQR? Round to the nearest tenth of a square unit.
To find the area of triangle PQR, use the given dimensions and apply the appropriate formula (basic formula, Heron's formula, or trigonometric formula), then round the result to the nearest tenth.

How to find height of a triangle without area?
To find the height of a triangle without knowing the area, you can use trigonometric ratios if you have the lengths of the sides, or rearrange the area formula if you know the area and the base.

How to find the area of a triangle on a coordinate plane?
To find the area of a triangle on a coordinate plane, use the formula Area = 1/2 × |x₁(y₂ − y₃) + x₂(y₃ − y₁) + x₃(y₁ − y₂)|.

What is the area of a triangle with side lengths 30, 40, and 50?
To find the area of a triangle with side lengths 30, 40, and 50, use Heron's formula: s = (30 + 40 + 50)/2 = 60, so Area = √(60 × 30 × 20 × 10) = 600 square units.

What is the area of an equilateral triangle?
The area of an equilateral triangle is given by the formula Area = (√3/4) × side².

What is the area of an acute isosceles triangle with a base of 5 and height of 9?
For an acute isosceles triangle with a base of 5 and a height of 9, the area is: Area = 1/2 × 5 × 9 = 22.5 square units.

How do you find the area of a right triangle?
To find the area of a right triangle, use the formula 1/2 × base × height, where the base and height are the two legs forming the right angle.

What is the equation for the area of a triangle?
The equation for the area of a triangle is Area = 1/2 × base × height.

How to find area of a triangle with coordinates? Points x(6,8), y(3,3), and z(13,−3) form a triangle. What is the area of △xyz?
Applying the coordinate formula, Area = 1/2 × |6(3 − (−3)) + 3(−3 − 8) + 13(8 − 3)| = 1/2 × 68 = 34 square units.

How to find the base of a triangle without the area?
To find the base of a triangle without knowing the area, use trigonometric ratios or the Pythagorean theorem if the height and other side lengths are known.

How to find the surface area of a 3D triangle?
For a 3D triangular surface (like a triangular pyramid), you need to calculate the area of each triangular face and then sum them.

What is the formula for finding the area of a right triangle?
The formula for finding the area of a right triangle is Area = 1/2 × base × height.

What is the area of a triangle?
The area of a triangle is the measure of the space enclosed by its three sides. It is calculated using the formula 1/2 × base × height.

How to get area of a right triangle?
To get the area of a right triangle, use the formula 1/2 × base × height. Measure the two perpendicular sides, which are the base and the height, then apply the formula.

How to find the area of a 90 degree triangle?
The area of a 90-degree triangle, or a right triangle, is found using the formula 1/2 × base × height. The base and height are the two sides that form the right angle.

How to find area of a triangle calculator?
To find the area of a triangle using a calculator, enter the base and height into the formula 1/2 × base × height. Some calculators may also allow you to input side lengths and use Heron's formula or other methods.

What is the formula for area of a right triangle?
The formula for the area of a right triangle is 1/2 × base × height.

How do you find the surface area of a triangle prism?
To find the surface area of a triangular prism, calculate the area of the two triangular bases and the areas of the three rectangular sides, then add them together. The formula is: Surface Area = 2 × Area of Triangle + Perimeter of Triangle × Length of Prism.

How to find area of a triangle prism?
To find the area of a triangular prism, calculate the area of the triangular base using 1/2 × base × height, and then use this to find the volume and surface area of the prism.

How to determine the area of a triangle?
To determine the area of a triangle, measure the base and height, then apply the formula 1/2 × base × height.

What is the area of a triangle calculator?
An area of a triangle calculator is a tool that allows you to input the base and height, or the side lengths and included angles, to quickly calculate the area of the triangle.

How to measure the area of a triangle?
To measure the area of a triangle, use the formula 1/2 × base × height after measuring the base and the height.

How to find area of a triangle?
To find the area of a triangle, use the formula 1/2 × base × height. Measure the base and height, then apply the formula.

How to find the area of a right-angle triangle?
To find the area of a right-angle triangle, measure the two legs, which act as the base and height, and then use the formula 1/2 × base × height.

How to find area of a triangle with 2 sides and an angle?
To find the area of a triangle with 2 sides and an included angle, use the trigonometric formula Area = 1/2 × a × b × sin(C), where a and b are the sides, and C is the included angle.

How to solve for the area of a triangle?
To solve for the area of a triangle, use the formula 1/2 × base × height if you know the base and height, or use Heron's formula if you know the lengths of all three sides.

How do you find the area of an equilateral triangle?
To find the area of an equilateral triangle, use the formula Area = (√3/4) × side².

How to find area of a triangle formula?
The formula to find the area of a triangle is Area = 1/2 × base × height.

Thomas M. A. A literature-lover by design and qualification, Thomas loves exploring different aspects of software and writing about the same.
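The formulas collected in this FAQ are easy to check numerically. The short Python sketch below is our own illustration, not part of the original article, and the function names are made up for this example. It implements the base-height formula, Heron's formula, the two-sides-and-included-angle formula, and the coordinate formula:

    import math

    def area_base_height(base, height):
        # Area = 1/2 * base * height
        return 0.5 * base * height

    def area_heron(a, b, c):
        # Heron's formula: sqrt(s(s-a)(s-b)(s-c)) with s the semi-perimeter
        s = (a + b + c) / 2
        return math.sqrt(s * (s - a) * (s - b) * (s - c))

    def area_two_sides_included_angle(a, b, angle_c_degrees):
        # Area = 1/2 * a * b * sin(C), with C the included angle
        return 0.5 * a * b * math.sin(math.radians(angle_c_degrees))

    def area_from_coordinates(p1, p2, p3):
        # Area = 1/2 * |x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2)|
        (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
        return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

    print(area_base_height(6, 5))                           # 15.0
    print(area_heron(30, 40, 50))                           # 600.0
    print(area_from_coordinates((6, 8), (3, 3), (13, -3)))  # 34.0

Running it reproduces the worked answers above: 15 square inches for the 6-by-5 triangle, 600 for the 30-40-50 triangle, and 34 for the triangle with vertices (6,8), (3,3), and (13,−3).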
{"url":"https://www.hirequotient.com/blog/area-of-a-triangle","timestamp":"2024-11-08T11:39:14Z","content_type":"text/html","content_length":"179797","record_id":"<urn:uuid:e56ad30f-bdba-4573-9bfc-59b16d25481d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00061.warc.gz"}
What is the Sliding Window Algorithm? A Complete Guide - [Updated November 2024 ] What is the Sliding Window Algorithm? A Complete Guide processing data, sliding window, sliding window algorithm, window algorithm The Sliding Window Algorithm is a data processing technique that is used in many efficient algorithms and methods. It is a sequential scanning technique that involves dividing a sequence of data into overlapping subsequences, or windows, and then processing these windows in an optimal manner. The main goal of the sliding window algorithm is to solve problems that involve performing a certain function or computation on a sequential data set, such as an array or a string. The algorithm optimizes the time complexity of the computation by only considering a subset of the data at each step, rather than reprocessing the entire dataset. The sliding window algorithm is particularly useful in applications where the size of the data set is very large and the computation needs to be done in an efficient manner. By using the sliding window technique, the algorithm avoids unnecessary computations and reduces the overall complexity of the problem. The sliding window algorithm can be applied to a wide range of problems, such as string matching, array manipulation, and optimization problems. It is a versatile technique that can be customized to suit the specific requirements of different applications. The optimality of the algorithm lies in its ability to efficiently process large datasets with minimal memory usage and computation overhead. What is the Sliding Window Algorithm? The sliding window algorithm is a method used for efficiently processing sequential data in a window of fixed size. It is a common technique used in many applications, including string searching, data compression, and image processing. The algorithm works by defining a window of a certain size and scanning through the data one element at a time, updating the window as it goes. One of the key benefits of the sliding window algorithm is its efficient time complexity. The algorithm only requires a single scan through the data, which makes it a highly efficient method for processing large amounts of data. The time complexity of the algorithm is typically linear, or O(n), where n is the size of the input data. The sliding window algorithm can be optimized for different applications by adjusting the size of the window and the function used to update the window. For example, in string searching, the window can be a substring of the input data, and the function used to update the window can be a simple shift operation. These optimizations can significantly improve the performance of the algorithm in specific scenarios. The sliding window algorithm also offers optimality in certain cases. For example, in the longest substring without repeating characters problem, the sliding window algorithm provides an optimal solution with a time complexity of O(n), where n is the length of the input string. This optimality makes the sliding window algorithm a popular choice for solving a wide range of problems How does the Sliding Window Algorithm work? The Sliding Window Algorithm is a computational optimization technique used to solve problems related to sequential data processing. It is an efficient algorithmic method that uses a sliding window to scan through the data in a sequential manner. At its core, the algorithm works by maintaining a window of a fixed size as it scans through the data. 
This window slides one element at a time, which allows for efficient processing of the data without unnecessary re-computation. The main idea behind the Sliding Window Algorithm is to minimize the time complexity by avoiding redundant computations. By using the sliding window technique, the algorithm can optimize the processing of data by only considering a subset of the data at a time. The sliding window algorithm can be applied to a variety of problems in different domains, such as string manipulation, substring search, array manipulation, and more. It is particularly useful when dealing with data that has some sequential or temporal structure. By using the sliding window technique, the algorithm can process the data in a more efficient and optimized way, reducing the overall complexity and improving the performance of the algorithm. It is a widely used technique in data processing and has various applications in computer science and related fields. Step 1: Initialization The Sliding Window algorithm is a technique used in computer algorithms and data processing to efficiently scan and process sequential data. It is widely used in various applications due to its optimality and efficiency in computation. The first step in implementing the Sliding Window algorithm is the initialization phase. This step involves setting up the initial configuration of the window and other variables that will be used throughout the algorithm. • Window: The window refers to a fixed-size subset of the data that will be scanned at each iteration. It is initialized with the first set of data elements. • Pointers: The algorithm uses pointers to keep track of the current position in the data and the boundaries of the window. These pointers are initialized to the starting position. • Function or Condition: The algorithm requires a function or condition that defines whether the current window satisfies the desired criteria or needs to be adjusted. This function or condition is initialized based on the problem being solved. During the initialization step, the complexity and optimization of the algorithm are considered. The window size and the values of other variables are chosen in a way that ensures efficient processing of the data. In summary, the initialization step of the Sliding Window algorithm involves setting up the initial window, pointers, and conditions for processing the sequential data. It is a crucial step that lays the foundation for the subsequent iterations of the algorithm. Step 2: Moving the Window Once we have initialized the sliding window, the next step is to move it across the input data. This step is crucial as it allows us to perform the computation on different portions of the data and determine the optimal solution. The technique used to move the window is what makes the sliding window algorithm efficient and effective. The method of moving the window depends on the specific application and the problem at hand. In general, the window moves one element at a time, either to the right or left, depending on the requirements. This movement is often done in a systematic way to ensure optimal processing of the data. The complexity of the sliding window algorithm lies in the time it takes to move the window and perform the necessary computations. The goal is to minimize this complexity and ensure efficient processing of the data. Various optimization techniques can be employed to achieve this, such as precomputing certain values or using more advanced algorithms. 
During the movement of the window, the algorithm may need to recalculate or update certain values or functions to maintain the accuracy of the computation. This could involve updating a running sum, adjusting the size of the window, or any other relevant operation. It is essential to carefully handle these updates to prevent any errors or inconsistencies in the results. Overall, the step of moving the window in the sliding window algorithm is a crucial part of the computation process. It allows for efficient processing of data and optimization of the algorithm’s performance. By systematically moving the window and updating relevant values, the algorithm can scan through the input data and provide the desired output with improved time complexity. Step 3: Updating the Result Once the sliding window has been moved to the next position during the scanning phase, updating the result is the next step in the sliding window algorithm. This step involves processing the data currently within the window and updating the result based on the processed information. The updating process depends on the specific application and problem being solved using the sliding window technique. It may involve computing a function on the data within the window, comparing it with the current result, and updating the result accordingly. This step is crucial for maintaining the optimality of the algorithm and ensuring efficient computation. For sequential processing problems, the updating step often involves updating a result variable or outputting the result as soon as a new element is added to the window. This allows for efficient computation and reduces the time complexity of the algorithm. Depending on the problem, the updating step can be as simple as comparing the current result with the new value in the window and updating it if necessary. In other cases, more complex operations may be involved, such as merging or aggregating multiple values in the window to derive the updated result. Overall, the updating step plays a crucial role in the sliding window algorithm as it ensures that the result remains accurate and reflects the current state of the data. By appropriately updating the result at each step, the algorithm can efficiently process large datasets and provide optimal solutions to various problems. Applications of the Sliding Window Algorithm The Sliding Window Algorithm is a computational method commonly used in data processing applications that require sequential scanning of data. It is an optimization technique for reducing the complexity of computations over a window of data. This algorithm is particularly efficient for problems that involve processing a continuous stream of data, such as signal processing or real-time data analysis. One of the main applications of the sliding window algorithm is in time series analysis. By applying the sliding window technique to a time series dataset, it becomes possible to efficiently compute various statistical measures, such as moving averages or moving medians, over a defined window size. This allows for the analysis of trends and patterns in the data, without having to perform computationally expensive computations on the entire dataset. Another application of the sliding window algorithm is in image processing. By applying the sliding window technique to an image, it becomes possible to perform localized computations, such as edge detection or object recognition, over a defined window size. 
This allows for the efficient processing of large images, as only a small portion of the image needs to be processed at a time.

The sliding window algorithm is also widely used in data compression algorithms, such as the LZ77 compression algorithm. By applying the sliding window technique, it becomes possible to identify and encode repeated patterns within a data stream. This can significantly reduce the size of the compressed data, as repeated patterns are replaced with references to previously encoded data.

In summary, the sliding window algorithm has a wide range of applications in various fields such as time series analysis, image processing, and data compression. It is a powerful technique for efficiently processing sequential data and can significantly optimize computations over a defined window size. Its optimality lies in its ability to reduce the complexity of computations by only considering a small portion of the data at a time.

String Manipulation

String manipulation is a fundamental operation in computer science and is often required for various types of data processing and analysis. It involves performing operations on strings, such as concatenation, splitting, searching, and replacing, to extract and modify specific parts of the data.

A common method used in string manipulation is the sliding window algorithm. This technique involves dividing the input data into fixed-size windows or substrings and then performing computation on each window. It can be used to solve problems that involve sequential processing, pattern matching, or optimization.

The sliding window algorithm has a time complexity that depends on the size of the input data and the window size. By using this algorithm, we can efficiently scan the data and process each window in a sequential manner. This allows for efficient extraction of relevant information and avoids unnecessary computation on irrelevant parts of the data.

The sliding window algorithm is often used in various applications of string manipulation, such as text processing, data compression, data mining, and signal processing. It is a powerful technique for analyzing and manipulating large amounts of data efficiently.

In addition to its application in string manipulation, the sliding window algorithm can also be used for optimization problems. By adjusting the window size and the processing function, we can tune the algorithm for better performance or accuracy, depending on the specific problem. Whether the result is optimal depends on the specific problem requirements and the trade-offs you are willing to accept.

Overall, string manipulation is a crucial part of data processing and analysis, and the sliding window algorithm is a powerful technique for efficiently performing computations on sequential data. It allows for efficient extraction and modification of relevant information, leading to improved time complexity and better optimization for various applications.
This algorithm is particularly useful for problems that involve optimizing a function over a range of values, as it allows for the computation to be done in linear time complexity. By using the sliding window algorithm for array manipulation, it is possible to optimize the processing of data by avoiding redundant computations. This can result in significant improvements in both the time and space complexity of the algorithm. In addition, the sliding window technique can be used to solve a wide range of problems, making it a versatile tool for many applications. One example of array manipulation using the sliding window algorithm is finding the maximum sum of a subarray of a given array. By defining a window and moving it through the array while updating the maximum sum, it is possible to find the optimal solution in a time-efficient manner. This technique can be extended to solve various other problems, such as finding the longest substring without repeating characters or determining the maximum product of a subarray. In summary, array manipulation using the sliding window algorithm is an efficient method for optimizing computations and solving a range of problems. By defining a window and scanning through the array, redundant computations can be avoided and the complexity of the algorithm can be significantly reduced. This technique is widely used in various applications and provides a powerful tool for data processing and optimization. Optimization Problems Optimization problems refer to the task of finding the best solution from a set of feasible solutions. These problems are often characterized by the need to maximize or minimize a specific function, known as the objective function, while satisfying a set of constraints. Processing a large amount of data often requires efficient computation methods to find the optimal solution. The sliding window algorithm is one such technique that can be applied to optimization problems. This algorithm involves examining sequential subarrays or subsets of the data in a sliding manner, allowing for efficient processing. The sliding window algorithm operates by maintaining a “window” of data that slides along the input, updating the window as it moves. This technique can be particularly useful in scenarios where the optimal solution can be found by considering a subset of the data at a time. By utilizing the sliding window method, optimization problems can be solved with improved time complexity. The algorithm scans the input data in a sequential manner, avoiding redundant computations and reducing overall processing time. The optimality of the sliding window algorithm depends on the specific problem being solved. Some algorithms may provide an optimal solution, while others may only offer an approximation. It is essential to analyze the problem and understand the trade-offs between computational complexity and solution optimality when applying the sliding window technique. Overall, the sliding window algorithm is an efficient method for solving optimization problems. By utilizing the technique of processing sequential subsets of data, the algorithm can lead to improved computation times and provide viable solutions. However, it is important to carefully consider the problem at hand and evaluate the optimality of the algorithm when applying this technique. 
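To make the maximum-subarray-sum example mentioned above concrete, here is a minimal Python sketch; it is an illustration of the idea, not code from the original article. It keeps a running window sum and updates it in constant time as the window slides, which is exactly the redundancy-avoiding behavior this section describes:

    def max_window_sum(values, k):
        # Maximum sum of any contiguous subarray of fixed size k.
        if k <= 0 or k > len(values):
            raise ValueError("window size must be between 1 and len(values)")
        window_sum = sum(values[:k])   # sum of the first window
        best = window_sum
        for i in range(k, len(values)):
            # Slide: add the element entering the window, drop the one leaving it
            window_sum += values[i] - values[i - k]
            best = max(best, window_sum)
        return best

    print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9, from the subarray [5, 1, 3]

Each step touches only the entering and leaving elements, so the whole scan runs in O(n) time regardless of the window size k.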
Advantages and Disadvantages of the Sliding Window Algorithm The sliding window algorithm offers several advantages that make it a popular choice in various applications: • Optimality: The sliding window algorithm is designed to achieve optimal solutions for certain problems by using a sliding window technique. • Efficient computation: The algorithm reduces the computation time by processing data in smaller segments, known as windows, instead of processing the entire dataset at once. • Time complexity: The sliding window algorithm often has a lower time complexity compared to other algorithms, allowing for faster execution. • Data processing: This algorithm is particularly useful for processing and analyzing large datasets, as it breaks down the data into manageable segments. • Application: The sliding window algorithm has applications in a wide range of domains, including image processing, natural language processing, and network optimization. However, it is important to consider the potential disadvantages of the sliding window algorithm: • Window size selection: Choosing the appropriate window size can be challenging, as it depends on the specific problem and dataset. An inappropriate window size may lead to suboptimal results. • Overlapping windows: In some cases, overlapping windows may be necessary to capture all relevant information. However, this can increase the complexity and computational requirements of the • Data dependency: The sliding window algorithm assumes a certain degree of data dependency within the dataset. If the data does not exhibit the desired patterns or dependencies, the algorithm may not yield accurate results. Despite these limitations, the sliding window algorithm remains a powerful and versatile technique for solving optimization problems and processing large datasets efficiently. The sliding window algorithm offers several advantages in various application scenarios: • Optimality: The sliding window algorithm can provide optimal solutions for many problems, ensuring that the computed result is the best possible outcome. • Computation Complexity: This technique can significantly reduce the computation complexity by only processing a portion of the input data rather than the entire data set. It allows for efficient processing of large data sets in real-time applications. • Sequential Processing: The sliding window algorithm processes data sequentially, which means it can handle streaming data or data that arrives incrementally. This makes it suitable for tasks that involve continuous data processing, such as online monitoring and sensor data analysis. • Efficient Data Scan: With the sliding window algorithm, the function only needs to scan a fixed-size window of data rather than the entire dataset. This optimizes the scan time and reduces the memory requirements, especially when dealing with large datasets. • Method Optimization: It provides a systematic approach to data manipulation and window management. By carefully selecting the window size and sliding step, the algorithm can be fine-tuned to achieve optimal results for specific problems. In summary, the sliding window algorithm is a powerful technique that offers advantages such as optimality, efficient computation, sequential processing, and method optimization. These advantages make it a valuable tool in various domains, including data analysis, signal processing, machine learning, and real-time applications. 
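As a small illustration of the streaming and time-series use case listed among the advantages, the sketch below (our own example, assuming a plain Python iterable as the "stream") computes a moving average by maintaining a running sum over the last k values:

    from collections import deque

    def moving_average(stream, k):
        # Yields the average of the most recent k values, once k values have arrived.
        window = deque()
        running = 0.0
        for x in stream:
            window.append(x)
            running += x
            if len(window) > k:
                running -= window.popleft()   # drop the value leaving the window
            if len(window) == k:
                yield running / k

    print(list(moving_average([1, 2, 3, 4, 5], 3)))  # [2.0, 3.0, 4.0]

Because only the entering and leaving values touch the running sum, each new reading is handled in O(1) time, which is what makes the approach attractive for real-time monitoring.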
The sliding window algorithm, while a powerful optimization technique, also has its drawbacks and limitations in certain applications. 1. Limited Optimality: Although the sliding window algorithm is highly efficient in many cases, it may not always yield the optimal solution. Due to its sequential processing, it may overlook certain possibilities and settle for suboptimal results. 2. Time Complexity: Depending on the size of the sliding window and the amount of data to be processed, the algorithm can have a high time complexity. This is particularly true when dealing with large amounts of data, where the sliding window needs to scan through a significant portion of the dataset in each iteration. 3. Data Dependencies: The sliding window algorithm relies heavily on the order of the data and assumes sequential processing. This can be limiting in scenarios where there are dependencies between data points that do not follow a strict sequential pattern. 4. Limited Applicability: The sliding window algorithm is best suited for applications where the window size remains constant and the data can be processed in a sequential manner. It may not be suitable for scenarios where the window size needs to be dynamically adjusted or when parallel processing is required. 5. Computational Overhead: While the sliding window algorithm can be efficient in terms of time, it may introduce additional computational overhead. This is especially true when performing complex computations within the window, as the algorithm needs to continuously update and reprocess the data. 6. Limited Support for Irregular Data: When dealing with irregular or unpredictable data patterns, the sliding window algorithm may not provide optimal results. The fixed window size and sequential processing may not effectively capture the underlying patterns and relationships in the data. 7. Ineffective on Streaming Data: Streaming data, where data is continuously generated and processed in real time, poses challenges for the sliding window algorithm. The need to maintain a fixed-size window and perform sequential processing may not be feasible or efficient for streaming data scenarios. Overall, while the sliding window algorithm is a useful method for many applications, it is important to consider its limitations and suitability for specific use cases. Understanding the drawbacks can help in selecting alternative algorithms or optimizing the sliding window approach for improved performance. FAQ about topic “What is the Sliding Window Algorithm? A Complete Guide” What is the sliding window algorithm? The sliding window algorithm is a technique used in computer science to efficiently solve problems that involve sequential or string data. It involves maintaining a dynamic window of elements or characters and sliding this window over the input data to perform a specific operation or calculation. How does the sliding window algorithm work? The sliding window algorithm works by initializing a window with a certain size and then sliding it over the input data. At each step, the algorithm performs a specific operation on the elements or characters within the current window. The window then moves to the next position, continuing the process until all elements or characters have been processed. What are some applications of the sliding window algorithm? The sliding window algorithm has various applications, such as string matching, finding maximum or minimum values in a subarray, substring problems, and more. 
It is commonly used in algorithms and data structures for problems that involve sequences or strings.

Can the sliding window algorithm be used for real-time data processing?
Yes, the sliding window algorithm can be used for real-time data processing. It is a scalable and efficient solution for handling continuous streams of data by processing it in a sliding window fashion. This allows for real-time analysis and insights on the data.

What are the advantages of using the sliding window algorithm?
The sliding window algorithm provides several advantages. It has a time complexity of O(n), which makes it efficient for processing large data sets. It also reduces the need for redundant computations by reusing calculations from the previous window, thereby optimizing the overall performance of the algorithm.
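The FAQ above mentions substring problems; those typically use the variable-size form of the window. Here is a brief sketch (our own illustration, not from the article) of the classic "longest substring without repeating characters" problem, where the left edge of the window advances only when a repeated character is found:

    def longest_unique_substring(s):
        last_seen = {}   # character -> index of its most recent occurrence
        start = 0        # left edge of the current window
        best = 0
        for i, ch in enumerate(s):
            # If ch already appears inside the current window, move the left edge past it
            if ch in last_seen and last_seen[ch] >= start:
                start = last_seen[ch] + 1
            last_seen[ch] = i
            best = max(best, i - start + 1)
        return best

    print(longest_unique_substring("abcabcbb"))  # 3, for "abc"

Both window edges only move forward, so the scan remains linear in the length of the string.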
{"url":"https://digitalgadgetwave.com/what-is-the-sliding-window-algorithm-a-complete/","timestamp":"2024-11-10T14:57:23Z","content_type":"text/html","content_length":"136810","record_id":"<urn:uuid:6a44d2a4-23f1-46f5-a06e-0822cf95a022>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00775.warc.gz"}
EViews Help: Rotating Factors Rotating Factors You may perform factor rotation on an estimated factor object with two or more retained factors. Simply call up the dialog by clicking on the button or by selecting from the factor object menu, and select the desired rotation settings. The and dropdowns may be used to specify the basic rotation method (see “Types of Rotation” for a description of the supported methods). For some methods, you will also be prompted to enter parameter values. In the depicted example, we specify an oblique Promax rotation with a power parameter of 3.0. The Promax orthogonal pre-rotation step performs Varimax (Orthomax with a parameter of 1). By default, EViews does not row weight the loadings prior to rotation. To standardize the data, simply change the dropdown menu to or . In addition, EViews uses the identity matrix (unrotated loadings) as the default starting value for the rotation iterations. The section labeled allows you to perform different initializations: • You may instruct EViews to use an initial random rotation by selecting in the dropdown. The dialog changes to prompt you to specify the number of random starting matrices to compare, the random number generator, and the initial seed settings. If you select random, EViews will perform the requested number of rotations, and will use the rotation that minimizes the criterion function. As with the random number generator used in parallel analysis, the value of this initial seed will be saved with the factor object so that by default, subsequent rotation will employ the same random values. You may override this initialization by entering a value in the edit field or press the button to have EViews draw a new random seed value. • You may provide a user-specified initial rotation. Simply select in the dropdown, the provide the name of a • Lastly, if you have previously performed a rotation, you may use the existing results as starting values for a new rotation. You may, for example, perform an oblique Quartimax rotation starting from an orthogonal Varimax solution. Once you have specified your rotation method you may click on . EViews will estimate the rotation matrix, and will present a table reporting the rotated loadings, factor correlation, factor rotation matrix, loading rotation matrix, and rotation objective function values. Note that the factor structure matrix is not included in the table output; it may be viewed separately by selecting from the factor object menu. In addition EViews will save the results from the rotation with the factor object. Other routines that rely on estimated loadings such as factor scoring will offer you the option of using the unrotated or the rotated loadings. You may display your rotation results table at any time by selecting from the factor menu.
{"url":"https://help.eviews.com/content/factanal-Rotating_Factors.html","timestamp":"2024-11-11T21:03:32Z","content_type":"application/xhtml+xml","content_length":"10784","record_id":"<urn:uuid:e7ea1ac2-d71f-4b8b-9395-75f91d59e3db>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00055.warc.gz"}
Craps Odds - The Craps Coach As you know, a die has six sides with six different values, and that two die are rolled every time. A good guess would be that since there are six different numbers on two die’s then there would be twelve different possible outcomes of the roll. However, a closer look reveals there are thirty-six different possible outcomes of the dice roll. Since there are two die, the same number can be rolled in many different ways. Just like calculating the odds of playing a lottery, all different number combinations must be considered. The way to go about calculating these various combinations is by starting at the low end of possible rolls. The lowest number on the dice is one. If both die were rolled as one’s (snake eyes) the outcome would be 2. This is one possible roll. The next number up is a three. To determine how many times this can be rolled, do some simple math. There is only one equation that will produce this outcome: 2 + 1 = 3. Therefore, there are two different possible outcomes: Die One = 1, Die Two = 2 Or Die One = 2, Die Two = 1 To further explain, let’s use the number seven as a possible outcome. Simple math reveals the sum of 7 can be produced in three ways: 1 + 6, 2 + 5 and 3 + 4. Going one step further, to calculate the number of times these outcomes can be rolled, simply multiply the outcome by 2 (representing two die) and you have the value of six – There are six ways to roll the number 7 with two die. The combinations are shown here: Die One = 1, Die Two = 6 Or Die One = 2, Die Two = 5 Or Die One = 3, Die Two = 4 Die One = 6, Die Two = 1 Or Die One = 5, Die Two = 2 Or Die One = 4, Die Two = 3 If we were to do the same for each roll outcome, we would see that the possible ways of rolling a 6 are the same as rolling an 8. Likewise, a 5 and 9 have equal chance of being rolled, as do a 4 and 10, 3 and 11 and 2 and 12. Knowing this is important, for it will keep you in the know regarding what your payoffs may during any given wager. Remember, just because your winning bet depends on your point being rolled before a seven, does not mean those odds are the same for every point. Knowing this may just play a part if you have a choice of increasing your stake on a wager. To calculate the percentage of you chances at rolling a certain number, divide the number of possible outcomes by the number of total dice outcomes (36). For the number 7, this would show as 6/36 x 100% = 16.6% Calculating craps odds and probability seems hard, but it’s not as complicated as one might think. When calculating the probabilities of any gambling activity, the first thing one looks at is the number of potential outcomes. When rolling two six-sided dice, like you in a game of craps, there are 36 possible outcomes. (There are only 11 possible totals, 2 through 12, but there are 36 combinations that can result in those totals.) There is only one way to roll a 2 (or a 12). Roll a 1 on each die (or a 6 on each die.) Since there are 36 possible combinations, and only 1 of those combinations can total 2, the probability of getting a 2 on a roll is 1 out of 36, or 35 to 1, as stated in odds terms. There are 2 ways to roll a 3 though – you can roll a 1 and a 2, or roll a 2 and a 1, so the probability of rolling a 3 is 2 out of 36. 2 out of 36 is the same as 1 out of 18, which stated in odds terms is 17 to 1. Here’s a chart outlining the possible combinations, how many ways each total can be rolled, and what the odds are for each total.
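The chart described at the end can be reproduced mechanically. The short Python sketch below is our own illustration, not part of the original page; it enumerates all 36 rolls and prints, for each total, the number of ways it can occur, its probability, and the odds against rolling it:

    from itertools import product
    from collections import Counter

    # Count how many of the 36 ordered (die1, die2) pairs produce each total
    ways = Counter(d1 + d2 for d1, d2 in product(range(1, 7), repeat=2))

    for total in range(2, 13):
        w = ways[total]
        prob = w / 36
        # Odds against this total on a single roll: (36 - w) to w
        print(f"{total:>2}: {w} way(s), {prob:.1%}, odds against {36 - w}:{w}")

For example, it confirms the 6/36 ≈ 16.7% chance of rolling a 7 and the 35 to 1 odds against rolling a 2 or a 12.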
{"url":"https://thecrapscoach.com/craps-odds.htm","timestamp":"2024-11-06T08:08:35Z","content_type":"text/html","content_length":"171561","record_id":"<urn:uuid:c2d551ca-30db-4afe-8a8a-6fcb33bb9bc3>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00765.warc.gz"}
04 Searching and Sorting

Lecture from: 10.10.2024 | Video: Videos ETHZ | Rui Zhangs Notes | Visualisations: VisualGo and Toptal | Official Script

Given an array A[1..n] containing numbers and a target number b, our goal is to find the index k such that A[k] = b.

Case 1: Linear Search (Unsorted Arrays)

• Algorithm: In linear search, we traverse each element in the array sequentially until we find the target value b.
• Time Complexity: O(n). In the worst case, we need to examine every element. This makes linear search inefficient for large arrays.

Implementation Example:

    public static int linearSearch(int[] arr, int b) {
        for (int i = 0; i < arr.length; i++) {
            if (arr[i] == b) {
                return i; // Found it!
            }
        }
        return -1; // Not found in the array
    }

Can we do better? No, but how would we prove that? The core idea is to show that any algorithm attempting to search an unsorted array must, in the worst case, perform at least n operations. We can demonstrate this by focusing on a specific scenario: what if the target element b is not present in the array A? If b isn't in the array, any algorithm needs to check every single element to definitively say "b is not here." This means it must perform at least one operation (comparison, look-up, etc.) for each of the n elements.

Case 2: Binary Search (Sorted Arrays)

• Pre-requisite: The input array A must be sorted in ascending order for binary search to work correctly.
• Algorithm: Binary search exploits the sorted nature of the array. It repeatedly divides the search interval in half. If the middle element is equal to the target, we've found it! Otherwise, we discard half the array based on whether the target is less than or greater than the middle element and continue searching in the remaining half.
• Time Complexity: O(log n). The search space halves with each comparison, leading to a significantly faster search for large arrays.

Implementation Example:

    public static int binarySearch(int[] arr, int b) {
        int low = 0;
        int high = arr.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // Calculate mid to avoid overflow
            if (arr[mid] == b) {
                return mid; // Found it!
            } else if (arr[mid] < b) {
                low = mid + 1; // Search the right half
            } else {
                high = mid - 1; // Search the left half
            }
        }
        return -1; // Not found in the array
    }

Can we do better? Now this is a fairly tough question to answer. If we want to prove that there is no better way, then we'd have to prove that for all algorithms, even the ones not invented until now, there isn't one that could possibly be better.

Every comparison-based search algorithm can be visualized as a decision tree. Each node in this tree represents a comparison, and the branches emanating from each node correspond to the possible outcomes of that comparison (e.g., "target is greater" or "target is smaller"). The leaves of the tree represent all possible final results – finding the target element or determining its absence.

Crucially, for any algorithm to be considered correct, it must account for every possible input. This means that our decision tree must have at least n+1 leaves, representing each element's potential index and the case where the target is not found. The depth of this decision tree directly corresponds to the worst-case number of comparisons an algorithm requires. A tree of height h has at most 2^h leaves, so to achieve n+1 leaves the tree must satisfy 2^h ≥ n + 1, where h is the height of the tree. This leads us to h ≥ log₂(n + 1), i.e., any comparison-based search needs Ω(log n) comparisons in the worst case. This is an information theoretical argument.

The input is an array (or list) named A containing n numbers.
These numbers can be of various data types (integers, floats, etc.). The output is a new permutation of the original array A, where the elements are arranged in non-decreasing order: A[1] ≤ A[2] ≤ … ≤ A[n]. This means the smallest element is at index 1, the next smallest at index 2, and so on.

Bubble Sort

Let's dive into Bubble Sort – a simple yet fundamental sorting algorithm.

How Bubble Sort Works:
1. Repeated Passes: Bubble sort operates by repeatedly stepping through the list, comparing adjacent elements and swapping them if they are in the wrong order.
2. The "Bubbling" Effect: Larger elements gradually "bubble up" to their correct positions at the end of the list with each pass.
3. Sorted Subarrays: After each pass, the largest element in the unsorted portion is placed in its final position. The sorted portion of the list grows with each iteration.

Pseudocode Example:

    function bubbleSort(A):
        n = length(A)
        for i = 1 to n-1:
            for j = 1 to n-i:
                if A[j] > A[j+1]:
                    swap(A[j], A[j+1])
        return A

Imagine you have a list of numbers and you're repeatedly comparing adjacent elements. If they are out of order, you swap them. This process continues until the entire list is sorted.

Time Complexity:
• Best Case: O(n) – If the array is already sorted, only one pass is needed to verify the order (when the algorithm stops early after a pass with no swaps).
• Average Case: O(n²) – In most cases, multiple passes are required to bring all elements into their correct positions.
• Worst Case: O(n²) – Occurs when the input array is in reverse sorted order.

Space Complexity: O(1) – Bubble sort sorts in-place, requiring only a constant amount of additional memory.

Why does it work? The key insight is that each pass of Bubble Sort places the largest unsorted element in its correct position at the end of the array. To formally demonstrate why n passes are necessary, we can argue as follows:
1. Initial State: The array begins in a potentially unsorted state with n elements.
2. Iterative Process: In each pass (iteration) of the outer loop, we compare adjacent elements and swap them if out of order. This ensures that the largest element within the unsorted portion is "bubbled" to its final position at the end.
3. Sorted Subarray Growth: After each pass, a single element is definitively placed in its correct sorted position. The subarray containing already sorted elements grows by one with each iteration.
4. Termination Condition: This process continues until the entire array has been iterated over n times. At this point, every element has been compared and swapped as necessary, resulting in a fully sorted array.

Consider the worst-case scenario: The input array is in reverse sorted order. In this situation, each pass requires n-1 comparisons and swaps to bring the largest unsorted element (which is initially at index 1) to its correct position (index n). Therefore, a minimum of n passes are required to fully sort the array.

Formal Analysis with Invariant:

Invariant: At the end of each pass in Bubble Sort, the largest unsorted element is guaranteed to be in its correct sorted position at the end of the array. This property acts as an invariant throughout the algorithm's execution.

1. Initial State: The array begins in a potentially unsorted state with n elements.
2. Iterative Process: Each pass of the outer loop compares adjacent elements and swaps them if out of order. This process, guided by the invariant, ensures that the largest unsorted element "bubbles" to its correct position at the end of the array.
3. Sorted Subarray Growth: After each pass, a single element is definitively placed in its correct sorted position.
The subarray containing already sorted elements grows by one with each iteration. This growth directly corresponds to the invariant holding true for each subsequent pass – the largest unsorted element is guaranteed to be in place.
4. Termination Condition: This process continues until the entire array has been iterated over n times. At this point, every element has had its chance to "bubble" up and reach its final sorted position, resulting in a fully sorted array.

The invariant ensures that with each pass, we are making consistent progress towards the final sorted state. The worst-case scenario (array in reverse sorted order) demonstrates why n passes are necessary: After n-1 passes, all but one element will be correctly placed. The last remaining unsorted element requires a single pass to reach its correct position.

…Can we do better?… Let us try to create a new algorithm using the same invariant.

Selection Sort

Last time we examined Bubble Sort and its reliance on the crucial invariant: At the end of each pass, the largest unsorted element is guaranteed to be in its correct sorted position at the end of the array. This concept will serve as a foundation for understanding Selection Sort.

Instead of repeatedly moving elements towards their final positions like Bubble Sort, Selection Sort adopts a different strategy:
1. Finding the Maximum: In each pass, we identify the largest element within the unsorted portion of the array. This is akin to "selecting" the maximum value.
2. Swap and Advance: The identified largest element is then swapped with the element at the end of the unsorted subarray (index n-i-1).

Understanding the Progression:
• Imagine a partially sorted array. With each pass of Selection Sort, we essentially "fill in" the next correct position with the maximum remaining value.
• The sorted subarray at the end expands one element at a time.

Why n Passes? To guarantee that every element is placed in its correct position, we need to perform n passes. In each pass, we locate and swap the largest element from the unsorted portion, ensuring that by the end of n passes, the entire array is sorted. (A short code sketch of this procedure is given at the end of these notes.)

Key Difference: Bubble sort iteratively moves elements towards their final positions, while Selection Sort swaps only the maximum value with its designated place.

Insertion Sort

For this algorithm we are going to be thinking of a different and much simpler and natural invariant.

Insertion Sort is another simple sorting algorithm that builds the sorted list one element at a time. The algorithm iterates over the array, and at each step, it inserts the current element into its correct position in the already sorted part of the array.

The invariant we'll maintain in Insertion Sort is: before processing element A[j], the prefix A[0..j-1] is already sorted. In other words, after each iteration, the subarray from A[0] to A[j] is already sorted.

How Insertion Sort Works:
1. Iterate through the array: For each element in the array (starting from the second element), insert it into its correct position in the already sorted part of the array.
2. Shifting Elements: Inserting an element might involve shifting larger elements one position to the right to make space for the new element.
3. Place the element: Once the correct position is found, place the current element there.

    for j = 1 to n-1:
        x = A[j]
        # Use binary search to find the position k where A[j] belongs
        k = binarySearch(A, 0, j-1, x)
        # Shift the elements to the right to make room for A[j]
        for i = j-1 down to k:
            A[i + 1] = A[i]
        A[k] = x

Comparisons: For the binary search part, roughly ⌈log j⌉ comparisons are required. We need to do this n times.
Therefore:

$\sum_{j=2}^{n} c \log(j-1) = c \sum_{i=1}^{n-1} \log(i) = c \log\left(\prod_{i=1}^{n-1} i\right) = c \log((n-1)!) \in \Theta(n \log n)$

Switchings of Elements: In the worst case we need to switch (we are using an array, not a linked list) n times and repeat that n times → O(n²).

Merge Sort

For this algorithm we are going to be thinking of a divide and conquer type invariant.

Merge Sort is a divide and conquer sorting algorithm. It works by recursively splitting the array into smaller subarrays, sorting those subarrays, and then merging them back together in sorted order. This results in an efficient time complexity of O(n log n).

The correct invariant in Merge Sort is: after the two recursive calls, A[l..mid] and A[mid+1..r] are each sorted. This means that after the recursive sorting step, the subarray from left to mid and the subarray from mid+1 to right are each sorted individually. The merge step then combines these two sorted subarrays into a single sorted subarray.

How Merge Sort Works:
1. Divide: Recursively split the array into two halves until each subarray contains a single element (base case).
2. Conquer: Recursively sort the left and right halves.
3. Merge: Combine the two sorted halves into one sorted array.

    function mergeSort(A, l, r):
        if l < r:
            mid = (l + r) // 2
            mergeSort(A, l, mid)        # Sort left half
            mergeSort(A, mid + 1, r)    # Sort right half
            merge(A, l, mid, r)         # Merge the two sorted halves

    function merge(A, l, mid, r):
        left_part = A[l:mid+1]
        right_part = A[mid+1:r+1]
        i = j = 0
        k = l
        # Merge the sorted subarrays back into A[l..r]
        while i < len(left_part) and j < len(right_part):
            if left_part[i] <= right_part[j]:
                A[k] = left_part[i]
                i += 1
            else:
                A[k] = right_part[j]
                j += 1
            k += 1
        # Copy any remaining elements from left_part
        while i < len(left_part):
            A[k] = left_part[i]
            i += 1
            k += 1
        # Copy any remaining elements from right_part
        while j < len(right_part):
            A[k] = right_part[j]
            j += 1
            k += 1

Time Complexity: Merge Sort operates by recursively splitting the array in half. Each level of recursion requires O(n) time to merge the two halves back together. Since the array is divided in half at each step, the depth of recursion is O(log n). Thus, the overall time complexity is O(n log n).

Space Complexity: Merge Sort requires additional space to store the temporary subarrays during the merge process, leading to a space complexity of O(n).

Why the Invariant Holds:
1. Base Case: When the subarray has a single element (i.e., l = r), it's trivially sorted.
2. Inductive Step: Assuming that both A[l..mid] and A[mid+1..r] are sorted after the recursive calls, the merge step ensures that A[l..r] is sorted. Hence, the invariant holds throughout the algorithm's execution.

Continue here: 05 Sorting, Data Structures
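As a complement to the prose-only Selection Sort section above, here is a minimal sketch of the procedure (an illustration in Python, not code from the original notes), following the "find the maximum, swap it to the end of the unsorted part" description and the same invariant as Bubble Sort:

    def selection_sort(A):
        n = len(A)
        # After pass i, the largest i+1 elements sit in their final positions at the end.
        for i in range(n - 1):
            # Find the index of the maximum element in the still-unsorted prefix A[0 .. n-1-i]
            max_idx = 0
            for j in range(1, n - i):
                if A[j] > A[max_idx]:
                    max_idx = j
            # Swap it into the last position of the unsorted part
            A[max_idx], A[n - 1 - i] = A[n - 1 - i], A[max_idx]
        return A

    print(selection_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]

It always performs Θ(n²) comparisons, but only O(n) swaps, which is the key practical difference from Bubble Sort.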
{"url":"https://cs.shivi.io/Semesters/Semester-1/Algorithms-and-Datastructures/Lecture-Notes/04-Searching-and-Sorting","timestamp":"2024-11-05T00:38:44Z","content_type":"text/html","content_length":"108057","record_id":"<urn:uuid:0cfd3715-f089-4d9e-844b-e213b391ac3d>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00656.warc.gz"}
[model,inlierIdx] = ransac(data,fitFcn,distFcn,sampleSize,maxDistance) fits a model to noisy data using the M-estimator sample consensus (MSAC) algorithm, a version of the random sample consensus (RANSAC) algorithm. Specify your function for fitting a model, fitFcn, and your function for calculating distances from the model to your data, distFcn. The ransac function takes random samples from your data using sampleSize and uses the fit function to maximize the number of inliers within maxDistance. [___] = ransac(___,Name,Value) additionally specifies one or more Name,Value pair arguments. Fit Line to 2-D Points Using Least Squares and RANSAC Algorithms Load and plot a set of noisy 2-D points. load pointsForLineFitting.mat hold on Fit a line using linear least squares. Due to outliers, the line is not a good fit. modelLeastSquares = polyfit(points(:,1),points(:,2),1); x = [min(points(:,1)) max(points(:,1))]; y = modelLeastSquares(1)*x + modelLeastSquares(2); Fit a line to the points using the MSAC algorithm. Define the sample size, the maximum distance for inliers, the fit function, and the distance evaluation function. Call ransac to run the MSAC sampleSize = 2; % number of points to sample per trial maxDistance = 2; % max allowable distance for inliers % Define the fit function using polyfit. fitLineFcn = @(points)polyfit(points(:,1),points(:,2),1); % Define the distance function to classify each point as an inlier or outlier % based on the maxDistance threshold. distLineFcn = @(model,points)(points(:, 2) - polyval(model, points(:,1))).^2; [modelRANSAC,inlierIdx] = ransac(points,fitLineFcn,distLineFcn, ... Refit a line to the inliers using polyfit. modelInliers = polyfit(points(inlierIdx,1),points(inlierIdx,2),1); Display the final fit line. This line is robust to the outliers that ransac identified and ignored. inlierPts = points(inlierIdx,:); x = [min(inlierPts(:,1)) max(inlierPts(:,1))]; y = modelInliers(1)*x + modelInliers(2); plot(x, y, 'g-') legend('Noisy points','Least squares fit','Robust fit'); hold off Input Arguments data — Data to be modeled m-by-n matrix Data to be modeled, specified as an m-by-n matrix. Each row corresponds to a data point in the set to be modeled. For example, to model a set of 2-D points, specify the point data as an m-by-2 Data Types: single | double fitFcn — Function to fit a subset of data function handle Function to fit a subset of data, specified as a function handle. The function must be of the form: If it is possible to fit multiple models to the data, then fitFcn returns the model parameters as a cell array. distFcn — Function to compute distances from model function handle Function to compute distances from the model to the data, specified as a function handle. The function must be of the form: distances = distFcn(model,data) If model is an n-element array, then distances must be an m-by-n matrix. Otherwise, distances must be an m-by-1 vector. sampleSize — Minimum sample size positive scalar integer Minimum sample size from data that is required by fitFcn, specified as a positive scalar integer. maxDistance — Maximum distance for inlier points positive scalar Maximum distance from the fit curve to an inlier point, specified as a positive scalar. Any points further away than this distance are considered outliers. The distance is defined by distFcn. The RANSAC algorithm creates a fit from a small sample of points, but tries to maximize the number of inlier points. 
Lowering the maximum distance improves the fit by putting a tighter tolerance on inlier points. Name-Value Arguments Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter. Before R2021a, use commas to separate each name and value, and enclose Name in quotes. Example: 'MaxNumTrials',2000 ValidateModelFcn — Function to validate model function handle Function to validate model, specified as the comma-separated pair consisting of 'ValidateModelFcn' and a function handle. The function returns true if the model is accepted based on criteria defined in the function. Use this function to reject specific fits. The function must be of the form: isValid = validateModelFcn(model,varargin) If no function is specified, all models are assumed to be valid. MaxSamplingAttempts — Maximum number of sampling attempts 100 (default) | integer Maximum number of attempts to find a sample that yields a valid model, specified as the comma-separated pair consisting of 'MaxSamplingAttempts' and an integer. MaxNumTrials — Maximum number of random trials 1000 (default) | integer Maximum number of random trials, specified as the comma-separated pair consisting of 'MaxNumTrials' and an integer. A single trial uses a minimum number of random points from data to fit a model. Then, the trial checks the number of inliers within the maxDistance from the model. After all trials, the model with the highest number of inliers is selected. Increasing the number of trials improves the robustness of the output at the expense of additional computation. Confidence — Confidence of final solution 99 (default) | scalar from 0 to 100 Confidence that the final solution finds the maximum number of inliers for the model fit, specified as the comma-separated pair consisting of 'Confidence' and a scalar from 0 to 100. Increasing this value improves the robustness of the output at the expense of additional computation. Output Arguments model — Best fit model parameters defined in fitFcn Best fit model, returned as the parameters defined in the fitFcn input. This model maximizes the number of inliers from all the sample attempts. inlierIdx — Inlier points logical vector Inlier points, returned as a logical vector. The vector is the same length as data, and each element indicates if that point is an inlier for the model fit based on maxDistance. [1] Torr, P. H. S., and A. Zisserman. "MLESAC: A New Robust Estimator with Application to Estimating Image Geometry." Computer Vision and Image Understanding. Vol. 18, Issue 1, April 2000, pp. Extended Capabilities C/C++ Code Generation Generate C and C++ code using MATLAB® Coder™. Version History Introduced in R2017a
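To see the sampling-consensus idea described on this page without the MATLAB specifics, here is a simplified, illustrative Python sketch. It is not MathWorks' implementation: it uses the plain RANSAC inlier count rather than MSAC scoring and omits the confidence-based stopping rule, but the structure (draw a minimal sample, fit, count points within maxDistance, keep the best model) is the same:

    import random

    def simple_ransac(data, fit_fcn, dist_fcn, sample_size, max_distance, max_trials=1000):
        # data: list of points; fit_fcn and dist_fcn are user-supplied, as in the
        # documented interface above (their meaning mirrors fitFcn and distFcn).
        best_model = None
        best_inlier_count = 0
        for _ in range(max_trials):
            sample = random.sample(data, sample_size)   # minimal random sample
            model = fit_fcn(sample)                     # fit a candidate model
            # Count points whose distance to the model is within the threshold
            inlier_count = sum(1 for p in data if dist_fcn(model, p) <= max_distance)
            if inlier_count > best_inlier_count:
                best_model, best_inlier_count = model, inlier_count
        return best_model, best_inlier_count

In the line-fitting example above, fit_fcn would be a least-squares line fit and dist_fcn the squared vertical residual, so max_distance plays the same role as the maxDistance argument of ransac.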
{"url":"https://nl.mathworks.com/help/vision/ref/ransac.html","timestamp":"2024-11-07T22:02:25Z","content_type":"text/html","content_length":"100016","record_id":"<urn:uuid:067b8204-0151-41ca-a47a-540d50e32f98>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00412.warc.gz"}
Mid-ocean ridges are elevated above the seafloor. The elevat… | Quiz Lookup
Mid-ocean ridges are elevated above the seafloor. The elevation of the seafloor
Give the Roman numeral of the cranial nerve at the end of the arrow:
Multiply: 11₂ x 11₂. Do not include the subscript in your answer. For example, if the answer is 100₈, you would enter 100.
The assertive message format should always be used in the order given in your text for best results.
17. When the dentist is completing a class III restoration, which of the following restorative materials will be used?
T2 decay is caused by
Write a proof for part 2 of the Fundamental Theorem of Arithmetic: The prime factorization of any integer n > 1 is unique.
What is the name for this condition?
I am allowed to use one standard size sheet of paper of handwritten notes during tests. I am allowed to have 3 sheets of blank paper if I need to do any calculations by hand. I am allowed to use my Ti 83/84 calculator during tests.
In 1936, Great Britain, France, Germany, Italy, and the Soviet Union signed a nonintervention agreement that declared they would not get involved in the Civil War that had erupted in which of the following countries?
{"url":"https://quizlookup.com/mid-ocean-ridges-are-elevated-above-the-seafloor-the-elevation-of-the-seafloor/","timestamp":"2024-11-08T15:21:47Z","content_type":"text/html","content_length":"70226","record_id":"<urn:uuid:03a5632b-fc7d-4c99-8865-765019fc815b>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00314.warc.gz"}
Free Printable Math Word Search - Word Search Maker
Free Printable Math Word Search - Free Math Word Search Printable Games PDF Sheets
I made three different difficulty levels of these math word search printables, each with two different answer arrangements. So whether you're looking for an easy, medium, or hard version, you'll find them down below. To print these off, all you have to do is scroll to the difficulty you want.
Browse and print Math word searches below. You can also browse Math Crossword Puzzles or make your own Math word search, crossword, fill in the blank, word scramble, matching, bingo, handwriting exercise, open response worksheet, or flashcards.
Algebra Word Search. A fun, free printable word search puzzle worksheet featuring algebra for math students or anyone looking to review their knowledge of the world. The 26 vocabulary words covered in this puzzle are: absolute, additive, binomial, coefficient, constant, equation, exponent, expression, factor, function, inequality, inverse
Math Word Search Puzzles for first through sixth grade, including specific puzzles for geometry, algebra, and many other topic areas. Free printable PDFs with color answer keys.
These free maths word searches are for different grade students. Download the free printables to engage the students in the classroom with some fun activity while getting to know about geometry and algebra.
Solve these free math word search puzzles on your computer or download and print them. Play these free math word search puzzles directly online or click here to download and print them for classroom use.
A free math word search printable with 18 math terms to find, including decimal, algebra, multiplication, fraction, and addition. Free printable Math word search puzzles complete with corresponding answer sheet with a title and bordered grid.
Algebra Word Search Puzzles To Print
Our collection of engaging free math word search worksheets combines the fun of word searches with the educational value of math concepts.
{"url":"https://wordsearchmaker.net/free-printable-math-word-search","timestamp":"2024-11-05T03:30:01Z","content_type":"text/html","content_length":"48940","record_id":"<urn:uuid:a372908a-4ac4-40c2-bc68-c4f7f86922f7>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00579.warc.gz"}
GeoBasics2.2 part 2
GeoBasics2.2: I can prove that the interior angles of a triangle sum to two right angles
Proving Interior Angles of a Triangle, part 2
1. The Interior Angles of a Triangle
Select the statement that best represents the conclusion of the proof.
2. The crucial premise
In order to prove the conclusion, you must first find three angles that sum to 180° because they are three adjacent angles on a straight line. Then you must connect each of these angles to the interior angles of the triangle.
Select the crucial premises that allow you to connect each of the angles of the triangle to three angles that sum to 180°.
3. The Proof
Using only the points and lines given in the figure above, prove that the interior angles of triangle HAN sum to two right angles.
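One standard way to organize step 3 is sketched below in LaTeX. It is a hedged outline only: it assumes the figure provides a line through A parallel to HN, and the point labels P and Q are illustrative, not taken from the activity.
\begin{align*}
&\text{Let } PQ \text{ be the line through } A \text{ parallel to } HN, \text{ with } A \text{ between } P \text{ and } Q.\\
&\angle PAH + \angle HAN + \angle NAQ = 180^\circ &&\text{(adjacent angles on the straight line } PQ\text{)}\\
&\angle PAH = \angle AHN &&\text{(alternate angles, } PQ \parallel HN\text{)}\\
&\angle NAQ = \angle ANH &&\text{(alternate angles, } PQ \parallel HN\text{)}\\
&\Rightarrow\ \angle AHN + \angle HAN + \angle ANH = 180^\circ, \text{ i.e. two right angles.}
\end{align*}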
{"url":"https://beta.geogebra.org/m/jstFXu4z","timestamp":"2024-11-15T03:14:33Z","content_type":"text/html","content_length":"112072","record_id":"<urn:uuid:15debba1-d80b-414a-b394-fccb9f7eff9b>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00213.warc.gz"}
Sudoku Solving Techniques: Beyond the Basics
Griley
Advanced Strategies to Master Sudoku Puzzles
Sudoku challenges many with its intricate puzzles and the need for keen logical thinking. But fear not, advanced strategies exist that can take your Sudoku-solving skills to the next level. These techniques move beyond the basic 'one possibility per cell' approach. They can help you tackle even the most daunting grids.
Embracing advanced Sudoku solving techniques requires patience and practice. You'll learn to spot patterns and possibilities that are not immediately obvious. These include identifying special configurations of numbers and using these to eliminate other possibilities. Let's delve deeper into some of these advanced strategies:
• The X-Wing Method zeroes in on rows and columns, bypassing subgrid blocks. It finds a specific number that fits only in two spots across parallel lines.
• Swordfish Technique expands on this idea. It uses three lines instead of two and allows for the elimination of digits based on the alignment of potential cells.
• Forcing Chains requires you to follow a sequence of eliminations. This chain reaction can reveal the number for a particular cell.
• XY Wing Method focuses on cells interconnected with two possibilities. Observing these connections can help you remove incorrect numbers.
• Naked Pairs are about spotting two cells with the same number pair within a group. This can simplify your choices elsewhere.
• Hidden Pairs might be more subtle, concealed amongst other candidates. Spotting these can reduce complexity significantly.
• The Pointing Pair Technique revolves around pinpointing a number's location within a block. Doing so affects choices in the entire row or column.
• Nishio Strategy takes a gamble with a trial-and-error method. Pick a candidate and see where it leads; it's a riskier move but can yield results when stuck.
Each of these advanced Sudoku solving techniques offers a way to progress through the puzzle when the going gets tough. They can often prove crucial when facing particularly challenging Sudoku puzzles. Remember, the key to mastery is practice. Use these strategies often, and you'll find yourself breezing through Sudoku puzzles with greater ease.
The X-Wing Method Explained
The X-Wing method stands out as an advanced sudoku solving technique. It may seem daunting at first, but understanding it can be a game-changer. Let's break it down into simpler terms. The core idea is about spotting patterns in rows and columns. This method does not involve subgrid blocks directly. It targets a specific digit that only fits in two possible spots on the grid. But the trick is, these spots lie in parallel rows or columns, which forms the shape of an 'X'. Here's a step-by-step guide to use the X-Wing strategy:
1. Identify a digit that has only two possible positions in two separate rows.
2. Draw imaginary lines across these rows, extending through the possible positions of your chosen digit.
3. Repeat the process for columns as well.
4. Look for intersecting points where these imaginary lines meet within the grid.
5. If you find a solid 'X' shape, you can eliminate this digit from consideration in other cells along those intersecting lines.
This approach narrows down the possible numbers you can place in the remaining cells. It's especially helpful when the puzzle gets tougher and the usual tactics don't cut it. The X-Wing method requires a sharp eye to detect the right pattern and make a strategic elimination.
Practice it often, and it will become a powerful tool in your sudoku solving repertoire.
Swordfish Technique: A Step Up from X-Wing
The Swordfish technique is an advanced level up from the X-Wing method. While the X-Wing focuses on two rows or columns, the Swordfish pattern targets three, widening the scope for accuracy. Here's how you can apply the Swordfish strategy to your game:
1. Begin by scouting for a number that appears in only two or three places in each of three different rows.
2. Do the same across columns; check that those places line up in just three columns, matching the rows.
3. Connect these cells mentally to form a grid-like swordfish pattern.
4. Where the pattern intersects, remove the specific number from other cells in those columns.
5. Double-check that the cells form a consistent grid pattern like a swordfish.
Executing the Swordfish technique can be tricky at first. It demands that you envision a broader layout of possible number placements. Unlike the X-Wing, where two lines intersect, the Swordfish requires you to map three lines in your head. This strategy is quite useful for intermediate players ready to tackle higher difficulty levels. Practice is key; try using the Swordfish in simpler puzzles first to gain confidence.
Remember, while using the Swordfish technique, focus is essential. Misplacing a number or missing a line could throw off your entire game. Patience and careful analysis are your best assets when employing this strategy. It can seem overwhelming, but once mastered, the Swordfish can be a formidable weapon in your sudoku solving arsenal.
Forcing Chains: Understanding Sequential Eliminations
When you encounter a tough Sudoku puzzle, the Forcing Chains strategy can come to your rescue. This advanced sudoku solving technique relies on a sequence where each move influences the next. Here's how you can apply Forcing Chains effectively:
1. Look for a cell with multiple candidate numbers to start the chain.
2. Choose one of these numbers and follow through the potential outcomes.
3. If placing a number leads to a contradiction elsewhere, discard that option.
4. Continue the sequence until a definitive number placement is validated.
5. Use this validated placement to make further deductions on the grid.
Forcing Chains is like setting off a cascade where one choice guides subsequent decisions. It's a logical progression where one assumption leads you to the next. This strategy might require several tries but can give you a breakthrough when you're stuck. Being methodical and patient is vital when using Forcing Chains, as hasty errors can derail the entire process. Remember, this technique is advanced and should be used with caution if you're new to sudoku solving. Keep practicing it, and it will boost your problem-solving skills in more challenging puzzles.
The XY Wing Method: Finding Pivotal Intersections
Encountering a stubborn Sudoku grid sometimes calls for the XY Wing method. This strategy, similar to the X-Wing and Swordfish, helps unveil the numbers you've been searching for. Let's get to grips with it using easy-to-follow steps.
1. Start by spotting a cell containing only two possible numbers.
2. Find two other cells with two numbers each, on the same row, column, or block.
3. Ensure that one digit in each of these cells matches one in the starting cell.
4. Pinpoint the location where the last two cells share a row or column — known as the pivot point.
5. If the pivot contains a number found in both cells, you can eliminate it there.
This simplification comes from spotting a form of digital alliance between those cells. Making use of the XY Wing method breaks down complex puzzles into manageable pieces. Just like the other advanced sudoku solving techniques, it requires a sharp eye and logical thinking. Regular practice of this strategy will not only enhance your problem-solving skills but make your gameplay more enjoyable. Remember, the key to mastering difficult Sudoku puzzles is combining various techniques, and the XY Wing method is a perfect addition to your toolkit.
Naked Pairs: A Simple Yet Effective Strategy
When delving into the realm of advanced sudoku solving techniques, 'Naked Pairs' stands out as a straightforward yet impactful tactic. This strategy revolves around spotting two cells in a single row, column, or block that share the same exclusive pair of potential numbers. The recognition of a naked pair allows for an elegant simplification of the puzzle. To utilize the Naked Pairs strategy, follow these steps:
1. Scan for two cells within the same unit (row, column, or block) housing only a pair of numbers.
2. Confirm that both cells contain exactly the same two numbers and nothing else.
3. Exclude these numbers from all other cells in the unit.
4. Use the elimination of possibilities to solve surrounding cells more easily.
This elimination process can drastically reduce the number of potential candidates for the rest of the unit, simplifying your next moves. It's all about pattern recognition and logical exclusion with naked pairs; they can clear up uncertainty in a grid section and bring you closer to solving the puzzle. Although Naked Pairs might seem minor, their application can be mighty, particularly in moderate to difficult puzzles. Regular practice of this method will sharpen your ability to spot these helpful pairs faster and with greater ease. And remember, combining Naked Pairs with other techniques like X-Wing and Swordfish can significantly enhance your sudoku-solving prowess. (A short code sketch of this elimination appears at the end of this article.)
Hidden Pairs: Unveiling the Concealed Numbers
Finding hidden pairs in Sudoku is like uncovering hidden treasures. They can seem elusive, but once discovered, they break open parts of the puzzle. The strategy of hidden pairs is a bit like naked pairs, with a twist. Let's simplify the process:
1. Look for two numbers that appear as candidates in only two cells within a block, row, or column.
2. These pairs may hide among other candidates in the cells.
3. Cross out the other numbers, leaving only the hidden pair.
4. Now, treat these pairs as you would with naked pairs.
5. The exclusivity of these pairs simplifies choices in neighboring cells.
This method can be powerful when other sudoku solving techniques aren't revealing the next move. Uncovering a hidden pair involves keen observation and can significantly narrow down the number of potential candidates elsewhere in the grid. Perfect for puzzles of higher difficulty, hidden pairs give you leverage in seemingly unsolvable situations. Practice will hone your eye for spotting these concealed treasures. Remember, persistence is key. Use this tactic in combination with those previously mentioned for a smarter solving approach.
Pointing Pair Technique: Directing Your Focus
The Pointing Pair Technique zeroes in on how a certain number is placed within a sudoku block. This strategy works by identifying a number that can only exist in a row or column, and not in other parts of the block. Here's a clear guide on how to apply it:
1. Search for a number confined to one row or column within a block.
2. Understand that this number cannot be in the same row or column outside the block.
3. Eliminate this number as a candidate from other cells in that row or column.
4. Deduce other possibilities using this newfound empty space.
Using this technique directs your focus and helps narrow down your options. The Pointing Pair Technique can clarify where numbers should not go, which is just as important as where they should. This straightforward approach is easy to grasp and contributes greatly in solving complex puzzles. Remember that attention to detail is key when using this strategy. It may seem subtle, but the impact is significant, helping to advance your game to the next level. Combine this with other sudoku solving techniques to enhance your puzzle-solving skills.
Nishio Strategy: Trial and Error Approach
When other strategies fail, the Nishio Strategy might be your last resort. It's like taking a calculated risk in your game. Here is a simple breakdown of how to apply this trial-and-error technique to your Sudoku puzzles:
1. Identify a cell with just a few possible numbers. These cells are your starting point.
2. Choose one of these numbers as your tester. Be ready to backtrack if needed.
3. Work through the puzzle as if your chosen number is the correct one.
4. Watch out for any contradictions. If something doesn't fit, the number is wrong.
5. Discard that option and try the next number in that cell.
6. Repeat this process until you find the number that fits without causing issues.
The Nishio Strategy is all about making bold guesses and seeing the effects. It is more about trying and learning from mistakes than other methods. Do not be afraid to experiment with it. Also, keep in mind that this approach can be time-consuming. It requires patience and a willingness to backtrack often. But sometimes, this is the key to unlock a tricky puzzle. Make sure to use this strategy as a final attempt. It's best to try all other sudoku solving techniques before jumping into Nishio. And remember, the more you practice, the better you'll get at spotting when to use this tactic.
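To make the elimination patterns above concrete, here is a small, hedged Python sketch of the Naked Pairs step. The grid representation (a 9x9 list of candidate sets) and the function names are assumptions made for illustration, not code from any particular solver; the same unit-scanning skeleton extends naturally to hidden pairs and pointing pairs.

def units():
    """Yield each row, column and 3x3 block as a list of (row, col) cells."""
    for i in range(9):
        yield [(i, c) for c in range(9)]   # row i
        yield [(r, i) for r in range(9)]   # column i
    for br in range(0, 9, 3):
        for bc in range(0, 9, 3):
            yield [(br + dr, bc + dc) for dr in range(3) for dc in range(3)]

def eliminate_naked_pairs(candidates):
    """Remove naked-pair digits from the other cells of each unit.

    candidates is a 9x9 list of sets of possible digits; solved cells hold
    a single-element set. Returns True if any candidate was eliminated."""
    changed = False
    for unit in units():
        # Cells in this unit that currently allow exactly two digits.
        two_digit_cells = [cell for cell in unit
                           if len(candidates[cell[0]][cell[1]]) == 2]
        for i in range(len(two_digit_cells)):
            for j in range(i + 1, len(two_digit_cells)):
                a, b = two_digit_cells[i], two_digit_cells[j]
                pair = candidates[a[0]][a[1]]
                if pair != candidates[b[0]][b[1]]:
                    continue  # the two cells do not hold the same pair
                # Naked pair found: no other cell in this unit can take
                # either of these two digits.
                for (r, c) in unit:
                    if (r, c) in (a, b):
                        continue
                    before = len(candidates[r][c])
                    candidates[r][c] -= pair
                    changed |= len(candidates[r][c]) != before
    return changed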
{"url":"https://imwrapper.com/20241014/sudoku-solving-techniques-beyond-the-basics/","timestamp":"2024-11-07T20:06:31Z","content_type":"text/html","content_length":"233826","record_id":"<urn:uuid:78cd8f9e-0a0b-42f4-b3a6-d98c1b42874a>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00787.warc.gz"}
The Natural Rate of Interest and Asset Class Returns
For the last two years, the large majority of central banks have been hiking rates and conducting quantitative tightening programs to fight against the inflationary pressures witnessed in almost every country after the Covid reopening, supply chain shocks, Russia-Ukraine war, labor shortages, and other related factors. Despite all these efforts, in most of the advanced economies, the results achieved have been quite poor, at least when compared to markets' initial expectations, with core inflation remaining at elevated levels, well above the central banks' targets, and economic activity proving to be extremely resilient. While central bankers have kept referring to "long and variable" lags that can separate the moment these policies are announced from the time when the actual effects on the real economy start to be felt, some commentators are hypothesizing that economies are not responding to the current restrictive policies because of an increase in the natural rate of interest^1, also known as r*. This implies that, to obtain the same restrictive effects on the economy and inflation, central bankers would need to take more action than before, with further tightening.
In this article, after analyzing the nature of r*, its importance, and its estimation models, we will discuss how asset classes perform in different r* environments. Similarly, in the last paragraph, we analyze the historical influence of r* on factors' performances.
What is r*, how is it estimated, and is it rising?
The natural rate of interest, hypothesized for the first time by the Swedish economist Knut Wicksell in 1898, is the real interest rate that supports the economy at its natural level of output while keeping inflation constant. In simpler terms, it is the real rate at which monetary policy is neither accommodative nor restrictive. This means that whenever the actual interest rate is below this value, the economy tends to expand above potential, generating inflation and causing unemployment to decrease below its natural level. On the other hand, when the interest rate is set above r*, the economy tends to contract, inflation shrinks, and unemployment rises. These linked dynamics are what the mandates of all central banks around the world are based on. In fact, since r* is the key parameter to assess the degree of easing or tightening of a certain policy, without having a reasonable estimation of the natural rate, central bankers cannot conduct monetary policies in a responsible way and the risk of massive economic imbalances and crises rises significantly.
This argument can be extended to financial markets. As monetary policy-related decisions have a direct influence on asset classes' performances, it is logical that r* will play a role in determining asset classes' returns as well. Even though this could look elementary, it must be noted that r* cannot be observed directly but needs to be estimated by complex models that take into account macroeconomic, demographic, and social factors. Moreover, r* is believed to be unstable, drifting up or down over time, and to vary widely from jurisdiction to jurisdiction. This is one of the reasons why the job of central banks is so complicated, especially when it involves setting a unified policy for a large heterogeneous jurisdiction, like the United States or the Euro Area.
The most widely used tools to estimate r* are the Laubach and Williams (LW, 2003) and Holston, Laubach, and Williams (HLW, 2017) models.
These models apply the Kalman filter and use a large set of inputs like real GDP growth, inflation changes, and short-term policy rates to estimate the natural rate of output growth and the natural rate of interest r*. Also, both the LW and HLW models incorporate transitory and permanent shocks to supply and demand, and dynamic endogenous behavior of inflation and output.
The findings of these models are unambiguous. From the '60s on, a strong secular downtrend is immediately recognizable in both trend growth (the natural rate of output growth) and the natural interest rate r*. In particular, the former is estimated to have declined in the US from its 1970 level of 4.5% to the current 1.8% and in the Euro Area from around 3% to the current 1%, while the latter fell in the same period from 4% to 1.5% in the US and from 3% to 0.7% in the Euro Area. In both jurisdictions, in the last 50 years, both trend growth and r* are estimated to have declined sharply and are currently near the post-2008 lows.
These models are built on the assumption of a time-invariant Gaussian distribution for economic shocks, which should therefore be uncorrelated. However, this has been clearly contradicted by the data, especially during the COVID period, when all countries witnessed enormous fluctuations in GDP growth and a sequence of highly negatively correlated swings in output caused by the succession of shutdowns and re-openings. For these reasons, the models were recently updated by the three authors and the publication of r* estimates on the New York Fed website has been relaunched, after having been suspended during the COVID period.
In particular, this updated version of the model implements two important modifications. First, it allows for time-varying volatility of the shocks to output and inflation during the pandemic period, consistent with the appearance of extreme outliers in the data. Second, it incorporates a proxy for a persistent, but not permanent, supply shock that is designed to capture the effects of COVID-19 and related policy responses. Specifically, it creates a COVID-adjusted measure of the natural rate of output, where the magnitude of the adjustment is proportional to the country-specific COVID-19 Stringency Index from the Oxford COVID-19 Government Response Tracker (Hale et al., 2021)^2.
What emerged from these modifications is that there is no evidence of a quantitatively meaningful trend reversal in trend growth and r*, with their 2022 estimations being, both in the Euro Area and in the US, slightly lower than the corresponding values in 2019. This result runs counter to some commentary that very large fiscal stimulus and rising levels of government debt, alongside evidence of large output gaps and high inflation, point to a higher level of the natural rate than before the pandemic^3.
Our prior belief as to why r* might have an impact on asset classes might not be completely apparent at first glance: We believe that current r* estimates might play a role in the expected return of those asset classes that are most directly related to long term economic growth. Nevertheless, we test whether there is a statistical relationship between r* and various different asset classes. For this, we gathered quarterly r* estimates from the website of the New York Federal Reserve (using the updated HLW methodology) and quarterly asset class returns from Bloomberg. We use quarterly data because a higher frequency estimate of r* was not available. The specific assets/ asset classes we compile return data for are the following: S&P500, NASDAQ, Gold, WTI Oil, Copper, US Treasuries, US IG Credit, US HY Credit. Lastly, we gather data for the risk-free rate from Kenneth R. French’s Website. Our data goes back to the first quarter of 1961 or some years later for time series that do not go back to 1961. With this assortment of asset classes, we believe to have represented a large portion of the investment opportunity set of an ordinary investor. We then estimate the coefficients of a univariate OLS regression with the independent variable being r* and the dependent variable being returns or (in the case of the stock market indices) excess returns over the risk-free rate. We get statistically significant loadings on r* for US Treasuries at the 5% level and for the S&P500 and NASDAQ at the 10% level; it should be mentioned that the R^2 for all regressions is low at only a couple of percentage points. Returns for US Treasuries have a positive loading on r* while the two stock market indices have negative loadings. These findings are mostly in line with our ex-ante belief. Firstly, those asset classes that, in general, have the highest dependency on the general economy seem to have a statistically significant relationship to r* and secondly, the sign of the loadings make sense. For equities, a higher r* can be seen as a higher expectation for long-term interest rates, therefore decreasing the value of cash flows of future years and leading to negative returns. Since US Treasuries generally have a negative correlation to equities, a higher r* forecast leads to higher returns in Treasuries. The Effect of r* on Factor Returns Looking at the picture in equities, we were not fully content with finding a general relationship; therefore, we hypothesized that the same relationship that held true for equities in general can be applied to equities with specific characteristics. That is, equities with a higher dependency on the long-term growth of the economy will also have a relationship to r* estimates. In order to test this hypothesis, we used factor data downloaded from jkpfactors.com, a website giving access to many different factors and factor themes which was made in the course of the creation of Jensen, Kelly, Pedersen (2023), a paper that was already discussed in this BSIC article. More specifically, we test all available themes for relationships. Namely, these are: Accruals*, Debt Issuance*, Investment*, Leverage*, Low Risk, Momentum, Profit Growth, Profitability, Seasonality, Size*, Skewness, Value. Those themes marked with a star (*) are themes in which one is short equities with high values for a characteristic and long equities with low values for a characteristic. 
Running the same regression as before (we do not use excess returns for these factor themes), we get the following results: Accruals, Debt Issuance, Profit Growth, Seasonality, and Skewness are all statistically significant at the 5% level. Keeping the above in mind, a higher r* is positive for portfolios that are either short equities with high accruals, debt issuance, and skewness and long those with low values for the respective characteristics, or long equities with high profit growth and seasonality and short those with low values for these characteristics.
Once again, these results are easily reconciled with our prior expectation. Not only are equities with extreme values for these characteristics more strongly related to long-term economic growth, but the sign of the loading also makes intuitive sense. To give an example, those companies that have a high seasonality are more susceptible to changes in long-term growth and if an economy can be sustained at a higher interest rate, then the economy shows more growth potential.
In this article, we have shown what the significance of r* is for economics and how it is estimated. As a financial application of r*, we have hypothesized that returns of assets that are more closely related to long-term economic growth will show a concurrent statistical relationship to r*. This intuition was proven correct in both asset classes in general and different factors specifically. However, more research will be needed in order to unravel the true underlying economic effects of r* on different asset classes and factors. Equally, a higher-frequency estimate of r* could be used to more accurately review the statistical relationships we have found in this article.
1: Lewis and Vazquez Grande (2017); Buncic (2021); and Davis, Zalla, Rocha, and Hirt (2023) all argue in the direction of a higher r*.
2: Also refer to "Measuring the Natural Rate of Interest after COVID-19" – Holston, Laubach, Williams (2023).
3: This opposite thesis is supported by the recent paper "R-star is higher. Here's why" by Davis, Zalla, Rocha, and Hirt (2023). In this paper, the authors not only claim that r* is now historically high but also that the era of secularly low rates is already over and a new era of "sound money" has begun.
[1] Laubach, T., Williams, J.C., "Measuring the Natural Rate of Interest", 2003, The Review of Economics and Statistics
[2] Holston, K., Laubach, T., Williams, J.C., "Measuring the natural rate of interest: International trends and determinants", 2017, Journal of International Economics
[3] Hale, T., et al., "A global panel database of pandemic policies", 2021, Nature
[4] Lewis, K., Vazquez-Grande, F., "Measuring the natural rate of interest: A note on transitory shocks", 2017, Journal of Applied Econometrics
[5] Buncic, D., "Econometric Issues with Laubach and Williams' Estimates of the Natural Rate of Interest", 2021, Available on SSRN
[6] Davis, J.H., et al., "R-Star is Higher. Here's Why", 2023, Available on SSRN
[7] Holston, K., Laubach, T., Williams, J.C., "Measuring the Natural Rate of Interest After COVID-19", 2023, Federal Reserve Bank of New York
[8] Jensen, T.I., Kelly, B., Pedersen, L.H., "Is There a Replication Crisis in Finance?", 2023, Journal of Finance
TAGS: r-star, natural rate of interest, neutral rate of interest, factors, real rate
{"url":"https://bsic.it/the-natural-rate-of-interest-and-asset-class-returns/","timestamp":"2024-11-10T21:24:59Z","content_type":"text/html","content_length":"90250","record_id":"<urn:uuid:4a9af3bc-7f74-426c-8e6c-21d5e40d55d0>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00129.warc.gz"}
Numerical Solution
The range of ω variation, ω[min] ≤ ω < ω[max], depends on ω[0] for the lower limit according to the following equation: ω[min] = ω[0] - k • (ω[0] - 1) • (ω[0] - 2), whereby k is a pre-defined parameter between -1 and 0. Since k is negative (ω[min] < ω[0]), specifying a greater absolute value of k results in a larger difference between ω[min] and ω[0], and thus a larger range. This parameter is set as OMEGA_MIN. The upper limit of the range of ω variation, ω[max], is a set value between ω[0] and 2. This is defined as OMEGA_MAX.
In the second stage of calculation, the first step is performed with ω = ω[min]. The relaxation factor is then increased incrementally with each iteration until either ω[max] is reached or the process begins to diverge, whereupon ω is set back to ω[min] and calculation continues. The increments of ω variation also vary: low values of ω in the beginning are run through rapidly, but the increment approaches 0 asymptotically as ω approaches the value of 2. The increments are calculated such that the approximated optimal value of ω[0] is reached within a given number of iterations starting from ω[min]. The standard number of assumed steps is a relatively small number set as
As already mentioned, ω is set back to ω[min] as soon as the iterative results begin to recognisably diverge. The criterion for discerning whether the solution process is converging or diverging is the absolute value, Δ[max], i.e. the deviation of the results from one iteration to the next. Hereby a mean value of Δ[max] is calculated from the last n steps and compared continually with the current Δ[max]. The number of steps included in the mean value for comparison, n, is also defined as a solver parameter and called OMEGA_TESTNUM. The calculation process is considered divergent when Δ[max] equals or exceeds the comparative mean value.
However, it would be impractical to set the relaxation factor back automatically every time this condition is satisfied; a certain minimum number of calculation steps must first have been performed. This minimum is a further criterion which must be met before an ω set-back occurs. The standard minimum number of iterations here is 23, defined as OMEGA_VETO.
An equation is ultimately considered solved when the deviation, Δ[max], remains smaller than a defined limit for a continuous series of a prescribed number of iterations. This quantity is defined as
Finally, in order to "smooth" the results of calculation, a post-run of iterations is performed with a constant relaxation factor, ω = 1 (defined as OMEGA_POSTRUN=1.0). An increase in this parameter should be avoided so as not to jeopardise the smoothing effect of post-calculation. The number of iterations of this stage is prescribed as POSTRUN=15.
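As a rough illustration of the schedule described above, here is a hedged Python sketch (not the actual solver code): sor_sweep stands in for one relaxation sweep over the grid and returns the maximum change Δ[max]; rise_steps, eps and settle_iters are assumed names for the parameters whose names are truncated in the text; and the specific increment formula is an assumption chosen to match the described behaviour (increments shrink as ω approaches 2, and ω[0] is reached in a fixed number of steps from ω[min]).

from collections import deque

def adaptive_sor(sor_sweep, omega0, k=-0.5, omega_max=1.95,
                 omega_testnum=5, omega_veto=23, rise_steps=8,
                 eps=1e-9, settle_iters=10, postrun=15, max_iter=100000):
    """Adaptive over-relaxation schedule sketched from the description above."""
    # Lower limit of the omega range (equation from the text; k plays the
    # role of the OMEGA_MIN parameter, a value between -1 and 0).
    omega_min = omega0 - k * (omega0 - 1.0) * (omega0 - 2.0)
    omega = omega_min
    history = deque(maxlen=omega_testnum)  # last few delta_max values
    since_reset = 0                        # iterations since omega was set back
    settled = 0                            # consecutive iterations below eps
    # Geometric factor so that omega0 is reached in about rise_steps steps,
    # with increments shrinking as omega approaches 2 (an assumed formula).
    r = ((2.0 - omega0) / (2.0 - omega_min)) ** (1.0 / rise_steps)
    for _ in range(max_iter):
        delta_max = sor_sweep(omega)       # one sweep; returns the max change
        # Convergence: delta_max stays below the limit for a run of iterations.
        settled = settled + 1 if delta_max < eps else 0
        if settled >= settle_iters:
            break
        # Divergence: current delta_max reaches the mean of the last few values,
        # but a set-back is only allowed after at least omega_veto iterations.
        diverging = (len(history) == history.maxlen
                     and delta_max >= sum(history) / len(history))
        if omega >= omega_max or (diverging and since_reset >= omega_veto):
            omega = omega_min              # set the relaxation factor back
            since_reset = 0
        else:
            omega = omega + (2.0 - omega) * (1.0 - r)
            since_reset += 1
        history.append(delta_max)
    # Post-run with a constant relaxation factor of 1 to smooth the result.
    for _ in range(postrun):
        sor_sweep(1.0)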
{"url":"http://help.antherm.kornicki.com/PrimaryConcepts/70_NumericalSolution.htm","timestamp":"2024-11-15T03:46:52Z","content_type":"text/html","content_length":"13591","record_id":"<urn:uuid:6e21737f-7b63-438d-9a2b-246530504fe5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00345.warc.gz"}