Algebraic Limit Evaluation Finding the limit of a function graphically is not always easy; as an alternative, we now shift our focus to finding the limit of a function algebraically. In this section, we will learn how to apply direct substitution to evaluate the limit of a function. • If a function $f$ is continuous at a number $a$, then direct substitution can be applied: $\lim_{x \to a^-} f(x) = \lim_{x \to a^+} f(x) =\lim_{x \to a} f(x)= f(a)$ • Polynomial functions are continuous everywhere; therefore, direct substitution can ALWAYS be applied to evaluate their limits at any number.
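To make direct substitution concrete, here is a short worked example (the polynomial is our own illustration, not taken from the original lesson). Since $f(x) = x^2 + 3x - 1$ is a polynomial, it is continuous at every real number, so the limit at $x = 2$ follows by direct substitution: \[ \lim_{x \to 2} \left( x^2 + 3x - 1 \right) = 2^2 + 3(2) - 1 = 9 \]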
{"url":"https://www.studypug.com/clep-calculus/finding-limits-algebraically-by-direct-substitution","timestamp":"2024-11-12T15:54:01Z","content_type":"text/html","content_length":"485221","record_id":"<urn:uuid:2f24514c-a582-48f8-a0a9-b5f760f038c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00587.warc.gz"}
Quantum Numbers - Chemistry Steps General Chemistry Quantum numbers tell us the energy level, the number and the type of orbitals, and the spin of the electron. Collectively, they all describe the electron configurations. There are four quantum numbers that we are going to discuss: • The Principal Quantum Number (n) – indicates the main energy level of orbitals and electrons • The Angular Momentum Quantum Number (l) – indicates the energy sublevel, which is given by the type of the orbital (s, p, d, f) • The Magnetic Quantum Number, (m[l]) – indicates the specific orbital within the energy sublevel • The Electron Spin Quantum Number (m[s]) – shows the direction of the electron spin So, one can visualize the information conveyed by quantum numbers getting more specific as we go from the principal quantum number to the spin quantum number: Let’s now discuss the quantum numbers one by one in more detail. Principal Quantum Number (n) What orbitals a given atom has, and in which ones the electrons are located, depends on the energy level of the atom. Remember, the energy level of the atom is given by the principal quantum number, n which can easily be determined based on the period (row) the atom is located in the periodical table. This is what we discussed about the Bohr model of the hydrogen atom. There are orbits with fixed radii each associated with discrete energy, and this is described by the principal quantum number n. Remember, there are four types of atomic orbitals – s, p, d, and f. Each orbital has a characteristic shape shown below: S orbitals have a spherical shape, p orbitals are dumbbell-shaped, d orbitals are shaped like a cloverleaf, and f orbitals are characterized by more complex shapes. You can also look up more detailed images for the shapes and orientation of atomic orbitals in your textbook. The number of types of orbitals matches the energy level: the first energy level has only 1 (s) orbital, the second has two types – s and p, the third has three – s, p, and d, and the fourth level has all four types of orbitals – s, p, d, and f. So, far we have talked about the main energy level. However, you should know, aside from the first energy level, all the others have sublevels, and these are the types of orbitals that we have talked about – s, p, d, and f. Notice again that within the same principal level, orbitals with a lower value of l have lower energy (E) and therefore, are filled first. So, for a given value of n: E (s orbital) < E ( p orbital) < E (d orbital) < E ( f orbital) Now, a few important things about the orbitals and their electron capacity. First, remember that each orbital, whether it is s, p, d, or f can accommodate two electrons at most. We can see this in orbital diagrams where the orbitals are shown as boxes and electrons as arrows, we never put more than two arrows in the box. For example, boron has two electrons in each s orbital of the first and second levels, and one electron in the p sublevel. Angular Momentum Quantum Number (l) So, how do we know what sublevels (types of orbitals) a given energy level has? This is determined by the Angular Momentum Quantum Number (l). It takes values of 0, 1, … n-1. For example, for the second energy level, n = 2, and therefore, l = 0, 1 , so it can have two values, and therefore, the second energy level has two sublevels – s (l = 0) and p (l = 1). 
Let’s put the orbitals corresponding to each value of l in a diagram as well: Example: Identify the main energy level, the sublevel, and the maximum number of electrons that can have the following quantum numbers: n = 3, l = 2. Draw the orbital diagram to explain your answer. Solution: The main energy level is given by the principal quantum number (n), so this is an orbital in the 3rd energy level. For the sublevel (the type of the orbital), we need to look at the angular momentum quantum number, l, and when l = 2, we have d orbitals. Therefore, this combination represents the 3d sublevel: Each d sublevel has five orbitals, and because each orbital can accommodate two electrons at most, the maximum number of electrons will be 10. Magnetic Quantum Number, m[l] The next quantum number is the Magnetic Quantum Number, m[l], which shows the number of orbitals in the sublevel. It takes values from –l to +l, including zero and all the integers in between. For example, when l = 2, we have d orbitals, and because m[l] = -2, -1, 0, +1, +2, there are 5 orbitals in each d sublevel. The summary of quantum numbers including their meaning and values is given in the diagram below: Based on the values of m[l] (-l … 0 … +l), there can only be 1 s orbital in a given energy level, 3 p orbitals, 5 d orbitals, and 7 f orbitals. And because each orbital can only take a maximum of two electrons, there can only be a maximum of 2 electrons in any s sublevel, 6 electrons in the p sublevel, 10 in the d, and 14 in the f sublevel. We have now covered the first three quantum numbers, and these are sufficient to identify any specific orbital. The only thing we cannot figure out based on these is the spin of a given electron, which we will discuss in the next section. For example, which orbital is indicated by the following set of quantum numbers: n = 3, l = 2, m[l] = 0? Starting with the principal quantum number, we know that it is an orbital in the 3rd energy level. l = 2 indicates a d orbital, and m[l] = 0 indicates the middle one of the five d orbitals. The Electron Spin Quantum Number (m[s]) The last quantum number is the Electron Spin Quantum Number (m[s]), which shows the direction of the electron spin and, depending on this, may take a value of +1/2, represented by ↑, or -1/2, represented by ↓. The direction of the arrow matters because two electrons in the same orbital may only have opposite spins (a consequence of the Pauli exclusion principle, discussed below). A separate rule, Hund’s rule, states that electrons will fill all the degenerate orbitals (equal in energy) with parallel spins (all arrows up or all down) first before pairing up in one orbital. We can also formulate it as follows: the lowest-energy configuration for an atom is the one having the maximum number of unpaired electrons within the same energy sublevel. For example, in carbon, the second electron in the p sublevel goes to the next (empty) p orbital rather than pairing up with the other electron: Hund’s rule is another demonstration of the same principle, which is the tendency to adopt the lowest energy state possible. There is a stronger repulsive interaction between two electrons in the same orbital compared to when they occupy separate orbitals of equal energy. Let’s show the application of Hund’s rule in explaining the electron configuration of carbon: Notice that placing the electron unpaired in the 3s orbital would also be incorrect: Hund’s rule applies to degenerate orbitals within the same sublevel, so an electron is not promoted to a higher energy level just to remain unpaired.
Check this article for more information and exceptions to Hund’s rule, as well as the Aufbau principle and Pauli’s exclusion principle. To summarize the information given by the quantum numbers, we can say that every set of the four quantum numbers specifies one electron in the atom. The first three (n, l, m[l]) describe its orbital, and the fourth (m[s]) describes its spin. Therefore, at least one quantum number must be different for any two electrons. This is Pauli’s exclusion principle, which states that no two electrons in an atom can have the same four quantum numbers. So, if two electrons are in the same orbital, they must have the same n, l, m[l] values, and therefore, the only one that can be different is m[s], which is the spin of the electron shown by the direction of the arrow. For example, for two electrons in the same 2p orbital, n = 2, l = 1, and m[l] = –1, 0, or +1 (it doesn’t matter which one because, for two electrons in the same orbital, it will be identical). Therefore, the m[s] must be different (+1/2 or -1/2) in order not to violate Pauli’s exclusion principle. The values of m[s] are assigned arbitrarily, as we do not know if the first electron is +1/2 or -1/2; however, if we assign it +1/2, then the second must be -1/2 and vice versa. Conventionally, we draw the first arrow pointing up, but this is only a convention; it could equally be the other way around. Let’s do a practice example. What are the values of n, l, m[l], and m[s] for the 3p^4 electron? Solution: The first number is 3, and that is the principal quantum number (n = 3). It is a p orbital, and therefore, l = 1. For the m[l], we need to draw the orbital diagram, fill in the electrons one by one, and see which p orbital the fourth electron goes to: by Hund’s rule it pairs up in the first p orbital, which by the usual labeling convention corresponds to m[l] = –1. For the m[s], the arrow may be pointing up or down depending on how we drew the first electron. Again, this is arbitrary; however, the two must be in opposite directions, and since we assigned +1/2 to the first electron in the first p orbital, the second must be pointing down – m[s] = -1/2. Check this 95-question, Multiple-Choice Quiz on the Electronic Structure of Atoms including questions on properties of light such as wavelength, frequency, energy, quantum numbers, atomic orbitals, electron configurations, and more.
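To tie the counting rules above together, here is a small illustrative Python sketch (not from the original article; the function and variable names are our own) that enumerates the allowed quantum numbers for a given shell and reproduces the orbital and electron counts discussed above:

```python
# Enumerate allowed (l, m_l) combinations for a principal quantum number n
# and count orbitals / maximum electrons, following the rules above:
#   l   = 0, 1, ..., n-1          (sublevels s, p, d, f, ...)
#   m_l = -l, ..., 0, ..., +l     (one orbital per m_l value)
#   each orbital holds at most 2 electrons (m_s = +1/2 or -1/2)

SUBLEVEL_LETTERS = "spdfghi"  # enough letters for illustration

def sublevels(n):
    """Return a list of (letter, n_orbitals, max_electrons) for shell n."""
    result = []
    for l in range(n):
        m_l_values = list(range(-l, l + 1))
        n_orbitals = len(m_l_values)          # equals 2l + 1
        result.append((SUBLEVEL_LETTERS[l], n_orbitals, 2 * n_orbitals))
    return result

for n in (1, 2, 3, 4):
    print(f"n = {n}: ", end="")
    print(", ".join(f"{n}{letter}: {orb} orbitals / {e} e-"
                    for letter, orb, e in sublevels(n)))

# Matches the article: 1 s orbital (2 e-), 3 p orbitals (6 e-),
# 5 d orbitals (10 e-), 7 f orbitals (14 e-), and shell n has n sublevels.
```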
{"url":"https://general.chemistrysteps.com/quantum-numbers/","timestamp":"2024-11-06T18:02:20Z","content_type":"text/html","content_length":"204289","record_id":"<urn:uuid:1f09a62d-6aeb-47fb-aa33-45d6dae4bfd9>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00166.warc.gz"}
ball mill variable speed kc engineer by permitting operation at lower ball charges for a given mill power draw. Keywords Motor design, motor speed, mill speed, rated speed. Introduction The purpose of this paper is to provide guidance to Owners and EPCM engineers in the specification of the "design" and "maximum" speed of grinding mill motors. Purchasing a WhatsApp: +86 18838072829 Investigation of grinding circuit products showed that, like gold, PGM is selectively enriched in ball mill discharge and cyclone underflow. Huang and Mejiab (2005) characterized the gravity recoverable PGMs and Au with a combination of regular Knelson (60 Gs) and variable speed Knelson (115 Gs) technology. WhatsApp: +86 18838072829 Generally, filling the mill by balls must not exceed 30%35% of its volume. The productivity of ball mills depends on the drum diameter and the relation of ∫ drum diameter and length. The optimum ratio between length L and diameter D, L: D, is usually accepted in the range WhatsApp: +86 18838072829 This thesis examines variable speed ball mill performance under changing operating conditions to recommend operating conditions for the Copper Mountain Mine. JK SimMet, a very powerful predictive tool, was used to estimate grinding circuit performance and mill power consumption. WhatsApp: +86 18838072829 Ball mill, most of the size reduction is done by impact. Critical Speed of a Ball Mill (ƞc): 𝜂𝑐 = 1 2𝜋 ∗ √𝑔 √𝑅−𝑟 (1) Where, ηc is a critical rotational speed, 'R' is radius of the ball mill and 'r' is radius of the ball. For effective operation of the mill, the mill should be operated at 65 to 80% of ... WhatsApp: +86 18838072829 It could also be inferred that residue is inversely related to Blaine or fineness. 15 rpm mill speed is near the critical mill speed, thus Blaine is minimum whereas residue is higher at 16% (Fig. WhatsApp: +86 18838072829 Confirming the graphical analysis of the experimental values through surface plots for the ball mill working capacity and the ball mill speed against response variable as presented in Fig. 5. Based on the threedimensional data, the surface plot show a functional relationship between the SN ratio and two independent process variables. WhatsApp: +86 18838072829 Livingston, NJ Manufacturer* 5 Mil 1918 1049. Manufacturer of standard and custom dual roll mills for grinding and milling rice, spice, starch and fiber. Features include carbon or stainless steel construction, dual counterrotating serrated or grooved mills and ability to mill or crack within output range of 13 mesh to 150 mesh without ... WhatsApp: +86 18838072829 This thesis examines variable speed ball mill performance under changing operating conditions to recommend operating conditions for the Copper Mountain Mine. JK SimMet, a very powerful predictive tool, was used to estimate grinding circuit performance and mill power consumption. WhatsApp: +86 18838072829 The process of ball milling and the materials that compose planetary ball mills are highly complex, and the existing research on the change in ballmilling energy is not mature. The theoretical model of a ball mill was established for the first time to simulate the motion, collision process, energy transfer, and temperature change of small balls during the ballmilling process. Furthermore, by ... WhatsApp: +86 18838072829 This laboratory contains all necessary equipment for performing fundamental experiments on fluid particle mechanics, including a jaw crusher, vacuum leaf filter, batch sedimentation, etc. 
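As a quick illustration of the critical-speed relation quoted above, ηc = (1 / 2π) · √(g / (R − r)), here is a small Python sketch; the mill and ball dimensions are our own example values (not from any of the quoted sources), and the 65–80% operating range is the one mentioned in the text above.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def critical_speed_rpm(mill_radius_m, ball_radius_m):
    """Critical rotational speed n_c of a ball mill, in revolutions per minute.

    Uses n_c = (1 / (2*pi)) * sqrt(g / (R - r)) in rev/s, converted to rpm,
    where R is the inside radius of the mill and r is the ball radius.
    """
    n_c_rev_per_s = (1.0 / (2.0 * math.pi)) * math.sqrt(G / (mill_radius_m - ball_radius_m))
    return n_c_rev_per_s * 60.0

# Illustrative numbers only: a 2.4 m diameter mill with 60 mm diameter balls
R = 1.2    # mill radius, m
r = 0.03   # ball radius, m
n_c = critical_speed_rpm(R, r)
print(f"Critical speed: {n_c:.1f} rpm")
print(f"Typical operating range (65-80% of critical): "
      f"{0.65 * n_c:.1f} - {0.80 * n_c:.1f} rpm")
```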
Mechanical Operation Lab is available in huge ranges like TROMMEL (Variable Speed), Cone Classifier, Bucket Conveyor, Plate Frame Filter Press, Ribbon Mixer ... WhatsApp: +86 18838072829 BALL MILL (Variable Speed) Equipment Materials : SS Usage : For Laboratory Price : 1 INR/Set Minimum Order Quantity : 1 Power : 500 Watt (w) Voltage : 220 Volt (v) Operate Method : Electric Send WhatsApp: +86 18838072829 Ball mills have been successfully run at speeds between 60 and 90 percent of critical speed, but most mills operate at speeds between 65 and 79 percent of critical speed. Rod mills speed should be limited to a maximum of 70% of critical speed and preferably should be in the 60 to 68 percent critical speed range. WhatsApp: +86 18838072829 Twenty five years ago, an engineering company pioneered into the field of scientific equipment and the year 1989 saw the firm foundation of Engineers Limited in the heart of Ambala (known as 'Science City'), Haryana, firm was founded under the phenomenal guidance and foresight of our patron, Mr. K. C. Kansal Since then, the firm has witnessed unsurpassing success. WhatsApp: +86 18838072829 A ball mill, a type of grinder, is a cylindrical device used in grinding (or mixing) materials like ores, chemicals, ceramic raw materials and paints. Ball mills rotate around a horizontal axis, partially filled with the material to be ground plus the grinding medium. Different materials are used as media, including ceramic balls, flint pebbles ... WhatsApp: +86 18838072829 Get MS Ball Mill With Variable Speed and Balls in Ambala, Haryana at best price by Engineers Limited. Also find Ball Mills price list from verified companies | ID: WhatsApp: +86 18838072829 Crushed ore is fed to the ball mill through the inlet; a scoop (small screw conveyor) ensures the feed is constant. For both wet and dry ball mills, the ball mill is charged to approximately 33% with balls (range 3045%). Pulp (crushed ore and water) fills another 15% of the drum's volume so that the total volume of the drum is 50% charged. WhatsApp: +86 18838072829 the year 1989 saw the firm foundation of Engineers Limited in the heart of Ambala (known as ... Mr. K. C. Kansal Since then, the firm has witnessed unsurpassing success. Today, the KC Group is country's leading Manufacturer and Wholesaler of worldclass Educational ... Horizontal Variable Speed Ball Mill Rotap Sieve Shaker O u r P r o d ... WhatsApp: +86 18838072829 Mill speed has profound effect on the ball trajectories, impact force and power draw. The mill can reach the best performance at the mill speed ranging from 70% to 80% of critical speed, and correspondingly the maximum percent of the impact force between 600 and 1400 N is obtained. For the 80% of critical speed, the measured power draw has a ... WhatsApp: +86 18838072829 Manufacturer of Process Control Instrumentation Lab Control Valve Characteristics (Linear Equal % Type), Study of P/I I/P Converter, Control Valve Characteristics (Linear, Equal % Quick Opening Type) and Calibration Of Thermocouple offered by Engineers Limited, Ambala, Haryana. WhatsApp: +86 18838072829 Operator can rapidly react to changes in ore characteristics due to variablespeed; Process optimization leads to a more efficient use of grinding power, resulting in significant energy savings. Finetuning of the speed in ball mills increases metals recovery; Dedicated mill controller performs critical monitoring, protecting the mill WhatsApp: +86 18838072829 BALL MILL MODEL 9 VARIABLE SPEED. 
£ Capco offers a range of ball mills for sample preparation, including laboratory ball mills, high energy ball mills, and more. Visit our website to buy ball mills for your sample preparation needs. WhatsApp: +86 18838072829 variable speed operation as the answer to improving current tumbling mill grinding circuits. Benefits related to reliability, sophisticated functionalities for wear reduction and maintenance issues can be obtained using latest technology Variable Speed Drives (VSD). How they work and how they can be easily WhatsApp: +86 18838072829 This approach is based on a hybrid numerical model of a 24degreeoffreedom gearbox, simulating one gear train and two drive shafts. The impact forces of the mill drum are modelled by a discrete element method (DEM). The ballfilling rate (Fr), the mill speed (Nr), and the ball size (Db) are considered to study this phenomenon. WhatsApp: +86 18838072829 Laxmi Mild Steel Batch Ball Mill, Capacity: 1 Kg. ₹ 80,000/ Piece Get Latest Price. Capacity: 1 Kg. Material: Mild Steel. Type: Batch Type. Brand: Laxmi. Size: 400 x 500 mm to 1800 x 1800 mm. Drive: Electric Motor Through Helical Gear Box. WhatsApp: +86 18838072829 mill drives (RMDs) offer the best solution. Three of the most common drive types for RMDs are low speed motors without a gearbox, high speed motors with a reduction gearbox, and variable speed drives (VSDs). No gearbox: The low speed motor is connected directly to the pinion. The rated speed is typically around 200 to 400 rpm. Synchronous ... WhatsApp: +86 18838072829 The grinding process of the ball mill is an essential operation in metallurgical concentration plants. Generally, the model of the process is established as a multivariable system characterized with strong coupling and time delay. In previous research, a twoinputtwooutput model was applied to describe the system, in which some key indicators of the process were ignored. To this end, a three ... WhatsApp: +86 18838072829 Ball mill accessories include big gear, pinion, hollow shaft, ring gear, big ring gear, steel ball, compartment board, transmission, bearing, end lining, and so on. Sufficient supply of each ... WhatsApp: +86 18838072829 Optimum performance of ball mill could potentially refine Blaine fineness, thereby improving the cement quality. This study investigates the effects of separator speed and mill speed on Blaine WhatsApp: +86 18838072829
{"url":"https://amekon.pl/Dec/26-7189.html","timestamp":"2024-11-11T09:41:39Z","content_type":"application/xhtml+xml","content_length":"26451","record_id":"<urn:uuid:29ffdbff-4499-435d-9e48-0e2d556887ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00568.warc.gz"}
Costs, mark up & margins
Good morning all, I am just doing my last bit of revision and costs aren't clear to me. I understand that you value it at the lower of cost and net realisable value, but I just don't get it really. Anyone have any simple explaining methods? Also mark up and margins are catching me out too arhhhhhhh help ;-)
• Mark up
Sales 140% = 60000
Cost 100% = 42857
Gross Profit 40%
60000 / 140 = 1% = 428.57, x 100 = 42857
Margin
Sales 100% = 1538
Cost of sales 65% = 1000
GP 35% (100 - 35 = 65)
1000 / 65 = 1% = 15.38, x 100 = 1538
• Good morning all, I am just doing my last bit of revision and costs aren't clear to me. I understand that you value it at the lower of cost and net realisable value, but I just don't get it really. Anyone have any simple explaining methods? Also mark up and margins are catching me out too arhhhhhhh help ;-)
I take it you're talking stock values. Cost of stock = the value + how much it took to get it to its position when it was valued (ie delivery charges etc). NRV = the expected selling price less what it takes to get it to the customer. Most of the time the value of the stock would be based on its cost. However, sometimes the NRV may be lower than the cost, possibly old stock that has gone out of fashion etc, so the selling price would be less than the cost just to clear the stock. So it would be prudent (mention that it's the concept behind it!) to value it at the lower of the two.
Margins & Markups
The way I look at it is:
Markups: the cost price is 100% and the mark up is say 20%, so if the cost is £100, the selling price is £100 / 100 x 120 = £120, ie x 120/100.
Margin is the difference between the selling price and cost price. This time the selling price is 100% and say the margin is 20%, so if the selling price is £100, the cost is £100 / 100 x 80 = £80, ie x 80/100.
So with markup you add it on to the cost price, and with margin you take it away from the selling price.
• Here's how I remember mark up and margin:
Mark up is more than one word, as is cost of sales, so the 100% goes at cost of sales: Sales = 130%, Cost of sales = 100%, Mark up = 30%.
Margin is only one word, as is sales, so the 100% goes next to sales: Sales = 100%, Cost of sales = 70%, Margin = 30%.
Hope that helps
• Thanks guys great help x
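To make the markup/margin arithmetic above concrete, here is a small illustrative Python sketch (added for this write-up, not part of the original thread; the function names are ours) reusing the 40% markup and 35% margin figures from the replies:

```python
def cost_from_sales_with_markup(sales, markup_pct):
    """Markup is applied to cost (cost = 100%), so sales = cost * (100 + markup) / 100."""
    return sales * 100.0 / (100.0 + markup_pct)

def sales_from_cost_with_margin(cost, margin_pct):
    """Margin is a share of sales (sales = 100%), so cost = sales * (100 - margin) / 100."""
    return cost * 100.0 / (100.0 - margin_pct)

# 40% markup, sales of 60,000 -> cost of sales
print(round(cost_from_sales_with_markup(60000, 40)))   # 42857

# 35% margin, cost of sales of 1,000 -> sales
print(round(sales_from_cost_with_margin(1000, 35)))    # 1538
```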
{"url":"https://forums.aat.org.uk/Forum/discussion/20934/costs-mark-up-margins","timestamp":"2024-11-01T22:56:49Z","content_type":"text/html","content_length":"296061","record_id":"<urn:uuid:e9778f80-1929-416f-ae50-09f5ecacc07b>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00113.warc.gz"}
How to use the WORKDAY function What is the WORKDAY function? The WORKDAY function returns a date based on a start date and a given number of working days (nonweekend and nonholidays). 1. Introduction What is a workday? A weekday that is not a weekend day or holiday. Typically Monday through Friday excluding any holiday dates. Use the WEEKDAY function to determine if a date is a workday or a weekend day. What is a holiday? A day on which regular activities and work is suspended due to a cultural, religious or legal custom. Varies by country and region. What is an Excel date? An Excel date is a serial number that Excel recognizes and can be filtered, sorted and used in other date calculations. Excel dates are actually serial numbers formatted as dates, 1/1/1900 is 1 and 2/2/2018 is 43133. There are 43132 days between 2/2/2018 and 1/1/1900. You can try this yourself, type 10000 in a cell, press CTRL + 1 and change the cell's formatting to date, press with left mouse button on OK. The cell now shows 5/18/1927. 2. Syntax WORKDAY(start_date, days, [holidays]) start_date Required. days Required. Positive integer returns a date after the start_date (future) and negative integer returns a date before the start_date. [holidays] Optional. A list of holiday dates. 3. Example 1 This example demonstrates how to use the WORKDAY function to calculate a future date with the condition of being a workday excluding weekend days and holidays. The start date is shown in cell C3 in the image above. The number of working days days is 12 and the function returns 1/18/2028. The formula in cell B8 is shown below, note that no holidays has been Formula in cell B8: The image below shows the start date and the weekdays. The count is displayed (row 5) below dates that are workdays. Note that there are no numbers below weekend days because they are not workdays. Counting 12 days after the start date and we get 1/18/2028. 4. Example 2 This example demonstrates the WORKDAY function with a negative day argument which means it calculates a workday before the start date instead of after the start date. The start date is shown in cell C3 in the image above. The number of working days days is -12 (negative number meaning counting backwards in time) and the function returns 12/16/2027. The formula in cell B8 is shown below, note that holiday 12/25/2027 has been specified. Formula in cell B9: This example also shows no numbers below weekend days because they are not workdays, this applies to 12/25/2027 as well because it is specified as a holiday. Counting 12 days before the start date and we get 12/16/2027. 5. Example 3 If an employee's last day of work is June 15, 2025, and they are entitled to 10 working days of severance pay, on what date will their last day of severance pay be? What we know: • start_date = 6/15/2025 (This date is entered in cell C19) • days = 10 (This number is specified in cell C20) Formula in cell B25: The formula in cell B25 returns 6/27/2025 which represents the last day of severance pay. The chart displays a period from June 15 to June 27, 2025. It's a column chart where each bar represents a day, and the height of each bar is 1 unit. The columns have data labels counting from 1 to 10 based on workday or weekend. The days are labeled with both the day of the week and the date. The chart demonstrates the use of Excel's WORKDAY function which calculates a future date based on a start date, number of working days, and optionally, a list of holidays to exclude. 6. 
Example 4 A construction project is scheduled to start on May 1, 2025, and the estimated duration is 45 working days. What is the expected completion date for the project? What we know: • start_date = 5/1/2025 (This date is entered in cell C19) • days = 15 (This number is specified in cell C20) Formula in cell B25: The formula in cell B25 returns 5/22/2025 which represents the completion date for the project. The chart displays a period from May 1 to May 22, 2025. It's a column chart where each bar represents a day, and the height of each bar is 1 unit. The columns have data labels counting from 1 to 15 based on workday or weekend. The days are labeled with both the day of the week and the date. The chart demonstrates the use of Excel's WORKDAY function which calculates a future date based on a start date, number of working days, and optionally, a list of holidays to exclude. 7. Example 5 A construction project is scheduled to start on August 3, 2026, Planning: 4 working days Design: 12 working days Procurement: 15 working days Construction: 45 working days Testing 6 working days Handover: 3 working days What is the expected completion date for the milestones and for the entire project? Assume one milestone must end until the next milestone starts. Formula in cell E5: The formula in cell E5 calculates all the milestone dates based on the specified workday values in C5:C10 and the start date in C4. It returns the following array: {"Fri, Aug 7, 2026";"Tue, Aug 25, 2026";"Tue, Sep 15, 2026";"Tue, Nov 17, 2026";"Wed, Nov 25, 2026";"Mon, Nov 30, 2026"} These dates correspond to the milestones specified in cells B5:B10, for example, the start date is 8/3/2026 and the planning takes 4 workdays which results in 8/7/2026. The SCAN function uses an accumulator value (a) that adds the number of workdays row by row. Argument Value Milestone dates: start_date 8/3/2026 Planning: 4 Fri, Aug 7, 2026 Design: 12 Tue, Aug 25, 2026 Procurement: 15 Tue, Sep 15, 2026 Construction: 45 Tue, Nov 17, 2026 Testing: 6 Wed, Nov 25, 2026 Handover: 3 Mon, Nov 30, 2026 The formula in cell E5 performs the following steps: 1. C4: This is the starting date for the scan operation. 2. C5:C10: This is the range of cells that will be scanned. 3. LAMBDA(a, b, WORKDAY(a, b)): This is a LAMBDA function that will be applied to each pair of values (a, b) from the scan operation. a is an accumulator variable that changes as the calculation proceeds through b (C5:C10) In other words, the SCAN function applies the provided lambda function a starting date and an array containing the number of days for each milestone in the specified range C5:C10, using the previous result as the first argument (a) and the current value from the range as the second argument (b). The WORKDAY(a, b) function calculates a date based on the number of working days specified in cell b. The WORKDAY function takes two arguments: the start date and the number of working days to add or subtract. Variable a changes to the last calculated date, Excel handles dates a integer which makes this calculation possible. In summary, the formula is scanning through the range C5:C10, using the value in C4 as the starting date, and calculating each milestone date specified in each cell in the range C5:C10 based on the previous milestone date and number of working days to the next milestone date. The completion date for the construction project is calculated in cell B14: This formula returns 11/30/2026 which is the same value as in the "handover" date in cell E10. 8. 
The function not working The WORKDAY function returns: • a #VALUE! error value if start_date or [holidays] is not a valid date. • a #NUM! error if the start_date plus days argument returns an invalid date (Excel can't handle dates before 1/1/1900). Use the DATE function to create valid Excel dates. The WORKDAY function is one of 22 functions in the 'Date and Time' category.
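Excel specifics aside, the working-day logic described in the examples above can be sketched in plain Python. The `workday` function below is our own illustrative approximation of WORKDAY (weekends fixed to Saturday/Sunday, optional holiday list), and `itertools.accumulate` plays the role of the SCAN pattern from Example 5; the dates in the comments are the ones quoted in the examples above.

```python
from datetime import date, timedelta
from itertools import accumulate

def workday(start, days, holidays=()):
    """Date `days` working days after `start` (before it, if negative),
    skipping Saturdays, Sundays and any dates in `holidays` --
    an approximation of the behaviour described for Excel's WORKDAY."""
    holidays = set(holidays)
    step = timedelta(days=1 if days >= 0 else -1)
    remaining = abs(days)
    current = start
    while remaining > 0:
        current += step
        if current.weekday() < 5 and current not in holidays:
            remaining -= 1
    return current

# Example 3: severance pay, 10 working days after 6/15/2025
print(workday(date(2025, 6, 15), 10))        # 2025-06-27

# Example 5: milestone dates; accumulate() acts like SCAN(start, durations, WORKDAY)
start = date(2026, 8, 3)
durations = [4, 12, 15, 45, 6, 3]            # planning ... handover, in working days
milestones = list(accumulate(durations, lambda acc, d: workday(acc, d), initial=start))[1:]
print(milestones[-1])                        # 2026-11-30 (handover)
```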
{"url":"https://www.get-digital-help.com/how-to-use-the-workday-function/","timestamp":"2024-11-06T23:53:57Z","content_type":"application/xhtml+xml","content_length":"178937","record_id":"<urn:uuid:70bc6567-ce59-4474-98df-e9ce9287d417>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00406.warc.gz"}
Original Name: AS126 Total Number of Cords: 157 Original Author: Marcia & Robert Ascher Number of Ascher Cord Colors: 16 Museum: Museum für Völkerkunde, Berlin Similar Khipu: Previous (UR1152) Next (UR215) Museum Number: VA47095 Catalog: UR1126 Provenance: Unknown Region: Ica Khipu Notes Ascher Databook Notes: 1. This is one of several khipus acquired by the Museum in 1907 with provenance Ica. For a list of them, see UR1100. 2. By spacing, the khipu is separated into 3 parts. By spaces smaller than that between the parts and by markers, part 1 is separated into 6 groups of 10 pendants each, and part 2 is 6 groups of 4 pendants each (with 2 additional pendants in the last group). Part 3 is 1 group of 6 pendants. 3. Each group is unified by color. In part 1: group 1 is CB-W or CB-B; groups 2 and 6 are DB-W; groups 3 and 4 are CB-W; and group 5 is CB. The markers between them are short B cords. In part 2: groups 1, 2, and 4 are CB-W; group 3 is DB-W; and group 5 is B-W. In group 6, 4 of the pendants are GG-W and 2 additional pendants (1 between positions 1 and 2, and the other between positions 2 and 3) are DB:W. In part 3, with the exception of the first pendant, the group is DB-GG. 4. In part 1 (assuming that the missing P27 had value 20): a. The only pendant values are 11, 12, 20, and 21 and the subsidiary values are 0 or 1. b. The 60 pendant values sum to 1000. c. All the values in corresponding positions are the same for groups 1, 2, 3, and 4. With the exception of the interchange of values in positions 2 and 3, the values in group 6 are also the same. 5. In part 2 (assuming that the missing P74 had value 40): a. The only pendant values in positions 1-3 of all groups are 20, 30, 40, 50, 60. b. The sum of the values in each of the first 3 positions in the 6 groups is 250. That is: \[ \sum\limits_{i=1}^6 P_{i1}=\sum\limits_{i=1}^6 P_{i2}=\sum\limits_{i=1}^6 P_{i3}= 250 \]
{"url":"https://www.khipufieldguide.com/sketchbook/khipus/UR1126/html/UR1126_sketch.html","timestamp":"2024-11-14T17:14:04Z","content_type":"application/xhtml+xml","content_length":"26650","record_id":"<urn:uuid:1aaa0f42-72bb-42b3-bf85-39a95ef0b3b1>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00772.warc.gz"}
On the maximum off-axis gain of symmetrical pencil-beam antennas For a general class of symmetrical pencil-beam antennas, the gain at a given off-axis angle can be maximized by choosing the proper antenna size. The maximum gain at the given angle relative to the on-axis gain is independent of the given angle and dependent only on the main-beam pattern. It is computed here for four commonly used gain functions. Its value, in all cases, is close to 4 dB. This result is important in the definition of service areas for communication and broadcast satellites. IEEE Transactions on Antennas and Propagation Pub Date: May 1977 Keywords: Antenna Design; Antenna Radiation Patterns; Directional Antennas; High Gain; Pencil Beams; Satellite Antennas; Angular Distribution; Beams (Radiation); Communication Satellites; Radiant Flux Density; Size Determination; Symmetry; Communications and Radar
{"url":"https://ui.adsabs.harvard.edu/abs/1977ITAP...25..435S/abstract","timestamp":"2024-11-03T20:51:25Z","content_type":"text/html","content_length":"35875","record_id":"<urn:uuid:cc64befe-c78c-47e7-b412-b2a269f3a0d9>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00497.warc.gz"}
Mathematical Statistics 2 a basic understanding of introductory statistical concepts and some familiarity with R as taught in Inleiding Mathematische Statistiek. An overview about each of the four topics topic presented in this course is given here below Safe Testing (Prof. Dr. P. D. Grünwald). In traditional hypothesis testing, the sample size or at least the sampling protocol must be determined in advance. In practice, it is desirable to use more flexible stopping rules. Researchers do this even though the methods do not allow for it, leading to false results appearing in the literature. We will outline some exciting recent techniques that can guarantee small error probabilities with 'optional stopping' after all. The underlying mathematics builds upon the insight that, in a casino, you do not expect to get rich, no matter what is your rule for continuing to gamble or going home. Bayesian methods (Dr. M. A. Hadji) Bayesian inference is based on the Bayesian interpretation of probabilities. In Bayesian statistics, we assume the parameter is a random variable which we endow with our prior belief. The data will update our belief about the parameter through the computation of a posterior distribution. It can be difficult to directly access the posterior distribution. In these cases, it is common to use Markov chains Monte Carlo (MCMC) methods. The most common choices of priors in wellknown models will be presented. Some MCMC methods to sample from the posterior will be Survival analysis (Prof. Dr. M. Fiocco) This area of statistics deals with time to event data, whose analysis is complicated not only by the dynamic nature of events occurring in time but also by censoring where some events are not observed directly but it is only known that they fall in some interval or range. Different types of censored and truncated data, non-parametric methods to estimate the survival function and regression models to study the effect of risk factors on survival outcomes will be discussed. Special aspects such as time-dependent covariates and stratification will be introduced. Longitudinal data analysis (Dr. M. Signorelli) Longitudinal data (sometimes called panel data) are data collected through a series of repeated observations of the same subjects over time. Since repeated measurements from the same subject are typically correlated, the analysis of longitudinal data requires statistical methods that do not rely on the usual independence assumptions. In this part of the course, the two most widely used statistical models for longitudinal data - linear mixed models, and generalized linear mixed models – will be discussed. Estimation of the models will be performed using the R software environment. Course objectives The overall aim of the course is to introduce students to four different areas of statistics. By the end of the course, students are expected to have a basic understanding of the topics discussed and to be able to use existing software to apply the methods covered during the course. Mode of instruction Weekly 2 × 45 min of lecture in class, and 2 × 45 min of practical sessions with exercises. Laptop with the statistical package R (http://www.r-project.org) already installed is required for each practical section. Assessment method Four individually written reports (20% each), and a presentation (20%) on a selected topic. The presentations will be held individually or in pairs, depending on the group size. The reports are regarded as practical assignments, and can not be retaken. 
The presentation can be retaken. Lecture material provided in class. Enroll in Usis to obtain the course material and course updates from Brightspace. Tijn Jacobs - t.jacobs.3@umail.leidenuniv.nl
{"url":"https://www.studiegids.universiteitleiden.nl/en/courses/109613/mathematical-statistics-2","timestamp":"2024-11-07T21:43:06Z","content_type":"text/html","content_length":"18476","record_id":"<urn:uuid:87829014-9abc-498d-bf0d-8914842fac5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00362.warc.gz"}
The double-reciprocal equation for competitive inhibition is as follows: 1/V[0] = 1/V[max] + (K[m] • α / V[max]) • 1/[S], where α = 1 + [I]/K[I]. Based on this equation, a double-reciprocal plot should give a straight line, with intercept 1/V[max] and slope K[m] • α / V[max]. Different Lineweaver-Burk plots with varying inhibitor concentrations should, therefore, give different slopes (because α increases with the inhibitor concentration), but the same y-intercept. This means that V[max] at different concentrations of a competitive inhibitor is unchanged; however, the apparent K[m], K[m(app)] (K[m(app)] = K[m] • α), differs. If double-reciprocal plots of 1/V[0] against 1/[S] with varied inhibitor concentrations yield straight lines with different slopes, but with the same y-intercept, the inhibitor is competitive (Figure 1) [1]. Figure 1: Figure a; Lineweaver-Burk plot showing competitive inhibition. Figure b; Slopes of each linear regression plotted against the inhibitor concentration. Calculating K[I], K[m], and V[max] If the inhibitor is competitive, only 1 inhibitor constant needs to be calculated. To calculate the inhibitor constant, several assays with different inhibitor concentrations must be conducted. Each of the resulting datasets should be plotted, and the slopes and y-intercepts can be determined by linear regression. From these fits, V[max] can be calculated as the reciprocal of the y-intercept. If none of the kinetic parameters have been determined, this linear fit alone does not provide enough information to determine K[I] and K[m]. To determine these parameters, it is necessary to plot the "slopes" from the different assays against the inhibitor concentration. This is based on the following equation: Slope[competitive] = K[m] • α / V[max] = K[m]/V[max] + (K[m]/V[max]) • (1/K[I]) • [I]. This plot should therefore also result in a straight line with intercept K[m]/V[max] and slope (K[m]/V[max]) • (1/K[I]). Thus, K[I] can be calculated by dividing the y-intercept by the slope. Because V[max] has already been calculated, K[m] can be calculated from the y-intercept of this fit by multiplying this intercept with V[max].
Steps of calculating the kinetic parameters when using a competitive inhibitor:
1. Prepare Lineweaver-Burk plots of the kinetic data and fit the data using linear regression (1 fit per inhibitor concentration). The y-intercepts of the Lineweaver-Burk plots at different inhibitor concentrations should be the same (or at least close).
2. Take the reciprocal of the y-intercept; this is V[max].
3. Plot the slopes of each of these lines as a function of the inhibitor concentration in a new plot, and fit this plot using linear regression.
4. To calculate K[m], multiply the y-intercept of this line with V[max].
5. To calculate K[I], divide the y-intercept of this line by the slope.
References:
1. Lehninger, Albert L.; Nelson, David L.; Cox, Michael M. (2008). Principles of Biochemistry (5th ed.). New York, NY: W.H. Freeman and Company. ISBN 978-0-7167-7108-1.
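The steps above translate directly into a short script. The sketch below is our own illustration: the rate data are synthetic, generated from assumed parameters V[max] = 100, K[m] = 2, K[I] = 0.5, and numpy.polyfit performs the two rounds of linear regression.

```python
import numpy as np

# Synthetic competitive-inhibition data: v = Vmax*[S] / (Km*(1 + [I]/KI) + [S])
Vmax_true, Km_true, KI_true = 100.0, 2.0, 0.5
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # substrate concentrations
inhibitor_concs = [0.0, 0.5, 1.0]                 # inhibitor concentrations

slopes = []
for I in inhibitor_concs:
    alpha = 1.0 + I / KI_true
    v = Vmax_true * S / (Km_true * alpha + S)
    # Step 1: Lineweaver-Burk fit of 1/v against 1/[S]
    slope, intercept = np.polyfit(1.0 / S, 1.0 / v, 1)
    slopes.append(slope)          # intercept should be 1/Vmax for every [I]

Vmax = 1.0 / intercept            # Step 2: reciprocal of the (shared) y-intercept
# Step 3: secondary plot, slopes against [I]
sec_slope, sec_intercept = np.polyfit(inhibitor_concs, slopes, 1)
Km = sec_intercept * Vmax         # Step 4: y-intercept of secondary fit times Vmax
KI = sec_intercept / sec_slope    # Step 5: y-intercept divided by slope
print(Vmax, Km, KI)               # recovers ~100, ~2, ~0.5
```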
{"url":"https://theory.labster.com/competitive_inhibition/","timestamp":"2024-11-03T09:47:57Z","content_type":"text/html","content_length":"48806","record_id":"<urn:uuid:c1f7fb6c-cefb-4cc3-b89c-c5c64f03ae15>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00124.warc.gz"}
Calculate the stiffness, applied force, or extension of an elastic body using this online stiffness calculator. Stiffness refers to the resistance of an elastic body to deformation when an external force is applied. It is a measure of how much the body displaces under the force along the same degree of freedom. In rotational systems, stiffness is the ratio of applied moment (torque) to the resulting rotation. Flexibility or pliability is the inverse of stiffness. This online stiffness calculator can help you determine the stiffness, applied force, or extension by providing the other known values. • k = F / δ • F = k x δ • δ = F / k • k = Stiffness • F = Applied Force • δ = Extension
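These three rearrangements of the same relation can be captured in a few lines of Python (an illustrative sketch with our own example numbers, not the calculator's actual implementation):

```python
def stiffness(force_n, extension_m):
    """k = F / delta, in N/m."""
    return force_n / extension_m

def force(stiffness_n_per_m, extension_m):
    """F = k * delta, in N."""
    return stiffness_n_per_m * extension_m

def extension(force_n, stiffness_n_per_m):
    """delta = F / k, in m."""
    return force_n / stiffness_n_per_m

print(stiffness(200.0, 0.004))    # 50000.0 N/m
print(force(50000.0, 0.004))      # 200.0 N
print(extension(200.0, 50000.0))  # 0.004 m
```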
{"url":"https://calchub.xyz/stiffness/","timestamp":"2024-11-06T10:47:18Z","content_type":"text/html","content_length":"40255","record_id":"<urn:uuid:0f8604b0-902d-4e9a-b35c-9d6315bea2b5>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00076.warc.gz"}
Relation Symbol In Math Example: Equals! Relation Symbol in Math Example: Equals! Relation symbols in math are crucial for understanding mathematical relationships. They include equals (=), not equals (≠), greater than (>), less than (<), greater than or equal to (≥), and less than or equal to (≤). Relation symbols are used to compare numbers, expressions, or functions. They’re integral in equations and set theory. Here’s a brief overview: Equals (=): Signifies that two expressions are equivalent. Not equals (≠): Indicates that two expressions are not the same. Greater than (>): Shows that one number is larger than another. Less than (<): Conveys that one number is smaller than another. Greater than or equal to (≥): Means one number is either greater than or equal to another. Less than or equal to (≤): Implies one number is either less than or equal to another. Example: If x = 3, then x + 2 > 4 and x – 1 < 3 Relation symbols form the backbone of equations and inequalities, facilitating the expression of mathematical ideas succinctly. Key Takeaway Relation symbols are important in mathematics for conveying the relationship between mathematical quantities or expressions. They provide a concise and standardized way to express mathematical concepts and enable clear communication and understanding among mathematicians, scientists, engineers, and students. Relation symbols such as equality symbols (=) and inequality symbols (<, >, ≤, ≥) are used to indicate the relationships between quantities, whether they are exactly the same or one is greater or lesser than the other. Function notation (f(x)) allows for the expression of relationships between variables and the use of relation symbols to define the nature of the relationship, making it crucial for interpreting and analyzing mathematical concepts. Understanding Relation Symbols in Mathematics: A Comprehensive Guide Symbol Name Example Meaning = Equals 5 = 5 5 is equal to 5 ≠ Not equals 5 ≠ 4 5 is not equal to 4 > Greater than 6 > 2 6 is greater than 2 < Less than 3 < 7 3 is less than 7 ≥ Greater than or equal 5 ≥ 5 5 is greater than or equal to 5 ≤ Less than or equal 4 ≤ 5 4 is less than or equal to 5 Explore the meanings and examples of relation symbols in math, including equals, not equals, greater than, and less than, to master mathematical expressions and comparisons. Importance of Relation Symbols Why are relation symbols important in mathematical notation? Relation symbols play a crucial role in conveying the relationship between mathematical quantities or expressions. These symbols, such as “=”, “<”, “>”, and “≠”, are essential for indicating equality, inequality, and other relationships within mathematical expressions and equations. They provide a concise and standardized way to express mathematical concepts, allowing for clear communication and understanding among mathematicians, scientists, engineers, and students. By using relation symbols, mathematicians can represent complex relationships and make comparisons between different quantities with precision and clarity. Furthermore, these symbols are fundamental for establishing logical connections and conditions in mathematical arguments and proofs. In essence, relation symbols are indispensable tools for expressing and analyzing mathematical relationships, making them a vital component of mathematical notation. Types of Relation Symbols When exploring the types of relation symbols in mathematics, it is important to distinguish between equality and inequality signs. 
These symbols play a crucial role in expressing relationships between quantities. Additionally, understanding function notation is essential for representing and defining relationships between variables. Equality Vs. Inequality Signs The distinction between equality and inequality signs is fundamental in understanding relation symbols in mathematics. Equality, denoted by “=”, signifies that two quantities are exactly the same, while inequality signs (<, >, ≤, ≥) indicate a relationship where one quantity is lesser, greater, lesser than or equal to, or greater than or equal to the other. These symbols are crucial in expressing relationships between numbers or variables. The table below illustrates the usage and meaning of these symbols: Symbol Meaning = Equal ≠ Not equal < Less than > Greater than ≤ Less than or equal to ≥ Greater than or equal to The usage and meaning of these symbols Understanding these symbols is fundamental in solving equations and making comparisons in mathematics. Function Notation Explanation An understanding of function notation is essential for grasping the diverse types of relation symbols used in mathematics. Function notation provides a convenient way to represent the input and output of a function. It is denoted as f(x), where ‘f’ represents the function and ‘x’ is the input. This notation allows for the expression of relationships between variables, making it easier to comprehend and work with complex mathematical concepts. In function notation, the use of symbols like ‘=’, ‘<’, ‘>’, ‘≤’, and ‘≥’ helps to define the nature of the relationship between two quantities. For example, in f(x) = 2x + 1, the ‘=’ symbol indicates that the function f(x) yields an output equal to 2x + 1. Understanding function notation is crucial for interpreting and analyzing the various types of relation symbols encountered in mathematical contexts. Equality and Inequality Relations In mathematics, equality and inequality relations are fundamental concepts used to compare the relative values of two quantities. 1. Equality (=) signifies that two quantities are exactly the same, emphasizing balance and fairness. 2. Less than (<) and greater than (>) inequalities denote a comparison of relative values, highlighting the concepts of smaller and larger. 3. Less than or equal to (≤) and greater than or equal to (≥) represent inclusivity, conveying the idea of being open to and accepting of different values. 4. Not equal to (≠) symbolizes diversity and individuality, acknowledging the existence of differences between quantities. Understanding equality and inequality relations is crucial for solving equations, making comparisons, and interpreting mathematical statements. These concepts form the basis for expressing relationships between numbers and quantities, allowing for precise mathematical reasoning and analysis. Relation Symbols in Functions Now, let’s turn our attention to the use of relation symbols in functions. In the context of functions, relation symbols play a crucial role in expressing relationships between elements. Understanding the connection between relation symbols and functions is essential for grasping the fundamental concepts of mathematics. Relation Symbol Examples Let’s delve into the realm of relation symbol examples, particularly focusing on the utilization of relation symbols within functions. When working with relation symbols in functions, it’s important to understand their significance in expressing mathematical relationships. 
Here are a few examples to help clarify their usage: 1. Equality ( = ): This symbol signifies that two values are exactly the same, emphasizing balance and equilibrium in mathematical expressions. 2. Less Than ( < ): It denotes comparison and relative ordering, often evoking a sense of progression and directionality in mathematical relationships. 3. Greater Than ( > ): Similar to the less than symbol, it represents comparison and relative ordering, but in the opposite direction, conveying concepts of magnitude and expansion. 4. Not Equal To ( ≠ ): This symbol highlights disparity and difference, eliciting a sense of contrast and distinction within mathematical contexts. Functions and Relations The application of relation symbols in functions plays a crucial role in defining and understanding mathematical relationships. In the context of functions and relations, relation symbols are used to express how elements in one set are related to elements in another set. This relationship can be represented through various symbols, each with its own meaning and interpretation. Relation Symbol Meaning = Equal ≠ Not Equal < Less than > Greater than ≤ Less than or equal to ≥ Greater than or equal to Relation Symbol and Their Meanings Understanding these relation symbols is fundamental in comprehending the behavior and properties of functions, enabling mathematicians to analyze and solve real-world problems with precision and Properties of Relation Symbols The properties of relation symbols play a crucial role in clarifying the nature and behavior of mathematical relationships. Understanding these properties is essential for accurately interpreting and manipulating mathematical expressions. Here are some key properties of relation symbols: 1. Reflexivity: A relation R on a set A is reflexive if every element in A is related to itself. This property helps us understand self-related elements in a set. 2. Symmetry: A relation R on a set A is symmetric if for all a and b in A, if a is related to b, then b is related to a. This property aids in understanding the bidirectional nature of 3. Transitivity: A relation R on a set A is transitive if for all a, b, and c in A, if a is related to b and b is related to c, then a is related to c. This property helps in understanding the chaining of relationships. 4. Antisymmetry: A relation R on a set A is antisymmetric if for all a and b in A, if a is related to b and a is not equal to b, then b is not related to a. This property allows us to understand the absence of bidirectional relationships. Using Relation Symbols in Equations Within the context of equations, relation symbols serve to denote the connection or comparison between mathematical entities, thereby facilitating the expression of mathematical relationships. These symbols, such as “=”, “<”, “>”, “<=”, “>=”, and “≠”, play a crucial role in articulating the equality, inequality, and other relationships between quantities in mathematical expressions. When using relation symbols in equations, it is important to understand their implications. For example, the “>” symbol is used to indicate that one quantity is greater than another, while the “≠” symbol is used to show that two quantities are not equal. These symbols are essential when comparing unequal values in equations and mathematical statements, as they allow for precise communication of the relationships between different numbers and variables. 
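The relation properties listed earlier in this section (reflexivity, symmetry, transitivity, antisymmetry) can be checked mechanically for any finite relation. Here is a small illustrative Python sketch (our own example, not from the article) that tests them for the relation "≤" on the set {1, 2, 3}:

```python
from itertools import product

A = {1, 2, 3}
R = {(a, b) for a, b in product(A, A) if a <= b}   # the relation "<=" on A, as ordered pairs

is_reflexive     = all((a, a) in R for a in A)
is_symmetric     = all((b, a) in R for (a, b) in R)
is_transitive    = all((a, c) in R
                       for (a, b) in R for (b2, c) in R if b == b2)
is_antisymmetric = all(a == b for (a, b) in R if (b, a) in R)

print(is_reflexive, is_symmetric, is_transitive, is_antisymmetric)
# True False True True  -- "<=" is reflexive, transitive and antisymmetric, but not symmetric
```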
Understanding and utilizing these symbols correctly is fundamental for accurately expressing mathematical concepts and solving equations. For instance, the symbol “=” signifies equality, indicating that the expressions on either side of the equation are equivalent. In contrast, inequality symbols like “<” and “>” denote a comparison between the quantities, showcasing which is smaller or larger. These symbols are fundamental in formulating and solving equations, allowing for the representation and manipulation of mathematical relationships with precision and clarity. Relation Symbols in Set Theory Discussing relation symbols in set theory involves examining the application of these symbols to establish connections and comparisons between sets of mathematical elements. In set theory, relation symbols play a crucial role in defining the relationships between sets, which helps in understanding the interactions and dependencies within different sets. Here are some emotional responses that often arise when delving into relation symbols in set theory: 1. Intrigue: The exploration of how sets relate to each other can spark curiosity and fascination. 2. Clarity: Understanding relation symbols can bring a sense of clarity and organization to complex mathematical concepts. 3. Frustration: The intricate nature of relation symbols may sometimes lead to frustration, especially when grappling with complex relationships between sets. 4. Satisfaction: Mastering relation symbols can bring a sense of accomplishment and satisfaction. Understanding these emotional responses can aid in appreciating the significance of relation symbols in set theory. This understanding provides a smooth transition into the subsequent section about ‘real-life examples of relation symbols’. Real-life Examples of Relation Symbols When examining real-life examples of relation symbols in mathematics, it is important to consider their practical applications and how they provide insight into various relationships and comparisons. Relation Symbol Example Meaning < 5 < 10 5 is less than 10 > 8 > 3 8 is greater than 3 = 4 + 3 = 7 4 plus 3 is equal to 7 ≤ x ≤ 10 x is less than or equal to 10 Real-life Examples of Relation Symbols These real-life examples show how relation symbols are used to express relationships such as less than, greater than, equal to, and less than or equal to. Understanding these symbols is crucial for interpreting mathematical statements and solving various problems in fields such as science, engineering, and economics. In conclusion, relation symbols play a crucial role in mathematics, particularly in representing relationships between mathematical objects. Understanding and using relation symbols correctly is essential for solving equations, working with functions, and studying set theory. In fact, according to a recent study, over 80% of math students struggle with using relation symbols accurately, highlighting the importance of mastering this fundamental concept in mathematics. Leave a Reply Cancel reply
{"url":"https://symbolismdesk.com/relation-symbol-in-math-example/","timestamp":"2024-11-07T15:40:42Z","content_type":"text/html","content_length":"144393","record_id":"<urn:uuid:b6401bf3-54a0-4e08-a926-4a889eed82ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00142.warc.gz"}
Precession and the pentagram

The shamballa shape is made up of 60 triangles, therefore an angular total of:

60 x 180° = 10800°

Those triangles are in 12 groups which are in 5 subdivisions. If you take the total angles, multiplied by the 12 groups and divided by the 5 subdivisions:

10800 x 12 / 5 = 25920

This number 25920 will be familiar to those who know of the Precession of the Equinoxes, or from my article http://grahamhancock.com/watta1/

“The precession of the equinoxes is the motion of the equinoxes along the ecliptic (the plane of Earth’s orbit) caused by the cyclic precession of Earth’s axis of rotation. This motion is a cyclic ‘wobbling’ that makes the stars appear to shift in a systematic way through each of the 12 zodiacal signs. One precessional year is approximately 25,920 solar years.”

Could there be a connection between this shape and the precessional year? The same number may be read from a pentagram.

Interesting in another way is that the lengths of the sides have a Phi, and therefore golden mean, relation (Phi or φ ≈ 1.618). The length of a side of the inner pentagon, to the length of the triangle side, is in ratio 1 : 1.618. And the length of the triangle side, to the length of a side of the inner pentagon plus the triangle side, is also in ratio 1 : 1.618.

I have come to the conclusion that the pentagram is not a satanic symbol; it is rather a symbol for the Earth and therefore used in satanic ritual to control the Earth and its inhabitants.
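The arithmetic above, and the golden-ratio claim about the pentagram, are easy to check. The short sketch below is an addition for illustration, not part of the original page; it verifies the 60 x 180° x 12 / 5 computation and confirms numerically that the diagonal-to-side ratio of a regular pentagon, 2 cos 36°, equals φ, which is what makes the quoted 1 : 1.618 ratios hold.

```python
import math

# Angle sum of the 60 triangles, then scaled by the 12 groups and 5 subdivisions.
angle_total = 60 * 180              # 10800
precession = angle_total * 12 / 5   # 25920.0, the precessional figure quoted above
print(angle_total, precession)

# The golden ratio phi and its defining identity phi**2 = phi + 1.
phi = (1 + math.sqrt(5)) / 2
print(round(phi, 3), math.isclose(phi**2, phi + 1))

# In a regular pentagon the diagonal-to-side ratio is 2*cos(36 degrees) = phi,
# which is where the 1 : 1.618 ratios between the pentagram's segments come from.
print(math.isclose(2 * math.cos(math.radians(36)), phi))

# If the inner pentagon side is 1 and the triangle side is phi, then
# (inner side + triangle side) / triangle side is again phi.
inner_side, triangle_side = 1.0, phi
print(math.isclose((inner_side + triangle_side) / triangle_side, phi))
```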
{"url":"http://www.tescera.com/precession-and-the-pentagram/","timestamp":"2024-11-13T06:29:48Z","content_type":"text/html","content_length":"35573","record_id":"<urn:uuid:ecb92c09-d762-45af-b1bb-5dd8019c61c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00641.warc.gz"}
The Role of Fall Factor in Choosing a Fall Protection System | XSPlatforms

In an earlier blog, we described the importance of using a personal energy absorber (PEA) in a fall protection system. We did this by explaining the maximum arrest force (MAF) and maximum anchor load. Now, we want to highlight another important factor that is needed to determine the forces that are released on a user’s body when a fall is arrested. This is referred to as the fall factor.

What is the fall factor?

The fall factor is the ratio between the height of the fall and the length of rope that is available to absorb that fall. The value of the fall factor varies between 0 and 2 and is calculated by dividing the height of the fall by the length of the rope. The height of a fall is measured from the point where a person falls to the point that the fall is stopped. The lower the value of the fall factor, the lower the impact forces applied to the body of the person and the ‘safer’ the fall. On the other hand, the higher the value, the greater the impact forces on the body will be and the more likely it is that serious injuries are sustained. Note that the fall factor is a way to indicate the severity of a fall, not an exact way to measure impact forces.

To get a better picture of what the fall factor is and how it’s calculated, we will provide some basic examples below, based on climbing images. As mentioned in the paragraph above, the fall factor is calculated following the equation:

fall factor = height of the fall / length of the rope

Note that factors like elasticity of the rope and hitting objects are taken out of the equation.

Example 1

The anchor point of the climber is placed overhead and the rope is pulled tight during the climb. In the example, the rope length is 2 meters (6,5 ft). When the climber loses grip and falls, the fall distance is 0 because of the taut rope. This results in the following equation:

fall factor = 0 m / 2 m = 0

In this case, the impact force on the user’s body is minimal, which is a safe value for the climber. The person would sustain some bruises in most cases.

Example 2

In this example, the total rope length is 2 meters (6,5 ft) as well. But now, the person climbs higher up to the same height as the anchor point. If the climber should fall, the distance of that fall will be 2 meters (6,5 ft):

fall factor = 2 m / 2 m = 1

A fall factor of 1 is reached here, which means that impact forces come into play that can possibly injure the climber, like breakage of limbs or a concussion.

Example 3

The highest amount of force that is released on the climber’s body is reached when the climber climbs to 2 meters (6,5 ft) above the anchor point with a rope of 2 meters (6,5 ft). In case of a fall, the climber will fall a total of 4 meters (13 ft), resulting in the maximum fall factor:

fall factor = 4 m / 2 m = 2

The impact forces on the body are dangerously high when a fall factor of 2 is reached and the climber will sustain serious and possibly life-threatening injuries because of it.

In the table below, you find a schematic overview of the fall factors as described above:

Example   Rope length     Fall distance   Fall factor
1         2 m (6,5 ft)    0 m             0
2         2 m (6,5 ft)    2 m (6,5 ft)    1
3         2 m (6,5 ft)    4 m (13 ft)     2

Fall factor and fall protection systems

In practice, the examples given in the previous section aren’t representative of occupational situations where work at height is performed and fall protection is needed. In this section, we will put the examples into practice.

Fall factor 0: Example with an overhead fall protection system

When workers need to work at height and a ceiling or another structural element is located above them, the use of an overhead fall protection system is recommended.
Especially in combination with an automatic fall arrest device (also known as a retractable device). This retractable device acts like a seatbelt: it keeps the lanyard taut at all times and will block immediately when a sudden acceleration occurs (a fall). The image on the right shows that the fall distance is very minimal. Only the extension of the energy absorber will add to the fall height (in case a retractable device has a built-in energy absorber, the rope will not elongate). The lanyard will be the length of the distance between the attachment point on the harness and the anchor point, which is a horizontal lifeline in most cases. If we do a basic calculation with the anchor point 1 meter (3,2 ft) above the attachment point of the harness, the fall factor would be:

fall factor = 0 m / 1 m = 0

Fall factor 1: Example with a wall-mounted lifeline system

If the anchor point is located at waist height and the attachment point of the lanyard is located at the back of the worker, the fall factor will be around 1. In the example on the left, an overhead system is not possible, so the anchor point (a horizontal lifeline) is mounted at waist height on a wall. In this situation, the lanyard is 2 meters (6,5 ft). When a fall occurs, the user will fall approximately 2 meters (6,5 ft) as well. The protruding working area will somewhat decrease the fall distance, but the deflection of the lifeline will add to that again. The impact forces will be quite high in this situation; that’s why a personal energy absorber (PEA) needs to be used, which decreases the forces and decreases the chance of (serious) injuries caused by those forces.

Fall factor 2: Example with a horizontal lifeline system

Horizontal lifeline systems and other systems where the anchor points are located at foot level are generally installed on roofs. When a user falls, they fall the distance of the attachment point to the height of the anchor point (2 meters / 6,5 ft) plus the distance of the lanyard below the anchor point (2 meters / 6,5 ft). In the example on the right, this means a fall distance of approximately 4 meters. When putting this into the equation, we get a fall factor of 2:

fall factor = 4 m / 2 m = 2

The impact forces that are released on the body will be very high. That’s why a personal energy absorber has to be used in these systems. This will add to the fall height by 0,75 meter (2,5 ft), but it will significantly reduce the impact forces and decrease the risk of serious injuries.

The most appropriate fall protection configuration

The examples given do not entirely cover real-life situations, because elements like rope flexibility, use of an energy absorber, deflection of the wire rope, distance of the anchor point to the roof edge etc. will affect the impact forces as well. Also, the distance to the next level below the working area is not determined (fall clearance). Nevertheless, keeping the above in mind will help in choosing the most appropriate configuration for your fall protection system: whenever possible, strive to minimize the fall factor to 0.

ODIN calculation tool

To help users, as well as installers of fall protection systems, XSPlatforms has developed a tool to determine if a lifeline system complies with local standards and is safe to use, taking all factors into account, including the impact forces. Download the leaflet with all information and added value of ODIN below.

Also read our blog about calculating the fall clearance of a fall protection solution »
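For readers who want to experiment with the numbers, the short sketch below reproduces the three climbing examples from this article. It is only an illustration of the ratio defined above; it is not XSPlatforms’ ODIN tool, and it ignores rope elasticity, energy-absorber extension and lifeline deflection, just as the simplified examples do.

```python
def fall_factor(fall_height_m, rope_length_m):
    """Fall factor = fall height / available rope (lanyard) length."""
    if rope_length_m <= 0:
        raise ValueError("rope length must be positive")
    return fall_height_m / rope_length_m

# The three climbing examples above: 2 m of rope in each case.
examples = {
    "Example 1 (taut overhead rope)":    (0.0, 2.0),  # fall factor 0
    "Example 2 (anchor at same height)": (2.0, 2.0),  # fall factor 1
    "Example 3 (anchor 2 m below)":      (4.0, 2.0),  # fall factor 2
}
for name, (fall_height, rope_length) in examples.items():
    print(f"{name}: fall factor = {fall_factor(fall_height, rope_length):.1f}")
```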
{"url":"https://fallprotectionxs.com/blog-the-role-of-fall-factor-in-choosing-a-fall-protection-system/","timestamp":"2024-11-14T08:16:16Z","content_type":"text/html","content_length":"178355","record_id":"<urn:uuid:2fd0041b-7f72-4919-b7a0-64bcb06553b8>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00480.warc.gz"}
Reflections of a Math Coach: Five Questions to Consider | Heinemann

As an Elementary Math Coach, you have a critical role in moving math instruction forward within your school or district. Building teacher capacity has many challenges, and the responsibility can be daunting. You may be torn between many facets of the job as you work to deepen some teachers’ understanding of their math standards, increase other teachers’ knowledge of math content, expand teachers’ repertoire of instructional strategies, and transform teachers’ beliefs regarding what is important in mathematics. How can you meet the varied needs of elementary math teachers? What are some critical actions that support your goals? Consider the following:

1. Do I capitalize on the use of planning time to nurture a deep understanding of math standards? While textbooks offer a wealth of activities, they often lead teachers into a turn-to-the-next-page teaching approach. Collaborative planning, guided by a math coach, begins with the identification of specific learning outcomes and involves discussions about standards progressions and connections. The planning focuses on standards. When teachers have time to think about what came before and what comes after their grade-level standards, and are able to discuss the standards in depth, they are better able to select tasks that link to the standards. While we explore standards in other professional learning formats (faculty meetings, PLCs…), collaborative planning offers the opportunity to explore and unpack the standards in real time throughout the year and links the standards discussions to classroom planning.

2. Do I disseminate math tasks or help teachers become experts at selecting and designing meaningful math tasks? Math tasks are everywhere, but not all tasks are equal. It is not enough to locate a task for teaching a skill, teachers must develop the ability to evaluate tasks and select ones that are appropriate to the standard and to their students. When selecting a task to introduce a math concept and build mathematical understanding, would a paper/pencil task be the most appropriate or might students gain more understanding through tasks that include context, models, and math talk? Discussions about task selection build teacher expertise and empower teachers to choose or design meaningful tasks rather than simply turning pages.

3. Do I ensure that students’ needs are a part of the planning process by building teacher expertise in identifying strengths and needs and determining next steps? Simply following someone else’s series of pre-planned lessons has not proven to be a successful teaching plan for many of our students. When teachers listen to students’ conversations, observe them at work, and examine their written responses, they are more able to adjust their teaching plans to move students toward proficiency.
Rather than reviewing student work to assign a grade, teachers benefit from discussions in which they review student work to identify students’ strengths and needs, and then use their insights to determine reasonable next steps. Through work with a math coach, teachers gain experience and confidence discussing and evaluating student work and brainstorming appropriate next steps. 4. Do I structure ongoing professional learning? Professional learning happens over time. A math coach provides ongoing support by guiding the learning every step of the way. Whether through collaborative planning, demonstration lessons, co-teaching, or informal discussions, coaches step in to discuss the math content, clarify the math standard, or suggest teaching options or next steps. Rather than single-session professional learning opportunities, math coaches recognize that growth happens over time and with support. 5. Am I a reflective practitioner and do I model that for the teachers with whom I work? Reflective teachers grow as professionals. They are open to new teaching ideas, they take risks, and they evaluate instructional strategies. So, how do we build reflective practitioners? How do we nurture the attitude that we should all be growing and learning? How do we build a learning community within our schools and districts? Teachers, schools, and districts benefit from math coaches who exemplify professional learning in their own lives, who share their enthusiasm for that learning, who take risks to expand their skills and encourage teachers to do the same, and who pose ongoing and supportive opportunities for teachers to continue their own learning. As an Elementary Math Coach, you transform math teaching through a focus on standards, content, instructional strategies, and dispositions. By supporting and expanding teacher capacity on a daily basis, you have the potential to transform classrooms into places in which both students and teachers grow as learners. has decades of experience supporting teachers in making sense of mathematics and effectively shifting how they teach. As a former elementary teacher, reading specialist, and math coach, Sue knows what it’s like in the classroom and her background is evident throughout her work as she unpacks best practices in a clear, practical, and upbeat way. She is the lead author of Math in Practice, a new grade-by-grade K-5 professional learning resource. She is also coauthor of the bestselling Putting the Practices Into Action, Mastering the Basic Math Facts in Addition and Subtraction, and Mastering the Basic Math Facts in Multiplication and Division. She served as editor of Heinemann’s popular Math Process Standards series and also wrote the bestselling Now I Get It. Sue is a nationally known speaker and education consultant who directs Quality Teacher Development, an organization committed to providing outstanding math professional development for schools and districts across the country. Watch an introductory Math in Practice webinar, hosted by Sue. Click here to watch Sue talk about the links between reading and math. Connect with Sue on Twitter @SueOConnellMath
{"url":"https://blog.heinemann.com/math-coach-five-questions-oconnell","timestamp":"2024-11-03T16:43:11Z","content_type":"text/html","content_length":"82813","record_id":"<urn:uuid:5cc16971-c53d-4c75-8c8d-d9e09bfc7bf0>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00642.warc.gz"}
An exploration of the life half of math and the math half of life. The Other Half, a podcast from ACMEScience.com, is an exploration of the other half of a bunch of things. First, Anna and Annie want to take you on a tour of the other half of math — the fun half you might be missing when you learn math in school, the half that helps you make sense of your own life. And on the flip side of that equation, they want to explore the other half of life — the half of day-to-day social scenarios that can be better understood by thinking like a mathematician. Lastly, Anna and Annie — as women of science — represent the other half of people. More than half of the humans on earth are female, but that parity isn’t reflected in the world of math and science. No matter what half you represent, listen in to Anna and Annie at The Other Half.
{"url":"http://www.theotherhalf.acmescience.com/","timestamp":"2024-11-07T00:30:01Z","content_type":"text/html","content_length":"81176","record_id":"<urn:uuid:9fe3ef92-f8de-46d4-ad46-2a56d2fa1d08>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00618.warc.gz"}
Chapter 3: Probability 3.3 You might begin with computing the median for all scores, irrespective of group, and then coding all of the scores below the median as 0 and all of the scores above the median as 1. Then create a 2 × 2 table. The columns can be labelled group 1 and group 2. The rows can be labelled 0 and 1. For score (being above average) to be independent of group, the percentage of 1s will need to be the same in the two columns (groups). 3.4 The only way not to toss at least one head after three tosses is to toss three tails. We assume that the probability of tossing a head is 0.5. Thus, the probability of tossing a tail is 0.5. The probability of tossing all three tails is 0.5(0.5)(0.5) or 0.125. If we subtract the probability of tossing three tails from 1.0, then we will have the probability of tossing at least one head. 3.5 Again, the key is to know how many possible outcomes there are on one roll of a pair of dice. You might start with rolling a 1 on the first die and a 1 on the second die (1,1), then (1,2), then (1,3), and so on through to (6,6). As we saw, there is an easier method for determining the number of such combinations: n^r. The method will not enumerate the outcomes, but it does convey the number of possible outcomes. In this case, n is the number of possible outcomes on one die (6) and r is the number of rolls of the dice (2). Thus, when you try to enumerate all possible outcomes for one roll of two dice, you will find 36 possible outcomes. The probability of any one of the outcomes is 1/36 or 0.0278. The probability of rolling one of the outcomes other than a pair of ‘6’s is 0.0278 (35) or 0.973. (Because there is one way to roll two ‘6’s, 0.0278 is multiplied by 35 rather than 36.) Assuming one roll of the dice is independent of the others, the three probabilities can be multiplied to determine the probability of not rolling at least one pair of ‘6’s after three rolls of the dice. This can be transformed into exponent form (0.0.973)^3. Thus the probability of not rolling at least one pair of ‘6’s after three rolls is 0.921. To answer our question the problem needs to be turned back around. The probability of not rolling at least one pair of ‘6’s is subtracted from 1.000. Thus, the probability of rolling at least one pair of ‘6’s after three rolls of a die is (1.000 − 0.921) or 0.079. 3.6 What is the probability of being infected with Skewed-Leptokurtosis, if we find that 10% of the population has the dreaded disease, 75% of those with the disease will test positive, and if 1.8 % of those who do not have the disease will test positive? (answer = 0.63) What is the probability of being infected with skewed leptokurtosis, if we find that 4% of the population has the dreaded disease, 90% of those with the disease will test positive, and if 1.8 % of those who do not have the disease will test positive? (Answer = 0.68) The interactive demonstration allows the student to modify the sensitivity and specificity as well as the base rate. It quickly becomes clear that if sensitivity and specificity are held constant, the base rate greatly influences the probability of having the disorder. Lower base rates (rarer disorders) result in lower probabilities of having the disorder. Conversely, lower sensitivity (false positives) results in higher probabilities of having the disorder. Once the student is familiar enough with how the matrix behaves, a few addition problems that require the student to work backwards are posed. 
For example, if the probability of having the disease is 0.95 if you test positive, and if the sensitivity is 0.80, then what is the specificity? Answers can be checked by working in the typical order, from information to final probability. 3.8 Can you create two skewed distributions that will give the appearance of normality when they are combined? Use SPSS to create the two separate data sets and view the resulting histograms. Begin by combining the data set. Next change the data sets so that both are positively or negatively skewed. Finally, reverse the direction of the skewness of one of the data sets. Then the answer is revealed when two distributions that are equally skewed, but in different directions, are combined. 3.9 For the purpose of assigning marks, the distribution that maximizes fairness is one that is symmetrically distributed and where the scores are spread from nearly minimum performance to maximum performance. The spread insures that one or two lucky or unlucky guesses on the part of a student will change his or her position relative to the other students. This is one way to describe the reliability of the scores. When the test is negatively skewed (ceiling effect), too many scores are clustered at the top. Small variations in performance can result in meaningful changes in a student’s relative position in a class. This is unfair. Some students deserving of top marks might not appear to be at the top of the distribution. When the test is positively skewed (basement effect), too many scores are clustered at the bottom of the distribution. Small accidents in performance can result in meaningful changes as well. This also is unfair. Some students deserving passing marks might accidently appear to be less knowledgeable than they are. In both cases of skewness, to the extent that the skewness reduces the reliability, the less fair is the test. 3.10 Hint: We can avoid any trial and error by beginning with a standard normal distribution. It is easy to create a distribution with a mean of 30. Begin with three scores: −1, 0, 1. The mean and the variance of these scores are 0 and 1, respectively. Using rules that we have covered earlier, we can change the variance from 1 to the desired 36 by multiplying all three scores by the square root of the desired variance. Because we desire a variance of 36, we multiply the three scores by 6: −6, 0, 6. The mean is unchanged but the variance is now 36. The variance of 36 is retained, but we may move the mean to 30 by adding 30 to all of the scores: 24, 30, 36. It is then a straightforward matter to change the distribution to one with a mean of 50 and a variance of 9. We add 20 to all three scores to create a mean of 50: 44, 50, 56. To obtain a variance of 9 we can reverse the process used to create the variance of 36. The resulting scores are 47, 50, and 53. Try the linear transformations with ns of 5 and 7.
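Since several of these exercises are pure computation, a short sketch may help readers check their own answers. It is an illustration added here, not part of the text, and the function names are made up for the example. The first part applies Bayes’ rule to the second version of exercise 3.6 (4% base rate, 90% sensitivity, 1.8% false positives) and reproduces the stated 0.68. The second part carries out the linear transformation described in the hint for 3.10, using the sample variance (n - 1 denominator), which is what the hint’s numbers assume.

```python
import statistics as st

def p_disease_given_positive(base_rate, sensitivity, false_positive_rate):
    """Bayes' rule: P(D | +) = P(+|D)P(D) / [P(+|D)P(D) + P(+|not D)P(not D)]."""
    true_pos = sensitivity * base_rate
    false_pos = false_positive_rate * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

# Second version of exercise 3.6: 4% base rate, 90% sensitivity, 1.8% false positives.
print(round(p_disease_given_positive(0.04, 0.90, 0.018), 2))   # 0.68

def rescale(scores, target_mean, target_var):
    """Linear transformation to a chosen mean and (sample) variance, as in 3.10."""
    m, v = st.mean(scores), st.variance(scores)   # statistics.variance uses n - 1
    return [target_mean + (x - m) * (target_var / v) ** 0.5 for x in scores]

scores = [-1, 0, 1]                    # mean 0, sample variance 1
step1 = rescale(scores, 30, 36)        # [24.0, 30.0, 36.0]
step2 = rescale(step1, 50, 9)          # [47.0, 50.0, 53.0]
print(step1, step2)
```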
{"url":"https://study.sagepub.com/bors/student-resources/in-chapter-review-challenge-question-answers/chapter-3","timestamp":"2024-11-08T02:00:57Z","content_type":"text/html","content_length":"67403","record_id":"<urn:uuid:06e967da-0792-481f-9619-f272dfab6c6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00218.warc.gz"}
What is 20 Percent of 40? - maglysisWhat is 20 Percent of 40? What is 20 Percent of 40? Your 20 percent of 40 may help you obtain a discount when purchasing. A 20% discount would result in paying $8 less for an original item priced at $40. For an accurate answer, it is necessary to calculate a percentage. To do this, divide the number by 100 before multiplying it by its value. Mathematically speaking, percentage is the ratio between an entire number and its unit, such as one dollar or a unit of measurement. The calculation involves dividing that whole number by that unit measurement and then multiplying it by 100 to determine an item’s final or sale price. If its original price were $40, then 20% off would equate to $8; divide the actual cost by a percentage, then multiply this resultant number by 100 for an accurate consequent sale price. The answer to 20 Percent of 40 is 8, which can be calculated by multiplying 0.20 times 40 or using a calculator to determine the percentage of an initial price. Calculators are handy if you’re shopping online and wish to know the discounted amount, helping prevent overpayment for items advertised with “20% Off.” To help make this calculation more transparent for consumers who may find “20% Off” confusing or intimidating, here’s an explanation that will provide clarity. At some point in your life, you may need to know what 20 percent of 40 is in euros. This information can be beneficial when shopping online and determining whether an item you’re considering purchasing is worth its cost; additionally, using this number allows you to calculate savings after applying discounts. First, multiply the original value with your percentage reduction and get its final price using a calculator. For instance, if an item that costs 200 euros is reduced by 20% to 160 euros. Another method for finding answers is dividing a percentage by its main number, which may take more time and be more accurate. Below are steps to help find your answer: the blue section shows 20 percent equaling 40, while the red area represents 80 percent, or 160. Alternatively, divide by a number and multiply it by 100 to get the final value. Japanese yen The Japanese yen is its national currency. The Bank of Japan oversees its monetary policy by issuing banknotes, managing and storing treasury funds, providing deposit and loan services to financial institutions, and implementing policies that foster the sustainable development of Japan’s economy. In 2021, Japan had an M1 money stock totaling 969 trillion yen and an M2 money stock totaling 1,163 trillion yen, respectively. Household financial assets totaled 4,025 trillion yen; households held over half as cash or deposits for more comprehensive data regarding financial transactions and investments from Japan’s Bank of Japan Statistics. Japan adopted aggressive monetary easing in response to a global financial crisis caused by rising oil and raw material prices and Lehman Brothers’ bankruptcy in America. This resulted in the yen’s depreciation against the U.S. dollar, further intensified by surging fuel costs and global recession risks; moreover, widening gaps between Japanese and U.S. interest rates are of grave concern. British pounds Are you curious how much 20 percent of 40 in British pounds is? This article thoroughly explains how to calculate it using methods similar to those used when computing percent values in other currencies, such as dollars, euros, Japanese yen, Chinese yuan, pesos, and rupees. 
In addition, mathematicians often employ various tricks to solve complex mathematical problems more efficiently. 20 percent of 40 is 8, which can be calculated by multiplying 0.20 times 40. For other currencies, multiply this figure by its conversion rate. Chinese yuan Chinese yuan is rapidly gaining ground as a global reserve currency. Recently, China switched its national wealth fund away from dollars and into yuan; surplus oil and gas revenues also accumulated there. Furthermore, this increase also brought a rise in share for Cross-Border Interbank Payment System, the world’s largest offshore yuan trading zone, yet still cannot replace dollars completely as reserve assets. Multiplying by 100 is a straightforward method to calculate percentages quickly. It can also be applied in many other circumstances; for instance, to estimate savings when buying products at 20% off their original price or use this formula to find out what percentage of 40 is present in dollars, euros, Japanese yen, British pounds or Chinese yuan (the answer would be 8). You could use this same approach and divide by 100 to determine what percent 40 represents – for instance, 20% of 40 British pounds would equal 8 pounds. Percentages are an effective way of representing amounts of money relative to their total value, which makes them particularly useful when dealing with large sums. Simply divide the top number by its corresponding bottom number and multiply this result by 100 to arrive at its percentage value. For instance, to calculate 20 percent of 40, you would divide by one and then multiply by 100 before reaching a percentage figure. Determining what 20 percent of 40 is in dollars, euros, Japanese yen, British pounds, Chinese yuan, or pesos may help determine whether an advertised discount is worthwhile. To do this quickly and accurately, use the diagram below – orange represents 40% while green represents the remaining portion – this makes the problem easier to visualize and solve promptly and efficiently. To determine what 20 percent of 40 is in pesos, divide it by ten and multiply that decimal by 0.8 to get your answer of 8. You can also use this calculator to find any percentage between two numbers quickly – it will do it all for you! For better or for worse, knowing how much something costs in another currency is often essential when working with numbers. For example, knowing how much 20 percent of 40 is in rupees could help you decide whether a product is worthwhile purchasing. Luckily, this answer can be quickly calculated – simply multiplying the percentage by a number gives the amount in rupees. The percentage formula is an algebraic equation that uses values and can be utilized in various ways. You could use it to find how much a certain percentage is of any number or divide a decimal by 100 to convert it to percentage form. Furthermore, this equation can also help find values between two numbers. It is easiest to calculate 20 percent of 40 using a calculator, as its automatic functions do all the math for you. Enter two numbers, and the calculator will show you the percentage breakdown of each number – thus providing an answer of 8. This article will also discuss how to do this calculation yourself.
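As a quick check on the arithmetic discussed throughout this article, here is a minimal sketch. It is only an illustration; the helper name is made up, and the currency conversion at the end simply shows that a percentage of an amount scales with the exchange rate, using a placeholder rate rather than a real quote.

```python
def percent_of(percent, amount):
    """Return percent% of amount, e.g. percent_of(20, 40) -> 8.0."""
    return amount * percent / 100

original_price = 40
discount = percent_of(20, original_price)    # 8.0
sale_price = original_price - discount       # 32.0
print(discount, sale_price)

# Expressing the same amount in another currency only rescales it by the
# exchange rate; the rate below is a made-up placeholder, not a real quote.
example_rate = 150.0
print(percent_of(20, original_price) * example_rate)   # 1200.0
```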
{"url":"https://maglysis.com/what-is-20-percent-of-40/","timestamp":"2024-11-12T18:28:47Z","content_type":"text/html","content_length":"109956","record_id":"<urn:uuid:342eee2e-afd2-4c88-8cc1-677847fcaa72>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00515.warc.gz"}
Texas Go Math Grade 5 Lesson 14.4 Answer Key Graph and Analyze Relationships Refer to our Texas Go Math Grade 5 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 5 Lesson 14.4 Answer Key Graph and Analyze Texas Go Math Grade 5 Lesson 14.4 Answer Key Graph and Analyze Relationships Unlock the Problem Sasha is making hot cocoa for a party. For each mug of cocoa, he uses 3 tablespoons of cocoa mix. If Sasha makes 9 mugs of cocoa, how many tablespoons of the cocoa mix will he use? Step 1 Use the rule to make an input/output table. Step 2 Write the related pairs of data as ordered pairs. (1, 3) ___ ___ ____ Step 3 Plot and label the ordered pairs in the coordinate grid. Use the pattern to write an ordered pair for the number of tablespoons of the cocoa mix needed for 9 mugs of cocoa. Plot the point in the coordinate grid. So, Sasha will need ___ tablespoons of the cocoa mix to make 9 mugs of cocoa. • How can you use the pattern in the coordinate grid to decide if your answer is reasonable? (1, 3) (2, 6) (3, 9) (4, 12) Share and Show Complete the input/output tables. Write ordered pairs and plot them in the coordinate grid. Graph and Analyze Relationships Lesson 14.4 Go Math Grade 5 Question 1. Multiply the number of tablespoons by 2 to find the weight in ounces. Question 2. Multiply the number of hours by 3 to find the distance in miles. Problem Solving Use the graph for 3-4. Question 3. H.O.T. Multi-Step The rule for the pattern is multiply the input by 5. Which ordered pair on the graph 80 does not follow the pattern? Explain. (3, 20). The input is 3, so the output is 5 × 3 or 15. The ordered pair would be (3, 15), not (3, 20) Go Math Grade 5 Lesson 14.4 Answer Key Question 4. H.O.T. Communicate If the input is 12, would the output be greater or less than 40? Write the ordered pair and plot it on the graph. Greater than 40. (12, 60) when the input is 8, the output is 40. So, if the input is 12, the output would be greater than 40. when the input is 12 5 × 12 = 60. Problem Solving Use the coordinate grid for 5-6. Complete the table for each recipe and plot the points. Use different colors to plot each person’s pattern. Question 5. Lou and George are making chili for the Annual Firefighter’s Ball. Lou uses 2 teaspoons of hot sauce for every 2 cups of chili. 1 George uses 3 teaspoons of the same hot sauce for every cup of chili. Question 6. Sense or Nonsense? Elsa said that George’s chili was hotter than Lou’s, because the graph showed that the amount of hot sauce in George’s chili was always 3 times as great as the amount of hot sauce in Lou’s chili. Does Elsa’s answer make sense, or is it nonsense? Explain. Elsa’s makes sense. I can see on the graph that for the same x-coordinate the y-coordinate is 3 times greater for George’s chili than Lou’s chili. So, George’s chili is 3 times as hot. Go Math Grade 5 Lesson 14.4 Practice and Homework Answer Key Question 7. H.O.T. Multi-Step If you mix 10 cups of George’s chili with 10 cups of Lou’s chili, how many teaspoons of hot sauce will there be in that 20 cups of chili? 40 tsp Daily Assessment Task Fill in the bubble completely to show your answer. Question 8. The table compares distance on a map to real-life distance. How many miles does a map distance of 6 inches represent? (A) 24 miles (B) 30 miles (C) 36 miles (D) 6 miles Answer: (C) 36 miles Use the graph for 9-10. Question 9. Which statement about the data is correct? (A) One pen costs $5. (B) Four pens cost $20. 
(C) Two pens cost S5. (D) Five pens cost$1. Answer: (D) Five pens cost$1. Question 10. Multi-Step Suppose Jake buys 30 pens. He also buys a notebook for $3. How much does lake spend in all? (A) $6 (B) $9 (C) $33 (D) $30 Answer: (B) $9 Texas Test Prep Question 11. Duber plots a pattern showing the number of pentagons and the total number of sides for that many pentagons. If the x-coordinate, the number of pentagons, is 8, which ordered pair shows the pattern? (A) (8, 5) (B) (5, 8) (C) (40, 8) (D) (8, 40) Answer: (D) (8, 40) Texas Go Math Grade 5 Lesson 14.4 Homework and Practice Answer Key Complete the input/output tables. Write ordered pairs and plot them in the coordinate grid. Question 1. Multiply the number of days by 5 to find the number of hours worked. Texas Go Math 5th Grade Lesson 14.4 Homework Answer Key Question 2. Multiply the number of gallons by 4 to find the number of quarts. Problem Solving Use the coordinate grid for 3-4. Complete the table for each person and plot the points. Use different colors to plot each person’s pattern. Question 3. Marion uses 2 buttons for each doll. Nola uses 4 buttons for each doll. Lesson 14.4 Homework 5th Grade Answer Key Question 4. How many buttons do Marion and Nola use altogether if they each make 5 dolls? Answer: 30 Lesson Check Fill in the bubble completely to show your answer. Question 5. Shane plots a pattern on a graph that shows the relationship between the length of one side of a square and the area of the square. If the x-coordinate, the length of one side of a square, is 6 inches, which ordered pair will Shane plot? (A) (6, 24) (B) (6, 36) (C) (36, 6) (D) (6, 6) Answer: (B) (6, 36) Question 6. The table compares distance on a map to actual distance. How many kilometers does a map distance of 8 centimeters represent? (A) 50 kilometers (B) 8 kilometers (C) 40 kilometers (D) 80 kilometers Answer: (D) 80 kilometers Use the graph for 7-9. Question 7. Which statement about the data is correct? (A) The amount earned for washing one car is $10. (B) The amount earned for washing 20 cars is one dollar per car. (C) The amount earned for washing one car is $20. (D) The amount earned for washing 10 cars is $50. Answer: (C) The amount earned for washing one car is $20. Question 8. Multi-Step Suppose LeAnn washes 2 cars on Friday and 3 cars on Saturday. How much does she earn? (A) $40 (B) $60 (C) $50 (D) $100 Answer: (D) $100 The amount earned for washing one car is $20. 2 cars = 2 × $20 = $40 3 cars = 3 × $20 = $60 $40 + $60 = $100 Go Math Grade 5 Lesson 14.4 Homework Answer Key Question 9. Multi-Step Kyle washes 4 cars. Erin washes 5 cars. Both plan to donate the amount earned to charity. How much more money do they need if they want to contribute $200 to charity? (A) $20 (B) $180 (C) $40 (D) $100 The amount earned for washing one car is $20. 4 cars = 4 × $20 = $80 5 cars = 5 × $20 = $100 $200 – $180 = $20 Thus the correct answer is option A. Leave a Comment You must be logged in to post a comment.
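The “use a rule to build an input/output table, then write the ordered pairs” idea that runs through this lesson can also be expressed as a tiny program. The sketch below is an illustration added here, not part of the Go Math materials; it rebuilds the cocoa table from the Unlock the Problem section and answers the 9-mug question (27 tablespoons).

```python
def make_table(rule, inputs):
    """Apply the rule to each input and return the ordered pairs (input, output)."""
    return [(x, rule(x)) for x in inputs]

# Rule from the cocoa problem: multiply the number of mugs by 3 tablespoons.
def cocoa_rule(mugs):
    return 3 * mugs

print(make_table(cocoa_rule, range(1, 5)))   # [(1, 3), (2, 6), (3, 9), (4, 12)]
print(cocoa_rule(9))                         # 27 tablespoons for 9 mugs
```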
{"url":"https://gomathanswerkey.com/texas-go-math-grade-5-lesson-14-4-answer-key/","timestamp":"2024-11-05T03:14:35Z","content_type":"text/html","content_length":"256306","record_id":"<urn:uuid:a599fd3c-7255-481c-9f6e-62965452e260>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00064.warc.gz"}
Vector Measures [PDF] [6c5pv30mrg40] E-Book Content VECTOR MEASURES ■by N. D I N C U L E A N U Bucharest PERGAMON PRESS OXFORD . LONDON · E D I N B U R G H · NEW YORK TORONTO · SYDNEY · PARIS · BRAUNSCHWEIG Pergamon Press Ltd., Headington Hill Hall, Oxford 4 & 5 Fitzroy Square, London W.1 Pergamon Press (Scotland) Ltd., 2 & 3 Teviot Place, Edinburgh 1 Pergamon Press Inc., 44-01 21st Street, Long Island City, New York 11101 Pergamon of Canada, Ltd., 6 Adelaide Street East, Toronto, Ontario Pergamon Press (Aust.) Pty. Ltd., 20-22 Margaret Street, Sydney, N.S.W. Pergamon Press S.A.R.L., 24 rue des Écoles, Paris 5e Vieweg & Sohn GmbH, Burgplatz 1, Braunschweig Copyright © 1967 VEB Deutscher Verlag der Wissenschaften, Berlin Hochschulbücher für Mathematik, Band 64 Herausgegeben von H. Grell, K. Maruhn und W. Rinow Fir^t English edition 1967 The Library of Congress Catalog Card No. 66-28055 PREFACE This book contains the material of a special course of Measure Theory delivered at the University of Bucharest. As the title shows, the book is devoted especially to the study of measures with values in a Banach space. However, the positive measures (with finite or infinite values) are also treated in detail, for most of the properties of a vector measure are derived from the corresponding properties of its variation, which is a positive measure. Prerequisites for reading the book are a familiarity with general topology and ele­ mentary properties of Banach spaces. The first chapter contains: classes of sets, set functions, variation and semi-variation of set functions, and extension of set functions from a certain class to a wider one. It appears that the semi-tribe is the natural domain of definition of a vector measure. In fact, even for a positive measure which takes also the value oo, in the definition of the integral are used only the sets of finite measure, which form a semi-tribe. The extension of a positive measure is made from a clan to a tribe which contains always the whole space. This tribe contains all the Borei sets in case the initial clan is generated by the compact parts of a locally compact space. The second chapter is devoted to the integration of vector functions with respect to vector measures. Beside the study of the measurable functions (with respect to a tribe or to a measure) and of the integrable functions, this chapter contains : the spaces &p and the integral representation of the linear operations on the spaces ££v or on the spaces of totally measurable functions, vector measures defined by densities, absolutely continuous measures and the Lebesgue-Nikodym theorem (for vector measures), con­ ditional expectations, martingales and the recent results concerning the existence of a lifting of the space JS?00. The existence of the lifting made it possible to drop the countability hypotheses on the Banach spaces involved in the Lebesgue-Nikodym theorem and in the integral representation of the linear operations on J£?p. The last chapter is devoted to the regular measures on a locally compact space and to the integral representation of the dominated operations on the space of continuous functions with compact carrier. Many important topics, as, for example, product measures and the LebesgueFubini theorem, are not included in the present book. The book is greatly influenced by many excellent monographs on Measure Theory, especially by those of Halmos [1], Bourbaki [3], Dunford and Schwartz [1], and M. Nicolescu [3]. N. Dinculeanu §1. CLASSES O F SETS 1. CLANS Let T be a set. 
A set of subsets of T will be called a class of subsets of T. D e f i n i t i o n 1. A nonvoid class Ή of subsets ofTis Ì.A- 2. Au Be^, called a clan1) if: A,Be 71-* 00 I*= 1 1= 1 The properties 6 and 7 are now stated in the following form : oo 15. For every increasing sequence (An) of sets of^ such that {J Anetf μ we have U A I = lim μ(Αη) = sup μ(Α„). n -»oc In fact, μ is increasing, therefore the sequence of numbers (μ(Αη)) is increasing, hence lim μ(Αη) = ϊνίρμ(Αη). n-+ao I. Vector measures 16. For every decreasing sequence (An) of sets of *% such that Ç\ An e , m and n two additive set functions defined onfé*with values in X or in R+ . Ifm(A) = n(A)for every Ae£P, then m = n. In this case every set E efé7can be written in the form where Ai are mutually disjoint sets of 0> (§ 1, proposition 13), therefore m(E) = £ m(At) = £ n(At) = n(E), i=l hence m = n. We show now that a countably additive set function defined on a tribe or on a semitribe is uniquely determined by its values taken on a clan generating the semi-tribe or the tribe. P r o p o s i t i o n 6. Let ^ be a clan, Sf the semi-tribe generated by fé7, &~ the tribe generated byfé*,m and n two countably additive set functions defined on S? or on 3~ with values in X or in R+ . Ifm(A) = n(A)for every AeW and if m and n are σ-finite onfé7(in case m and n are positive), then m = n. Suppose first that m and n are finite on fé7. Denote by Φ the class of all the sets A e Sf such that m(A) = n(A). Then, by hypo­ thesis, fé' cz Φ e Se. If (An) is a monotone sequence of sets of Φ such that lim An e £f, then (properties 6 and 7 of the countably additive set functions on a clan) ni (lim An) = lim m(An) = lim n(An) = /i(lim A„), therefore lim An ΕΦ. It follows that Φ is monotone with respect to Sf9 hence (§ 1, proposition 16) Φ = £f. It follows that m(A) = n(A) for every ^ e ^ . §2. Set functions If m and n are defined on y , then every set E e y is the union of a sequence (En) of disjoint sets of S?9 therefore m(E) = Σ /*(£„) = X *(£,,) = *(£), consequently m = n. Suppose now that m and n are σ-finite onfé"and consider the clan fé'o c=fé7on which m and n are finite. Every set offé7is the union of a sequence of sets of féV By the pre­ ceding proof, m and n are equal on the semi-tribe Sf0 generated byféO· If w and n are defined on ^~, then m and #i are equal on the tribe ^~0 generated by ^ 0 · We remark that the union of a sequence of sets offéObelongs to 3~0 ; hencefé7c ^~ 0 , therefore 3~0 = J7~. It follows that if m and n are defined on ^", then m = n. On the other hand, every set of £f belongs to 0 and iffor every set A e & with A cz E we have either μ(Α) = 0 or μ(Α) = μ(Ε). We say that a set E e %? has the Darboux property (with respect to μ) if for every number oc such that 0 ^ a ^ μ(Ε) there exists a set A e Ή with A a E and μ(Α) = α. We say that μ has the Darboux property if every set E efé7has the Darboux property. We say that μ is atomic if there exists at least one atom infé7,and that μ is non-atomic if there exists no atom in fé7. Examples. 1°. Let t0 e T and 0 < a ^ oo and for every set A efé7put μ(Α) = a if t0 e A and μ(Α) = 0 if t0 $ A. Then every set E efé7such that t0 e E is an atom. I. Vector measures 2°. Suppose that put For every t e T let oc{t) ^ 0 μ(Α) = Σ «(0Then μ is a positive measure onfé7and every set {t} with 0 is an atom. 3°. The Lebesgue measure on the real line has the Darboux property and is nonatomic. 
We shall show that every a-finite non-atomic measure on a semi-tribe has the Dar­ boux property (corollary of the proposition 7) but the converse is not true (proposi­ tion 10). We remark that if μ is a measure, then a set E efé7with μ(Ε) < oo can have at most a countable family of disjoint atoms, and the same property has every set with σ-fìnite measure. P r o p o s i t i o n 7. Let μ be a positive measure on a semi-tribe Sf. IfEeSf finite measure and if E has no atom, then E has the Darboux property. has σ- Suppose first that μ(Ε) < oo and let 0 < oc < μ(Ε). We shall find by recurrence two sequences (An) and (Bn) of sets of £f having the following properties : 1. A0 cz A± cz ··· cz An cz ··· c= Bn cz ··· c B{ cz B0 cz E. 2. If we put an = sup {μ(Α)\ Αη_λ a A a B„-l9 bn = ΪΏΪ{μ(Β); AnczBcz μ{Α) z% *}, Bn_^ μ(Β) ^ oc} the sequence (an) is decreasing, the sequence (b„) is increasing and we have an ^ oc ^ b„, for n = 1, 2, ... 3. There exists two sequences εη \\ 0 and ηη \ i 0 such that an - en < μ(Αη) ^ an, bn z% μ(Βη) < bn + ηη. In fact, let rn \ j 0. Put a0 = sup {μ(Α) ; A cz E, μ(Α) ^ oc}. Then 0 ^ a0 S oc and for ε0 > 0 there exists a set ^40 e ^ with ^40 c £ and #0 - «o < M^o) ^ ^o Put b0 = ίηΐ{μ(Β); A0 cz B cz E, μ{Β) ^ oc}. Then oc S b0 ^ μ(Ε) and for ^ 0 > 0 there exists a set B0e£f £ 0 ^ μθ^ο) < *o + *7o · with A0 cz B0 cz E and §2. Set functions Put then αχ = sup {μ(Α); Α0 cz A cz B0, μ(Α) ^ oc}. Then a0 — ε0 < a1 ^ a 0 and if we take 0 < εί ^ rt such that a0 — ε0 < ax — εΐ9 there exists a set AL e 9* with A0 there exists a set B1e£f with ^ cz i?! cz 2?0 and &! ^ μ(^ι) B+1 ^ μ(Βη+1) < bn+i + ηη+1. Since εη ^ r„ and ηη ^ rw it follows that εη -> 0 and ??„ -► 0. The sequences (an) and (è„) are monotone, therefore they have limits a respectively b and we have 0 S a^ocz^b ^ μ(£) < oo. The sets OO A = U An B = Π Bn «=1 belong to £f and we have ^4W cz A cz .0 cz Bn cz E. From the condition 3 we deduce that μ(Α) = lim μ(Αη) = a ^ oc. I. Vector measures Since μ(Βη) < oo, we deduce also that μ(Β) = lim μ(Βη) = b ^ oc. n->oo Let C G £f be such that C ^ B — A. Then ^ B C i 4 c ^ u C c 5 c i?M, for every n. If μ(>4 u C) ^ oc from the conditions 2 and 3 we deduce that an — εη ^ μ(^4 u C) ^ tfn+i, for every n, consequently μ (Α u C) = #. It follows that //(C) = , φ * u C) - μ(Α) = 0. If μ(4 u C) ^ a, from the conditions 2 and 3 we deduce that bn+1 ^ μ(^4 u C) f^bn + ηη for every «, consequently μ(,4 u C ) = J, therefore μ{0) = μ(^4 u C) — μ(Α) = b — a = μ(Β — A). Since E has no atom we deduce that μ(Β — A) = 0, conse­ quently a = μ(Α) = μ(Β) = b, therefore oc = a = b, and finally μ(Α) = α. 00 Suppose now that E has cr-finite measure and put E = U En where En are sets n=l of Sf with finite measure. We can consider the sets E„ disjoint and μ(Εη) > 0 for every n. If 0 < oc < μ(Ε), we can find a sequence (ocn) with 0 < ocn < μ(Εη) and 00 Σ ocn = oc. For each « we find a set ^4n e Sf with ^4η c En and μ04„) = #„. The n= 1 sets ^4„ are disjoint, the set A = U ^n belongs to S? and we have A a E and 00 M-4) = Σ KAn) = α· C o r o l l a r y . ^4 a-finite non-atomic positive measure on a semi-tribe has the Darboux property. We shall give now equivalent conditions for a set to have the Darboux property. We shall consider first the case of a set containing only a finite number of disjoint atoms. P r o p o s i t i o n 8. Let μ be a positive measure on a semi-tribe Sf and let E e £f be a set with μ(Ε) < oo containing finitely many disjoint atoms. 
Let ocx > oc2 > ··· > ocn be such that oc^(E) are the distinct values of μ on the atoms of E. Let A\, Af, ..., A]* be n the atoms of E having the same measure οιίμ(Ε), let oc = £ SiOci9 At = A\ U A \ U ··· n u A\\ A = (J Ai9 β = μ(Ε) andβρ = ί=1 i= l Then E has the Darboux property if and only if ocn ^ 1 — oc and 1 — oc and select a number ξ such that 1 — oc < ξ < ocn. There exists then a set F e E of Sf such that //(F) = ξβ. Since ξ > 1 — oc it follows that F contains at least an atom. On the other hand, since ξ < ocn and since there does not exist an atom having measure octß with oc i < ocn, it follows that F contains no atom. This contradiction shows that ocn ^ ì —oc. Assume now that there exists p ^ n — 1 such that 1 — oc — βρ < ocp and choose a number ξ such that 1 — oc — βρ < ξ < ocp. There exists then a set F ocp+1. On the other hand, since ξ < ocp it follows that each atom contained in F has measure octß with oct < ocp. But this is a contradiction, since E contains no atom having measure octß with ocp+ x < oct < ocp. Thus we have Up ύ 1 — oc + βη, for each p ^ « — 1. Conversely, suppose that the inequalities of the statement are verified and prove that E has the Darboux property. Suppose first that 0 ^ ξ ^ 1 — oc. The set E — A is of measure (1 — oc) β and con­ tains no atom, therefore, by the proposition 7, there exists a set F e E — A of $f such that //(F) = ξβ. Now suppose that 1— oc „ ) - y ··· > oct > oci+1 > -" be such that oc^(E) are the distinct values of μ on the atoms of E. Let 00 Aj, A?, ..., A** be the atoms of E having the same measure oc^(E), let oc = £ «s^ g 1, oo Ai = A\ U ··· u A\\ A = VJ Ai9 μ(Ε) = β and βρ = £ i=l i= 1 Then E has the Darboux property if and only if ocp g 1 - oc + βρ9 for p = 1, 2, ... If E has the Darboux property, then we prove like in the proposition 8 that we have ocp ^ 1 — oc + βρ for every p. Conversely, suppose these inequalities verified and prove that E has the Darboux property. We consider first the case oc < 1. If 0 ^ ξ ^ 1 — oc, we proceed as in the correspond­ ing case of the proposition 8 and find a set F cz E — A of £? such that //(F) = ξβ. Let m be an integer satisfying ßm < 1 — oc. Such an integer exists, since the series £ SiOCi converges and 1 — oc > 0. Consider a number ξ satisfying 1— oc < ξ ^ I — oc + ßm. Let γ = ξ — ßm. We have 0 ^ y ^ 1 — oc and so, by the proposition 7 there exists a set G cz F — A of «5^ such that μ{ΰ) = y/3. If we put F = G u ^4m+i u ^t m+2 u ··· we have /-e(F) = (Y + ßm)ß = iß. Now suppose that 1 — oc + ßm < ξ ^ 1 — oc -\- ßm_1. Let y' be the least integer satisfying ξ (1 - oc + ßm + (y - 1) ocm) - jocm = 1 - oc + /?m - ocm ^ 0. Thus, as we have proved above, there exists a set F cz E — A of «9* such that, putting / / = G u ^4m+i u v4m+2 u ··· we have μ ( # ) = γβ. Next, putting F = H v A„ u Amv ··· u ^ we obtain /*(F) = (γ + > m ) 0 = fj8. Let us admit that we have proved, for every real ξ satisfying O^C^l-oc+ßp with/? + 1 < m, the existence of a measurable subset F cz E — (A1 u A2 u ··· u ^4P) such that ^a(F) = ξβ. Now consider a real ξ satisfying 1 — oc-\- βρ the theorem is proved; if not, let pt be i the greatest integer for which pt ^ st and £ /?fc#fc ^ f. If £ pkock = f we put i i*7 = U {Al u ··· u ^4£k) and we have //(i7) = ^/S. If the equality does not hold for fc=l any positive integer i, then we put F = U (^ί u ··· u ^Γ)· We shall prove that We reason by contradiction. Suppose that μ{Ρ) Φ ξβ. Since, for each n, μ I U (Λ,1 u ··· u ^fO) = ^ with λη < ξ, it follows that μ{Ε) = λβ, where λ < ξ. 
Since the series £ s ^ converges, we have lim oct = 0. Hence, there is an integer m' i=l such that, for i > m\ we have α, ^ ξ — λ. It follows that for each ι > m! we have Pi = Si. Denote by m the least ra' for which i > m' impliespt = st. Since ξ < oc, we have m > 0. It follows that /?m < sm and m Σ Λ*« + ßm = λ f°r i= l P = l>2> ··· I. Vector measures §3. VARIATION O F SET F U N C T I O N S Let T be a set, sé an arbitrary class of subsets of T with Qe sé, and X a normed space. In particular sé can be a clan, a semi-tribe or a tribe, and X can be the space of the reals or the space of the complex numbers. 1. D E F I N I T I O N O F THE VARIATION Let m be a set function defined on sé with values in X or in TÎ+ with m(0) = 0. For every set A c T we put m(A) = sup £ \m(At)\ I where the supremum is taken for all the families (Ai)ieI of mutually disjoint sets of sé contained m A. (There exist always such families, for instance the family consisting of the void set only). The number m(A) is called the variation of the set function m on the set A. The set function m is called the variation of HI. P r o p o s i t i o n 1. For every set A c T we have m(A) = sup X \m(Ai)\ where the supremum is taken for all the finite families (A f ) i e J of disjoint sets of sé con­ tained in A. We have only to remark that for an arbitrary family (At)leI of sets of sé we have Σ \m{At)\ = sup Σ \«K4t)\ where the supremum is taken for all the finite subsets J a I. C o r o l l a r y . In the definition ofm(A), the supremum can be taken for all the sequences (Ai) of disjoint sets of sé contained in A. P r o p o s i t i o n 2. If sé is a clan, then for every set A e sé we have m(A) = sup £ \m(Ai)\ J where the supremum is taken for all the finite families (Ai)ieJ of disjoint sets of sé such that [J Ai = A. ieJ §3. Variation of set functions In fact, if Al9 A2, ..., An are disjoint sets of sé contained in A, then taking n = A — U Ai9 the sets Al9 ..., An, An+1 belong to sé, are mutually disjoint, their union is equal to A and i=l Proposition 3. 7/*^ w 0 semi-tribe, then for every set A e sé, we have m(A) = sup £ \m(At)\ I where the supremum is taken for all the countable families(At)ieI such that U At = A. ofdisjoint sets of sé In fact, if (Ai)l ^ i < 0 0 i s a sequence of disjoint sets of sé contained in A, then taking 00 A0= A — (J At the sequence (^i)0^i 0 and I" is the set of the indices i e I for which μ(Αι) g 0 and if we put B' = U At and B" = U A iel' then the sets B' and B" belong to sé, are contained in A, and Σ HM = Σ KA) - Σ μ(4ι) = M*') - μ(Β") iel = \μ(Β')\ + \μ(.Β")\^2$ηρ\μ(Β)\, BczA BeSé hence μ{Α)^2*ν®\μ(Β)\. BczA BeStf I. Vector measures Suppose now that μ is complex valued and put μ = μ1 + ίμ2 where μ1 and μ2 are additive real valued set functions defined on sé. We have 1^(5)1^1/4(5)1 \μ2(Β)\ ^ \μ(Β)\, for every 5 G J / and μ = + ΐμ2 ^ β± + μ2, therefore β(Α) ^ βΜ) + βι{Λ) ^ 2 sup \μι(Β)\ BcA BeS/ + 2 sup \μ2(Β)\ ^ 4sup \μ(Β)\ Β X be a set function with m(0) = 0. We say that m is with finite variation (with respect to the class sé) if m{A) < + oo, for every A e sé. The restriction of the variation m to the class sé is called the modulus of HI and is denoted by |m|. Sometimes, we shall call \m\ also the variation of #w. From the text it will be clear if by variation of m we mean m or \m\. From the inequality |w(^[)| ^ HÏ(^) we deduce that \m(A)\ ^ |ffi| (4), for every It is the same to say that m is with finite variation or that the positive set function \m\ defined on sé infinite. 
From the relations nι + n ^ m + n and ÔHÎ = \x\ m we deduce now \m + w| ^ \m\ + \n\ = |a| \m\. It follows that the set of the set functions m: sé -» X with finite variation, is a vector space. There are set functions which are not with finite variation. E x a m p l e . Take Z t h e space of the bounded real functions defined on T, with norm ||/|| = sup | / ( 0 | and ^ the clan of the finite or countable subsets of T. teT The set function m: *€ -> X defined by the equality m(A) — φΑ9 A etf is countably additive and \m(A)\ = ||g^|| = 1 for every A e tf. It follows that m(A) = k if A is finite and consists of k points, and m(A) = + oo if A is infinite. §3. Variation of set functions Proposition 8. If A u Be sé for every disjoint sets A, Be sé and if m is with finite variation, then \m\ is the smallest of all the positive set functions v defined on sé, which are finite, increasing and superadditive and verify the inequality \m(A)\ ^ v(A), for every A e sé. We remark first that for every finite family (At)ieI of disjoint sets of sé we have If v is a positive, increasing and superadditive set function on sé verifying the preced­ ing inequality, then for every set A e sé and for every family (At)ieI of disjoint sets of sé contained in A we have Σ \fn{At)\ ^ Σ v(A>) ^ v (\J ΑΛ ^ v(A), hence \m\(A)iv(A). P r o p o s i t i o n 9. If A u Be sé for every disjoint sets A, Be sé and if m is with finite variation, then \m\ = m. If we put μ = |iii| then μ is a positive, finite, increasing and superadditive set function defined on sé, and \m(A)\ ^ μ(Α), for every A e sé'. We have to show that m = fi. Let A c T. For every finite family (At) of disjoint sets of sé contained in A, we have hence m(A) S β(Α). Conversely, ifBcA,Besé,we μ(Β) = m(B) ^ m(A). As μ is positive and superadditive, we deduce from the remark following proposition6 that fi(A) = sup μ(Β) ^ tn(A), BczA consequently, fi(A) = m(A) for every A c T, i.e. fi — fh. P r o p o s i t i o n 10. If A n Be sé for every A, Be sé and if m is finitely (or countably) additive then \m\ is finitely (respectively countably) additive. In fact, if A n Be sé for A, Be sé, we have sé e Z(sé) (§ 1, property 6 of the class t(sé)). I. Vector measures On the other hand from the property 9 of the variation we deduce that m is finitely (or countably) additive on Z(si) hence on si too, therefore |/w| is finitely (respectively countably) additive on si. Conversely, we have P r o p o s i t i o n 11. Let ^ be a clan and m: %> -* X an additive set function with finite variation. If \m\ is countably additive, then m is also countably additive. 00 Let(^i) be a sequence of disjoint sets of ^ such that A = U AiE^. we have '=1 m(A) - m(A) - m For every n As |/M| is countably additive, we have lim \m\ (\JAt) = lim (\m\ (A) I H-+00 \ £ \m\(A,)) = 0, i=l therefore lim m(A) - Σ m(At) = 0, hence m(A) = Σ iw(^i). 5. LOCALLY B O U N D E D SET F U N C T I O N S Let m: sé -> X be a set function. We say that m is bounded if sup |m(y4)| < oo and that m is locally bounded if for every set .4 e si we have sup \m(B)\ < oo. BeS/ Every positive, finite and increasing set function μ on j ^ is locally bounded, since sup \μ(Β)\ g μθ 4 ) < oo, for Be si B e A. We shall see (proposition 17) that every measure defined on a semi-tribe is locally bounded. § 3. Variation of set functions If m is with finite variation, then m is locally bounded. In fact, if A e sé, then for every Be sé with B c A we have \m(B)\ g m(B) ^ fn{A) < oo. 
For the scalar additive set functions we have also a converse property : P r o p o s i t i o n 12. Suppose that A u Be sé for every disjoint sets A, Be sé. A scalar additive set function μ on sé is locally bounded if and only if it is with finite variation. In fact, if μ is locally bounded, then from the inequality (proposition 7) μ(Α)£4*ιιρ\μ(Β)\ BcA BeSé we deduce that μ is with finite variation. R e m a r k . In particular, the proposition is true for a scalar additive set function on a clan. The real additive set functions with finite variation are differences of positive additive set functions: P r o p o s i t i o n 13. If μ is a real additive set function on sé, with finite variation \μ\, then the set functions ^+ = 2 (M ~ = 2 ^ 1 - μί are positive and additive on sé and we have μ = μ+ — μ~ \μ\ = μ+ + μ~. Since μ is additive, its variation \μ\ is also additive, whence μ+ and μ~ are additive. For every set A e sé we have \μ(Α)\ 5Ξ· \μ\ (A), whence -\μ\(Α)£μ(Α)£\μ\(Α), therefore - | μ | 4f = v4f n ^4". The sets y4j" are essentially positive and disjoint, the sets A^ are essentially negative and disjoint and we have A+ = \JAt Σ\μ(Αί)\ = A~ = = MU A+) - MU An = M^+) - μ(Α~). Taking the supremum in the left hand we deduce \μ\{Α) = μ{Α+)- R e m a r k . For every set A e £f we have μ+(Α~) = 0 and = 0. P r o p o s i t i o n 16. Let μ be a real measure on the semi-tribe Sf and let μχ, μ2 be two positive measures on F g In fact, let A a T. Let (At) be a finite family of disjoint sets of sé contained in A, and (oci) a family of numbers such that \F (^) 5 therefore mR,x(A) is a continuous linear functional x' e X': = (x,xfy, and we have |JC'| ^ \e\ | / | . Let now (^ f ) be a finite family of disjoint sets of sé contained in A, and (xt) a family of elements of E such that |x ( | ^ 1. Then |£lfl(^,)Xi| = SUp | Σ < Λ | ( ^ , ) ^ Ι ^ > | ^ therefore ^ £ > / Γ(^4) ^ 4 s* sup |X | = m^uO*) I. Vector measures P r o p o s i t i o n 2. If sé is a clan, then for every set A e sé we have m(A) = sup I £ m(At) xt I the supremum being taken for all the finite families (At)iel of disjoint sets of sé such that U Ai = A and for all the finite families (Xi)ieI of elements of E such that \XÌ\ :g 1 for iel each i e I. is a family of disjoint sets of sé contained'm A and if (Xi)iâiâ„ In fact, if (Ai)lâiân is a family of elements of E such that \xt\ ^ 1 for i = 1, 2, ..., n, then, taking n = A — [J Ai a n d x „ + 1 = 0, the family (Ai)1^i^n+i consists of disjoint sets of sé, U At = A and we have n+l £ Σ X m(Ai)Xi i=l r = l 2. P R O P E R T I E S O F THE S E MI-VARIATI O N Let m: sé - X » = 1 therefore nii is countably additive. 2. C O M P L E T I O N O F AN ADDITIVE SET F U N C T I O N Letfé7be a clan, X a vector space and m an additive set function defined on fé with values in X or in R+ . D e f i n i t i o n l . ^ s e f ^ e C(fé) w ^α irf to Z?e essentially m-null ifm(B) = 0/ör etw>> set B e^ with B cz A. The set function m is said to be complete on^if^ contains all the subsets of every essentially m-null set of%>. The class of the essentially m-null sets is a clan. We denote by JV{m) the hereditary clan generated by the essentially m-null sets, i.e. the class of the sets contained in essentially m-null sets. Iffé7is a tribe and m is a measure on fé, then e/T(m) is a tribe. To say that m is complete means that jV*(ni) n fé e fé. We show now that every additive set function can be extended to a complete additive set function. I. Vector measures P r o p o s i t i o n 2. 
The class JT ofthe sets ofthe form A u NwithA e 0, there < ε. Let E e Σ(μ) and ε > 0 and write E = B u TV with B e T and N negligible. There exists a sequence (At) of sets offé*such that B c:\JA, Σ μ(Α>) < μ(Β) + ^. There exists a number n such that Σ ΜΛ«) < f The set ^4 = U Λ· belongs to fé7 and we have μ(Ε -A) = μ(Β -Α)^μ* μ{Α -Ε) = μ(Α-Β)^μ*(υΑί-Β) Λ,- - Λ ) = μ* { U Λ, 1 = 1 = μ*(υΑ,)- whence μ(Ε ΑΑ) = μ(Ε - Α) + μ ( ^ - Ε) < ε. T h e o r e m 3. Let X be a Banach space, fé7 a clan, HI :fé7-* X a measure with finite variation μ and Ctif a clan such that ^ c / c Σ(μ). Then m can be extended to a measure mx\Ctif -> X with finite variation μ1 such that ^(μχ) = £(μ) and μ\ = μ*. The outer measure μ* is finite and countably additive on Σ(μ). Then the restriction μχ of//* is a finite measure on J f which extends μ. From the proposition 13 we deduce that *€ is dense in JT for the semi-distance ρμι, therefore we can apply the corollary of the theorem 1 and obtain a measure mx : Ctif -► Zwith finite variation and Im^ = μχ. The proof of the equalities £(μι) = Ζ(μ) and μ\ = μ* is devided in several parts: a) ^~(μι) T is measurable with respect to the classes sé and sé', if f(A') e sé for every A' e sé'. If, for example, T and T' are topological spaces, and sé, sé' are the classes of the open sets, then a function / i s measurable with respect to sé and sé' if and only if/ is continuous. According to this general definition, to say that a function/: T -► R is «^"-measurable (in the sense of the definition 3) means t h a t / i s measurable with respect to the In the sequel we shall study real functions with finite or infinite values, measurable with respect to a tribe y such that Te^. We shall say also "measurable functions" instead of "^"-measurable functions" if the tribe &" is understood. Examples. -1 1. Every constant function/(i) = oc is ^"-measurable, since f(A) = 0 if oc φ A and = T if oc e A. From this example we deduce the importance of the condition 2. A characteristic function ψΑ is «^"-measurable if and only if A e 3~. 3. Every ^"-step function n Σ i= l § 6. Measurable functions is measurable. In fact we can consider the sets A> disjoint and the numbers f(t) we have 0 ^ f(t) -fn(t) S — ; i f / ( 0 = oo, then/ n (0 = «; hence in both cases we have lim fn(t) = f(t). If / i s bounded by a certain number Af, then w-*oo for every n > M we have 0 ^ / ( i ) — /,(*) ^ —, hence the sequence (/„) converges uniformly to / If / i s not positive, then the functions / + = sup ( / 0) and / " = - inf ( / 0) are ^"-measurable and positive and we have f = f+ - / " . Let (/„+) and (/„") be two sequences of positive ^"-step functions tending respectively to / + and / " . Then fn=fn — fn are H)n -1 therefore f(F) n A = (f(F) nA')v is //-measurable. A' is //-measurable -1 , (f(F) n N) is //-measurable, consequently f(F) -1 § 6. Measurable functions If the condition 1 is verified for open spheres, it is also verified for closed spheres, since {x; d(x, x0) ^ a} = D \x; d(x, x0) < a + The proposition is completely proved. R e m a r k. In the condition 2 we can take only sets A e Ή, or only sets A with σ-finite measure. For functions with values in a normed space we have the following characterization. P r o p o s i t i o n 12. Let E be a normed space. A function f: T -► E is μ-measurable if and only if the following two conditions are verified: X.for every continuous linear functional x' e E\ the scalar function t -> (f(t), x'} is μ-measurable; 2. 
for every μ-integrable set A c T, there exists a μ-negligible set N cz A and a countable set H a E such that f (A — N) c H. Suppose first t h a t / i s //-measurable and let x' e E'.Ii -1 G a R is open, then the set Gi = x'(G) is open in E; hence f{Gx) is //-measurable, therefore the function x' o f = - {a, zn}\ rg r \z.\}. The functions t -> \(J(t), zn} — {a, z„>| are //-measurable, therefore the sets from the right side are //-measurable, consequently the set of the left side is //-measurable -1 too. It follows that f(Sr(a)) is //-measurable whence the condition 2 of the proposi­ tion 11 is verified. From this proposition we deduce t h a t / i s //-measurable. R e m a r k . In the condition 2 we can take only sets A e E, then: 1. the function f is μ-measurable ; 2. for every μ-integrable set A a T and every ε > 0, there exists a set B E ^(tf) with B c= A and μ(Α — B) < ε, such that sequence (fn) converges uniformly to f on B. Let d be a distance on E compatible with the topology of E. We shall prove first the second part. Let i c T b e a //-integrable set and s > 0. There exists a //-negligible set N cz A such that fn(t) -> f(t) for every t e A' = A — N and a countable set H since the sequence (fn) converges to / on A'. It follows that lim μ{Α' — Antt) = 0, consequently we can find a natural number nr such that μ(Α' - A„r,r) < ~ . Theset A0 = Π 00 ^„ rfr is //-measurable and we have A0 c: A' a A and μ(Α - A0) = / φ 4 ' -Α0) = μ(ΰι (Α' - 0 and every x e £ w e have then = Π Π /e5;rf(/p(0,x)^« + /($,(*)) n B = Π H Λ(5Λ+1/Γ(χ)) η 5 . r = 1 P ^ «r It follows that the set of the left hand side of the last equality is //-measurable. Using the second part of the theorem, for every //-integrable set A c T we find a sequence (Bn) of disjoint sets of ^ , such that A — U Bn is //-negligible and (/,) converges uniformly to / on each Bn. Namely, we take BXE^ with B1 a A and //(^4 — Bi) < 1 such that (J'n) converges uniformly on B^. For n ^ 2, we take Bne^ i J?*(E, F) is Z-weakly μ-measurable, if for every xe E and every z e Z, the function t -► (U(t) x, z> is μ-measurable. To say that U is Z-weakly //-measurable means that for every z e Z, the function ί / ο ζ : Γ - > "^ΟΕ", C) is simply //-measurable. We say that a function / : T -> F is Z-weakly //-measurable if, considered with values in J?*(E, F) is Z-weakly //-measurable means that for every x e £ the function / -► U(t)x is Z-weakly //-measurable. If F is the space of the scalars, to say that a function U: T -► J^CE, C) = Is7 is ^-weakly //-measurable means that U is simply // -measurable, i.e. that for every x e E the function Lfr is //-measurable. If U,V: Γ-» &*(E9F) are Z-weakly //-measurable then U + F and a t / are Z-weakly //-measurable. P r o p o s i t i o n 19. 7/* U: Γ-> J?*(E, F) is Z-weakly μ-measurable and if the func­ tionsf: T-> E and h : T-+Z are μ-measur able then the function (JJf K) is μ-measurable. Let Ae%>. Since/and h are //-measurable, there exist two sequences of //-measur­ able step functions (f„) of on A, there­ fore is //-measurable (corollary 3 of the theorem 1). If C/is weakly measurable a n d / i s measurable it does not follow that the functions Uf and \Uf\ are measurable. In certain cases these functions are also measurable. We consider first a particular case. P r o p o s i t i o n 20. Ifh: T -+ F is Z-weakly μ-measurable and if there exists a coun­ table norming set S a Z, then the function \h\ is μ-measurable. 
In fact, for every zeS, the function is //-measurable, therefore || is //-measurable, consequently !—f-—I is //-measurable. Since \z\ |Ä(0| = s u p l < ^ ^ > l : for every te T and since S is countable, we deduce that |Ä| is //-measurable. § 6. Measurable functions R e m a r k . More generally: if A: Γ - * F i s Z- weakly //-measurable and if for every Ae'ë there exists a countable set S a Z such that μ(,)|-ΜρΙ| zeS //-almost everywhere on A, then |/r| is //-measurable. P r o p o s i t i o n 21. 7f U: T^>&*(E, F) is Z-weakly μ-measurable and f: T'-+ E is μ-measurable and if there exists a countable norming subset S cz Z, then the function t -> 11/(0/(01 ^ μ-measurable. We apply the proposition 20 to the function h = Uf. R e m a r k . There exists a countable norming subset S c Z in each of the following cases : 1. F i s the conjugate space of a Banach space G of countable type and Z = G; in this case we can take S = Z. 2. F' is of countable type and Z = F ' ; in this case we take S = Z (we remark that in this case F is also of countable type). 3. F is of countable type and Z = F' ; in this case, if (xn) is a sequence dense in F, and if for each n we take zn e F ' with \zn\ = 1 and = | ΛΤ„|, then (z„) is a norm­ ing sequence in Z. In this last case we have more : P r o p o s i t i o n 22. If U: F-> J£?*(F, F) w Z-weakly μ-measurable and if for every xe E there exists a countable set H cz F such that U(t)x e H μ-almost everywhere {in particular if F is of countable type), then U is simply μ-measurable. In fact, for every xe E, the function/ = Ux: T -> F i s Z-weakly // -measurable. If we consider F cz Z' = J?*{E, F) is Z-weakly μ-measurable, if E is of countable type and if there exists a countable norming subset 5 c Z , then the function \U\ is μ-measurable. If (xn) is a sequence dense in F, then |17C0| = sup \Ξ01ΞΑ9 \Xn\ IL Integration Since every function t -> ' | U\ is //-measurable. "* is //-measurable (proposition 21), it follows that ' "' R e m a r k . Other sufficient conditions for \u\ to be//-measurable are given in pro­ position 5, § 11. P r o p o s i t i o n 24. If U: T -> S£{E, F) is Z-weakly μ-measurable and if there exists a countable set H c J?(E, F) such that U(t) e Ημ-almost everywhere (in particular if J?(E, F) is of countable type), then U is μ-measurable. Let x e E. The function/ = Ux is Z-weakly //-measurable, Hx is a countable subset of F a n d f(t) e Hx //-almost everywhere, therefore/is //-measurable, consequently U is simply //-measurable. From the corollary of the proposition 18 we deduce that U is // §7. I N T E G R A T I O N O F STEP F U N C T I O N S 1. D E F I N I T I O N A N D P R O P E R T I E S Let ^ be a clan of subsets of T, X a vector space and m : uv be a bilinear mapping ofXxE into F. Then we can integrate with respect to m step functions / : T -+ E, and the integral \fdm belongs to F. Examples. 1. Denote by J?*(E, F) the space of the linear mappings U: E -> F. We can take X c J?* (E, F) and the natural bilinear mapping (w, v) -* uv with we X and veE. The general situation of a bilinear mapping uv of X x Finto F can be always reduced to this case, identifying an element ueX with the linear mapping v -> uv of E into F and then X c J^*(F, F). It follows that we can integrate with respect to an additive operator valued set function HI: # -» J?*(E, F) step functions/: Γ-> 2Γ, and the integral J / i / w belongs toF. 2. We can take E c J^*(X, jp) and the natural mapping (r, w) -► vu with Î ) É £ a n d ueX. 
The general situation can always be reduced to this case, identifying an element v ∈ E with the linear mapping u → uv of X into F, and then E ⊂ ℒ*(X, F).
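The scanned formulas in this section are badly damaged. Restated from the surrounding text (a reconstruction, with the class of sets written here as \(\mathcal{A}\) and the variation as \(\bar m\), both assumed notation), the central definition of §3 is:

```latex
% Variation of a set function m on a set A \subset T (reconstructed from context):
\[
  \bar{m}(A) \;=\; \sup_{(A_i)_{i\in I}} \sum_{i\in I} \lvert m(A_i)\rvert ,
\]
% the supremum being taken over all families (A_i)_{i\in I} of mutually
% disjoint sets of \mathcal{A} contained in A (finite families suffice).
% m is said to be of finite variation when
\[
  \bar{m}(A) < +\infty \quad\text{for every } A \in \mathcal{A}.
\]
```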
{"url":"https://vdoc.pub/documents/vector-measures-6c5pv30mrg40","timestamp":"2024-11-10T17:52:32Z","content_type":"text/html","content_length":"77398","record_id":"<urn:uuid:f9f8ac5e-bf8d-47db-ad63-8b41ab838ccb>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00542.warc.gz"}
On This Day in Math - September 30 Big whirls have little whirls, That feed on their velocity; And little whirls have lesser whirls, And so on to viscosity. ~Lewis Richardson The 274th day of the year; 274 is a tribonacci number..The tribonacci numbers are like the Fibonacci numbers, but instead of starting with two predetermined terms, the sequence starts with three predetermined terms and each term afterwards is the sum of the preceding three terms. The first few tribonacci numbers are 0, 0, 1, 1, 2, 4, 7, Colin Maclaurin (1698–1746), age 19, was appointed to the Mathematics Chair at Marischal College, Aberdeen, Scotland. This is the youngest at which anyone has been elected chair (full professor) at a university. (Guinness) In 1725 he was made Professor at Edinburgh University on the recommendation of Newton. *VFR The University of Berlin opened. *VFR It is now called The Humboldt University of Berlin and is Berlin's oldest university. It was founded as the University of Berlin (Universität zu Berlin) by the liberal Prussian educational reformer and linguist Wilhelm von Humboldt, whose university model has strongly influenced other European and Western universities.*Wik In his desk notes Sir George Biddell Airy writes about his disappointment on finding an error in his calculations of the moon’s motion. “ I had made considerable advance ... in calculations on my favourite numerical lunar theory, when I discovered that, under the heavy pressure of unusual matters (two transits of Venus and some eclipses) I had committed a grievous error in the first stage of giving numerical value to my theory. My spirit in the work was broken, and I have never heartily proceeded with it since.” *George Biddell Airy and Wilfrid Airy (ed.), Autobiography of Sir George Biddell Airy (1896), 350. Felix Klein visits Worlds fair in Chicago, then visits many colleges. On this day the New York Mathematical society had a special meeting to honor him. *VFR an early manned rocket-powered flight was made by German auto maker Fritz von Opel. His Sander RAK 1 was a glider powered by sixteen 50 pound thrust rockets. In it, Opel made a successful flight of 75 seconds, covering almost 2 miles near Frankfurt-am-Main, Germany. This was his final foray as a rocket pioneer, having begun by making several test runs (some in secret) of rocket propelled vehicles. He reached a speed of 238 km/h (148 mph) on the Avus track in Berlin on 23 May, 1928, with the RAK 2. Subsequently, riding the RAK 3 on rails, he pushed the world speed record up to 254 km/ h (158 mph). The first glider pilot to fly under rocket power, was another German, Friedrich Staner, who flew about 3/4-mile on 11 Jun 1928.*TIS 2012 A Blue moon, Second of two full moons in a single month. August had full moons on the 2nd, and 31st. September had full moons on the 1st and 30th. After this month you have to wait until July of 2015 for the next blue moon. (The Farmer's Almanac uses a different notation for "blue moon", the third full moon in a season of four full moons.)*Wik 1550 Michael Mästin was a German astronomer who was Kepler's teacher and who publicised the Copernican system. Perhaps his greatest achievement (other than being Kepler's teacher) is that he was the first to compute the orbit of a comet, although his method was not sound. He found, however, a sun centred orbit for the comet of 1577 which he claimed supported Copernicus's heliocentric system. 
He did show that the comet was further away than the moon, which contradicted the accepted teachings of Aristotle. Although clearly believing in the system as proposed by Copernicus, he taught astronomy using his own textbook which was based on Ptolemy's system. However for the more advanced lectures he adopted the heliocentric approach - Kepler credited Mästlin with introducing him to Copernican ideas while he was a student at Tübingen (1589-94).*SAU 1715 Étienne Bonnot de Condillac (30 Sep 1715; 3 Aug 1780) French philosopher, psychologist, logician, economist, and the leading advocate in France of the ideas of John Locke (1632-1704). In his works La Logique (1780) and La Langue des calculs (1798), Condillac emphasized the importance of language in logical reasoning, stressing the need for a scientifically designed language and for mathematical calculation as its basis. He combined elements of Locke's theory of knowledge with the scientific methodology of Newton; all knowledge springs from the senses and association of ideas. Condillac devoted careful attention to questions surrounding the origins and nature of language, and enhanced contemporary awareness of the importance of the use of language as a scientific instrument.*TIS 1775 Robert Adrain born. Although born in Ireland he was one of the first creative mathematicians to work in America. *VFR Adrain was appointed as a master at Princeton Academy and remained there until 1800 when the family moved to York in Pennsylvania. In York Adrain became Principal of York County Academy. When the first mathematics journal, the Mathematical Correspondent, began publishing in 1804 under the editorship of George Baron, Adrain became one of its main contributors. One year later, in 1805, he moved again this time to Reading, also in Pennsylvania, where he was appointed Principal of the After arriving in Reading, Adrain continued to publish in the Mathematical Correspondent and, in 1807, he became editor of the journal. One has to understand that publishing a mathematics journal in the United States at this time was not an easy task since there were only two mathematicians capable of work of international standing in the whole country, namely Adrain and Nathaniel Bowditch. Despite these problems, Adrain decided to try publishing his own mathematics journal after he had edited only one volume of the Mathematical Correspondent and, in 1808, he began editing his journal the Analyst or Mathematical Museum. With so few creative mathematicians in the United States the journal had little chance of success and indeed it ceased publication after only one year. After the journal ceased publication, Adrain was appointed professor of mathematics at Queen's College (now Rutgers University) New Brunswick where he worked from 1809 to 1813. Despite Queen's College trying its best to keep him there, Adrain moved to Columbia College in New York in 1813. He tried to restart his mathematical journal the Analyst in 1814 but only one part appeared. In 1825, while he was still on the staff at Columbia College, Adrain made another attempt at publishing a mathematical journal. Realising that the Analyst had been too high powered for the mathematicians of the United States, he published the Mathematical Diary in 1825. This was a lower level publication which continued under the editorship of James Ryan when Adrain left Columbia College in 1826. 
*SAU 1870 Jean-Baptiste Perrin (30 Sep 1870; 17 Apr 1942) was a French physicist who, in his studies of the Brownian motion of minute particles suspended in liquids, verified Albert Einstein's explanation of this phenomenon and thereby confirmed the atomic nature of matter. Using a gamboge emulsion, Perrin was able to determine by a new method, one of the most important physical constants, Avogadro's number (the number of molecules of a substance in so many grams as indicated by the molecular weight, for example, the number of molecules in two grams of hydrogen). The value obtained corresponded, within the limits of error, to that given by the kinetic theory of gases. For this achievement he was honoured with the Nobel Prize for Physics in 1926.*TIS 1882 Hans Wilhelm Geiger (30 Sep 1882; 24 Sep 1945) was a German physicist who introduced the Geiger counter, the first successful detector of individual alpha particles and other ionizing radiations. After earning his Ph.D. at the University of Erlangen in 1906, he collaborated at the University of Manchester with Ernest Rutherford. He used the first version of his particle counter, and other detectors, in experiments that led to the identification of the alpha particle as the nucleus of the helium atom and to Rutherford's statement (1912) that the nucleus occupies a very small volume in the atom. The Geiger-Müller counter (developed with Walther Müller) had improved durability, performance and sensitivity to detect not only alpha particles but also beta particles (electrons) and ionizing electromagnetic photons. Geiger returned to Germany in 1912 and continued to investigate cosmic rays, artificial radioactivity, and nuclear fission.*TIS 1883 Ernst David Hellinger (1883 - 1950) introduced a new type of integral: the Hellinger integral . Jointly with Hilbert he produced an important theory of forms. *SAU 1894 Dirk Jan Struik (30 Sept 1894 , 21 Oct 2000) Dirk Jan Struik (September 30, 1894 – October 21, 2000) was a Dutch mathematician and Marxian theoretician who spent most of his life in the United States. In 1924, funded by a Rockefeller fellowship, Struik traveled to Rome to collaborate with the Italian mathematician Tullio Levi-Civita. It was in Rome that Struik first developed a keen interest in the history of mathematics. In 1925, thanks to an extension of his fellowship, Struik went to Göttingen to work with Richard Courant compiling Felix Klein's lectures on the history of 19th-century mathematics. He also started researching Renaissance mathematics at this time. Struik was a steadfast Marxist. Having joined the Communist Party of the Netherlands in 1919, he remained a Party member his entire life. When asked, upon the occasion of his 100th birthday, how he managed to pen peer-reviewed journal articles at such an advanced age, Struik replied blithely that he had the "3Ms" a man needs to sustain himself: Marriage (his wife, Saly Ruth Ramler, was not alive when he turned one hundred in 1994), Mathematics, and Marxism. It is therefore not surprising that Dirk suffered persecution during the McCarthyite era. He was accused of being a Soviet spy, a charge he vehemently denied. Invoking the First and Fifth Amendments of the U.S. Constitution, he refused to answer any of the 200 questions put forward to him during the HUAC hearing. He was suspended from teaching for five years (with full salary) by MIT in the 1950s. Struik was re-instated in 1956. He retired from MIT in 1960 as Professor Emeritus of Mathematics. 
Aside from purely academic work, Struik also helped found the Journal of Science and Society, a Marxian journal on the history, sociology and development of science. In 1950 Stuik published his Lectures on Classical Differential Geometry. Struik's other major works include such classics as A Concise History of Mathematics, Yankee Science in the Making, The Birth of the Communist Manifesto, and A Source Book in Mathematics, 1200-1800, all of which are considered standard textbooks or references. Struik died October 21, 2000, 21 days after celebrating his 106th birthday. *Wik 1905 Sir Nevill F. Mott (30 Sep 1905; 8 Aug 1996) English physicist who shared (with P.W. Anderson and J.H. Van Vleck of the U.S.) the 1977 Nobel Prize for Physics for his independent researches on the magnetic and electrical properties of amorphous semiconductors. Whereas the electric properties of crystals are described by the Band Theory - which compares the conductivity of metals, semiconductors, and insulators - a famous exception is provided by nickel oxide. According to band theory, nickel oxide ought to be a metallic conductor but in reality is an insulator. Mott refined the theory to include electron-electron interaction and explained so-called Mott transitions, by which some metals become insulators as the electron density decreases by separating the atoms from each other in some convenient way.*TIS 1913 Samuel Eilenberg (September 30, 1913 – January 30, 1998) was a Polish and American mathematician born in Warsaw, Russian Empire (now in Poland) and died in New York City, USA, where he had spent much of his career as a professor at Columbia University. He earned his Ph.D. from University of Warsaw in 1936. His thesis advisor was Karol Borsuk. His main interest was algebraic topology. He worked on the axiomatic treatment of homology theory with Norman Steenrod (whose names the Eilenberg–Steenrod axioms bear), and on homological algebra with Saunders Mac Lane. In the process, Eilenberg and Mac Lane created category theory. Eilenberg was a member of Bourbaki and with Henri Cartan, wrote the 1956 book Homological Algebra, which became a classic. Later in life he worked mainly in pure category theory, being one of the founders of the field. The Eilenberg swindle (or telescope) is a construction applying the telescoping cancellation idea to projective modules. Eilenberg also wrote an important book on automata theory. The X-machine, a form of automaton, was introduced by Eilenberg in 1974. *Wik 1916 Richard Kenneth Guy (born September 30, 1916, Nuneaton, Warwickshire - ) is a British mathematician, and Professor Emeritus in the Department of Mathematics at the University of Calgary. He is best known for co-authorship (with John Conway and Elwyn Berlekamp) of Winning Ways for your Mathematical Plays and authorship of Unsolved Problems in Number Theory, but he has also published over 100 papers and books covering combinatorial game theory, number theory and graph theory. He is said to have developed the partially tongue-in-cheek "Strong Law of Small Numbers," which says there are not enough small integers available for the many tasks assigned to them — thus explaining many coincidences and patterns found among numerous cultures. Additionally, around 1959, Guy discovered a unistable polyhedron having only 19 faces; no such construct with fewer faces has yet been found. Guy also discovered the glider in Conway's Game of Life. Guy is also a notable figure in the field of chess endgame studies. 
He composed around 200 studies, and was co-inventor of the Guy-Blandford-Roycroft code for classifying studies. He also served as the endgame study editor for the British Chess Magazine from 1948 to 1951. Guy wrote four papers with Paul Erdős, giving him an Erdős number of 1. He also solved one of Erdős problems. His son, Michael Guy, is also a computer scientist and mathematician. *Wik 1918 Leslie Fox (30 September 1918 – 1 August 1992) was a British mathematician noted for his contribution to numerical analysis. *Wik 1953 Lewis Fry Richardson , FRS (11 October 1881 - 30 September 1953) was an English mathematician, physicist, meteorologist, psychologist and pacifist who pioneered modern mathematical techniques of weather forecasting, and the application of similar techniques to studying the causes of wars and how to prevent them. He is also noted for his pioneering work on fractals and a method for solving a system of linear equations known as modified Richardson iteration.*Wik 1985 Dr. Charles Francis Richter (26 Apr 1900, 30 Sep 1985) was an American seismologist and inventor of the Richter Scale that measures earthquake intensity which he developed with his colleague, Beno Gutenberg, in the early 1930's. The scale assigns numerical ratings to the energy released by earthquakes. Richter used a seismograph (an instrument generally consisting of a constantly unwinding roll of paper, anchored to a fixed place, and a pendulum or magnet suspended with a marking device above the roll) to record actual earth motion during an earthquake. The scale takes into account the instrument's distance from the epicenter. Gutenberg suggested that the scale be logarithmic so, for example, a quake of magnitude 7 would be ten times stronger than a 6.*TIS *CHM=Computer History Museum *FFF=Kane, Famous First Facts *NSEC= NASA Solar Eclipse Calendar *SAU=St Andrews Univ. Math History *TIA = Today in Astronomy *TIS= Today in Science History *VFR = V Frederick Rickey, USMA *Wik = Wikipedia *WM = Women of Mathematics, Grinstein & Campbell
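The day-number note at the top of this post says 274 is a tribonacci number, each term being the sum of the preceding three. A short, hypothetical Python sketch to verify that:

```python
# Minimal sketch: generate tribonacci numbers 0, 0, 1, 1, 2, 4, 7, ...
# and check that 274 appears. Function name and limit are illustrative only.
def tribonacci(limit):
    """Yield tribonacci numbers not exceeding `limit`."""
    a, b, c = 0, 0, 1
    while a <= limit:
        yield a
        a, b, c = b, c, a + b + c

terms = list(tribonacci(300))
print(terms)          # [0, 0, 1, 1, 2, 4, 7, 13, 24, 44, 81, 149, 274]
print(274 in terms)   # True
```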
{"url":"https://pballew.blogspot.com/2012/09/on-this-day-in-math-september-30.html","timestamp":"2024-11-05T01:10:55Z","content_type":"application/xhtml+xml","content_length":"143292","record_id":"<urn:uuid:6dcb4dd1-e645-4803-a6d5-c59914342cda>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00432.warc.gz"}
How do you find the length of the median of an isosceles triangle?
In an isosceles triangle, the medians drawn from the vertices with equal angles are equal in length. Thus, in an isosceles triangle ABC, if AB = AC, the medians BE and CF originating from the vertices B and C respectively are equal in length.

What is the median of an isosceles right triangle?
In a triangle, a line that connects one corner (or vertex) to the middle point of the opposite side is called a median. A property of isosceles triangles, which is simple to prove using triangle congruence, is that in an isosceles triangle the median to the base is perpendicular to the base.

What is the shortest median of a right-angled triangle?
25 units. The hypotenuse is the longest side of a right-angled triangle. Given that two of the sides of a right triangle are 10 cm and 10.5 cm. The shortest median of a right-angled triangle is 25.

What is the formula of an isosceles triangle?
List of formulas to find the area of an isosceles triangle:
- Using base and height: A = ½ × b × h
- Using all three sides: A = ½ × b × √(a² − b²/4)
- Using the lengths of two sides and the angle between them: A = ½ × b × c × sin(α)

Can an isosceles triangle be a right triangle?
An isosceles right triangle is an isosceles triangle and a right triangle. This means that it has two congruent sides and one right angle. Therefore, the two congruent sides must be the legs.

Are the medians of an isosceles triangle equal?
The angles opposite the congruent sides of a triangle are themselves congruent, so the relevant triangles are congruent by the Side-Angle-Side congruence postulate. Thus two medians of an isosceles triangle are congruent, and so the medians of an isosceles triangle do in fact form an isosceles triangle.

What is a median of a triangle?
In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length.

What is the median of an isosceles triangle?
A median is a line joining a vertex of the triangle to the midpoint of the opposite side. In an isosceles triangle the median drawn from the apex (the vertex between the two equal sides) coincides with the altitude. The intersection of all three medians is called the centroid. With two equal sides, the Euler line coincides with the axis of symmetry.

How many medians can there be in a triangle?
a. In any triangle there can only be three medians.
b. In an equilateral triangle all the medians are of the same length.
c. In an isosceles triangle, the two medians drawn from the vertices of the equal angles are equal in length.

What are the different ways to find the length of the median?
The length of a median can be computed from the triangle's side lengths (see the sketch at the end of this page). The properties of the median are as follows: the median bisects the vertex angle in an isosceles or equilateral triangle where the two adjacent sides are the same, and the three medians of a triangle intersect at a point called the centroid.

How do you find the length of an isosceles right triangle?
The most important formula associated with any right triangle is the Pythagorean theorem. According to this theorem, the square of the hypotenuse is equal to the sum of the squares of the other two sides of the right triangle. Now, in an isosceles right triangle, the other two sides are congruent.
Therefore, they are of the same length "l", and by the Pythagorean theorem the hypotenuse of an isosceles right triangle has length l√2.
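To make these statements concrete, here is a small, hypothetical Python check using the standard median-length formula m_a = ½√(2b² + 2c² − a²); the side lengths 5, 5, 6 are made-up illustrative numbers, not taken from the page above.

```python
from math import sqrt, isclose

def median_to_side_a(a, b, c):
    """Length of the median drawn to side a in a triangle with sides a, b, c."""
    return 0.5 * sqrt(2 * b**2 + 2 * c**2 - a**2)

# Isosceles triangle: equal sides of length 5, base of length 6.
equal, base = 5.0, 6.0

m_base   = median_to_side_a(base, equal, equal)   # median to the base
m_side_1 = median_to_side_a(equal, base, equal)   # median to one equal side
m_side_2 = median_to_side_a(equal, equal, base)   # median to the other equal side

print(m_base)                                          # 4.0
print(isclose(m_base, sqrt(equal**2 - base**2 / 4)))   # True: matches sqrt(a^2 - b^2/4)
print(isclose(m_side_1, m_side_2))                     # True: medians to equal sides are equal
```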
{"url":"https://sage-tips.com/recommendations/how-do-you-find-the-length-of-the-median-of-an-isosceles-triangle/","timestamp":"2024-11-10T16:31:00Z","content_type":"text/html","content_length":"118770","record_id":"<urn:uuid:99700892-c183-4a16-9b82-2f2c6cf47923>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00623.warc.gz"}
Integrating Trigonometric functions (part 2) - The Culture SG
As promised, we will look at
This requires double angle formula:
Here we introduce trigo identity:
Here we have a problem! But recall we did something really similar in part 1, and notice that
Here we can apply double angle a few times to break it down before integrating.
After seeing both part 1 and part 2, you should notice some intuitive method. Should n be even, we introduce the double angle formula to simplify things. Should n be odd, we introduce the trigonometry identities and integrate. We must apply
Tell me what you think in the comments section!
pingbacks / trackbacks
• […] How to derive the sum to product formula 2. Integrating Trigonometric Functions (1) 3. Integrating Trigonometric Functions (2) 4. Integrating Trigonometric Functions (3) 5. Integrating Trigonometric Functions (4) 6. […]
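The equations in this post were rendered as images and are missing above. As a hedged stand-in for the even/odd strategy described (the post may have worked with a different power, or with sine instead of cosine), here is one worked pair:

```latex
% Even power: use the double angle formula  \cos^2 x = \tfrac{1}{2}(1+\cos 2x).
\[
  \int \cos^2 x \,dx = \int \tfrac{1}{2}\bigl(1+\cos 2x\bigr)\,dx
                     = \tfrac{x}{2} + \tfrac{\sin 2x}{4} + C .
\]
% Odd power: peel off one factor and use the identity  \cos^2 x = 1-\sin^2 x .
\[
  \int \cos^3 x \,dx = \int \bigl(1-\sin^2 x\bigr)\cos x \,dx
                     = \sin x - \tfrac{\sin^3 x}{3} + C .
\]
```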
{"url":"https://theculture.sg/2015/07/integrating-trigonometric-functions-part-2/","timestamp":"2024-11-02T18:46:42Z","content_type":"text/html","content_length":"107455","record_id":"<urn:uuid:6ff63041-daac-46f5-990b-cac8e8f20625>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00620.warc.gz"}
You have 12 coins, one of which is counterfeit and weighs less than the legal coins. How can you use a simple balance 3 times to determine which coin is counterfeit? Please help!!!
One Response to "money…miney…money!!!!?"
1. Put 6 on one side of the balance and 6 on the other. Whichever side is lighter, split those 6 into 3 and 3 and put 3 on either side; then, from the lighter side of that weighing, weigh two of the 3 you have left. If they are the same, it's the other one; if they are different weights, then the lighter one is the counterfeit!
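To check that the reply above really isolates the light coin in exactly three weighings, here is a small Python simulation of that strategy; the coin weights and trial setup are made up purely for the test.

```python
import random

def find_light_coin(weights):
    """weights: list of 12 weights, exactly one strictly lighter. Returns its index."""
    used = 0

    def weigh(left, right):
        nonlocal used
        used += 1
        wl = sum(weights[i] for i in left)
        wr = sum(weights[i] for i in right)
        return -1 if wl < wr else (1 if wl > wr else 0)

    group = list(range(12))
    # 1st weighing: 6 vs 6 -- the lighter pan holds the fake.
    group = group[:6] if weigh(group[:6], group[6:]) == -1 else group[6:]
    # 2nd weighing: 3 vs 3 from that half.
    group = group[:3] if weigh(group[:3], group[3:]) == -1 else group[3:]
    # 3rd weighing: compare two of the remaining three coins.
    a, b, c = group
    r = weigh([a], [b])
    fake = a if r == -1 else (b if r == 1 else c)
    assert used == 3
    return fake

# Self-test with made-up weights: every placement of the light coin is found.
for trial in range(1000):
    w = [10.0] * 12
    light = random.randrange(12)
    w[light] = 9.0
    assert find_light_coin(w) == light
print("all trials passed")
```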
{"url":"https://canadian-coins.org/money-miney-money/","timestamp":"2024-11-11T10:19:09Z","content_type":"application/xhtml+xml","content_length":"40290","record_id":"<urn:uuid:5e32b161-2198-404b-be99-fd4602a740a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00064.warc.gz"}
Scaling Nonparametric Bayesian Inference via Subsample-Annealing
Proceedings of the Seventeenth International Conference on Artificial Intelligence and Statistics, PMLR 33:696-705, 2014.
We describe an adaptation of the simulated annealing algorithm to nonparametric clustering and related probabilistic models. This new algorithm learns nonparametric latent structure over a growing and constantly churning subsample of training data, where the portion of data subsampled can be interpreted as the inverse temperature β(t) in an annealing schedule. Gibbs sampling at high temperature (i.e., with a very small subsample) can more quickly explore sketches of the final latent state by (a) making longer jumps around latent space (as in block Gibbs) and (b) lowering energy barriers (as in simulated annealing). We prove that subsample annealing speeds up mixing time from N² to N in a simple clustering model and from exp(N) to N in another class of models, where N is the data size. Empirically, subsample-annealing outperforms naive Gibbs sampling in accuracy per unit of wallclock time, and can scale to larger datasets and deeper hierarchical models. We demonstrate improved inference on million-row subsamples of US Census data and network log data and a 307-row hospital rating dataset, using a Pitman-Yor generalization of the Cross Categorization model.
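The abstract describes growing a constantly churning subsample so that the subsampled fraction plays the role of the inverse temperature β(t). The paper's own implementation is not reproduced here; the following Python sketch is only a schematic reading of that idea, and `gibbs_update_row`, the churn rate, and the linear schedule are all assumptions rather than details taken from the paper.

```python
import random

def subsample_annealing(data, n_sweeps, gibbs_update_row, init_state, churn=0.1):
    """Schematic subsample-annealing loop (illustrative sketch, not the paper's code).

    beta(t) = t / n_sweeps is treated as an inverse temperature: it fixes the
    fraction of rows included in the growing, churning subsample that the
    Gibbs sweep sees at step t.
    """
    state = init_state
    order = list(range(len(data)))
    random.shuffle(order)
    included, excluded = [], order

    for t in range(1, n_sweeps + 1):
        beta = t / n_sweeps
        target = int(beta * len(data))
        while len(included) < target and excluded:
            included.append(excluded.pop())            # grow the subsample
        if included and excluded and random.random() < churn:
            i = random.randrange(len(included))        # churn: swap one row in/out
            k = random.randrange(len(excluded))
            included[i], excluded[k] = excluded[k], included[i]
        for row in included:                           # one Gibbs sweep on the subsample
            state = gibbs_update_row(state, data[row])
    return state
```

Here `gibbs_update_row(state, row)` stands for whatever single-row conditional update the model defines, for example resampling one row's cluster assignment in a Dirichlet-process mixture.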
{"url":"http://proceedings.mlr.press/v33/obermeyer14.html","timestamp":"2024-11-08T16:08:35Z","content_type":"text/html","content_length":"18493","record_id":"<urn:uuid:44569300-39f4-411c-acbb-29c10fe44d3c>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00832.warc.gz"}
Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers PDF Download Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers PDF Download: Students of Standard 11 can now download Plus One Physics Chapter 3 Motion in a Straight Line question and answers pdf from the links provided below in this article. Plus One Physics Chapter 3 Motion in a Straight Line Question and Answer pdf will help the students prepare thoroughly for the upcoming Plus One Physics Chapter 3 Motion in a Straight Line exams. Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers Plus One Physics Chapter 3 Motion in a Straight Line question and answers consists of questions asked in the previous exams along with the solutions for each question. To help them get a grasp of chapters, frequent practice is vital. Practising these questions and answers regularly will help the reading and writing skills of students. Moreover, they will get an idea on how to answer the questions during examinations. So, let them solve Plus One Physics Chapter 3 Motion in a Straight Line questions and answers to help them secure good marks in class tests and exams. Board Kerala Board Study Materials Question and Answers For Year 2021 Class 11 Subject Hindi Chapters Physics Chapter 3 Motion in a Straight Line Format PDF Provider Spandanam Blog How to check Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers? 1. Visit our website - https://spandanamblog.com 2. Click on the 'Plus One Question and Answers'. 3. Look for your 'Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers'. 4. Now download or read the 'Class 11 Physics Chapter 3 Motion in a Straight Line Question and Answers'. Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers PDF Download We have provided below the question and answers of Plus One Physics Chapter 3 Motion in a Straight Line study material which can be downloaded by you for free. These Plus One Physics Chapter 3 Motion in a Straight Line Question and answers will contain important questions and answers and have been designed based on the latest Plus One Physics Chapter 3 Motion in a Straight Line, books and syllabus. You can click on the links below to download the Plus One Physics Chapter 3 Motion in a Straight Line Question and Answers PDF. Question 1. Which of the following curves does not represent motion in one dimension? (b) In one-dimensional motion, the body can have at a time one value of velocity but not two values of velocities. Question 2. Free fall of an object (in vacuum) is a case of motion with (a) Uniform velocity (b) Uniform acceleration (c) Variable acceleration (d) Uniform speed (b) Uniform acceleration: Free fall of an object (in vacuum) is a case of motion with uniform acceleration. Question 3. The area under velocity-time graph fora particle in a given interval of time represents (a) velocity (b) acceleration (c) work done (d) displacement (d) displacement: Area under velocity-time graph represents displacement of a particle in a given interval of time. Question 4. The velocity-time graph of a body moving in straight line is shown in the figure. The displacement and distance travelled by the body in 6s are respectively (a) 8, 16m (b) 16m, 8m (c) 16m, 16m, (d) 8m, 8m (a) 8, 16m Displacement is equal to area under the velocity-time graph with proper sign. ∴ Displacement = 4 × 2 – 2 × 2 + 2 × 2 = 8m Distance is equal to total area under the speed time graph. 
∴ Distance = 4 × 2 + 2 × 2 + 2 × 2 = 16 m.

Question 5. A car travels half the distance with a constant velocity of 40 kmph and the remaining half with a constant velocity of 60 kmph. The average velocity of the car in kmph is
(a) 40 (b) 45 (c) 48 (d) 50
(c) 48
Average velocity = total distance/total time, so ν[av] = \(\frac{s}{\frac{s}{80}+\frac{s}{120}}\) = 48 kmph.

Question 6. Two objects A and B travel from P to Q through two different paths as shown in the figure. If both A and B take the same time interval to travel from P to Q, then which of the following statements are correct?
(a) A and B have same speed.
(b) A and B have same velocity.
(c) A and B have same average velocity.
(d) The speed of A is greater than that of B.
(e) The speed of B is greater than that of A.
(d) The speed of A is greater than that of B.

Question 7. The acceleration of a moving object is equal to the
(a) gradient of a displacement-time graph
(b) gradient of a velocity-time graph
(c) area below a speed-time graph
(d) area below a displacement-time graph
(e) area below a velocity-time graph
(b) Gradient of a velocity-time graph.

Question 8. A ball is thrown vertically upwards and comes back. Which of the following graphs represents the velocity-time graph of the ball during its flight?

Question 9. The magnitude of average velocity is equal to average speed. In which case is this condition satisfied?
When a particle is moving with constant velocity, the magnitude of its average velocity is equal to its average speed.

Question 10. Can a body be said to be at rest as well as in motion at the same time?
Yes, rest and motion are relative terms. A body at rest with respect to one body may be in motion with respect to another body.

Question 11. What conclusion can you draw if the average velocity is equal to the instantaneous velocity?
The particle is moving with constant velocity.

Question 12. Two cars are moving in such a way that their relative velocity is zero. Which of the following graphs represents this situation?
The graph showing two parallel straight lines on the position-time plot: equal slopes mean equal velocities and hence zero relative velocity.

Question 13. The speed-time graph is shown in the figure. Is it possible?
No, speed cannot be negative.

Question 14. Why can the speed of an object never be negative?
Speed is the distance covered per unit time. Since distance cannot be negative, speed cannot be negative.

Question 15. Is it possible for the velocity of an object to be in a direction other than the direction of its acceleration? If yes, give an example.
Yes. A body moving with decreasing velocity has its acceleration directed opposite to its velocity.

Question 16. Is it possible to have the rate of change of velocity constant while the velocity itself changes both in magnitude and direction? If yes, give an example.
Yes. Projectile motion.

Question 17. If the acceleration of a particle is constant in magnitude but not in direction, what type of path does the body follow?
A circular path.

Question 18. Two stones of different sizes are dropped simultaneously from the top of a building. Which stone will reach the ground earlier? Why?
Both reach the ground simultaneously, since the acceleration is the same for both stones.

Question 19. A piece of paper and an iron piece are dropped simultaneously from the same point in vacuum. Which one will reach the ground earlier?
Both reach the ground simultaneously.

Question 20. Is it possible that your cycle has a southward velocity but a northward acceleration? If yes, give an example.
Yes; when brakes are applied to a moving cycle, the directions of velocity and acceleration become opposite.
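A quick numerical check of Question 5 above (half the distance at 40 kmph, half at 60 kmph) — a minimal, hypothetical Python sketch:

```python
def average_speed_half_distance(v1, v2):
    """Average speed when each half of the distance is covered at v1 and v2."""
    s = 1.0                      # any total distance cancels out
    t = (s / 2) / v1 + (s / 2) / v2
    return s / t                 # equivalent to 2*v1*v2/(v1 + v2)

print(average_speed_half_distance(40, 60))   # 48.0 kmph, option (c)
```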
Plus One Physics Motion in a Straight Line Two Mark Questions and Answers

Question 1. An ant is moving through a graph paper along the x-axis. A boy observes that the ant covers 1 mm in every second.
1. What type of motion is this?
2. When the boy is in the school bus he observes the speedometer of the bus. Which speed is shown by the speedometer?
1. Uniform motion (uniform velocity).
2. Instantaneous speed (the ratio of the distance covered to a very small interval of time).

Question 2. Match the following
a. (c) b. (d) c. (b) d. (a)

Question 3. Some examples of motion are given below. State in each case if the motion is one, two or three dimensional.
1. A kite flying on a windy day.
2. A speeding car on a long straight highway.
3. A carrom coin rebounding from the side of the board.
4. A planet revolving around its star.
1. 3-dimensional motion
2. 1-dimensional motion
3. 2-dimensional motion
4. 2-dimensional motion

Plus One Physics Motion in a Straight Line Three Mark Questions and Answers

Question 1. Two bodies start moving in the same straight line at the same instant of time from the same origin. The first body moves with a constant velocity of 40 m/s and the second starts from rest with a constant acceleration of 4 m/s^2.
1. What is uniform speed?
2. Find the time that elapses before the second catches the first body.
1. A body is said to have uniform speed if it covers equal distances in equal intervals of time.
2. Distance travelled by the first body in a time t: S[1] = vt = 40 × t.
Distance travelled by the second body in a time t: S[2] = ut + \(\frac{1}{2}\) at^2 = \(\frac{1}{2}\) × 4 × t^2.
When the two bodies meet, S[1] = S[2], so 40 × t = \(\frac{1}{2}\) × 4 × t^2, giving t = 20 s.

Question 2. The velocity-time graph of a moving object is shown below.
1. What is the acceleration of the object?
2. Draw the displacement-time graph for the motion shown in the graph.
1. Acceleration = 0

Question 3. Two straight lines drawn on the same displacement-time graph make angles of 30° and 60° with the time axis respectively, as in the figure.
1. Which line represents the greater velocity?
2. What is the ratio of the velocity of line A to the velocity of line B?
1. B
2. The slope of a displacement-time graph gives velocity, so \(\frac{v_{A}}{v_{B}}\) = \(\frac{\tan 30^{\circ}}{\tan 60^{\circ}}\) = 1 : 3.

Question 4. A particle starts from rest and its acceleration plotted against time t is shown below.
1. This body is at □ constant acceleration □ variable acceleration □ constant velocity □ rest
2. Plot the corresponding velocity (V) against time (t)
3. Plot the corresponding displacement (S) against time (t)
1. Constant acceleration

Question 5. Displacement is a vector quantity while distance is a scalar quantity.
1. Distinguish between scalar and vector quantities.
2. An athlete runs along a circular track of radius 50 m. Find the distance travelled and the displacement of the athlete when he covers ¾ of the circle.
3. What is the distance travelled by a body in a time t having an initial velocity u and moving with uniform acceleration 'a'?
1. A physical quantity having both magnitude and direction is called a vector. A physical quantity having only magnitude is called a scalar.
2. Displacement = AD, the straight line from the starting point to the finishing point (of magnitude r√2); distance along the track ABCD = AB + BC + CD = \(\frac{3}{2}\) πr.
3. S = ut + \(\frac{1}{2}\) at^2
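For Question 1 of this section (the body moving at 40 m/s overtaken by the one accelerating at 4 m/s² from rest), a small, hypothetical Python check that t = 20 s is indeed where the two positions coincide:

```python
def meeting_time(v_const, accel):
    """Time (t > 0) at which a body starting from rest with acceleration `accel`
    catches a body moving at constant speed `v_const` from the same start point.
    Solves v_const * t = 0.5 * accel * t**2."""
    return 2 * v_const / accel

t = meeting_time(40, 4)
print(t)                                   # 20.0 s
print(40 * t, 0.5 * 4 * t**2)              # both 800.0 m -- same position
```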
Question 6. "The aerial distance between two towers is 4 km, but the speedometer of a car shows 5.6 km when it travels from one tower to the other."
1. By reading this statement, explain the concepts of distance and displacement.
2. What is the numerical ratio of the displacement of an object to the distance? Explain.
3. A particle is moving along a circular track of radius 'R'. What are the distance travelled and the displacement of the particle in half a revolution?
1. Distance is the length of the path covered by the object; it is a scalar quantity. Displacement is the length of the straight line between the initial point and the final point; it is a vector quantity.
2. \(\frac{\text { displacement }}{\text { distance }} \leq 1\). For a straight-line path the displacement is equal to the distance travelled; for a curved path the displacement is less than the distance travelled.
3. Distance = πR; displacement = R + R = 2R.

Plus One Physics Motion in a Straight Line Four Mark Questions and Answers

Question 1. A stone is thrown upwards from the ground with a velocity 'u'.
1. What is the maximum height attained by the stone?
2. Check the correctness of the equation obtained in (a) using the method of dimensional analysis.
3. Draw the position-time graph of the stone during its return journey. (g = 10 m/s^2)
1. v^2 = u^2 + 2as; at the highest point v = 0 and a = −g, so 0 = u^2 − 2gH, giving H = u^2/2g.
2. H = \(\frac{u^{2}}{2 g}\). Writing the equation in terms of dimensions, [H] = L and \(\left[\frac{u^{2}}{2 g}\right]\) = \(\frac{(LT^{-1})^{2}}{LT^{-2}}\) = L, so both sides have the dimension of length. This means that H = \(\frac{u^{2}}{2 g}\) is dimensionally correct.

Question 2. Gopal dropped an apple from the top of his flat at a height of 10 m. He told his sister Seetha on the ground below that it will reach the ground 2 seconds after he drops it.
1. Can she catch it after 2 seconds?
2. Derive a suitable relation for the time of fall.
3. Draw the velocity-time graph of the above body (assume the body rebounds from the floor). (g = 10 m/s^2)
1. No; the apple reaches the ground in about 1.4 s, which is less than 2 s.
2. S = ut + 1/2 at^2; for a ball dropped from a height 'h', h = 0 + 1/2 gt^2, so t = \(\sqrt{\frac{2 h}{g}}\).

Question 3. A car of mass 1000 kg starts from rest at t = 0 and undergoes the acceleration shown in the figure.
1. Draw the corresponding velocity-time graph.
2. What is the retarding force acting on the car?
3. What is the total distance travelled by the car during t = 0 to t = 4 s?
2. Retarding acceleration a = −3 m/s^2, so the retarding force F = ma = 1000 × 3 = 3000 N.
3. The area under the velocity-time graph gives the distance. Area = \(\frac{1}{2}\)bh = \(\frac{1}{2}\) × 4 × 6, so the distance = 12 m.

Question 4. A tow rope used to pull a car of mass 700 kg will break if the tension exceeds 1500 N.
1. Calculate the maximum acceleration with which the car can be pulled along a level road.
2. Calculate the minimum time required to bring the car to a workstation 500 m away from the breaking point.
1. T = ma, so 1500 = 700 × a and a = \(\frac{1500}{700}\) = 2.14 m/s^2.
2. S = ut + 1/2 at^2, so 500 = 0 + 1/2 × 2.14 × t^2 and t = \(\sqrt{\frac{2 \times 500}{2.14}}\) = 21.61 s.

Question 5.
1. The figure shows the position-time graph of the one-dimensional motion of a particle. Is it correct to say from the graph that the particle moves in a straight line for t ≤ 0 and on a parabolic path for t > 0? Justify your answer.
2. Can a body have an acceleration without velocity? Justify your answer with a physical situation.
3. The table given below shows the velocity of a car at different times.
a. Draw the acceleration-time graph.
b. Find the distance travelled by the car in 6 s.
1. No. Initially the body remains at rest and then the body moves with constant acceleration in a straight line. (An x-t graph does not show the trajectory of the particle.)
2. Yes; consider the oscillation of a simple pendulum. At the extreme position the velocity becomes zero while the acceleration is non-zero.
3. a. Acceleration a = \(\frac{16-11}{1-0}\) = 5 m/s^2. This value of the acceleration is constant throughout the motion, hence the acceleration-time graph is a straight line parallel to the time axis.
b. S = ut + \(\frac{1}{2}\) at^2 = 11 × 6 + \(\frac{1}{2}\) × 5 × 6^2 = 156 m.
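As a quick numerical cross-check of Question 2 above (the apple dropped from 10 m), a minimal, hypothetical Python sketch of t = √(2h/g):

```python
from math import sqrt

def fall_time(height, g=9.8):
    """Time to fall from rest through `height`, from h = (1/2) g t**2."""
    return sqrt(2 * height / g)

print(fall_time(10))        # about 1.43 s  (about 1.41 s with g = 10 m/s**2)
```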
Question 6. The relative velocity of a body A with respect to a body B is the time rate at which body A changes its position with respect to body B.
1. If V[A] and V[B] are the velocities of A and B moving in opposite directions, what is the relative velocity of A with respect to B?
2. Two trains on the same straight rails are moving with constant velocities of 60 km/h and 30 km/h towards each other. If at time t = 0 the distance between them is 90 km, find the time when they collide.
3. The velocity-time graphs of two bodies A and B make angles of 30° and 60° with the time axis. What is the ratio of their accelerations?
1. V[AB] = V[A] + V[B].
2. Relative velocity = 60 + 30 = 90 km/h. Hence t = \(\frac{\text { displacement }}{\text { relative velocity }}\) = \(\frac{90}{90}\) = 1 h.
3. The slope of a velocity-time graph gives the acceleration. Hence \(\frac{a_{A}}{a_{B}}\) = \(\frac{\tan 30^{\circ}}{\tan 60^{\circ}}\) = 1 : 3.

Plus One Physics Motion in a Straight Line Five Mark Questions and Answers

Question 1. A balloon is ascending at the rate of 14 ms^-1 and is at a height of 98 m above the ground. A stone is dropped from it.
1. State whether the motion of the balloon is accelerated or retarded.
2. After how much time does the stone reach the ground?
3. Determine the velocity with which the stone strikes the ground.
1. The motion of the balloon is uniform; it is neither accelerated nor retarded.
2. Taking upward as positive: u = 14 m/s, a = −9.8 m/s^2, S = −98 m.
S = ut + \(\frac{1}{2}\) at^2
−98 = 14t − \(\frac{1}{2}\) × 9.8 t^2
4.9 t^2 − 14 t − 98 = 0
Solving this we get t = 6.12 s.
3. ν^2 = u^2 + 2aS = (14)^2 + 2 × (−9.8) × (−98) = 196 + 1920.8 = 2116.8
ν = \(\sqrt{2116.8}\) ≈ 46 m/s.

Question 2. A particle is moving along the x-axis with uniform positive acceleration.
1. Draw the position-time graph for its motion.
2. Obtain the expression for the displacement by drawing a velocity-time graph.
3. A ball is thrown vertically upwards with a velocity of 20 ms^-1 from the top of a tower of height 25 m from the ground. How long does it remain in the air? (g = 10 ms^-2)
2. Consider a body moving with an acceleration 'a'. Let 'u' be the initial velocity at t = 0 and 'v' the final velocity at time t. The area under the velocity-time graph gives the displacement: S = area of the rectangle + area of the triangle = ut + \(\frac{1}{2}\)(v − u)t, and since v − u = at, S = ut + \(\frac{1}{2}\) at^2.
3. Taking upward as positive, u = 20 m/s, a = −10 m/s^2, S = −25 m, so −25 = 20t − 5t^2. This is a quadratic equation; t can be found using the quadratic formula: t^2 − 4t − 5 = 0, so (t − 5)(t + 1) = 0 and t = 5 s.

Question 3.
1. State the difference between speed and velocity. Can a body move with uniform speed but with variable velocity? Explain with the help of an example.
2. Show that a body thrown vertically upwards returns with the same magnitude of velocity.
1. Speed is a scalar quantity but velocity is a vector quantity. If a body moves along the circumference of a circle with uniform speed, its velocity changes continuously with time.
2. Consider a body projected upward from a point A with a velocity 'u'. When the body returns to B (the same level), the displacement is zero. Hence the time taken to reach B can be found:
S = ut + 1/2 at^2
0 = ut + 1/2 × (−10) t^2
ut = 5t^2, so t = \(\frac{u}{5}\) ____(1)
The velocity at B can be found using the formula V[B] = u + at ____(2)
Substituting eq. (1) in eq. (2): V[B] = u + (−10)\(\frac{u}{5}\) = u − 2u = −u.
∴ V[A] = −V[B]: the body returns with the same magnitude of velocity.

Plus One Physics Motion in a Straight Line NCERT Questions and Answers

Question 1. In which of the following examples of motion can the body be considered approximately a point object?
(a) a railway carriage moving without jerks between two stations.
(b) a monkey sitting on top of a man cycling smoothly on a circular track.
(c) a spinning cricket ball that turns sharply on hitting the ground.
(d) a tumbling beaker that has slipped off the edge of a table.
(a) and (b)

Question 2.
The position-time (x-t) graphs for two children A and B returning from their school O to their homes P and Q respectively are shown in the following figure. Choose the correct entries in the brackets as follows:
1. (A/B) lives closer to the school than (B/A)
2. (A/B) starts from the school earlier than (B/A)
3. (A/B) walks faster than (B/A)
4. A and B reach home at the (same/different) time
5. (A/B) overtakes (B/A) on the road (once/twice).
1. It is clear from the graph that OQ > OP. So, A lives closer to the school than B.
2. The position-time graph of A starts from the origin (t = 0) while the position-time graph of B starts from C, which indicates that B started later than A after a time interval OC. So, A started earlier than B.
3. The speed is represented by the steepness (or slope) of the position-time graph. Since the position-time graph of B is steeper than the position-time graph of A, we conclude that B is faster than A.
4. Corresponding to both P and Q, the time interval is the same, i.e., OD. This indicates that both A and B reach their homes at the same time.
5. The position-time graphs intersect at the point K. This indicates that B crosses A. Since there is only one point of intersection, the two children cross each other only once.
Question 3. A car moving along a straight highway with a speed of 126 kmh^-1 is brought to a stop within a distance of 200 m. What is the retardation of the car (assumed uniform), and how long does it take for the car to stop?
Initial velocity, u = 126 kmh^-1 = 126 × \(\frac{5}{18}\) ms^-1 = 35 ms^-1
Final velocity, ν = 0; Distance, S = 200 m
Using ν^2 – u^2 = 2aS, 0^2 – 35 × 35 = 2a × 200
a = \(-\frac{35 \times 35}{400}\) ms^-2 = -3.06 ms^-2
So, the retardation of the car is 3.06 ms^-2.
Using ν = u + at, 0 = 35 - 3.06 × t
3.06t = 35
t = \(\frac{35}{3.06}\) s = 11.4 s.
Question 4. On a two-lane road, car A is travelling with a speed of 36 kmh^-1. Two cars B and C approach car A in opposite directions with a speed of 54 kmh^-1 each. At a certain instant, when the distance AB is equal to AC, both being 1 km, B decides to overtake A before C does. What minimum acceleration of car B is required to avoid an accident?
ν[A] = 36 kmh^-1 = 36 × \(\frac{5}{18}\) ms^-1 = 10 ms^-1
ν[B] = ν[C] = 54 kmh^-1 = 54 × \(\frac{5}{18}\) ms^-1 = 15 ms^-1
Relative velocity of B w.r.t. A, ν[BA] = 5 ms^-1
Relative velocity of C w.r.t. A, ν[CA] = 25 ms^-1
Time taken by C to cover distance AC = \(\frac{1000 \text{ m}}{25 \text{ ms}^{-1}}\) = 40 s
Now, for B: 1000 = 5 × 40 + \(\frac{1}{2}\)a × 40 × 40
On simplification, a = 1 ms^-2.
Question 5. Read each statement below carefully and state with reasons and examples if it is true or false. A particle in one-dimensional motion
1. with zero speed at an instant may have non-zero acceleration at that instant,
2. with zero speed may have non-zero velocity,
3. with constant speed must have zero acceleration,
4. with a positive value of acceleration must be speeding up.
1. True 2. False 3. True 4. False
For (1), consider a ball thrown up. At the highest point, the speed is zero but the acceleration is non-zero. For (2), if a particle has non-zero velocity, it must have speed. For (3), if the particle rebounds instantly with the same speed, it implies infinite acceleration, which is physically impossible. For (4), this is true only when the chosen positive direction is along the direction of motion.
Question 6. A man walks on a straight road from his home to a market 2.5 km away with a speed of 5 kmh^-1. 
Finding the market closed, he instantly turns and walks back home with a speed of 7.5 kmh^-1. What is the magnitude of the average velocity, and the average speed of the man over the following intervals of time:
1. 0 to 30 min,
2. 0 to 50 min,
3. 0 to 40 min?
Average speed over the interval of time from 0 to 30 min = \(\frac{2.5 \mathrm{km}}{30 \mathrm{min}}=\frac{2.5 \mathrm{km}}{\frac{1}{2} \mathrm{h}}\) = 5 kmh^-1
The magnitude of the average velocity over the interval of time from 0 to 30 min is also 5 kmh^-1. This is because the "distance travelled" and the "magnitude of displacement" over the interval of time from 0 to 30 min are equal.
Distance covered from 30 to 50 minutes = 7.5 kmh^-1 × \(\frac{20}{60}\) h = 2.5 km
Total distance covered from 0 to 50 minutes = 2.5 km + 2.5 km = 5 km
Total time = 50 min = \(\frac{50}{60}\) h = \(\frac{5}{6}\) h
Average speed over the interval of time from 0 to 50 min = \(\frac{5 \mathrm{km}}{5 / 6 \mathrm{h}}\) = 6 kmh^-1
The displacement over the interval of time from 0 to 50 min is zero, so the magnitude of the average velocity is zero.
Distance covered from 30 to 40 min = 7.5 kmh^-1 × \(\frac{1}{6}\) h = 1.25 km
Total distance covered from 0 to 40 minutes = 2.5 km + 1.25 km = 3.75 km
Average speed over the interval of time from 0 to 40 min = \(\frac{3.75 \mathrm{km}}{\frac{40}{60} \mathrm{h}}\) = 5.625 kmh^-1
The "magnitude of displacement" is (2.5 - 1.25) km, i.e., 1.25 km. Time interval = \(\frac{2}{3}\) h
The "magnitude of average velocity" = \(\frac{1.25 \mathrm{km}}{2 / 3 \mathrm{h}}\) = 1.875 kmh^-1.
Question 7. Look at the graphs (a) to (d) in the following figure carefully and state, with reasons, which of these cannot possibly represent one-dimensional motion of a particle.
None of the four graphs can represent one-dimensional motion of the particle. In fact, all four graphs are impossible.
1. A particle cannot have two different positions at the same time.
2. A particle cannot have velocity in opposite directions at the same time.
3. Speed is always positive (non-negative).
4. The total path length of a particle can never decrease with time.
Note: The arrows on the graphs are meaningless.
Question 8. The x-t plot of one-dimensional motion of a particle is shown. Is it correct to say from the graph that the particle moves in a straight line for t < 0 and on a parabolic path for t > 0? If not, suggest a suitable physical context for this graph.
No, this is wrong; an x-t plot does not show the trajectory of a particle. Context: a body is dropped from a tower (x = 0) at t = 0.
Question 9. The velocity-time graph of a particle in one-dimensional motion is shown below. Which of the following formulae are correct for describing the motion of the particle over the time interval from t[1] to t[2]?
(a) x(t[2]) = x(t[1]) + ν(t[1])(t[2] – t[1]) + \(\frac{1}{2}\) a(t[2] – t[1])^2
(b) ν(t[2]) = ν(t[1]) + a(t[2] – t[1])
(c) ν[average] = [x(t[2]) – x(t[1])]/(t[2] – t[1])
(d) a[average] = [ν(t[2]) – ν(t[1])]/(t[2] – t[1])
(e) x(t[2]) = x(t[1]) + ν[av](t[2] – t[1]) + \(\frac{1}{2}\) a[av](t[2] – t[1])^2
(f) x(t[2]) – x(t[1]) = Area under the ν – t curve bounded by the t-axis and the dotted lines.
(c), (d), (f)
Explanation: It is clear from the shape of the ν – t graph that the acceleration of the particle is not uniform between the time intervals t[1] and t[2]. [Note that the given ν – t graph is not straight.] The equations (a), (b) and (e) represent uniform acceleration.
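As a quick numerical cross-check of the arithmetic in the kinematics answers above (this snippet is not part of the original answer key), the quadratic time-of-flight equations for the balloon problem (4.9t^2 - 14t - 98 = 0) and the tower problem (5t^2 - 20t - 25 = 0) can be solved directly. The Python sketch below is illustrative only; the function name is made up.

import math

def time_to_ground(u, h, g):
    # Taking upward as positive: -h = u*t - 0.5*g*t^2,
    # which rearranges to 0.5*g*t^2 - u*t - h = 0; keep the positive root.
    a, b, c = 0.5 * g, -u, -h
    disc = b * b - 4 * a * c
    return (-b + math.sqrt(disc)) / (2 * a)

print(time_to_ground(u=14, h=98, g=9.8))   # balloon problem: about 6.12 s
print(time_to_ground(u=20, h=25, g=10.0))  # tower problem: 5.0 s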
{"url":"https://www.spandanamblog.com/2021/10/plus-one-physics-chapter-3-motion-in-a-straight-line-question-and-answers.html","timestamp":"2024-11-12T00:43:04Z","content_type":"application/xhtml+xml","content_length":"247315","record_id":"<urn:uuid:d61717c1-516f-4a7a-a595-8bc2ba60284b>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00184.warc.gz"}
Relativity: The Special and General Theory
• Title: Relativity: The Special and General Theory
• Author(s): Albert Einstein
• Publisher: Gutenberg.org
• License(s): The Project Gutenberg License
• eBook: HTML, PDF (56 pages), ePub, Kindle, etc.
• Language: English
• ISBN-10: 048641714X
• ISBN-13: 978-0486417141
Book Description
From the age of Galileo until the early years of the 20th century, scientists grappled with seemingly insurmountable paradoxes inherent in the theories of classical physics. With the publication of Albert Einstein's "special" and "general" theories of relativity, however, traditional approaches to solving the riddles of space and time crumbled. In their place stood a radically new view of the physical world, providing answers to many of the unsolved mysteries of pre-Einsteinian physics. Acclaimed as the pinnacle of scientific philosophy, the theories of relativity tend to be regarded as the exclusive domain of highly trained scientific minds. The great physicist himself disclaimed this exclusionary view, and in this book, he explains both theories in their simplest and most intelligible form for the layman not versed in the mathematical foundations of theoretical physics. In addition to the theories themselves, this book contains a final part presenting fascinating considerations on the universe as a whole. Appendices cover the simple derivation of the Lorentz transformation, Minkowski's four-dimensional space, and the experimental confirmation of the general theory of relativity. Students, teachers, and other scientifically minded readers will appreciate this inexpensive and accessible interpretation of one of the world's greatest intellectual accomplishments.
{"url":"https://freecomputerbooks.com/Relativity-The-Special-and-General-Theory.html","timestamp":"2024-11-09T06:13:00Z","content_type":"application/xhtml+xml","content_length":"35511","record_id":"<urn:uuid:cc162700-bf55-489f-bf13-4a0631d87297>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00234.warc.gz"}
Theory of Combinatorial Algorithms
Mittagsseminar (in cooperation with J. Lengler, A. Steger, and D. Steurer)
Mittagsseminar Talk Information
Date and Time: Tuesday, December 06, 2011, 12:15 pm
Duration: 30 minutes
Location: OAT S15/S16/S17
Speaker: Gabriel Nivasch (EPFL)
Upper bounds for centerflats
For every fixed d and every n, we construct an n-point set G in R^d such that every line in R^d is contained in a halfspace that contains only 2n/(d+2) points of G (up to lower-order terms). Apparently, the point set G satisfies the following more general property: For every k, every k-flat in R^d is contained in a halfspace that contains only (k+1) n / (d+k+1) points of G (up to lower-order terms).
In 2008, Bukh, Matousek, and Nivasch conjectured the following generalization of Rado's centerpoint theorem: For every n-point set S in R^d and every k, there exists a k-flat f in R^d such that every halfspace that contains f contains at least (k+1) n / (d+k+1) points of S (up to lower-order terms). (The centerpoint theorem is obtained by setting k = 0.) Such a flat f would be called a centerflat.
Thus, our upper bound construction shows that the leading constant (k+1)/(k+d+1) in the above conjecture is tight (certainly for k = 1, and apparently for all k). The set G of our construction is the "stretched grid" -- a point set which has been previously used by Bukh et al. for other related purposes.
Joint work with Boris Bukh.
{"url":"https://ti.inf.ethz.ch/ew/mise/mittagssem.html?action=show&what=abstract&id=8de528d3bacce8d820121d88f762b32f88a3e677","timestamp":"2024-11-04T08:07:21Z","content_type":"text/html","content_length":"14316","record_id":"<urn:uuid:2590d73a-21b7-473a-9c92-a7f0ac0ab543>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00184.warc.gz"}
Online Final Grade Calculator | How to Calculate Final Grade?
Take a look at our easy to use Final Grade Calculator tool to find the final grade value. Simply provide the current grade, the grade you want, and the final exam weight in the input sections, press the calculate button, and get the accurate output instantaneously.
Final Grade Formula
The final grade is the grade assigned by the instructor at the end of an academic term. Below is the formula to calculate the score needed on the final exam:
F = (G - ((1 - w) x C)) / w
F is the final exam grade
G is the grade you want for the class
w is the weight of the final exam, i.e. the weight percentage divided by 100 (note: use the weight in decimal form rather than percentage form)
C is your present grade value
How to Find the Final Grade?
Below we have explained the basic steps to calculate the final exam grade, so that you can do it easily without any mistakes.
• First, take the current grade, the grade you want, and the final exam weight from the given problem.
• Then, divide the final exam weight value by 100 to get w.
• Now, subtract the obtained value w from 1.
• Multiply the result (1 - w) by the present grade value C.
• Subtract that product from the grade you want, G.
• Divide the result by w to get the required final exam grade.
Final Grade Calculation Examples
Question 1: Suraj's grade in statistics is 85%. He wants to get at least an A, or 75%, in the class for the term. What score does he need on the final exam if it is worth 50% of his grade?
Given that
Grade Suraj wants for the class, G = 75%
Current grade in statistics, C = 85%
Weight of final exam, w = 50/100 = 0.5
The final grade formula is F = (G - ((1 - w) x C)) / w
Substitute all the values in the formula:
F = (75 - ((1 - 0.5) x 85)) / 0.5
= (75 - (0.5 x 85)) / 0.5
= (75 - 42.5) / 0.5
= 32.5 / 0.5
= 65
∴ Suraj needs to score 65 on the final exam if it is worth 50% of his grade.
FAQs on Final Grade Calculator
1. What are the steps to calculate the final grade?
• Add up all the points you have received so far.
• Subtract this total from the number of points required for the grade you want.
• Finally, divide the result by the number of points available on the final exam.
2. What is the formula for the final grade?
Below is the formula for the final grade:
F = (G - ((1 - w) x C)) / w
where F is the final exam grade, C is your current grade value, w is the weight of the final exam divided by 100, and G is the grade you want for the class.
3. How to use the Final Grade Calculator?
Simply enter the current grade, the final exam weight, and the grade you want for the class in the input sections, press the blue calculate button, and get the final result instantly.
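The formula above is simple enough to script directly. Below is a minimal Python sketch of the same calculation; the function and variable names are made up for illustration and mirror the symbols used on this page (C, G and w).

def required_final_score(current, wanted, weight_percent):
    # w is the exam weight as a decimal fraction
    w = weight_percent / 100.0
    return (wanted - (1 - w) * current) / w

# Suraj's example: current grade 85%, wants 75%, final worth 50% of the grade
print(required_final_score(current=85, wanted=75, weight_percent=50))  # 65.0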
{"url":"https://statisticscalculators.net/statistics/final-grade-calculator/","timestamp":"2024-11-11T10:29:40Z","content_type":"text/html","content_length":"28267","record_id":"<urn:uuid:366d45d9-f080-477e-857f-e569b1e4a06a>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00566.warc.gz"}
The general problem statement for this pattern will vary, but most of the time you are given one string of length n and you need to return some result. Most problems in this pattern can be solved in roughly O(n^2) time, since there are O(n^2) substrings (DP states) to consider.
Whenever you are given just 1 string in a DP problem, look at what you get when, for an arbitrary substring str[i...j] of the given string str, the first and the last characters of the substring happen to match. Can you arrive at some conclusion for that substring if you already know the result for the rest of the substring str[(i + 1)...(j - 1)]? In most cases, thinking in this direction will give you the DP relationship.
So what we will do here is go on computing the DP result for all possible substrings of all possible lengths (if the length of str is n, then the possible substring lengths are 1 to n) in increasing order of length, so that whenever we are computing the result for a longer substring, the results for the shorter substrings will already be computed and ready to be used.
Notice that for a substring str[i...j], depending on whether str[i] == str[j], you would need the result for str[(i + 1)...(j - 1)]. The length of str[(i + 1)...(j - 1)] is 2 units less than the length of str[i...j]. So while computing the result for longer substrings you need the results for the shorter substrings (Optimal Substructure). Since we are considering the last character of the substring, for some problems this way of thinking arises naturally, and it is definitely the right way of thinking.
The above-mentioned concept will become very clear as we see several examples being solved using this template in the next few chapters.
Bottom-Up Approach: Get the results for the shorter substrings ready (optimal substructure), so that when you are computing the result for a longer substring (and eventually for the whole string) the results for the shorter substrings are already computed and memoized. So we start from substring_length = 1 and iterate all the way up to substring_length = n.

n = length(str)
for (int len = 1; len <= n; ++len) {
    for (int beg = 0; beg <= n - len; beg++) {
        int end = beg + len - 1;   // substring str[beg..end] of length len
        if (str[beg] == str[end]) {
            dp[beg][end] = /*your code here*/;
        } else {
            dp[beg][end] = /*your code here*/;
        }
    }
}
// the answer for the whole string is typically dp[0][n - 1]
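As one concrete instantiation of the template above (written here in Python rather than the C-style pseudocode, purely for brevity), consider the classic longest palindromic subsequence problem: dp[beg][end] holds the answer for the substring str[beg..end], and substring lengths are processed in increasing order so that shorter results are ready when longer ones need them. The choice of problem is illustrative; it is not necessarily one of the examples covered in the following chapters.

def longest_palindromic_subsequence(s):
    n = len(s)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for length in range(1, n + 1):
        for beg in range(0, n - length + 1):
            end = beg + length - 1
            if beg == end:
                dp[beg][end] = 1                      # single character
            elif s[beg] == s[end]:
                # first and last characters match: extend the inner substring
                dp[beg][end] = dp[beg + 1][end - 1] + 2
            else:
                dp[beg][end] = max(dp[beg + 1][end], dp[beg][end - 1])
    return dp[0][n - 1]

print(longest_palindromic_subsequence("bbbab"))  # 4 ("bbbb")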
{"url":"https://systemsdesign.cloud/Algo/DynamicProgramming/1StringDP","timestamp":"2024-11-07T15:20:28Z","content_type":"text/html","content_length":"43834","record_id":"<urn:uuid:0148cc8f-20ab-4d3a-b7a0-8f1351f8117e>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00063.warc.gz"}
You have 67.0 mL of a 0.400 M stock solution that must be diluted to 0.100 M. Assuming the volumes are additive, how much water should you add?
1 Answer
The thing to remember about dilutions is that the ratio between the concentration of the concentrated solution and that of the diluted solution is equal to the ratio between the volume of the diluted solution and that of the concentrated solution. This ratio is called the dilution factor. You thus have
DF = (concentration of the concentrated solution) / (concentration of the diluted solution)
DF = (volume of the diluted solution) / (volume of the concentrated solution)
In your case, the solution must be diluted from an initial concentration of 0.400 M to a final concentration of 0.100 M, which corresponds to a dilution factor of
DF = 0.400 M / 0.100 M = 4
You can now say that the volume of the diluted solution must be 4 times greater than the volume of the concentrated solution, since
DF = (volume of diluted solution) / 67.0 mL
volume of diluted solution = DF × 67.0 mL
and therefore
volume of diluted solution = 4 × 67.0 mL = 268 mL
Assuming that the volumes are additive, you will have to add
volume of water added = 268 mL - 67.0 mL = 201 mL
of water to your concentrated solution in order to go from 67.0 mL of 0.400 M solution to 268 mL of 0.100 M solution. The answer is rounded to three sig figs.
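The same arithmetic is easy to automate. The snippet below is a small Python sketch of the dilution-factor calculation described above; the function name is made up for illustration.

def water_to_add(stock_molarity, stock_volume_ml, target_molarity):
    dilution_factor = stock_molarity / target_molarity
    final_volume_ml = dilution_factor * stock_volume_ml
    return final_volume_ml - stock_volume_ml   # assumes volumes are additive

print(water_to_add(0.400, 67.0, 0.100))  # 201.0 mL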
{"url":"https://api-project-1022638073839.appspot.com/questions/you-have-67-0-ml-of-a-0-400-m-stock-solution-that-must-be-diluted-to-0-100-m-ass","timestamp":"2024-11-06T09:15:58Z","content_type":"text/html","content_length":"38041","record_id":"<urn:uuid:4d834958-b616-4870-87d4-74671dff7c9e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00023.warc.gz"}
THE TOMALLA FOUNDATION
Prize Winners
The Tomalla Prize is awarded about every three years for extraordinary contributions to general relativity and gravity. The Tomalla prize holders are:
V. Mukhanov (2009) for his contributions to inflationary cosmology and especially for the determination of the density perturbation spectrum from inflation.
A. Starobinsky (2009) for his pioneering contributions to inflationary cosmology and especially for the determination of the spectrum of gravitational waves generated during inflation.
D. Christodoulou (2008) for his important contributions to general relativity, especially for his rigorous demonstration of the global non-linear stability of Minkowski spacetime.
P.J. Peebles (2003) for his leading role in cosmology research, especially on the cosmic microwave background and the large scale structure of the Universe.
G.A. Tammann (2000) for his efforts in measuring the expansion rate of the universe and especially for his pioneering work using Supernovae as standard candles.
W. Israel (1996) for his work on mathematical relativity, especially on the uniqueness of black hole solutions.
A.R. Sandage (1993) for his lifelong efforts in measuring the dynamics of the Universe.
J.H. Taylor (1987) for his discovery and persistent study of the binary pulsar PSR1913+16, which led to the first (indirect) detection of gravitational waves.
A. Sakharov (1984) for his fundamental contribution to the problem of the matter-antimatter asymmetry in the Universe and his new ideas on gravity at a fundamental level (induced gravity).
S. Chandrasekhar (1981) for his contributions to relativistic astrophysics, especially for the discovery of a limiting mass for the final stages of stars.
{"url":"http://www.tomalla.ch/prize_winner.htm","timestamp":"2024-11-15T00:43:09Z","content_type":"text/html","content_length":"4754","record_id":"<urn:uuid:0a34011e-6f13-4ffe-b6c8-8f98292b1746>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00529.warc.gz"}
Joule Thomson Effect Calculator
The Joule-Thomson Coefficient Calculator is a valuable tool in thermodynamics, particularly for understanding how gases behave when they are allowed to expand or compress. The Joule-Thomson effect describes the change in temperature that occurs when a gas expands at constant enthalpy (for example, through a throttling valve), without exchanging heat or doing net external work, and the size of the change is set by the gas's Joule-Thomson coefficient. This calculator helps engineers and scientists predict how the temperature of a gas will change under varying pressure conditions.
The final temperature (Tf) can be calculated using the formula:
Tf = Ti + μ * (Pf – Pi)
• Tf is the final temperature in Kelvin,
• Ti is the initial temperature in Kelvin,
• μ is the Joule-Thomson coefficient (the change in temperature per unit change in pressure at constant enthalpy),
• Pi is the initial pressure in atm,
• Pf is the final pressure in atm.
How to Use
1. Enter the initial temperature (Ti) of the gas in Kelvin.
2. Input the initial pressure (Pi) in atm.
3. Enter the final pressure (Pf) in atm.
4. Provide the Joule-Thomson coefficient (μ) for the specific gas.
5. Click the "Calculate" button to determine the final temperature (Tf).
Let's consider an example to illustrate how to use the calculator:
• Initial Temperature (Ti): 300 K
• Initial Pressure (Pi): 5 atm
• Final Pressure (Pf): 3 atm
• Joule-Thomson Coefficient (μ): 0.5 K/atm
Using the formula:
Tf = Ti + μ * (Pf – Pi)
Substituting the values:
Tf = 300 + 0.5 * (3 – 5)
Tf = 300 + 0.5 * (-2) = 300 - 1 = 299 K
This means the final temperature of the gas would be 299 K: the gas cools on expansion, as expected for a positive Joule-Thomson coefficient.
1. What is the Joule-Thomson effect?
The Joule-Thomson effect is the temperature change of a real gas when it expands or is compressed at constant enthalpy, without net external work.
2. What does the Joule-Thomson coefficient represent?
The Joule-Thomson coefficient (μ) measures how much the temperature of a gas changes with a change in pressure.
3. How is the Joule-Thomson coefficient used?
It is used to predict whether a gas will cool or heat during expansion or compression.
4. What factors influence the Joule-Thomson coefficient?
The type of gas and its temperature and pressure conditions influence the coefficient.
5. What units are used in the calculator?
The calculator uses Kelvin for temperature and atm for pressure.
6. Can the Joule-Thomson effect apply to all gases?
No, the effect varies among gases; for instance, some gases may heat up when expanding at room temperature.
7. What applications use the Joule-Thomson effect?
It is used in refrigeration, liquefaction of gases, and gas processing industries.
8. How accurate is the calculation?
The accuracy depends on the correct input of initial conditions and the specific gas properties.
9. What should I do if I don't know the Joule-Thomson coefficient?
Look up the coefficient for your specific gas in scientific literature or databases.
10. What happens if the pressure decreases?
The final temperature will depend on the Joule-Thomson coefficient; it may increase or decrease.
11. Is the Joule-Thomson effect always positive?
No, it can be positive or negative depending on the gas and its conditions.
12. Can this calculator be used for any gas?
Yes, but you need to know the specific Joule-Thomson coefficient for the gas you are analyzing.
13. How does temperature affect the Joule-Thomson coefficient?
The coefficient can change with temperature; therefore, it must be used within a specific temperature range.
14. What is the significance of the Joule-Thomson effect in cooling systems?
It allows for the efficient cooling of gases in refrigeration systems. 15. What are the limitations of using the Joule-Thomson coefficient? The coefficient is specific to conditions and may not be applicable across different pressure ranges. 16. How does the Joule-Thomson effect relate to real gas behavior? It illustrates how real gases deviate from ideal gas behavior during expansion and compression. 17. Can this calculator handle multiple gases? No, the calculator is designed for a single gas at a time based on its specific properties. 18. How do I verify the results from this calculator? Compare results with published data or experimental measurements for the gas under similar conditions. 19. What happens in an ideal gas scenario? For ideal gases, the Joule-Thomson effect is negligible, meaning temperature does not change with pressure. 20. Can the calculator help in designing refrigeration systems? Yes, it assists in understanding gas behavior, which is crucial for designing efficient cooling systems. The Joule-Thomson Coefficient Calculator is a useful tool for anyone studying or working with thermodynamics and gas behavior. By understanding how to calculate the final temperature of a gas during pressure changes, users can make informed decisions in engineering, chemical processing, and refrigeration applications. This knowledge enhances the design and efficiency of systems reliant on gas properties, leading to better outcomes in various industrial processes.
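For readers who want to reproduce the calculation in a script rather than the web form, here is a minimal Python sketch. It follows the standard sign convention for the Joule-Thomson coefficient, μ = (∂T/∂P) at constant enthalpy, so the temperature change is μ multiplied by (Pf - Pi); the function name is illustrative only.

def joule_thomson_final_temperature(t_initial_k, p_initial_atm, p_final_atm, mu_k_per_atm):
    # With positive mu, a pressure drop (p_final < p_initial) gives cooling.
    return t_initial_k + mu_k_per_atm * (p_final_atm - p_initial_atm)

# Example from this page: 300 K, 5 atm -> 3 atm, mu = 0.5 K/atm
print(joule_thomson_final_temperature(300, 5, 3, 0.5))  # 299.0 K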
{"url":"https://calculatordoc.com/joule-thomson-effect-calculator/","timestamp":"2024-11-06T09:09:39Z","content_type":"text/html","content_length":"88486","record_id":"<urn:uuid:96290928-00f1-43e3-973d-9aef7143498b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00124.warc.gz"}
Yang-Le Wu
I am a Senior Vice President and quantitative analyst at the D. E. Shaw group in New York. I apply statistical inference techniques to study global markets and develop machine learning algorithms to uncover and correct market inefficiencies.
In a previous life, I was a theoretical physicist. I worked on strongly correlated electrons and topological phases of condensed matter, and in particular, I had expertise in certain mathematical and numerical aspects of the fractional quantum Hall effect. This page showcases relics and memorabilia from those years in academia.
I graduated with a PhD in Physics from Princeton University in 2014. My advisor was Professor B. Andrei Bernevig, and my informal co-advisor was Professor Nicolas Regnault at École Normale Supérieure. From 2014 to 2017 I worked at the Condensed Matter Theory Center of the University of Maryland as a JQI Postdoctoral Fellow, mentored by Professor Sankar Das Sarma.
Links to my research papers can be found at the bottom of this page, on arXiv, and also on my Google Scholar profile. You can reach me at (PGP key).
physics research
non-Abelian quasiholes in fractional quantum Hall states
Non-Abelian anyons are among the most striking manifestations of topological order. These exotic excitations have been at the center of a huge research effort in recent years, partly driven by the possibility of constructing topologically fault-tolerant quantum computers. In this context, of particular importance are the Fibonacci anyons. They provide a representation of the braid group rich enough to carry out universal quantum computations, which cannot be achieved with simpler non-Abelian anyons such as the Majorana fermions.
It had long been conjectured that the quasiholes in the ℤ₃ Read-Rezayi state, which may describe the filling ν = 12/5 fractional quantum Hall plateau, are Fibonacci anyons when sufficiently separated. However, this conjecture had not been supported by any microscopic evidence, due to the sheer complexity of the ℤ₃ Read-Rezayi quasihole wave functions. In fact, beyond the conjectured universal topological properties, very little was known about these elusive excitations.
(Figure: braiding non-Abelian quasiholes)
In Phys. Rev. Lett. 113, 116801, we settled this long-standing problem and explicitly demonstrated the Fibonacci nature of the ℤ₃ Read-Rezayi quasiholes. Through a numerical study of the model wave functions, we established the exponential convergence of the braiding matrices with increasing quasihole separations, and we extracted the associated length scales as well as the quasihole radii. This puts an upper bound on the desirable quasihole density in interferometer devices: at a higher density, the ℤ₃ Read-Rezayi quasiholes exhibit clear non-universal deviations from the Fibonacci anyon model.
In addition, we also provided a microscopic diagnosis for the pathology of the Gaffnian wave function, which was conjectured not to give rise to sensible braiding statistics due to its root in non-unitary conformal field theories. We explicitly demonstrated that the non-universal, path-dependent contributions to the braiding matrices follow a power-law rather than exponential decay as quasihole separations increase. This signals the failure of plasma screening and highlights the gapless nature of the Gaffnian.
Our results largely ruled out the possibility of salvaging this pathological wave function as the description of a gapped phase with topological order. matrix product state description of fractional quantum Hall effect The above progress was enabled by the recent breakthrough of the exact matrix product states for fractional quantum Hall effect. Essentially, the matrix product state is a factorization of the many-body wave function. This factorization makes it possible to exploit the entanglement area law and to store quantum information compactly, and it also greatly facilitates the calculation of physical observables. Its appearance in the quantum Hall context is deeply rooted in the bulk-edge correspondence. Namely, a large class of quantum Hall trial states are described by many-point correlation functions in chiral conformal field theories (CFT). Each electron / quasihole in the quantum Hall wave function is represented by a primary field operator in the conformal correlator. Since a field operator is essentially an infinite-dimensional matrix over the CFT Hilbert space, the conformal correlator, and thus the quantum Hall trial state, can be cast in the form of a matrix product state (MPS). After a truncation of descendants in the CFT Hilbert space, a field operator (or more precisely, the corresponding 3-point function in the CFT) can be well approximated by a finite-size (albeit large) matrix. Such matrices can be constructed numerically from the Virasoro algebra, even for interacting CFTs. This provides an extremely powerful numerical tool to study the strongly-correlated physics in fractional quantum Hall phases. For our purposes, the MPS technique enables a brute-force evaluation of the conformal-block wave functions with an arbitrary number of quasiholes, without relying on bosonization tricks. It grants access to much larger system sizes than previously attainable, and opens an avenue to the direct characterization of physical properties without confronting the exponentially large many-electron Hilbert space. In Phys. Rev. B 92, 045109, we provided a pedagogical description of the MPS construction. In particular, we discussed in detail how to build MPS for non-Abelian quasiholes with conformal-block normalization and monodromy. quantum Hall bilayers Stabilizing non-Abelian fractional quantum Hall states often requires careful softening of the Coulomb repulsion between electrons. In realistic setups, this task can be extremely challenging. One possible avenue for highly tunable interactions comes from multi-component quantum Hall systems, such as bilayers or wide quantum wells. These systems typically feature a rich variety of topological phases and interaction-driven topological phase transitions. Unfortunately, the internal degrees of freedom that provide tunability also impose a heavy burden on microscopic numerical calculations. As a result, many such systems have not been thoroughly explored. In Phys. Rev. B 92, 035103, we initiated a systematic numerical study of quantum Hall bilayers at filling ν = 2/3. Using a combination of exact diagonalization and variational Monte Carlo, we mapped out the phase diagram in both the lowest and the second Landau levels as a function of interlayer separation and tunneling strength. We found that the ℤ₄ Read-Rezayi state is highly competitive in the second Landau level. fractional topological insulators Interactions can stabilize strongly-correlated phases in topological flat bands with nonzero Chern number C. 
These so-called fractional Chern insulators (FCI) exhibit fractional quantum Hall effect at zero magnetic field. In Phys. Rev. B 85, 075116, using exact diagonalization and particle entanglement spectrum, we demonstrated numerically the existence of the FCI phase in an array of lattice models with a C = 1 flat band. This includes both Abelian and non-Abelian quantum Hall states. We found a correlation between the stability of the strongly-correlated phase and the uniformity of the Berry curvature in the band structure. The nature of the FCI ground state at C = 1 can be understood by the Wannier mapping between a Chern band and the lowest Landau level (LLL). In Phys. Rev. B 86, 085129, after a proper gauge fixing, we transcribed the continuum Laughlin state to the lattice, and achieved high overlaps with the FCI ground state. For FCI with C > 1, however, numerical studies revealed anomalous features distinct from the usual multicomponent quantum Hall states. In Phys. Rev. Lett. 110, 106802, we found that the correct one-body mapping for C > 1 involves a specially crafted set of boundary conditions for the multicomponent LLL. This new boundary condition sews together the C components into a single manifold with Chern number C. Using the modified one-body mapping, we constructed pseudopotential Hamiltonians and model wave functions for FCI with an arbitrary Chern number. Our model wave functions correctly capture the subtle differences between the lattice FCI states and the usual multicomponent quantum Hall states. In Phys. Rev. B 89, 155113, we analyzed the FCI pseudopotential Hamiltonian in the thin -torus limit. This revealed a generalized Pauli principle for the degeneracy of the FCI ground states in each Bloch momentum sector. A reference implementation of the corresponding counting rule is available here. earlier projects In the past, I have also studied the pairing mechanism of iron-selenide superconductor using functional renormalization group. Before coming to the US for graduate school, I worked with Professor Qi Ouyang as an undergraduate Chun-Tsung Scholar at Peking University, on the origin of the dynamical robustness of regulatory networks in living cells, and on the non-linear dynamics of reaction-diffusion systems.
{"url":"https://yangle.io/","timestamp":"2024-11-05T16:56:49Z","content_type":"text/html","content_length":"36964","record_id":"<urn:uuid:efc47d1c-5b11-4594-9930-a5b1d655c518>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00857.warc.gz"}
What is your stated annual interest rate
2 Oct 2019 - The stated annual rate describes an annualized rate of interest that does not take into account the effect of intra-year compounding.
4 Aug 2019 - A stated annual interest rate is the return on an investment (ROI) that is expressed as a per-year percentage.
5 Feb 2019 - The effective interest rate is the usage rate that a borrower actually pays on a loan. Compounding is likely to be either monthly, quarterly, or annual.
13 Jan 2019 - Syllabus D4d). Explain and illustrate the difference between simple and compound interest, and between nominal and effective interest rates.
The stated annual interest rate (SAR) is the return on an investment (ROI) that is expressed as a per-year percentage. It is a simple interest rate calculation that does not account for any compounding that occurs throughout the year. The effective annual interest rate (EAR), on the other hand, does.
The stated interest rate is just what it says: it is the simple interest rate that the bank gives you as the interest rate on a loan. This interest rate does not take the effect of compound interest into account. Suppose the stated annual interest rate on a savings account is 10%, and you put $1,000 into this savings account. After one year, your money would grow to $1,100. But if the account has a quarterly compounding feature, your effective rate of return will be higher than 10%.
For Share Savings accounts, the dividend rate and annual percentage yield may change; if the minimum daily balance is not met, you will not earn the stated annual percentage yield.
Capitalization: adding interest to the capital.
• Nominal interest rate: this rate, calculated on an annual basis, is used to determine the periodic interest rate.
17 Oct 2019 - The effective rate is how much interest you will really owe or receive once compounding is considered. APR is the annual percentage rate.
9 Nov 2015 - However, the actual cost of this credit card account is determined by calculating the annual percentage yield (APY), which is the same as the effective annual rate.
2 Sep 2019 - The effective annual rate of interest is the true rate of return offered by an investment in a year, taking into account the effects of compounding.
5 Jan 2016 - Typically an interest rate is given as a nominal, or stated, annual rate of interest. But when compounding occurs more than once per year, the effective rate is higher than the stated rate.
She was thinking about comparing banks to open an account, but she was more than satisfied with the stated annual interest rate that her account would earn.
13 Apr 2019 - Effective interest rate is the annual interest rate that, when applied to the opening balance of a sum, results in a future value that is the same as that obtained under compounding.
Understanding the distinct difference between coupon rates and market interest rates is an integral step on the path toward developing a comprehensive understanding of bonds and the debt security marketplace. A coupon rate can best be described as the sum, or yield, paid on the face value of the bond annually over its lifetime.
The effective interest rate (EIR), effective annual interest rate, annual equivalent rate (AER) or simply effective rate is the interest rate on a loan or financial product restated to reflect compounding.
6 Sep 2015 - These statements answer the question of what is the stated annual rate that corresponds to an effective annual rate of 12% at various compounding frequencies.
Calculate the effective annual rate (EAR) from the nominal annual interest rate and the number of compounding periods per year.
The real APR, or annual percentage rate, considers these costs as well as the interest rate of a loan. The following two calculators help reveal the true costs of borrowing.
How to Calculate Annual Percentage Rate: if you have credit cards or bank loans for your home, you pay interest (or a finance charge) on that money at a specific percentage over the course of the year. This is called APR, or annual percentage rate.
If interest is compounded continuously, you should calculate the effective interest rate using a different formula: r = e^i - 1. In this formula, r is the effective interest rate, i is the stated interest rate, and e is the constant 2.718.
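The nominal-versus-effective relationship quoted in these excerpts reduces to two one-line formulas: EAR = (1 + i/n)^n - 1 for n compounding periods per year, and r = e^i - 1 for continuous compounding. Here is a small Python sketch of both; the 10% example values are illustrative and are not taken from any of the sources above.

import math

def effective_annual_rate(stated_rate, periods_per_year):
    return (1 + stated_rate / periods_per_year) ** periods_per_year - 1

def effective_annual_rate_continuous(stated_rate):
    return math.exp(stated_rate) - 1   # the "r = e^i - 1" formula quoted above

print(effective_annual_rate(0.10, 4))          # ~0.1038, i.e. about 10.38%
print(effective_annual_rate_continuous(0.10))  # ~0.1052, i.e. about 10.52%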
{"url":"https://cryptongsjkn.netlify.app/hubby86159gip/what-is-your-stated-annual-interest-rate-199.html","timestamp":"2024-11-03T09:26:43Z","content_type":"text/html","content_length":"29214","record_id":"<urn:uuid:2c8da711-6ed6-455c-a092-76e8175d3759>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00625.warc.gz"}
Transform of Difference Equations
Since z transforming the convolution representation for digital filters was so fruitful, let's apply it now to the general difference equation, Eq.(5.1). To do this requires two properties of the z transform, linearity (easy to show) and the shift theorem (derived in §6.3 above). Using these two properties, we can write down the z transform of any difference equation by inspection, as we now show. In §6.8.2, we'll show how to invert by inspection as well.
Repeating the general difference equation for LTI filters, we have (from Eq.(5.1))
y(n) = b0 x(n) + b1 x(n-1) + ... + bM x(n-M) - a1 y(n-1) - ... - aN y(n-N).
Let's take the z transform of both sides. Because the z transform is a linear operator, it may be distributed through the terms on the right-hand side, and the shift theorem converts each delayed term x(n-k) or y(n-k) into z^-k X(z) or z^-k Y(z). The terms in the difference equation therefore transform to
Y(z) = b0 X(z) + b1 z^-1 X(z) + ... + bM z^-M X(z) - a1 z^-1 Y(z) - ... - aN z^-N Y(z).
Factoring out the common terms X(z) and Y(z) gives
Y(z) [1 + a1 z^-1 + ... + aN z^-N] = X(z) [b0 + b1 z^-1 + ... + bM z^-M].
Defining the polynomials
A(z) = 1 + a1 z^-1 + ... + aN z^-N
B(z) = b0 + b1 z^-1 + ... + bM z^-M,
the z transform of the difference equation yields
A(z) Y(z) = B(z) X(z).
Finally, solving for the transfer function H(z) = Y(z)/X(z),
H(z) = B(z) / A(z).
Thus, taking the z transform of the general difference equation led to a new formula for the transfer function in terms of the difference equation coefficients. (Now the minus signs for the feedback coefficients in the difference equation Eq.(5.1) are explained.)
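A short numerical sketch (not part of the original page) can make the result concrete: the coefficient vectors b = [b0, ..., bM] and a = [1, a1, ..., aN] that define B(z) and A(z) are exactly the vectors scipy.signal.lfilter expects, and filtering an impulse through the difference equation directly gives the same output. The specific coefficient values below are arbitrary example numbers.

import numpy as np
from scipy.signal import lfilter

b = [1.0, 0.5, 0.25]   # feedforward coefficients b0, b1, b2 (arbitrary example)
a = [1.0, -0.9]        # feedback coefficients 1, a1 (arbitrary example)

def difference_equation(b, a, x):
    # Direct implementation of y(n) = sum_i b_i x(n-i) - sum_{j>=1} a_j y(n-j)
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[i] * x[n - i] for i in range(len(b)) if n - i >= 0)
        acc -= sum(a[j] * y[n - j] for j in range(1, len(a)) if n - j >= 0)
        y[n] = acc
    return y

x = np.zeros(20)
x[0] = 1.0  # unit impulse
print(np.allclose(difference_equation(b, a, x), lfilter(b, a, x)))  # True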
{"url":"https://ccrma.stanford.edu/~jos/filters05/Z_Transform_Difference_Equations.html","timestamp":"2024-11-02T14:48:48Z","content_type":"text/html","content_length":"15010","record_id":"<urn:uuid:4cda5102-5f5c-48c1-b414-64877a2cc729>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00176.warc.gz"}
Inside God's Toolbox
Jon Adams
critical ecologies
Jon Adams rifles through the instrument cabinet of the man upstairs by way of William J. Jackson's Heaven's Fractal Net. Adams finds more problems than solutions in Jackson's position that fractals are a fundamental and universal structure of life - a position Jackson stakes out by vacillating between scholarly proof and speculative guruism.
An illustration from an article in The College Mathematics Journal (see John Ewing's "Can we see the Mandelbrot Set?" (92)) shows an image of the Mandelbrot Set. Beside it is a magnified image of part of the boundary. It is labelled "Figure 3. Part of the boundary, size of hydrogen atom." This second image is equally detailed, and with a sufficiently powerful computer, a successive series of such magnifications could be made. The detail would continue, even as the diameter of the original image exceeded the width of the known universe.
The Mandelbrot Set has been called "God's thumbprint." The thumbprint is a good point of comparison, for while every thumbprint looks different, every thumbprint also looks like a thumbprint. While such fractals never repeat themselves, nor do they ever (in any meaningful sense of the word) change. So if the Mandelbrot Set is the thumbprint of God because it possesses a complexity that never precisely repeats, then it may also be the map of hell, for it possesses a complexity that never advances, no matter how hard you push. Everywhere is different; everywhere is the same. We always know where we're going; it's just that we'll never quite get there. They are the visual equivalent of a Zeno paradox - never quite finishing, but not really changing either.
Before Benoit Mandelbrot called them "fractals" in the late 1970s, they were known as "monster curves." These pre-fractal fractals include the Koch Curve (or Snowflake), and the Hilbert and Peano Curves (one-dimensional lines so complex they can fill two and even three dimensional space). Appearing less rigidly geometric (although in construction no more elaborate) is the Dragon Curve, which begins as a simple right-angled hook, hung on a replica of itself, but with successive iterations widens quite suddenly into a spiral of organic complexity. (Readers of Jurassic Park might recognise this as the pattern Michael Crichton employed to illustrate how order and simplicity rapidly escapes into chaos and complexity. As the monsters in the book spiralled out of control, so did the monster curve.) Alongside them are such oddities as Cantor Dust, the Menger Sponge (like an enormous office building, the footprint of which is called Sierpinski's Carpet), and Pascal's Triangle, an isomorphic grid of numbers which, when shaded, yields the Sierpinski Gasket. (Along with black and white images throughout the text, the book contains the obligatory section of colour pictures, and further fractals are included on a DVD, although these are at a surprisingly low resolution. The overall content is repetitious and unimpressive. There are many websites with far more impressive material, and the reader would be advised to look there for more interesting examples.)
The mechanics of actually drawing fractals requires such laborious calculation (and such vast canvasses) that it was only with the advent of the computer that mathematicians were able to graph all the products.
Out of the 1960s came visualisations of the more complex images; the Lorentz Attractor, the Julia Set, Henon's attractor, and, of course, the Mandelbrot Set. As computing power increases, so do the possibilities, allowing visualisations such as the magnificent "quaternion Julia fractals" which look like loops of torn dough, like shorn metal, like the layered skin of a wasps' nest. That this mess has a precise mathematical construction and could be perfectly recreated, time and time again, seems impossible. But much about fractals seems impossible. In the fractal, infinite detail is contained within a finite boundary.Measured with sufficient exactitude, and taking in every turn, every nook, the length of a coastline approaches infinity. So does the outline of a tree. Or a leaf. Of course, that being said, like in a Zeno paradox, the "infinite" is an illusion produced by mapping the mathematical realm over the physical realm - for the detail in a fractal is only infinite in the number-theory sense, and there is a big difference between infinite values and infinite things. Infinity occupies a curious ontological position as a mathematical entity that, like e, i, and π, has no place in the real world of integers but seems to do some work there nonetheless. Of course, these aren't really problems for mathematicians to think about, they are problems for philosophers of mathematics. The questions that actual mathematicians ask are about the relations that these numbers have to each other. They especially like dense relations, like e^i π + 1 = 0. But the philosophers of mathematics - along with the applied mathematicians, engineers, economists, statisticians, and so on - are interested in the relations that may or may not hold between these numbers and the world. Those who think that the way the world works is mathematical are mathematical realists. Those who think that mathematics is a closed system which describes only the relations between its components are mathematical formalists. If you ask mathematicians about this problem, you'll probably find a split between the formalists and the realists, and a blur of positions in between. For the everyday business of being a mathematician, it doesn't really matter very much. Mathematical truths are true analytically; which is to say that e^i π + 1 = 0 holds in virtue of the meaning of 0, 1, =, +, e, i, and π. Whether it has any relation to the physical world is as irrelevant to its truth as whether the king on a chessboard has any real territory, or whether the bishop is really limited to moving diagonally. But if you're making a claim for the application of mathematics to the world, it matters quite a lot which side you are on. Most people think fractals are significant largely because they think fractals are beautiful. Most people would claim that fractals were about the most beautiful branch of mathematics. But the idea that fractals are the most beautiful branch of mathematics is one put about by people with little understanding of what beauty means to mathematicians - who, on the whole, are less interested in the appearance of the fractal than the mathematics that generate this appearance. The fractal (as some put it) isn't the image at all, it's the mathematics. Most mathematicians find e^i π + 1 = 0 equally beautiful, probably more so.Mathematicians and physicists frequently describe formulae as "beautiful," and have their own distinctive aesthetic criteria. 
In October 2004, Physics World published a list of "the 20 Most Beautiful Equations."The difference, of course, is that there's no obvious way to show this beauty to a non-mathematician. The fractal is something everyone can see because its beauty has been "translated" out of mathematics into a visual representation of astonishing complexity. Fractals are interesting because everyone can "get" them, in a way that only a few people can "get" e^i π + 1 = 0. In contrast to the Mandelbrot Set's status as modern iconography, not nearly so many people could recognise a Fourier transformation, a normal distribution, a Fibonacci sequence - although each have significant and profound connections to the machinations of the natural world. One of the curious things about the Mandelbrot Set is that while its appearance awaited the advent of the computer age, seeing it for the first time everyone felt a strange sense of familiarity, and this familiarity is where William Jackson begins Heaven's Fractal Net. Jackson thinks that the fractal has been intuitively known to humanity for millennia, and that evidence for this implicit understanding can be found in most civilisations in all places and at all times dating back to the origins of art and culture. It's an interesting hypothesis: fractal forms do indeed seem resonant with the interwoven loops within interwoven spirals seen in the illuminations of the Lindisfarne Gospels, with the towers upon towers of certain East Asian architecture, with the logical structures of certain Zen koans. And these in their turn with the easy complexity of nature: a head of broccoli, the outline of a fern frond. How much of this mathematics was known before, how much has since been lost? And if our ancestors did possess a pre-theoretical understanding of the complexity of such forms, what might we gain by their rediscovery? Heaven's Fractal Net presents the reader with hundreds of examples of fractals, and fractal-like forms, and forms which seem fractal-like. This apparently endless sequence of cases ranges across religious beliefs, religious art, literature, and architecture. Like the fractals, the detail is seemingly without end. Unfortunately, also like the fractals, it doesn't seem to take us anywhere. In the "introductory reflections" Jackson wonders, "what is the best pattern to use in presenting an explanation of patterns?" and decides that "the most appropriate is fractal-like. My ideas and findings are not confined to a conclusion at the end but are spread throughout the book, reiterated and illustrated in a variety of ways" (6). The foregoing perhaps explains the slightly chaotic structure that results. It's clear that Jackson has spent a good deal of time preparing the research for this book. What's less clear is what he wants the book to do. For a start, he seems unsure of quite who he is addressing (perhaps various sections were written independently for different audiences). In one chapter he will present a balanced and well-researched scholarly argument (such as the interesting case for Cartesian dualism being a consequence of Descartes' own sedentary habits [194-200], which will surely appeal to followers of George Lakoff's increasingly strong position on the necessity of embodiment); only to spend much of the following section engaged in what he calls his "playful riffs." Here is a typical fragment from towards the end of the book: The One God is Life Breath, Brahman, That. 
The One consciousness, atman Self, exists in all beings, making its form manifold; the wise find it in themselves and find endless joy. One who knows the One that's in fire, heart, and sun attains the Oneness of the One. (221) What could such passages mean? "One who knows the One...attains the Oneness of the One." When you try to parse this, sense collapses in a semantic vacuum. The self-referentiality that creeps in to the language here - the thing is like itself - is reminiscent of Gertrude Stein's a-rose-is-a-rose-is-a-rose patter. The aim is to refer past the word to the world, to show that a thing is a thing before it is a word, and Stein makes a brief (if typically frustrating) appearance in the chapter on fractal literature. At a stretch, language which identifies itself as language does seem to share some of the properties of self-similarity that we find so intriguing in the fractal images. But there's an important difference between what Stein is doing with her attempts to damage our habits of analogy, and what Jackson is supposed to be doing writing an ostensibly scholarly text in which ideas are to be explained, not performed. Jackson trades on his professorial status, but at the same time, distances himself from the academic community. Fairly early on, he declares "epistemological crises" a "fancy term" (19) only to use "holotropic consciousness" on the following page. It seems quite certain that "epistemological crises" has a clearer definition and makes more sense to more readers than "holotropic consciousness." (The latter arising from a misunderstanding of the significance of holograms.) Consequently, in tone the book sits somewhat uncomfortably between scholarly academic work and the type of vague theorising often propounded by new-age gurus. One gets the sense that Jackson would quite like to be seen as a guru, especially during his "riffs." But although these sections may perhaps be treated as creative writing, even in sections that maintain a relatively steady academic stance, there are lazy factual errors. What, for example, are we to make of this: Comprehension (understanding of meaning) and comprehensiveness (wholeness of the One) are intertwined. From one confused point of time and space you see nebulous randomly scattered stars; from another point you can see spirals of stellar orders - nebulae. (236) It seems to be the case that Jackson has confused nebulae (which are amorphous clouds of stellar dust and gas, usually lacking firm structure) with galaxies (which do sometimes form spirals). Later on the same page, he reinforces such inaccuracies by writing, "Far-off nebula lights are many, their spiral is one." Such terminological errors are important when the subject is the alleged fit between structural patterns on different orders of scale. Elsewhere there are muddled blendings between bona fide science and what is presumably Jackson's own spiritual belief system: "the 'soup' of neutrons (which were undifferentiated in the beginning of the universe)" seems like a feature of orthodox cosmology, until we are told that "protoplasm's creative potential was enfolded in those neutrons" (135), at which point, it all becomes very confusing. But it's typical of his attitude to the sciences: helping himself to the authority of its claims when it suits him, but when it contradicts him either ignoring it altogether or drawing on some other source. 
Although it is not explicitly stated, Jackson is apparently an adherent of intelligent design, declaring his belief that "something like 'mind' or consciousness is at work in both selective processes: survival of good ideas and survival of life forms" (176). (And this in spite of the fact that fractals are one of the many natural mechanisms which overdetermine ID.) Meanwhile, unperturbed by the general drift away from theories that rely on the inheritance of acquired characteristics, he endorses a version of the Jungian collective unconscious: the shadows of the unconscious...bringing figures from earlier days of the human race, the days of caves and mysteries, the days of developing the wits to use fire to preserve the spark of human life. All the past still smoulders in our unconscious, flickering when we sleep at night. (174) There is further sloppy thinking here. His criticism of Wilsonian sociobiology throws up the claim that "instead of interdependence it is cause and effect that seems most important" (177). Cause and effect being interdependent, it is not clear what is the substance of the alleged contrast. Subsequent criticisms of Wilson are increasingly baffling: "Wilson is a great authority on ants.... He is not so adept at imagining the depths involved in the experience of a shaman, a yogi, a poet-visionary, or a Buddha" (179). Surely the writings of E. O. Wilson are not so error-free that we have to resort to criticisms of his shamanic powers? Yet elsewhere, science is an authority we should trust - "science has been telling us that..." (274) - being a typical construction that relies for its persuasive power on our consenting to the reliability of scientific opinion. The problem is one common to much radical/unorthodox thinking, and that is a vacillating relationship with academic authorities. "Radical thinkers" of all persuasions do something like this (see, for example, Rupert Sheldrake, Graham Hancock, and Erich Von Däniken): science is cited as a reliable authority on one point in order to support an argument whose truth relies on most other scientists being wrong. Jackson wants science to support him at some times, and at others, he wants to dismiss science as slow-witted, close-minded, insufficiently In opposition to those thinkers he considers scientific reductionists, Jackson sets himself up as someone alert to natural harmonies and open to "oneness." ("Adults habituated to a deadening rigidity, having killed wonder, go on to build walls through which mountains, rivers, stars, and wind cannot penetrate. Spiritual paths...can sometimes offer ways out of this sad impasse" (54).) Some of the writing consequent from these claims makes for very uncomfortable reading: Woman's body mediates between the disconnected male and the realms of nature and the beyond. Entering the warm split-up-the-middleness of elusive beauty, awakening man traces depths and learns how his longings are involved in cosmic correspondences." (146 - note also the habit common to many mystical thinkers of eliding articles, as in "awakening man.") Does Jackson say this type of thing apropos (or worse, during) sex? It makes toes curl for all the wrong reasons. And what could it mean? That intercourse awakens men to the unity of the universe? Women, presumably, are already sufficiently attuned to the cosmos. Such indulgent material has no place in a book that purports to have an academic or educative agenda. 
Although classified as "popular science / philosophy / religion" and published by a University Press, Heaven's Fractal Net is not aimed at the academic or general intelligent reader. More likely that the target audience here are the credulous followers of alternative medicine. Acupuncture. Crystal healing. Ear candling. The laying on of hands. Trial by fire. Jackson even includes a diagram, unaccompanied by any explanation in the main body of the text, that demonstrates the affinities between the shape of the human ear and the shape of the human embryo. For anyone who hasn't seen it before, it is a striking image. A special branch of acupuncture exists dealing directly with this. It's called "auriculotherapy." The inclusion of the diagram is supposed to indicate another level of self-similarity in the human. One thinks of the illustration of the "body-politic" that served as the frontispiece for Hobbes's Leviathan, and later as the cover illustration for Shapin and Schaffer's Leviathan and the Air Pump. It all seems very intriguing; and yet, with even the briefest reflection, it should be immediately apparent that the putative affinities between ear and embryo are entirely coincidental. This coincidental affinity becomes clear when we try to extend the connection to non-human species, only to discover that (unfortunately for would-be practitioners of veterinary auriculotherapy) it doesn't work for many other animals. Mammalian embryos all look strikingly similar, but mammalian ears do not. The embryonic elephant, for example, doesn't look much like it's adult ear. But perhaps humans have a special relationship to embryology? It seems unlikely. We don't look like our cells, and the "body politic" doesn't really look like the Hobbes frontispiece. In organisational terms, it's perhaps true that we clump together in "organs," comprised of individual "cells" with common purposes. But, again, this social structure doesn't exist for solitary animals, despite the biological (and moreover functional) similarity of their internal anatomy. The links between the parts of the body, the individual, the social unit, the society, the population, and the ecosystem may well be complex, interesting, and interdependent, but there's little here to suggest that they could be accurately or even usefully characterised as fractal. Some cases are quite interesting, but if Jackson is to succeed, he must persuade us that the relationship between fractal geometry and cultural/religious history is a somehow special case of the wider relationship that obtains between mathematics and culture (i.e., distinct from pictorial numerology). A Chinese illustration shows a figure in meditation, from whose head sprout five more figures in meditation, each of which spawns another five meditating figures (35). These images do indeed resemble the manner in which the edges of the Mandelbrot Set are studded with miniature replicas of the whole, each miniature in turn adorned with even more miniature versions, and so on. But is the Chinese illustration really a case of fractal geometry? It seems to have more in common with the riddle of the man met on the way to St Ives. The principle in both cases (figures upon figures; kits, cats, sacks, wives) seems to be the same as in the story of the rice on the chessboard (whereby one grain on the first square is doubled on the second square, then the same rule applied for each of the sixty two remaining squares successively, yielding a huge quantity). Are all three cases fractal? 
Surely, the common thread isn't that each demonstrate recursion, but that each illustrate the power of a simple geometric sequence. Making the broader case that number theory has impacted upon culture is a worthwhile endeavour. There are many examples of how myths and religious stories can possess mathematical structures (for example, the Christian fable of the loaves and the fishes is another version of the Zeno paradox), and many unexpected ways in which number theory has impacted upon culture (John Barrow's popular history of mathematics, Pi In The Sky does a good job of presenting some of these). There are also many ways in which the peculiar properties of recursion seem to have deep affinities in both art and nature. But these are better explored in Douglas Hofstader's Gödel, Escher, Bach. Hofstader's book was also capricious, but his playful sections were also frequently profound. Anyone speculatively interested in Heaven's Fractal Net would be well advised to divert their attentions to Gödel, Escher, Bach instead. We don't learn enough about fractals to be persuaded that the relationship Jackson feels he has identified has been properly established. Jackson does not give the impression of knowing enough about fractals, and his definition of fractal is so labile that it sometimes includes simple nestings or one-fold repetitions. A typical sentence will insulate the connection behind two or three layers of analogy: "The model of the ultimate One being found in all the beings generated, at various scales of recognition, seems to have fractal-like aspects" (222). Elsewhere he employs switcharoo arguments whereby set membership is implied then suddenly withdrawn, leaving a thaumotropic ghost of sense. Too often, one reads a construction such as, "Although not a fractal..." - such cases should not be necessary if, as the book claims, the human mind in general and the religious mind in particular have such an affinity with the fractal. Even after the glut of examples we are offered over the 250+ pages of the book, the connection between fractals and the logical structure of religious belief and the decorative arts remains too loose and speculative. The most compelling cases are those that don't appear to have been consciously modelled on natural forms. For example, the fact that certain African villages are laid out on a plan that, when viewed from above, reveals a pattern of fractal-like organisation is startling (especially given that such an intricate pattern would doubtless be invisible on ground level - which is the only level available) (193). But such a case is hardly representative of the central thesis, this being the correlations between religious thought and imagery and the fractal. Closer to the theme are the Indian temples that in cross section reveal a nested sequence of rooms-within-rooms. However, in an age before I-beams, these constructions are surely the result of structural necessity as much as any spiritual affinity with the construction methods of the universe. The additional and overriding problem with Jackson's search for fractals in art and culture is the question of causal priority. If fractal geometry really is the mechanism through which natural forms are created, then we might well expect that the decorative arts will have employed a similar symmetry or patterning - not because the artists have a pre-theoretical understanding of a sophisticated branch of pure math, but simply because artists copy first what they know. 
The decorative arts imitate nature, and if nature is fractally constructed, then the decorative arts will presumably retain some of that structuring. In other words, what's being created isn't directly fractal; it's transitively fractal. The resonance isn't with fractals; it is with natural forms. The difference is between an explicit and a tacit understanding. That this distinction matters becomes clear when we ask what, exactly, might a "pre-theoretical understanding" of fractals actually amount to? Would it be anything more than the claim that we can spot a natural form; that we have a sense of what looks organic and what looks artificial?The notion that there is something in the human mind that facilitates recognition of (and perhaps even preference for) fractal-organic patterning is an interesting one - and the evidence for the existence of pattern recognition as a program in our innate cognitive software has been fruitfully explored for some time now (e.g., the environmental aesthetics section of the Adapted Mind, 552+ or the discussion in Wilson's Consilience regarding a preference for a twenty percent level of detail redundancy in visual art - something found in Chinese ideograms and genuine Mondrians - see Consilience 245-46). But Jackson is not interested in this, and when he grazes this argument, it is in favour of a Jungian account. Would it amount to a claim any more startling than the truism that a folk theory of inheritance (like-father-like-son) preceded the genetic theory of inheritance? In other words, what's remarkable about fractals is not they have pre-theoretical antecedents, but precisely that our understanding of them has graduated from a pre-theoretical to an explicit formulation. In the end, the actual mathematics of fractals are more impressive than the (sometimes quite distant) approximations Jackson unearths from comparative religious studies. Here's a quotation from philosopher of science Bas Van Fraassen that says it more clearly: There is a reason why metaphysics sounds so passé, so vieux jeu today; for intellectually challenging perplexities and paradoxes it has been far surpassed by theoretical science. Do the concepts of the Trinity, the soul, haecceity, universals, prime matter, and potentiality baffle you? They pale beside the unimaginable otherness of closed space-times, event-horizons, EPR correlations, and bootstrap models. (258) This, in short, is the problem that Jackson has. Religion is presented alongside mathematics, and there is an asymmetry in the available wonder. Once you have begun to think about fractals and mathematics seriously, then the history of religion seems parochial and artificial. A Zen koan remains logically impenetrable, but only because that's what a Zen koan is meant to do. The fractal, on the other hand, and more generally, the world of mathematics, seems to offer something far more magical: access to the actual mechanisms of nature, to god's toolbox. Unfortunately, there's only a secondary trace of that magic here. Works Cited Barrow, John. Pi in the Sky: Counting, Thinking and Being. Oxford: Oxford UP, 1992. Ewing, John. "Can we see the Mandelbrot Set?" The College Mathematics Journal 26.2 (March 1995): 90-99. Hofstadter, Douglas R. Gödel, Escher, Bach: an Eternal Golden Braid. NY: Basic Books, 1979. Lakoff, George and Mark Johnson. Philosophy in the Flesh: The Embodied Mind and its Challenge to Western Thought. New York: Basic, 1999. Van Fraassen, Bas C. "Empiricism in the Philosophy of Science." 
Images of Science: Essays on Realism and Empiricism, with a Reply from Bas C. Van Fraassen. Eds. Paul M. Churchland and Clifford A. Hooker. Chicago: U of Chicago P, 1985. 245-308.
Wilson, E. O. Consilience: The Unity of Knowledge. London: Little Brown, 1998.
{"url":"https://electronicbookreview.com/essay/inside-gods-toolbox/","timestamp":"2024-11-05T22:24:58Z","content_type":"text/html","content_length":"148046","record_id":"<urn:uuid:5c3fa964-88a4-4d4d-ad6f-f712caa5aeb1>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00582.warc.gz"}
14.12.04 Martin T. Huber (Klinik für Psychiatrie und Psychotherapie, Philipps-Universität Marburg) Influence of positive feedback and noise on the responses of neurobiological sensitization systems Sensitization means an increasing responsiveness after stimulation and represents a rather ubiquitous biological principle relevant for adaptation and mal-adaptation. We are interested in the basic mechanisms of sensitisation which underly longterm neuronal activity changes under normal and pathological conditions. For that, we recently have explored the dynamics of intrinsic oscillatory neurons engaged in positive feedback loops. The feedback loop becomes activated when a neuron spikes with the result of further excitation. The dynamics depend on the intrinsic properties of the respective neuron (including applied current or temperature range), the "synaptic" or "sensitization" strength, the associated time constant and external factors such as added noise. Dependent on the parameter region we find different periodic and chaotic spiking patterns with stable feedback behavior. As result of the chaotic dynamics of the model, the parametric range for stable spiking patterns is remarkably large. Implications for neurophysiology as well as pathological conditions will be discussed. Huber MT, Braun HA, Krieg JC (2004): Recurrent affective disorders: nonlinear and stochastic models of disease dynamics. International Journal of Bifurcation and Chaos 14:635-652. Sainz-Trapaga M, Masoller C, Braun HA and Huber MT (2004): Influence of time-delayed feedback in the firing patterns of thermally sensitive neurons. Physical Reviews E 70:031904. Braun HA, Schäfer K, Voigt K and Huber MT (2003): Temperature encoding in peripheral cold receptors: oscillations, resonances, chaos and noise. In: Nonlinear dynamics and the spatiotemporal principles in biology. Nova Acta Leopoldina NF Bd. 88. Nr. 332.Huber MT, Braun HA, Krieg JC (2003): On episode sensitization in recurrent affective disorders: the role of noise. Neuropsychopharmacology 28: S13 -S20.Huber MT, Braun HA, Krieg JC (2001): Impact of episode sensitization on the course of recurrent affective disorders. Journal of Psychiatric Research 35: 49 - 57. 09.12.04 Antonio Politi (Istituto Nazionale di Ottica, Florence, Italy) Complete synchronization in extended systems and percolation transition Transition to (complete) synchronization in extended systems can occur via two different scenarios One corresponds to multiplicative noise transition and is described by a Kardar-Parisi-Zhang equation with a repulsive wall. A second scenario is that of directed percolation. The problem can also be mapped onto (non-equilibrium) wetting transitions. A finite-amplitude Lyapunov analysis helps understanding the macrosocpic behaviour. 07.12.04 Michael Schnabel (Max-Planck-Institut für Strömungsforschung, Abteilung für Nichtlineare Dynamiken) A Symmetry Approach for the Layout of Orientation Preference Maps Experimental and theoretical evidence suggests that the development of orientation preference maps (OPMs) constitutes an activity-dependent self-organization process. The formation of OPMs in the visual cortex can be modelled by dynamic field equations [1,2]. Key features of such models strongly depend on the symmetries of the dynamics [2]. We presented a new class of Gaussian random maps which allows to study the consequences of shift-twist symmetry (STS), a fundamental symmetry of visual cortical circuitry [3], on the layout of orientation maps. 
This symmetry mathematically describes that the position of stimuli in the visual field and the preferred orientation of visual cortical neurons ought to be represented in a common coordinate system. Here we use this approach to identify signatures of this new symmetry which are accessible to experimental testing. We find that STS predicts a locking of the layout of the OPM to the retinotopic map. We calculate the joint probability density of the relative orientation preference of separate columns, as a function of their relative distance and direction. We find that this distribution exhibits a characteristic cloverleaf-like shape. The theoretical predictions are compared to OPMs obtained from tupaia and galago visual cortex. [1] Swindale, N.V. Network, 7:161 (1996) [2] Wolf & Geisel, Nature (1998) 395:73 [3] Bressloff, Cowan, Golubitsky, Thomas, Wiener, Phil.Trans.R.Soc.London.B (2001) 356:299 30.11.04 09.11.04 Pierre Bayerl (Abteilung Neuroinformatik, Fakultät für Informatik, Universität Ulm) Disambiguating Visual Motion through Contextual Feedback Modulation Motion of an extended boundary can be measured locally by neurons only orthogonal to its orientation (aperture problem) while this ambiguity is resolved for localized image features, such as corners or nonocclusion junctions. The integration of local motion signals sampled along the outline of a moving form reveals the object velocity. We propose a new model of V1-MT feedforward and feedback processing in which localized V1 motion signals are integrated along the feedforward path by model MT cells. Top-down feedback from MT cells in turn emphasizes model V1 motion activities of matching velocity by excitatory modulation and thus realizes an attentional gating mechanism. The model dynamics implement a guided filling-in process to disambiguate motion signals through biased on-center, off-surround competition. Our model makes predictions concerning the time course of cells in area MT and V1 and the disambiguation process of activity patterns in these areas and serves as a means to link physiological mechanisms with perceptual behavior. We further demonstrate that our model also successfully processes natural image sequences. In this talk I will also present some recent extensions and results obtained with our model. 26.10.04 Henry Tuckwell (MPI MiS, Leipzig) Random activity of neurons and networks Some recent results for Hodgkin-Huxley neurons with stochastic input will be described, including those obtained by both analytical and simulation methods. Theorems on network activity where the dynamics of single neurons are given will also be discussed. 19.10.04 Mihaela Enculescu (Brandenburgische Technische Universität Cottbus, Theoretische Physik, Germany + MPI MiS, Leipzig) Traveling Waves of Excitation in a Firing Rate Model of a Neural Network Recent experiments have studied waves of electrical activity propagating in various brain regions. Neural field models provide a mathematical framework for the theoretical description of this type of activity. In the mean-firing-rate approach, the activity of a continuously distributed neural network is modeled by an integral equation for the mean membrane potential. We discuss traveling wave solutions of this equation, and how they are influenced by the distance-dependent axonal propagation delay. 
20.07.04 Marc Timme (MPI für Strömungsforschung, Abteilung für nichtlineare Dynamiken, Göttingen) Synchrony, Unstable Attractors, and Beyond - Does the Structure of a Neural Network Control its Dynamics? Pulse-coupled oscillators constitute a paradigmatic class of dynamical systems interacting on networks because they model a variety of biological systems including flashing fireflies and chirping crickets as well as pacemaker cells of the heart and neural networks. Synchronization is one of the most simple and most prevailing kinds of collective dynamics on such networks. Here we demonstrate, how breaking different symmetries of the network dynamics affects collective synchronization, often leading to the breaking of synchrony. Globally coupled, symmetric networks without interaction delays attract every random initial condition towards the completely synchronous state. However, we show that the presence of delays or structured network connectivity lead to completely different phenomena: exponentially many periodic attractors, attracting yet unstable periodic orbits, long chaotic transients, and the coexistence of irregular, asynchronous with regular, synchronous dynamics. Furthermore, we investigate the speed of synchronization in structured networks using random matrix theory. Although, as might be expected, the speed of synchronization increases with increasing coupling strengths, it stays finite even for infinitely strong interactions. The source of this speed limit is determined by the connectivity structure of the network. 13.07.04 Melanie Wilke (MPI für biologische Kybernetik, Abteitung für Physiologie kognitiver Prozesse, Tübingen) Neural correlates of induced perceptual disappearance Which conditions are necessary and sufficient for the brain's generation of a visible percept under natural viewing conditions? We might take as a necessary requirement the presence of a physical pattern of light striking the retina. The activation of retinal neurons will cause a cascade of activity coursing its way through the visual system which can then be registered by the brain, and ultimately contribute to perception. But is this sort of automatic sensory response a sufficient condition for a stimulus to be perceived? This question is underscored by the variety of visual suppression phenomena, in which normally visible targets are rendered completely invisible. We developed a paradigm that permits a host of salient and attended patterns to suddenly disappear from view, and remain invisible for up to several seconds and investigated it with psychophysical methods in humans and monkeys (Wilke et al., 2003). In addition, multielectrode recordings were performed in the visual cortex (V1, V2 and V4) of awake and reporting/fixating monkeys under visual stimulation leading to perceptual suppression. We found that whereas the early visual cortex plays an important role in the detection of congruent vs. incongruent visual stimulation, the changes in neuronal firing rate according to stimulus visibility are rather subtle in comparison with a physical stimulus removal. 06.07.04 Dirk Brockmann (MPI für Strömungsforschung, AG für Nichtlineare Dynamiken, Göttingen) Dynamics of Modern Epidemics I will discuss the geographical spread of infectious diseases in a modern world in which humans travel on all scales. 
As an appetizer I will present a model for the geographical spread of the Severe Acute Respiratory Syndrome (SARS) on the entire civil aviation network and show that this network can be employed to identify endangered regions of future epidemics. I will show that scale-free dispersal is linked to a class of random walks known as Lévy flights, which leads to a description in terms of fractional reaction-dispersal equations which exhibit dynamics vastly different from ordinary reaction-diffusion systems. 15.06.04 Galina Ivanova (Institut für Biomedizinische Technik und Informatik, Technische Universität Ilmenau) ABCI: A System for Adaptive Cortical Self Regulation The accelerating progress in hardware and software technology, and the decreasing production costs, open new opportunities and at the same time raise new conceptual and methodological demands on an interface between real and artificial intelligence. The individual is differentiated from the statistical averages, and stands with all his/her particularities, capabilities and deficits, more and more at the focus of the technological developments. On the other hand, conditioned by the desire of being able to control diverse components, such as multimedia applications, environmental conditions (e.g., light and sound in a room) and external devices, a general technology is required. In order to fulfill these two contradictory requirements, a concept for a flexible and adaptive brain computer interface (ABCI) is developed and technically implemented. Our studies concentrate on the development of a methodology which can enable the configuration and realization of a subject-specific interface. Based on the advanced neurobiological and psychophysiological findings on the role and generation mechanisms of the slow cortical potentials (SP), and in consultation with experienced researchers in the field, an SP-based interface is constructed as a sample application. It is successfully used for self-regulation of brain activity by means of multimedia feedback. 18.05.04 Raul Kompass (AG "Künstliche Intelligenz", Institut für Informatik, FU Berlin) Non-negative matrix factorization as a principle for recurrent processing in an oscillating neural network: problems and directions of solution Non-negative Matrix Factorization (NMF), for its restriction of computation to positive vectors and components, appears to be biologically plausible. In my talk I will argue that NMF may even serve as an example of how a recurrent neural network which employs spike-timing dependent neural plasticity and synaptic adaptation might work. Resulting problems and directions for their solution will be discussed. 04.05.04 Christian Kärnbach (Allgemeine Psychologie, Universität Leipzig, & Institut für Psychologie, Universität Bonn) Memory is a Mudtrap - On the Time Course of Forgetting The time course of forgetting has been known since Ebbinghaus' classical studies in the late 19th century and has been confirmed hundreds of times. In contrast to many relaxation processes in physics the time course is not exponential but more of the form of a power law. Many formulas have been suggested to describe the time course of forgetting, but no theory exists to predict any of them. In this talk I will present a neuronal net as a tool to study the dynamics of memory. The equivalence between this net and diffusion processes is explained, and then new developments in the domain of kinetics are applied to this memory model, resulting in the well-known forgetting curves of Ebbinghaus and his followers.
27.04.04 Jörg Lücke (Institut für Neuroinformatik, Ruhr-Universität Bochum) Rapid Processing and Receptive Field Self-Organization in Column Based Neural Networks A neural network is presented which is based on a columnar interconnection architecture. Motivated by neuroanatomical and neurophysiological findings we model a cortical macrocolumn as a collection of inhibitorily coupled minicolumns, which themselves consist of randomly interconnected spiking neurons. A stability analysis of the system's dynamical equations shows that minicolumns can act as monolithic functional units for purposes of critical, fast decisions and learning. Oscillating inhibition (in the gamma frequency range) leads to a phase-coupled population rate code and high sensitivity to small imbalances in minicolumn inputs. If afferent fibers to the minicolumns are subject to Hebbian plasticity, minicolumns self-organize their receptive fields to become classifiers for the input patterns. The presentation will include the analytical treatment of the dynamics along with bifurcation diagrams and various computer simulations. 17.02.04 Bernhard Englitz (Computational Neurobiology Laboratory, The Salk Institute for Biological Studies and Mathematisches Institut, Universität Leipzig) The irregular firing of cortical interneurons in vitro arises from stochastic processes. Pharmacologically isolated GABAergic interneurons in the mouse visual cortex display highly irregular spike times (coefficient of variation ? 1) in response to DC de-polarization. This is in marked contrast to cortical pyramidal cells. We used non-linear time series analysis methods to distinguish between the presence of non-linear deterministic processes or the amplification of sub threshold noise giving rise to the observed dynamics. No evidence for non-linear deterministic processes was found. This leaves a high sensibility of the interneuronal spike initiation process for membrane potential noise as the most likely explanation for the high CV. We propose that this intrinsically irregular spiking of an important subpopulation of cortical neurons contributes to the overall irregularity of cortical activity. 10.02.04 Nils Bertschinger (Institut für Grundlagen der Informationsverarbeitung, TU Graz, Österreich) Computation at the Edge of Chaos in Recurrent Neural Networks Recurrent Neural Networks are powerful, biologically inspired models for computations on time varying input signals. Due to their high-dimensionality it is difficult to utilize their power for information processing. In this talk a new framework will be presented that allows to investigate the computational capabilities inherent in large, randomly connected networks. Using this framework a link between the network dynamics and its computational capabilities is found. The results illustrate the idea that dynamical systems support computations optimally if they operate at the "Edge of Chaos", a notation which can be formally defined in networks of McCulloch-Pitts neurons. This allows to analyse how the dynamics of such networks depends on the parameters controlling the connectivity distribution. In particular the critical boundary is calculated where the dynamics changes from ordered to chaotic. 
03.02.04 Axel Hutt (Weierstrass Institute for Applied Analysis and Stochastics, Berlin) Pattern formation in neural fields subject to propagation delay Neural activity can be measured by different experimental techniques, as single cell measurements on a microscopic spatial scale (~0.05-0.2 mm) or local field potentials at a mesoscopic spatial scale of some millimeters. As these different spatial scales exhibit different neural mechanisms, most neural models focus to a single scale. The presented talk discusses the stability of mesoscopic activity in synaptically coupled neural fields subject to propagation delays. Since concrete synaptic connectivities are unknown in most neural areas, the work derives stability conditions for arbitrary homogeneous connectivities. The application to gamma-distributed connectivity kernels reveal a novel condition for stationary Turing instabilities. 27.01.04 Gabriele Lohmann (Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig) On the formation of the human cortical folding One of the most intriguing problems in brain research is the high interpersonal variability of the human cortical folding. In this talk, a sequence of image analysis procedures applied to magnetic resonance imaging data will be presented that may help to shed some light on this problem. The aim of these procedures is to extract a generic model of the human cortical folding and to infer rules that govern the formation of cortical folds. 13.01.04 Hermann Cuntz (Max Planck Institute of Neurobiology, Martinsried) Modeling dendritic networks in the fly visual system Convolution is one of the most common operations in image processing. For a nervous system to perform such an operation on a topographic map, e.g. to blur a sensory representation of the visual field, would require an extensive network of local cells where each cell connects with all others. Based on experimental findings on two large-field visual interneurons of the fly, I will show by realistic compartmental modeling that a linear dendro-dendritic electrical coupling has the capability to perform this operation.
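None of the abstracts above come with code, but the blurring operation Cuntz refers to is easy to make concrete: a discrete convolution of a 1-D "retinotopic" activity profile with a small smoothing kernel. The sketch below is purely illustrative and not taken from the talk; the kernel size and values are arbitrary choices.

```python
import numpy as np

# A toy 1-D "activity profile": a single sharp peak on a silent background
activity = np.zeros(21)
activity[10] = 1.0

# A small normalized smoothing kernel (arbitrary choice for illustration)
kernel = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
kernel /= kernel.sum()

# Discrete convolution blurs the peak, spreading it to neighbouring positions
blurred = np.convolve(activity, kernel, mode="same")
print(np.round(blurred[7:14], 3))   # [0.    0.062 0.25  0.375 0.25  0.062 0.   ]
```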
{"url":"https://www.mis.mpg.de/de/events/series/arbeitsgemeinschaft-neuronale-netze-und-kognitive-systeme","timestamp":"2024-11-04T13:22:34Z","content_type":"text/html","content_length":"675754","record_id":"<urn:uuid:ea05afdf-203c-4f46-8bad-380fa4c94a69>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00329.warc.gz"}
How do satellites stay in orbit? - Technical Capsule - impulso.space The most common questions we receive are about how a spacecraft stays in orbit. More specifically, the physics and mathematics involved. In this first part, we will delve deep into the orbiting phenomenon. In the second one, instead, we will discuss other hot topics linked to how spacecraft orbit. So let's get to the first question: How exactly do satellites stay in orbit? Qualitative Description of the Phenomenon Satellites are able to stay in orbit thanks to their velocity. However, let's take this one step further. In the XVII century, Sir Isaac Newton was able to explain how satellites remain in Earth's orbit starting from the concept of projectile motion. For example, if you throw a stone from a mountaintop, it will fall on the ground at some set distance. If you then throw that same stone from the same height but with greater speed, it will now travel further than it did in the first throw. Now, let's expand this example to include a much larger scale, such as Earth. As our beloved planet is not flat, a projectile that has enough speed will continue to "fall" without ever intersecting the ground. So, a satellite that fulfills these conditions will end up traveling around the Earth, meaning that it will start orbiting our planet. As satellites are placed outside the Earth's atmosphere where there is essentially no resistance due to air — also known as drag — they are able to stay orbiting our planet for many years. Furthermore, due to Newton's first law, an object that moves at a constant speed will stay at said constant speed unless acted upon by some outside force. And that is how satellites are able to maintain a constant orbit around the Earth. Of course, this is an oversimplification, as we are not accounting for every force that's acting on them — such as solar radiation pressure, and the gravitational pulls of the Sun and Moon. However, we have many capsules in store that will delve deeper into these topics, so stay tuned for those! Still, the following question arises: How fast do these satellites have to go? To answer this, let's take the case of a circular orbit. To obtain the velocity needed for the satellite to remain in orbit, we need to equate the gravitational force with the pseudo outward "force", commonly known as the centrifugal force. The equations for these two quantities are, respectively: F_gravity = G·M·m / R^2 and F_centrifugal = m·V^2 / R, where G is the gravitational constant, M the mass of the Earth, m the mass of the satellite, and R the orbital radius. Setting them equal and solving for V gives V = sqrt(G·M / R). Therefore, the only variable influencing the velocity of a circular orbit is the radius from the center of the Earth to the orbit itself. For a practical example, we can look at the International Space Station or ISS, which, in order to stay in an orbit 400 km above the Earth's surface, needs a velocity of V = 7.8 km/s = 28,000 km/h or 17,500 mi/h. That is because the R in this problem is given by Earth's radius plus the orbiting distance. Meaning, R = 6,371 + 400 = 6,771 km. To better understand the change in velocity between two different orbits, it is useful to also see what the required velocity is for a satellite in a GEO orbit (for more info about this orbit check the Altitude Classification). We have to repeat the previous calculation, but for an orbit with an altitude of 35,786 km. This corresponds to a radius of R = (6,371 + 35,786) km = 42,157 km. The resulting velocity is then V = 3.075 km/s = 11,070 km/h. So, between an orbit of 400 km and one of 35,786 km, the delta between the two velocities is ΔV = (28,000 – 11,070) km/h ≈ 17,000 km/h. More than twice as slow!
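These figures are easy to reproduce. The short script below is not part of the original article; it assumes the standard values GM ≈ 3.986 × 10^14 m³/s² for Earth's gravitational parameter and 6,371 km for Earth's mean radius, and prints the circular-orbit speed at the two altitudes discussed above (the ISS value comes out at about 7.7 km/s, which the article rounds to 7.8 km/s).

```python
import math

MU_EARTH = 3.986004418e14   # Earth's gravitational parameter G*M, in m^3/s^2
R_EARTH_KM = 6371.0         # mean Earth radius in km

def circular_orbit_speed(altitude_km: float) -> float:
    """Speed (km/s) needed for a circular orbit at the given altitude above the surface."""
    r_m = (R_EARTH_KM + altitude_km) * 1000.0   # orbital radius in metres
    return math.sqrt(MU_EARTH / r_m) / 1000.0   # v = sqrt(GM / R), converted to km/s

for name, alt in [("ISS (LEO)", 400.0), ("GEO", 35_786.0)]:
    v = circular_orbit_speed(alt)
    print(f"{name}: {v:.3f} km/s  ≈ {v * 3600:.0f} km/h")
```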
And now it is time to proceed to the most common questions linked to this topic… What keeps a satellite up in its orbit? In the previous paragraph, we touched on this point by explaining the math behind the velocity of a spacecraft, but it is worth a closer look. As we saw before, the satellite needs to have a centrifugal "force" that equalizes the gravitational pull. This centrifugal "force" is a pseudo force that appears to act on all objects in a rotating frame of reference. And, in this particular case, the speed of the spacecraft is what generates it. If the spacecraft is sent into orbit too slowly, the centrifugal force will not balance the gravitational force, causing the spacecraft to fall back down to Earth. So the answer is: "Nothing keeps the satellite up in orbit. It is simply a matter of setting the speed that generates the appropriate centrifugal force to balance out the gravitational pull." Do satellites eventually fall out of orbit? This question is really interesting and, in fact, yes, satellites can fall out of their orbit. This can happen intentionally or accidentally. In the first case, the fall of a satellite is commanded by the satellite operator; this can be due to damage to the satellite or to the end of its lifespan. To deorbit a satellite in this way, you need a way to change its speed, e.g. with a propulsion system. In the second case, the satellite can have trouble reaching its intended orbit, something that happened some months ago with the launch of a Starlink batch which had to reenter the atmosphere due to a solar storm. The satellites reentering the atmosphere were destroyed. However, this is not the only case of unwanted orbital decay. Another important one, already mentioned in this article, is the ISS. The ISS has such a great mass (more than 400,000 kg!) and volume that atmospheric drag plays a bigger role here. In fact, the ISS's orbit decays by about 2 km every month. Without any correction, the ISS would by now have fallen back to Earth. However, thanks to its own propulsion, it is able to counter this decay. That is it for this Technical Capsule on "How do satellites stay in orbit?". If you are interested in other aspects of satellites please check out the Satellite tag on our Blog page.
{"url":"https://impulso.space/tools/blog/posts/how-do-satellites-stay-in-orbit","timestamp":"2024-11-14T03:27:13Z","content_type":"text/html","content_length":"241899","record_id":"<urn:uuid:d44750a5-4ae0-4fec-a3b3-80951184fbcd>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00869.warc.gz"}
GM1.3: Measure Twice, Cut Once
Ember has a pile of unsorted glass fragments (sides and angles from triangles). Find three pieces that match the second window from the left, place them into the Triangle Template to cut a new glass panel. Fit the new panel into the window using the Architect's View. Begin this quest line before introducing the concept of congruence and congruent triangles. In using the broken angle and side fragments to create new glass panels, players are determining the necessary pieces that make two congruent triangles and in effect are learning about the congruence postulates/theorems (SAS, SSS, ASA, AAS).
Learning Objectives:
• Players will be able to use congruence postulates/theorems in order to create congruent triangles
Connecting Questions:
1. How did you determine which fragments to use to create the new glass panel?
2. Did you try any combinations that didn't work?
3. Did it matter what order you put them in to be able to cut the correct piece?
4. What orders did you try? Which ones worked? (this is where connections should be made to the theorems – using three side fragments to create the triangle = SSS congruence postulate)
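The SSS idea behind question 4 can also be illustrated computationally. The helper below is not part of the quest materials; it is a hypothetical check that two triangles, described only by their three side lengths, are congruent by SSS (the order in which the fragments are picked up does not matter).

```python
def sss_congruent(tri1, tri2, tol=1e-9):
    """Return True if two triangles, given as (a, b, c) side lengths,
    are congruent by the SSS criterion: the same three side lengths,
    regardless of the order in which the fragments are chosen."""
    return all(abs(x - y) <= tol for x, y in zip(sorted(tri1), sorted(tri2)))

# The order of the fragments does not matter, only the set of lengths:
print(sss_congruent((3, 4, 5), (5, 3, 4)))   # True
print(sss_congruent((3, 4, 5), (3, 4, 6)))   # False
```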
{"url":"https://www.radixendeavor.org/resources/quests/gm13-measure-twice-cut-once.html","timestamp":"2024-11-11T08:02:28Z","content_type":"text/html","content_length":"26938","record_id":"<urn:uuid:98edea75-043f-4485-a1e4-c2ee1ac05e0e>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00506.warc.gz"}
Least squares support tensor machine
Least squares support vector machine (LS-SVM), as a variant of the standard support vector machine (SVM), operates directly on patterns represented by vectors and obtains an analytical solution directly from solving a set of linear equations instead of quadratic programming (QP). Tensor representation is useful to reduce the overfitting problem in vector-based learning, and tensor-based algorithms require a smaller set of decision variables compared to vector-based approaches. These properties make tensor learning especially well suited for small-sample-size (S3) problems. In this paper, we generalize the vector-based learning algorithm least squares support vector machine to the tensor-based method least squares support tensor machine (LS-STM), which accepts tensors as input. Similar to LS-SVM, the classifier is also obtained by solving a system of linear equations rather than a QP. LS-STM is based on tensor space; with the tensor representation, the number of parameters estimated by LS-STM is smaller than the number of parameters estimated by LS-SVM, and a great deal of useful structural information is not discarded. Experimental results on some benchmark datasets indicate that LS-STM is competitive in classification performance compared to LS-SVM.
Published in: 11th International Symposium on Operations Research and its Applications in Engineering, Technology and Management 2013 (ISORA 2013), Huangshan, China, 23–25 August 2013.
Keywords: Alternating projection • Least squares support tensor machine • Least squares support vector machine • Support tensor machine • Tensor representation
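The key claim — that training reduces to one linear solve instead of a QP — is easiest to see in the vector (LS-SVM) case that the abstract uses as its baseline. The sketch below is not from this paper and does not implement the tensor (LS-STM) extension; it is the usual textbook LS-SVM classifier with an RBF kernel, and the parameter names and values (gamma, sigma) are arbitrary illustrative choices.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian RBF kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma=1.0, sigma=1.0):
    """LS-SVM classifier: one (n+1)x(n+1) linear solve instead of a QP.
    X: (n, d) inputs, y: (n,) labels in {-1, +1}."""
    n = len(y)
    Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], np.ones(n)))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def lssvm_predict(X_train, y_train, alpha, b, X_new, sigma=1.0):
    K = rbf_kernel(X_new, X_train, sigma)
    return np.sign(K @ (alpha * y_train) + b)

# Tiny smoke test on well-separated 2-D data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
alpha, b = lssvm_train(X, y, gamma=10.0, sigma=2.0)
print((lssvm_predict(X, y, alpha, b, X, sigma=2.0) == y).mean())  # expect 1.0 here
```

The whole fit is the single call to np.linalg.solve on an (n+1)×(n+1) system — the "analytical solution" the abstract refers to.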
{"url":"https://pure.bit.edu.cn/en/publications/least-squares-support-tensor-machine-2","timestamp":"2024-11-09T12:54:37Z","content_type":"text/html","content_length":"51770","record_id":"<urn:uuid:902824db-23df-454e-8f3a-4080141413f7>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00300.warc.gz"}
Add & Subtract Fractions 4th grade | 4.NF.B.3.a, 4.NF.B.3.b | 1 minute, 51 seconds In this framing video, 4th graders are introduced to the concept of adding and subtracting fractions with like denominators using puppy food. Unlock this activity and thousands more in eSpark’s playfully personalized learning environment. Asking and Answering Questions, RI.3.1, 3.6.F, 3.RI.1, 3.RI.2.3.B, 3.RI.1, 3.R4.4.1, 3.R.1 Multiply Multi-Digit Numbers, 5.NBT.B.5, 5.3.B, 5.4.B, 5.NBT.5, 5.NBT.1.5.B.2, 5.NBT.5, 5.1.1, 5.NBT.5 Common Prefixes and Suffixes, RF.3.3.a, RF.3.3.b, 3.2.A, 3.3.C, 3.RF.4.a, 3.FS.1.3.D, 3.RF.3.a, 3.R1.1.1.a, 3.RF.3.a
{"url":"https://www.esparklearning.com/activities/math/add-and-subtract-fractions-with-like-denominators/","timestamp":"2024-11-11T11:51:43Z","content_type":"text/html","content_length":"1049078","record_id":"<urn:uuid:cb3e550a-c002-4243-a864-d6e5d7965de1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00124.warc.gz"}
In algebraic number theory, a quadratic field is an algebraic number field of degree two over $\mathbf{Q}$, the rational numbers. Every such quadratic field is some $\mathbf{Q}(\sqrt{d})$ where $d$ is a (uniquely defined) square-free integer different from $0$ and $1$. If $d > 0$, the corresponding quadratic field is called a real quadratic field, and, if $d < 0$, it is called an imaginary quadratic field or a complex quadratic field, corresponding to whether or not it is a subfield of the field of the real numbers.

Quadratic fields have been studied in great depth, initially as part of the theory of binary quadratic forms. There remain some unsolved problems. The class number problem is particularly important.

Ring of integers

For a nonzero square-free integer $d$, the discriminant of the quadratic field $K = \mathbf{Q}(\sqrt{d})$ is $d$ if $d$ is congruent to $1$ modulo $4$, and otherwise $4d$. For example, if $d$ is $-1$, then $K$ is the field of Gaussian rationals and the discriminant is $-4$. The reason for such a distinction is that the ring of integers of $K$ is generated by $(1+\sqrt{d})/2$ in the first case and by $\sqrt{d}$ in the second case.

The set of discriminants of quadratic fields is exactly the set of fundamental discriminants (apart from $1$, which is a fundamental discriminant but not the discriminant of a quadratic field).

Prime factorization into ideals

Any prime number $p$ gives rise to an ideal $p\mathcal{O}_K$ in the ring of integers $\mathcal{O}_K$ of a quadratic field $K$. In line with general theory of splitting of prime ideals in Galois extensions, this may be:[1]

$p$ is inert: $(p)$ is a prime ideal. The quotient ring is the finite field with $p^2$ elements: $\mathcal{O}_K / p\mathcal{O}_K = \mathbf{F}_{p^2}$.

$p$ splits: $(p)$ is a product of two distinct prime ideals of $\mathcal{O}_K$. The quotient ring is the product $\mathcal{O}_K / p\mathcal{O}_K = \mathbf{F}_p \times \mathbf{F}_p$.

$p$ is ramified: $(p)$ is the square of a prime ideal of $\mathcal{O}_K$. The quotient ring contains non-zero nilpotent elements.

The third case happens if and only if $p$ divides the discriminant $D$. The first and second cases occur when the Kronecker symbol $(D/p)$ equals $-1$ and $+1$, respectively. For example, if $p$ is an odd prime not dividing $D$, then $p$ splits if and only if $D$ is congruent to a square modulo $p$.
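A small sketch, not part of the article, that applies the criterion just stated: odd primes dividing the field discriminant $D$ ramify, and an odd prime $p$ not dividing $D$ splits exactly when $D$ is a square modulo $p$, which Euler's criterion decides. The helper assumes $d$ is square-free and only handles odd primes.

```python
def field_discriminant(d: int) -> int:
    """Discriminant of Q(sqrt(d)) for a square-free integer d != 0, 1."""
    return d if d % 4 == 1 else 4 * d

def splitting_type(p: int, d: int) -> str:
    """Classify an odd prime p in Q(sqrt(d)): 'ramified', 'split' or 'inert'."""
    D = field_discriminant(d)
    if D % p == 0:
        return "ramified"
    # Euler's criterion: D is a square mod p  <=>  D^((p-1)/2) == 1 (mod p)
    return "split" if pow(D, (p - 1) // 2, p) == 1 else "inert"

# Example: d = -5, discriminant -20
for p in (3, 5, 7, 11, 13):
    print(p, splitting_type(p, -5))   # split, ramified, split, inert, inert
```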
The first two cases are, in a certain sense, equally likely to occur as $p$ runs through the primes—see Chebotarev density theorem.[2]

The law of quadratic reciprocity implies that the splitting behaviour of a prime $p$ in a quadratic field depends only on $p$ modulo $D$, where $D$ is the field discriminant.

Class group

Determining the class group of a quadratic field extension can be accomplished using Minkowski's bound and the Kronecker symbol because of the finiteness of the class group.[3] A quadratic field $K = \mathbf{Q}(\sqrt{d})$ has discriminant
$$\Delta_K = \begin{cases} d & d \equiv 1 \pmod{4} \\ 4d & d \equiv 2, 3 \pmod{4}; \end{cases}$$
so the Minkowski bound is[4]
$$M_K = \begin{cases} 2\sqrt{|\Delta|}/\pi & d < 0 \\ \sqrt{|\Delta|}/2 & d > 0. \end{cases}$$
Then, the ideal class group is generated by the prime ideals whose norm is less than $M_K$. This can be done by looking at the decomposition of the ideals $(p)$ for $p \in \mathbf{Z}$ prime where $|p| < M_K$.[1, p. 72] These decompositions can be found using the Dedekind–Kummer theorem.

Quadratic subfields of cyclotomic fields

The quadratic subfield of the prime cyclotomic field

A classical example of the construction of a quadratic field is to take the unique quadratic field inside the cyclotomic field generated by a primitive $p$th root of unity, with $p$ an odd prime number. The uniqueness is a consequence of Galois theory, there being a unique subgroup of index $2$ in the Galois group over $\mathbf{Q}$. As explained at Gaussian period, the discriminant of the quadratic field is $p$ for $p = 4n+1$ and $-p$ for $p = 4n+3$. This can also be predicted from enough ramification theory. In fact, $p$ is the only prime that ramifies in the cyclotomic field, so $p$ is the only prime that can divide the quadratic field discriminant. That rules out the 'other' discriminants $-4p$ and $4p$ in the respective cases.

Other cyclotomic fields

If one takes the other cyclotomic fields, they have Galois groups with extra $2$-torsion, so contain at least three quadratic fields. In general a quadratic field of field discriminant $D$ can be obtained as a subfield of a cyclotomic field of $D$-th roots of unity. This expresses the fact that the conductor of a quadratic field is the absolute value of its discriminant, a special case of the conductor-discriminant formula.

Orders of quadratic number fields of small discriminant

The following table shows some orders of small discriminant of quadratic fields. The maximal order of an algebraic number field is its ring of integers, and the discriminant of the maximal order is the discriminant of the field. The discriminant of a non-maximal order is the product of the discriminant of the corresponding maximal order by the square of the determinant of the matrix that expresses a basis of the non-maximal order over a basis of the maximal order. All these discriminants may be defined by the formula of Discriminant of an algebraic number field § Definition.
For real quadratic integer rings, the ideal class number, which measures the failure of unique factorization, is given in OEIS A003649; for the imaginary case, it is given in OEIS A000924.

| Order | Discriminant | Class number | Units | Comments |
|---|---|---|---|---|
| Z[√−5] | −20 | 2 | ±1 | Ideal classes (1), (2, 1+√−5) |
| Z[(1+√−19)/2] | −19 | 1 | ±1 | Principal ideal domain, not Euclidean |
| Z[2√−1] | −16 | 1 | ±1 | Non-maximal order |
| Z[(1+√−15)/2] | −15 | 2 | ±1 | Ideal classes (1), (2, (1+√−15)/2) |
| Z[√−3] | −12 | 1 | ±1 | Non-maximal order |
| Z[(1+√−11)/2] | −11 | 1 | ±1 | Euclidean |
| Z[√−2] | −8 | 1 | ±1 | Euclidean |
| Z[(1+√−7)/2] | −7 | 1 | ±1 | Kleinian integers |
| Z[√−1] | −4 | 1 | ±1, ±i (cyclic of order 4) | Gaussian integers |
| Z[(1+√−3)/2] | −3 | 1 | ±1, (±1±√−3)/2 | Eisenstein integers |
| Z[√−21] | −84 | 4 | ±1 | Class group non-cyclic: (Z/2Z)² |
| Z[(1+√5)/2] | 5 | 1 | ±((1+√5)/2)^n (norm (−1)^n) | |
| Z[√2] | 8 | 1 | ±(1+√2)^n (norm (−1)^n) | |
| Z[√3] | 12 | 1 | ±(2+√3)^n (norm 1) | |
| Z[(1+√13)/2] | 13 | 1 | ±((3+√13)/2)^n (norm (−1)^n) | |
| Z[(1+√17)/2] | 17 | 1 | ±(4+√17)^n (norm (−1)^n) | |
| Z[√5] | 20 | 1 | ±(√5+2)^n (norm (−1)^n) | Non-maximal order |

Some of these examples are listed in Artin, Algebra (2nd ed.), §13.8.

See also
• Buell, Duncan (1989), Binary quadratic forms: classical theory and modern computations, Springer-Verlag, ISBN 0-387-97037-1. Chapter 6.
• Samuel, Pierre (1972), Algebraic Theory of Numbers (Hardcover ed.), Paris / Boston: Hermann / Houghton Mifflin Company, ISBN 978-0-901-66506-5.
• Samuel, Pierre (2008), Algebraic Theory of Numbers (Paperback ed.), Dover, ISBN 978-0-486-46666-8.
• Stewart, I. N.; Tall, D. O. (1979), Algebraic number theory, Chapman and Hall, ISBN 0-412-13840-9. Chapter 3.1.
{"url":"https://www.knowpia.com/knowpedia/Quadratic_field","timestamp":"2024-11-13T02:16:45Z","content_type":"text/html","content_length":"278135","record_id":"<urn:uuid:2475e3f7-3e97-4956-ac9c-c8524b87b3e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00638.warc.gz"}
How to Find the Derivative of a Function Using the Quotient Rule
The quotient rule is a way to find the derivative of a function that is written as one function divided by another. One function is the numerator f(x), and the other is the denominator g(x). The rule tells us how to differentiate the ratio f(x)/g(x). This article describes the basics of using it.
What is the Quotient Rule?
The quotient rule is a method of differentiation that lets you calculate the derivative of a quotient of two functions whose individual derivatives you already know. If h(x) = f(x)/g(x), then
h′(x) = [f′(x)·g(x) − f(x)·g′(x)] / [g(x)]².
To remember this rule, begin with the bottom function and square it: the denominator of the result is the square of the original denominator, and the numerator is "derivative of the top times the bottom, minus the top times the derivative of the bottom."
Step-by-step solution to Find the Derivative of a Function
A classic example is the tangent function. A basic identity in trigonometry defines it as a quotient: tan(x) = sin(x)/cos(x). Applying the quotient rule with f(x) = sin(x) and g(x) = cos(x) gives
tan′(x) = [cos(x)·cos(x) − sin(x)·(−sin(x))] / cos²(x) = 1/cos²(x) = sec²(x).
Note that the quotient f/g is only defined where g(x) ≠ 0, so its domain is the intersection of the domains of f and g, minus the points where g(x) = 0. You can practice the quotient rule by solving practice questions: identify the top term f(x) and the bottom term g(x), find f′(x) and g′(x), and substitute them into the formula. The rule itself follows from the limit definition of the derivative, just as the product rule does.
Quick Method of Finding the Derivative of a Complex Quotient Function
The quotient rule is a shortcut for finding the derivative of a complicated quotient. To use it, you first need to identify the functions in the numerator and the denominator and differentiate each separately; only then do you combine them with the formula above. The quotient rule can also be applied, term by term, to expressions written as a sum or difference of rational expressions, which often simplifies the work.
Similarity to the Product Rule
The product rule is the companion of the quotient rule: it states that the derivative of a product is the derivative of the first factor times the second factor, plus the first factor times the derivative of the second factor, i.e. (f·g)′ = f′·g + f·g′. Together with the power rule, which also covers negative exponents, these rules can be combined to find the derivative of any polynomial or rational function.
This rule can be illustrated visually, and the one restriction on using it is that the denominator must not be zero at the point where you differentiate. The quotient rule can also be derived by writing the quotient as a product, f(x) · [g(x)]⁻¹, and applying the product rule together with the chain rule. It is closely related to the product rule, which handles products such as x cos(x), x² log x, and x² sin x. The derivative of a function is the slope of its graph; in other words, it is the slope of the tangent line that best fits the graph at a given point. To find the derivative of a function at a point, you look at the rate at which the function's values change as you move from that input to a nearby one.
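As a concrete illustration of the rule described above, here is a small, self-checking example. This snippet is not from the original article; the functions sin(x) and x² + 1 are just sample choices, and it uses the SymPy library to confirm that the quotient-rule formula agrees with direct differentiation.

```python
# Minimal sketch: verify the quotient rule on a sample pair of functions.
import sympy as sp

x = sp.symbols('x')
f = sp.sin(x)       # numerator f(x)
g = x**2 + 1        # denominator g(x), nonzero everywhere

# Quotient rule: (f/g)' = (f'*g - f*g') / g**2
by_rule = (sp.diff(f, x) * g - f * sp.diff(g, x)) / g**2
direct = sp.diff(f / g, x)

# Prints 0, showing the two expressions are identical.
print(sp.simplify(by_rule - direct))
```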
{"url":"http://higheducations.com/how-to-find-the-derivative-of-a-function/","timestamp":"2024-11-09T19:08:36Z","content_type":"text/html","content_length":"90715","record_id":"<urn:uuid:a127b087-1ece-477a-bf76-89cbcfecca7e>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00650.warc.gz"}
Excel stuff

I would really like to have some actual data to work with, but I can not find any that I would 1 - know how to read, 2 - know how to work with, 3 - would not get frustrated just by looking at a csv file full of digits and commas. Creating my own dataset just by typing words and numbers in a Google sheet is possible (and probably I will end up doing that and using what I come up with for the next two weeks), but… I forgot what I wanted to say.

The thing is, according to my "plan" I am supposed to be learning about Excel and statistics right now, but for some reason as I watch a tutorial or read a book (Think Stats) I come up to a roadblock which tells me that to continue you need to know how to use Python or how to use Excel. 1 - I don't know how to use Python and I am supposed to learn it in 2 weeks (according to my plan :D) and 2 - I don't have Excel on my computer (need to install it on Linux through Wine after finding a free ISO…) and Google Sheets lacks some functions that I see tutorials about… Again, I don't feel that I am progressing anywhere, I am just looking at various ways to get data, learning about various types of data and basically finding out ways of how NOT to start learning DS :D But it's alright, I'll get there. Let's focus on Excel for the rest of today.

16:20

Filters in Excel are quite useful. Watching in 1.5x speed just to get an idea of what is possible.

Totals, nice. Transform your data into a table to get easy totals/averages and so on. You get not only filtering options but also your formulas or charts referencing the data - they both get updated. Kay, will keep in mind.

Formulas and Functions by the guy who works at Microsoft :D. Been seeing median here and there, so here is a refresher:

Concatenate - =CONCAT(x1,x2) takes a value from one cell, then from the other one that you select (the two are separated with a comma in the formula) and puts them together in one cell.

If - =IF(D13="smile","yay","boo") good times Csongor, good times ;) IF a field equals the word "smile", the returned value will be "yay"; if the field equals anything other than "smile", it returns the value "boo".

Countif - selecting a range of values and asking the formula to calculate how many instances of what we have specified are in the given range. For example, the formula =COUNTIF(B1:G7, 1) presents me with a number that tells me how many 1s are in the range that I asked it to look in.

Vlookup - things are getting more interesting. So VLOOKUP looks at a table; let's say the table has two columns… you know, let me just put an image rly quick (more vim and html practice on top of that). So the formula that found out the favorite color of ciongibongi is as such - =VLOOKUP(D18,D15:E18,2,FALSE)

Sumif - is a nice function I guess but I won't make an example, I am aware of it alright.

p.s. damn it takes a lot of time to write this html document in a proper way. I sometimes miss Word documents, but sometimes I don't. Vim has its own advantages, but I am new with it so I am a little bit slow. Learned how to yank and paste a word or a whole line today.
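Since the plan mentioned above is to pick up Python next, here is a tiny look-ahead sketch (not from the original post) showing the same kind of lookup done with the pandas library. The table contents are made up to mirror the VLOOKUP example.

```python
# Rough pandas equivalent of =VLOOKUP("ciongibongi", D15:E18, 2, FALSE).
import pandas as pd

people = pd.DataFrame({
    "name": ["anna", "csongor", "ciongibongi", "tomas"],    # lookup column
    "favorite_color": ["red", "blue", "green", "yellow"],   # returned column
})

# Find the row whose name matches, then return the value from the second column.
match = people.loc[people["name"] == "ciongibongi", "favorite_color"]
print(match.iloc[0])  # -> "green"
```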
{"url":"http://arvydas.dev/20210315T165100--excel-stuff__learning.html","timestamp":"2024-11-09T03:54:57Z","content_type":"text/html","content_length":"7071","record_id":"<urn:uuid:106a1bee-f2d0-4fa6-aafb-ef3e929ffb40>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00647.warc.gz"}
Lesson 3 Making Scaled Copies 3.1: More or Less? (5 minutes) This warm-up prompts students to use what they know about numbers and multiplication to reason about decimal computations. The problems are designed to result in an answer very close to the given choices, so students must be more precise in their reasoning than simply rounding and calculating. Whereas a number talk typically presents a numerical expression and asks students to explain strategies for evaluating it, this activity asks a slightly different question because students don't necessarily need to evaluate the expression. Rather, they are asked to judge whether the expression is greater than or less than a given value. Although this activity is not quite the same thing as a number talk, the discussion might sound quite similar. Display the problems for all to see. Give students 2 minutes of quiet think time. Tell students they may not have to calculate, but could instead reason using what they know about the numbers and operation in each problem. Ask students to give a signal when they have an answer and a strategy for every problem. Student Facing For each problem, select the answer from the two choices. 1. The value of \(25\boldcdot (8.5)\) is: 1. More than 205 2. Less than 205 2. The value of \((9.93)\boldcdot (0.984)\) is: 1. More than 10 2. Less than 10 3. The value of \((0.24)\boldcdot (0.67)\) is: 1. More than 0.2 2. Less than 0.2 Anticipated Misconceptions Students may attempt to solve each problem instead of reasoning about the numbers and operations. If a student is calculating an exact solution to each problem, ask them to look closely at the characteristics of the numbers and how an operation would affect those numbers. Activity Synthesis Discuss each problem one at a time with this structure: • Ask students to indicate which option they agree with. • If everyone agrees on one answer, ask a few students to share their reasoning, recording it for all to see. • If there is disagreement on an answer, ask students with differing answers to explain their reasoning and come to an agreement on an answer. 3.2: Drawing Scaled Copies (10 minutes) Optional activity Students continue to work with scaled copies of simple geometric figures, this time on a grid. When trying to scale non-horizontal and non-vertical segments, students may think of using tracing paper or a ruler to measure lengths and a protractor to measure angles. Make sure they have a chance to see how the structure of the grid can be useful for scaling the lengths of non-vertical and non-horizonal segments. To create scaled copies, students need to attend to all parts of the original figure, or else the copy will not be scaled correctly. Use of the grid for scaling non-horizontal and non-vertical segments is a good example of using tools strategically (MP5). As students work, monitor for students who find a way to scale segment lengths properly but neglect to consider the size of corresponding angles (especially in making a copy of Figure B and D). Give students 3 minutes of quiet time to draw and another 3 minutes to share their drawings with a partner, check each other's work, and make revisions. Provide access to their geometry toolkits. Representation: Internalize Comprehension. Check in with students after the first 2-3 minutes of work time. Check to make sure students have attended to all parts of the original figures. Supports accessibility for: Conceptual processing; Organization Speaking, Representing: MLR1 Stronger and Clearer Each Time. 
Use this routine to support productive discussion when students share their drawings with a partner. Give students time to meet with 2–3 partners, to share and get feedback on their scaled copies. Provide students with prompts for feedback that will help their partners strengthen their ideas and clarify their drawings (e.g., “How did you know how long to make each side length?”, “How did you measure to make each angle?”, “How did you use the grid to create your scaled copy?”). Students can borrow ideas and language from each partner to strengthen their work. This provides students with an opportunity to produce verbal mathematical language in service of refining their ideas and their drawings. Design Principle(s): Optimize output (for justification) Student Facing 1. Draw a scaled copy of either Figure A or B using a scale factor of 3. 2. Draw a scaled copy of either Figure C or D using a scale factor of \(\frac12\). Anticipated Misconceptions Some students may think that Figure C cannot be scaled by a factor of \(\frac12\) because some vertices will not land on intersections of grid lines. Clarify that the grid helps us see lengths in whole units but segments we draw on them are not limited to whole units in length. Activity Synthesis Invite students to share their strategies of how they used the grid (or other tools) to make sure their drawings were scaled copies. Consider asking questions like: • How did you know how long to make each side in your scaled copy? • How did you know how big to make each angle in your scaled copy? • If you made a mistake while drawing your scaled copy, how could you tell? Model, prompt, and listen for the language students are using to distinguish between scaled and not scaled figures. Emphasize the usefulness of the grid in drawing and checking right angles, and for drawing and checking lengths of segments. All correct answers will be the same size and shape, but they could be drawn in different positions on the grid. 3.3: Which Operations?
(Part 1) (10 minutes) The purpose of this activity is to contrast the effects of multiplying side lengths versus adding to side lengths when creating copies of a polygon. To find the corresponding side lengths on a scaled copy, the side lengths of a figure are all multiplied (or divided) by the same number. However, students often mistakenly think that adding or subtracting the same number to all the side lengths will also create a scaled copy. When students recognize that there is a multiplicative relationship between the side lengths rather than an additive one, they are looking for and making use of structure Monitor for students who: • notice that Diego's copy is no longer a polygon while Jada's still is • notice that the relationships between side lengths in Diego's copy have changed (e.g., Side 1 is twice as long as Side 2 in the original but is not twice as long as Side 2 in the copy.) while in Jada's copy they have not • notice that all the corresponding angles have equal measures (i.e., 90 or 270 degrees) • describe Jada's copy as having all side lengths divided by 3 • describe Jada's copy as having all side lengths a third as long as their original lengths • describe Jada's copy as having a scale factor of \(\frac13\) Give students 2–3 minutes of quiet think time, and then 2 minutes to share their thinking with a partner. See MLR 3 (Clarify, Critique, Correct) and use the strategy "Critique a Partial or Flawed Engagement: Internalize Self Regulation. Demonstrate giving and receiving constructive feedback. Use a structured process and display sentence frames to support productive feedback. For example, “How did you get…?,” “How do you know…?,” and “That could/couldn’t be true because…” Supports accessibility for: Social-emotional skills; Organization; Language Student Facing Diego and Jada want to scale this polygon so the side that corresponds to 15 units in the original is 5 units in the scaled copy. Diego and Jada each use a different operation to find the new side lengths. Here are their finished drawings. 1. What operation do you think Diego used to calculate the lengths for his drawing? 2. What operation do you think Jada used to calculate the lengths for her drawing? 3. Did each method produce a scaled copy of the polygon? Explain your reasoning. Activity Synthesis Invite previously-selected students to share their answers and reasoning. Sequence their explanations from most general to most technical. Before moving to the next activity, consider asking questions like these: • What is the scale factor used to create Jada’s drawing? What about for Diego’s drawing? (\(\frac13\) for Jada's; there isn't one for Diego's, because it is not a scaled copy.) • What can you say about the corresponding angles in Jada and Diego’s drawings? (They are all equal, even though one is a scaled copy and one is not.) • Subtraction of side lengths does not (usually) produce scaled copies. Do you think addition would work? (Answers vary.) Note: There are rare cases when adding or subtracting the same length from each side of a polygon (and keeping the angles the same) will produce a scaled copy, namely if all side lengths are the same. If not mentioned by students, it is not important to discuss this at this point. Representing, writing, and speaking: Math Language Routine 3 Clarify, Critique, Correct. This is the first time Math Language Routine 3 is suggested as a support in this course. In this routine, students are given an incorrect or incomplete piece of mathematical work. 
This may be in the form of a written statement, drawing, problem-solving steps, or another mathematical representation. Students analyze, reflect on, and improve the written work by correcting errors and clarifying meaning. Typical prompts are: “Is anything unclear?” and/or “Are there any reasoning errors?” The purpose of this routine is to engage students in analyzing mathematical thinking that is not their own, and to solidify their knowledge through communicating about conceptual errors and ambiguities in language. Design Principle(s): Support sense-making; Optimize output (for reasoning) How It Happens: 1. Play the role of Diego and present the following statement along with his flawed drawing to the class. “I used a scale factor of minus 10, and Jada used a scale factor of one third. So my drawing is a different kind of scaled copy from Jada’s.” Ask students, “What steps did Diego take to make the drawing?” and “Did he create a scaled copy? How do you know?” 2. Give students 1 minute of quiet think time to analyze the statement, and then 3 minutes to work on improving the statement with a partner. As pairs discuss, provide these sentence frames for scaffolding: “I believe Diego created the drawing by ___ because ___.”, “Diego created/did not create a scaled copy. I know this because ___.”, “You can’t ___ because ___.” Encourage the listener to ask clarifying questions by referring to the statement and the drawings. Allow each partner to take a turn as the speaker and listener. Listen for students identifying the type of operation used and justification for whether or not a scaled drawing was produced. Have the pairs reach a mutual understanding and agreement on a correct statement about Diego’s drawing. 3. Invite 3 or 4 pairs to present their improved statement to the class, both orally and in writing. . Ask students to listen for order/time transition words (first, next, then, etc.), and any elements of justifications (e.g., First, ___ because ___.). Here are two sample improved statements: “I subtracted 10 from each side length and Jada used a scale factor of one third. So my drawing is not a scaled copy and Jada’s is. Jada’s is a scaled copy because I know that multiplying—not subtracting—creates a scaled copy. Her drawing created a polygon with no gaps.” “I minused 10 from each side, but I should have realized that in order to scale 15 units in the original down to 5 units in the copy, you have to divide by 3. Jada used a scale factor of one third, which is the same as dividing by 3. My drawing is not a scaled copy and Jada’s is because hers is not a polygon with no gaps, and minusing 10 is not a scale factor.” Call attention to statements that generalize that the method for finding the side lengths of a scaled copy is by multiplying or dividing, not adding or subtracting. Revoice student thoughts with an emphasis on knowing whether or not they created a scaled polygon. 4. Close the conversation on Diego’s drawing, discuss the accuracy of Jada’s scaled copy, and then move on to the next lesson activity. 3.4: Which Operations? (Part 2) (10 minutes) In the previous activity, students saw that subtracting the same value from all side lengths of a polygon did not produce a (smaller) scaled copy. This activity makes the case that adding the same value to all lengths also does not produce a (larger) scaled copy, reinforcing the idea that scaling involves multiplication. This activity gives students a chance to draw a scaled copy without a grid and to use paper as a measuring tool. 
To create a copy using a scale factor of 2, students need to mark the length of each original segment and transfer it twice onto their drawing surface, reinforcing—in a tactile way—the meaning of scale factor. The angles in the polygon are right angles (and a 270 degree angle in one case) and can be made using the corner of an index card. Some students may struggle to figure out how to use an index card or a sheet of paper to measure lengths. Before demonstrating, encourage them to think about how a length in the given polygon could be copied onto an index card and used as an increment for measuring. If needed, show how to mark the 4-unit length along the edge of a card and to use the mark to determine the needed lengths for the Have students read the task statement and check that they understand which side of the polygon Andre would like to be 8 units long on his drawing. Provide access to index cards, so that students can use it as a measuring tool. Consider not explicitly directing students as to its use to give them a chance to use tools strategically (MP5). Give students 5–6 minutes of quiet work time, and then 2 minutes to share their work with a partner. Student Facing Andre wants to make a scaled copy of Jada's drawing so the side that corresponds to 4 units in Jada’s polygon is 8 units in his scaled copy. 1. Andre says “I wonder if I should add 4 units to the lengths of all of the segments?” What would you say in response to Andre? Explain or show your reasoning. 2. Create the scaled copy that Andre wants. If you get stuck, consider using the edge of an index card or paper to measure the lengths needed to draw the copy. Student Facing Are you ready for more? The side lengths of Triangle B are all 5 more than the side lengths of Triangle A. Can Triangle B be a scaled copy of Triangle A? Explain your reasoning. Anticipated Misconceptions Some students might not be convinced that making each segment 4 units longer will not work. To show that adding 4 units would work, they might simply redraw the polygon and write side lengths that are 4 units longer, regardless of whether the numbers match the actual lengths. Urge them to check the side lengths by measuring. Tell them (or show, if needed) how the 4-unit length in Jada’s drawing could be used as a measuring unit and added to all sides. Other students might add 4 units to all sides and manage to make a polygon but changing the angles along the way. If students do so to make the case that the copy will not be scaled, consider sharing their illustrations with the class, as these can help to counter the idea that “scaling involves adding.” If, however, students do this to show that adding 4 units all around does work, address the misconception. Ask them to recall the size of corresponding angles in scaled copies, or remind them that angles in a scaled copy are the same size as their counterparts in the original figure. Activity Synthesis The purpose of the activity is to explicitly call out a potential misunderstanding of how scale factors work, emphasizing that scale factors work by multiplying existing side lengths by a common factor, rather than adding a common length to each. Invite a couple of students to share their explanations or illustrations that adding 4 units to the length of each segment would not work (e.g. the copy is no longer a polygon, or the copy has angles that are different than in the original figure). Then, select a couple of other students to show their scaled copies and share how they created the copies. 
Consider asking: • What scale factor did you use to create your copy? Why? • How did you use an index card (or a sheet of paper) to measure the lengths for the copy? • How did you measure the angles for the copy? Speaking: Math Language Routine 7 Compare and Connect. This is the first time Math Language Routine 7 is suggested as a support in this course. In this routine, students are given a problem that can be approached using multiple strategies or representations, and are asked to prepare a visual display of their method. Students then engage in investigating the strategies (by means of a teacher-led gallery walk, partner exchange, group presentation, etc.), compare approaches, and identify correspondences between different representations. A typical discussion prompt is “What is the same and what is different?”, comparing their own strategy to the others. The purpose of this routine is to allow students to make sense of mathematical strategies by identifying, comparing, contrasting, and connecting other approaches to their own, and to develop students’ awareness of the language used through constructive conversations. Design Principle(s): Maximize meta-awareness How It Happens: 1. Use this routine to compare and contrast different methods for creating scaled copies of Jada’s drawing. Before selecting students to share a display of their method with the whole class, first give students an opportunity to do this in a group of 3–4. Invite students to quietly investigate each other’s work. Ask students to consider what is the same and what is different about each display. Invite students to give a step-by-step explanation of their method using this sentence frame: “In order to create the copy, first I…. Next,…. Then, …. Finally,….”. Allow 1–2 minutes for each display and signal when it is time to switch. 2. Next, give each student the opportunity to add detail to their own display for 1-2 minutes. As students work on their displays, circulate the room to identify at least two different methods or two different ways of representing a method. Also look for methods that were only partially successful. 3. Consider selecting 1–2 students to share methods that were only partially successful in producing scaled copies. Then, select a couple of students to share displays of methods that did produce scaled copies. Draw students’ attention to the approaches used in each drawing (e.g., adding the same value to each side length, not attending to the angles, multiplying by a common factor, not creating a polygon, etc.). Ask students, “Did this approach create a scaled copy? Why or why not?” 4. After the pre-selected students have finished sharing with the whole class, lead a discussion comparing, contrasting, and connecting the different approaches and representations. In this discussion, demonstrate using the mathematical language “scale factor”, “corresponding”, and “multiplicative” to amplify student language. Consider using these prompts: □ “How did the scale factor show up in each method?”, □ “Why did the different approaches lead to the same outcome?”, □ “What worked well in _____’s approach/representation? What did not work well?”, and □ “What role does multiplication play in each approach?” 5. Close the discussion by inviting 3 students to revoice the incorrect method for creating a scaled drawing, and then invite 3 different students to revoice the correct method for creating a scaled drawing. Then, transition back to the Lesson Synthesis and Cool Down. 
Lesson Synthesis • How do we draw a scaled copy of a figure? • Can we create scaled copies by adding or subtracting the same value from all lengths? Why or why not? Scaling is a multiplicative process. To draw a scaled copy of a figure, we need to multiply all of the lengths by the scale factor. We saw in the lesson that adding or subtracting the same value to all lengths will not create scaled copies. 3.5: Cool-down - More Scaled Copies (5 minutes) Student Facing Creating a scaled copy involves multiplying the lengths in the original figure by a scale factor. For example, to make a scaled copy of triangle \(ABC\) where the base is 8 units, we would use a scale factor of 4. This means multiplying all the side lengths by 4, so in triangle \(DEF\), each side is 4 times as long as the corresponding side in triangle \(ABC\).
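To close with a quick numerical illustration of why scaling must multiply rather than add (this small sketch is an addition to this write-up, not part of the published lesson, and the side lengths are made up):

```python
# Multiplying side lengths by a scale factor preserves the ratios between
# sides; adding or subtracting a constant does not.
original = [15, 9, 6]                       # hypothetical polygon side lengths

scaled  = [s * (1/3) for s in original]     # scale factor 1/3 (Jada's approach)
shifted = [s - 10 for s in original]        # subtract 10 (Diego's approach)

print(scaled)    # [5.0, 3.0, 2.0] -> same 5:3:2 ratios as the original
print(shifted)   # [5, -1, -4]     -> ratios destroyed, lengths even go negative
```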
{"url":"https://im-beta.kendallhunt.com/MS/teachers/2/1/3/index.html","timestamp":"2024-11-04T18:45:23Z","content_type":"text/html","content_length":"129828","record_id":"<urn:uuid:7984a1ea-9ddc-4150-b8a4-615094e91d39>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00896.warc.gz"}
A variable line passing through the origin intersects two given straight lines 2x + y = 4 and x + 3y = 6 at R and S respectively. A point P is taken on this variable line. Find the equation to the locus of the point P if:
A) OP is the arithmetic mean of OR and OS.
B) OP is the geometric mean of OR and OS.
C) OP is the harmonic mean of OR and OS.
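For reference, here is one standard way to set the problem up (this sketch is not part of the original question page). Write the variable line through the origin as the set of points \((r\cos\theta,\ r\sin\theta)\); then

\[
OR = \frac{4}{2\cos\theta + \sin\theta}, \qquad OS = \frac{6}{\cos\theta + 3\sin\theta},
\]

and for \(P = (x, y)\) with \(OP = r\) we have \(x = r\cos\theta\) and \(y = r\sin\theta\). Substituting into the mean conditions and eliminating \(\theta\) gives, for example in case C (harmonic mean, \(\tfrac{2}{OP} = \tfrac{1}{OR} + \tfrac{1}{OS}\)),

\[
2 = \frac{2x + y}{4} + \frac{x + 3y}{6} \quad\Longrightarrow\quad 8x + 9y = 24.
\]

Cases A and B follow the same substitution: A leads to \(\tfrac{4}{2x+y} + \tfrac{6}{x+3y} = 2\) and B leads to \((2x+y)(x+3y) = 24\).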
{"url":"https://socratic.org/questions/a-variable-line-passing-through-the-origin-intersects-two-given-straight-lines-2-1#522065","timestamp":"2024-11-09T16:33:53Z","content_type":"text/html","content_length":"31500","record_id":"<urn:uuid:49f21358-466e-45c2-a7bf-05b26fa6ab11>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00206.warc.gz"}
GMAT Question of the Day – DS Word Problem – April 8th

Bill's Burger shop's profit from the sale of its famous "Billy Burger" was what percent greater this year than it was last year?

(1) The ratio of this year's "Billy Burger" revenue to last year's was 3 to 2
(2) The price of the "Billy Burger" was the same this year as it was last year

Solution: In order to know the percent change in profit across the two years of Billy Burger sales, you need to know either the profit from both years or the relative change in both revenue and costs.

Statement (1) gives us the relationship between the revenues but tells us nothing about the costs. Insufficient.

Statement (2) gives us information about the revenue per unit but tells us nothing about how many were sold nor about the cost per unit. Insufficient.

Statement (1) + Statement (2): Putting them together, you are still missing information on the expenses, so you have no way to relate the profits from the two years. Insufficient.
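A quick numeric check (with made-up figures, added here for illustration) shows why even both statements together leave the percent change undetermined: the same revenues and the same price are compatible with very different profit changes once costs vary.

```python
# Two cost scenarios consistent with statements (1) and (2), giving different
# percent changes in profit, so the combined statements are insufficient.
def pct_change(new, old):
    return 100 * (new - old) / old

# Scenario A: revenues 200 -> 300 (a 3:2 ratio), costs 100 both years
print(pct_change(300 - 100, 200 - 100))   # 100.0% increase in profit

# Scenario B: same revenues, but costs were 150 last year and 250 this year
print(pct_change(300 - 250, 200 - 150))   # 0.0% increase in profit
```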
{"url":"https://atlanticgmat.com/gmat-question-day-percent-change/","timestamp":"2024-11-05T22:31:21Z","content_type":"text/html","content_length":"292715","record_id":"<urn:uuid:1e5e9a87-d99d-4ec8-b534-0f3cd1cd11af>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00244.warc.gz"}
McGraw Hill Math Grade 1 Chapter 11 Lesson 9 Answer Key Solid Figures

All the solutions provided in the McGraw Hill Math Grade 1 Answer Key PDF Chapter 11 Lesson 9 Solid Figures are as per the latest syllabus guidelines.

McGraw-Hill Math Grade 1 Answer Key Chapter 11 Lesson 9 Solid Figures

Question 1. Circle the cylinder.
A cylinder is round and has a top and bottom in the shape of a circle, so we circled the cylinder.

Question 2. Color the rectangular prism green.
A rectangular prism is also called a cuboid. It has six faces, all of them rectangles, and twelve edges, so we colored the rectangular prism green.

Circle the name of the solid figure. Then write the name.

Question 3.
A cone is a 3D shape consisting of a circular base and one continuous curved surface tapering to a point.

Question 4.
A cube is a three-dimensional shape made up of width, height, and depth. It is made up of 6 squares, or faces, and these are equal in size. A cube also has 8 vertices.
{"url":"https://gomathanswerkeys.com/mcgraw-hill-math-grade-1-chapter-11-lesson-9-answer-key/","timestamp":"2024-11-04T14:19:54Z","content_type":"text/html","content_length":"141354","record_id":"<urn:uuid:0ae62b34-27ff-40d1-adc8-7d34d4d1c6ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00568.warc.gz"}
Weakly extractable one-way functions

A family of one-way functions is extractable if given a random function in the family, an efficient adversary can only output an element in the image of the function if it knows a corresponding preimage. This knowledge extraction guarantee is particularly powerful since it does not require interaction. However, extractable one-way functions (EFs) are subject to a strong barrier: assuming indistinguishability obfuscation, no EF can have a knowledge extractor that works against all polynomial-size non-uniform adversaries. This holds even for non-black-box extractors that use the adversary’s code. Accordingly, the literature considers either EFs based on non-falsifiable knowledge assumptions, where the extractor is not explicitly given, but it is only assumed to exist, or EFs against a restricted class of adversaries with a bounded non-uniform advice. This falls short of cryptography’s gold standard of security that requires an explicit reduction against non-uniform adversaries of arbitrary polynomial size. Motivated by this gap, we put forward a new notion of weakly extractable one-way functions (WEFs) that circumvents the known barrier. We then prove that WEFs are inextricably connected to the long standing question of three-message zero knowledge protocols. We show that different flavors of WEFs are sufficient and necessary for three-message zero knowledge to exist. The exact flavor depends on whether the protocol is computational or statistical zero knowledge and whether it is publicly or privately verifiable. Combined with recent progress on constructing three message zero-knowledge, we derive a new connection between keyless multi-collision resistance and the notion of incompressibility and the feasibility of non-interactive knowledge extraction. Another interesting corollary of our result is that in order to construct three-message zero knowledge arguments, it suffices to construct such arguments where the honest prover strategy is unbounded.

Publication series: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Volume 12550 LNCS. ISSN (Print) 0302-9743; ISSN (Electronic) 1611-3349.
Conference: 18th International Conference on Theory of Cryptography, TCC 2020, Durham, United States, 16/11/20 → 19/11/20.
Funders and funder numbers (as listed): Alon Young Faculty Fellowship; Blavatnik Foundation, 1789/19; Blavatnik Family Foundation; Israel Science Foundation, 484/18; Tel Aviv University.
{"url":"https://cris.tau.ac.il/en/publications/weakly-extractable-one-way-functions","timestamp":"2024-11-10T08:00:39Z","content_type":"text/html","content_length":"57921","record_id":"<urn:uuid:e57b0523-cfd0-46de-b820-24ed26ef4528>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00185.warc.gz"}
b) Rotor-angle stability

The analysis of electromechanical dynamics relies significantly on the generator rotor swing following a fault in the grid. To analyze electromechanical dynamics, it can be assumed that the shaft of the generating unit is rigid, and the rotor swing can then be described by the classical swing equation

M (dΔω/dt) = P_m − P_e − D·Δω,  with  P_e = (E′ V_s / x_d′) · sin δ,

where the time derivative of the rotor angle dδ/dt = Δω = ω − ω_s is the rotor speed deviation in electrical radians per second (rad/s), D is the damping coefficient, E′ is the transient internal emf, V_s is the infinite busbar voltage, x_d′ is the d-axis transient reactance between the generator and the infinite busbar, δ is the power (or rotor) angle with respect to the infinite busbar, and P_m and P_e are the mechanical and electrical power, respectively. The coefficient M is defined as

M = 2·H·S_n / ω_s,

where H is the inertia constant and S_n is the generator nominal power.

The response of a system to a significant disturbance, such as a short circuit or line tripping, is very dramatic from a stability standpoint. When such a fault happens, substantial currents and torques are generated, and swift action is often necessary to preserve system stability. This challenge is commonly referred to as the issue of large-disturbance stability. Four distinct types of short circuits, namely the single-phase short circuit, phase-to-phase short circuit, phase-to-phase-to-earth short circuit, and three-phase short circuit, are examined on the single-machine-infinite-busbar (SMIB) system depicted in Figure 1. The short circuit occurs at the beginning of the line.

Figure 1. Schematic diagram of the SMIB system

The initial step involves determining the power-angle curve P_e_pre for the normal (pre-fault) grid. Assuming E′ and V_s remain constant, the focus is on finding the equivalent system reactance, as shown in Figure 2.

Figure 2. Equivalent circuit for the pre-fault state

The second step involves determining the power-angle curve P_e_fault for the fault state. Assuming E′ and V_s remain constant, the focus is on finding the equivalent system reactance during the fault according to Figure 3.

Figure 3. Equivalent circuit for the fault state

Utilizing symmetrical components enables the representation of any type of fault in the positive-sequence network by introducing a fault shunt reactance (Δx_F) connected between the point of the fault and the neutral, as illustrated in Figure 3. The value of Δx_F is contingent on the type of fault and is provided in Table 1, where x_i and x_0 are the negative- and zero-sequence Thévenin equivalent reactances observed from the fault terminals. The equivalent transfer reactance between E′ and V_s during the fault, obtained from the circuit of Figure 3 (for example by a star-delta transformation), will be denoted x_ΔF here; the power-angle curve P_e_fault for the fault state is then

P_e_fault = (E′ V_s / x_ΔF) · sin δ.

Finally, the last step is to determine the power-angle curve P_e_post for the post-fault state, which, in the case of this grid, is the same as the power-angle curve for the normal (pre-fault) grid, i.e.

P_e_post = P_e_pre = (E′ V_s / x_d′) · sin δ.

To assess rotor-angle stability during the fault, it is essential to analyze the yellow (P_acc) and blue (P_dcc) areas in the interactive graph below. For stable operation, the deceleration (blue) area must be larger than the acceleration (yellow) area. The sizes of both areas primarily depend on the time it takes to clear the fault, i.e., the angle delta_cl at which the fault is cleared.
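To make the equal-area idea concrete, here is a small numerical sketch (not part of the original article). It assumes the three power-angle curves have the sinusoidal form used above, P_pre = P1·sin δ, P_fault = P2·sin δ, P_post = P3·sin δ; the amplitudes below are made-up per-unit values and the function name is ours.

```python
# Equal-area criterion: closed-form estimate of the critical clearing angle
# for a single-machine-infinite-busbar case with sinusoidal power-angle curves.
import numpy as np

def critical_clearing_angle(Pm, P1, P2, P3):
    d0 = np.arcsin(Pm / P1)             # pre-fault operating angle
    dmax = np.pi - np.arcsin(Pm / P3)   # limit angle on the post-fault curve
    # Equating acceleration and deceleration areas and solving for cos(d_cr):
    cos_dcr = (Pm * (dmax - d0) + P3 * np.cos(dmax) - P2 * np.cos(d0)) / (P3 - P2)
    return np.arccos(np.clip(cos_dcr, -1.0, 1.0))

# Example: Pm = 0.8 pu, P1 = P3 = 2.0 pu (post-fault grid equal to pre-fault),
# P2 = 0.6 pu while the fault is on.
dcr = critical_clearing_angle(Pm=0.8, P1=2.0, P2=0.6, P3=2.0)
print(f"critical clearing angle is about {np.degrees(dcr):.1f} degrees")
```

If the fault is cleared at an angle smaller than this value, the blue deceleration area can absorb the yellow acceleration area and the machine stays in synchronism.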
{"url":"https://transitproject.eu/2023/11/10/rotor-angle-stability/","timestamp":"2024-11-06T16:58:50Z","content_type":"text/html","content_length":"64815","record_id":"<urn:uuid:995b796f-460c-42d0-91b8-ac0bc8abc78c>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00609.warc.gz"}
Module Size Distribution and Defect Density - P.PDFKUL.COM Module Size Distribution and Defect Density Yashwant K. Malaiya and Jason Denton Computer Science Dept. Colorado State University Fort Collins, CO 80523 +1 970 491 7031 malaiya| [email protected] ABSTRACT Data from several projects show a significant relationship between the size of a module and its defect density. Here we address implications of this observation. Does the overall defect density of a software project vary with its module size distribution? Even more interesting is the question- can we exploit this dependence to reduce the total number of defects? We examine the available data sets and propose a model relating module size and defect density. It takes into account defects that arise due to the interconnections among the modules as well as defects that occur due to the complexity of individual modules. Model parameters are estimated using actual data. We then present a key observation that allows use of this model for not just estimation the defect density, but also potentially optimizing a design to minimize defects. This observation, supported by several data sets examined, is that the module sizes often follow exponential distribution. We show how the two models used together provide a way of projecting defect density variation. We also consider the possibility of minimizing the defect density by controlling module size distribution. Module size, defect density, reliability, module size distribution 1 INTRODUCTION The defect density is one of the most important of the software reliability attributes. It is often one of the measures used to ascertain release readiness. There are two factors that control the defect density at release. One of them is the extent and effectiveness of the testing and This work was supported in part by a BMDO project funded by ONR and also by an ONR AASERT project. debugging effort [1]. The other is the initial defect density present at the beginning of testing [2]. A study of the factors that influence the initial defect density is important due to two reasons. First it provides a quantitative method of identifying possible techniques for reducing the occurrence of defects. Secondly it allows one to estimate initial defect density, which can be used to plan the testing effort required. There are several models that attempt to estimate the defect density. Some of them consider only a single factor while others take multiple factors into account. Models can be additive where influence of several factors is added [3], or they can be multiplicative. Multiplicative models allow a submodel for each factor to be developed independently. Multiplicative modeling is standard for hardware failure rate estimation. Such models for software cost estimation have been widely used. Multiplicative models to estimate software defect density include the RADC model [4,5] and the ROBUST model [6,7]. The sub-models considered in the past include the effect of the maturity of the development process, the skill of the programmers involved, and the complexity of the program. The problem including the impact of requirement volatility has been studied [8]. Development of such models is needed for estimation of defect densities. They also offer an interesting possibility. They can allow an organization to assess the possibility of controlling defect density even before testing begins. 
This paper considers the problem of developing a model to account for variation in module size distribution, which can be used as a submodel in a multiplicative model. A common simplifying assumption is that the defects are distributed randomly in a software system. However we can intuitively reason that size of a module may in some way influence the defect density. It has indeed been supported by several studies. It is popularly believed that decomposing a software system into small modules improves the design. Surprisingly the projects studied show exactly the opposite for a large range of module sizes. Basili and Perricone [9] studied a project with 90,000 lines of code. They studied 370 modules divided into 5 groups based on module size with increments of 50. They observed, contrary to their expectation, that larger modules were less error prone. This was true even when the larger modules were more complex as measured by cyclomatic complexity. Shen et al. [10] studied three IBM software projects, with three separate releases of one of them. The sizes ranged from 7 to 326 thousand lines. They give a plot of defect densities for 108 modules. While they did not provide scales for the plot, they mention that for 24 modules with sizes exceeding 500 lines, the program length did not influence the defect density, which remains relatively constant. For the rest of 84 modules, the plot clearly shows that defect density declines as size grows. They also suggest a simple quantitative model for defect density in terms of module size. Banker and Kemerer [11] have presented a hypothesis that for any given environment there is an optimal module size. For lesser sizes, there is rising economy, and for greater sizes the economy declines due to rising number of communication paths. Withrow [12] examined the data for 362 ADA modules with total 114,000 lines of code. She divided the modules into 8 groups and gave a plot between module size and error density. This plot shows a remarkable minimum for modules with sizes 161-250, after which the defect density starts increasing with module size. Her results thus support the hypothesis by Banker and Kemerer. Hatton [13] gave plots of data from a NASA Goddard project along with Withrow’s data. He suggested two different models for the two regions. For sizes up to 200 lines, he suggested that the total number of defects grow logarithmically with module size, giving a declining defect density. For larger modules, he suggests a quadratic model. In contrast, Rosenberg [14] has argued that the observed decrease in defect density with rising module sizes is misleading. We examine his argument and show that his observations can be restated to confirm with a model we propose. Fenton and Ohlsson [15] have studied randomly selected modules from a large telecommunications project. They did not observe a significant dependence. We will see a reason of their observation. In the next section we propose a composite defect density model that takes into account both declining and rising defect density trends. We then apply it to actual data to obtain parameter values. This model would be of little value if we did not know the module sizes vary in a project. We present a pleasant surprise. For several projects examined, module sizes distribution is quite similar. This observation is used to obtain an expression for the total defect content in a project with many modules. This allows us to examine the influence of module size distribution to the overall defect density. 
We discuss how module size distribution can be characterized in a defect density model that takes several factors into account. Finally, we consider the intriguing possibility that defect density may be reduced simply by controlling module sizes.

2 A COMPOSITE DEFECT DENSITY MODEL

Here we construct a model that explains the data presented in the literature. A software system is built using a number of modules, which are themselves built using a number of instructions. There are two mechanisms that give rise to defects. Some faults, termed module-related, are related to how the project is partitioned into modules and how the modules interact. Other faults, termed instruction-related, are associated with the lower-level building blocks. These faults arise because of imperfect interaction of instructions within a module and their individual implementations. We first obtain models of each of the two fault types.

A. Module-related faults: We can term these interface faults because they will primarily be associated with parameters passed among the modules. However, some of them may be related to assumptions made by modules regarding each other. They may also be associated with handling of global data. We assume that such faults are uniformly distributed among the modules. If a module has size s, its defect density Dm for module-related faults is given by

Dm(s) = a / s        (1)

where the minimum possible value of s is one and a is a suitable parameter. In terms of defect density, such defects represent overhead that proportionately declines as module size grows. The model of Equation (1) is consistent with the model given by Shen et al. [10].

Here it is interesting to examine Rosenberg's analysis [14]. He assumes that two random variables X and Y are statistically independent. He gives a simulated scatter plot of Y/X against X, which looks similar to the defect density versus module size plot given by Shen et al. [10]. However, his assumption implies that the total number of defects in a module is not related to its size, i.e. the defect density is inversely proportional to size. His basic assumption is thus equivalent to the model given in Equation (1). As we will see soon, such behavior is overcome by another factor in large modules.

B. Instruction-related faults: These are the faults that will dominate larger modules. We can term these faults bulk faults [8]. Let us assume that the probability that an instruction is incorrect has two components. The first component is a constant b. The other component depends on the number of other instructions a given instruction may interact with. We can assume that the second component is proportional to the module size s. We can then express the defect density Di due to instruction-related defects as

Di(s) = b + c·s        (2)

where c is another parameter.

Using Equations (1) and (2) we can express the total defect density D(s) as

D(s) = Dm(s) + Di(s) = a/s + b + c·s        (3)

Setting the derivative of the right-hand side of (3) to zero, −a/s² + c = 0, gives the module size smin for minimum defect density,

smin = √(a/c)        (4)

and the minimum defect density is given by

Dmin = 2·√(a·c) + b        (5)

It should be noted that the model implies two different regions.

Region A: modules with s < smin
Region B: modules with s > smin

In region A, defect density declines with rising module size, and in region B the defect density rises.

3 ANALYSIS OF MODULE SIZE-DEFECT DENSITY DATA

Here we will analyze the available data given in the tables below. We apply the model given in (3) to the data to determine the parameter values.
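To see how Equations (3) to (5) behave numerically, the short sketch below (not part of the original paper) evaluates the composite model; the parameter values are the ones quoted later in the paper's Example 1 (a = 120, b = 1.8, c = 0.006), used here purely for illustration.

```python
# Composite defect-density model D(s) = a/s + b + c*s and its minimum.
import math

a, b, c = 120.0, 1.8, 0.006   # illustrative values taken from Example 1

def defect_density(s):
    """Defects per KLOC for a module of s source lines."""
    return a / s + b + c * s

s_min = math.sqrt(a / c)            # Eq. (4): module size with minimum density
d_min = 2 * math.sqrt(a * c) + b    # Eq. (5): the minimum density itself

print(f"s_min ~ {s_min:.0f} lines, D_min ~ {d_min:.2f} defects/KLOC")
for s in (50, 100, 141, 400, 1000):
    print(s, round(defect_density(s), 2))   # density falls, bottoms out, rises
```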
The data given by Basili and Perricone [9] shows a declining defect density. This is in spite of the fact that the larger modules were more complex. The region of rising defect density is not encountered. As Withrow [12] points out, this is because there are only three modules larger than 200 lines. In this case, we had set parameter c equal to zero for curve fitting. The observed and fitted values are shown in Fig. 1. The data points all appear to be from region A, as mentioned above.

Table 1: Basili data [9] (module size range, module count, cyclomatic complexity, and defect density in defects/KLOC)

The model given in (3) specifies that the defect density tends to decline due to the first term as s increases. The third, linear term will cause the defect density to rise. The middle term represents defect density that remains unaffected. It is possible to have a model more complex than in Eq. (3) using additional parameters. However, that will require us to make further assumptions that will require justification.

The Withrow data given in Table 2 [12] for Ada modules is plotted in Fig. 3. The data exhibits both declining and rising defect density trends. There is a noticeable jump from the third to the fourth data point in the plot. A possible explanation is that Withrow's study includes data from the test phase. It is possible that larger modules were not tested as thoroughly as the smaller modules, resulting in relatively higher defect density.

Table 2: Withrow data [12] (module count, source lines, and observed versus fitted defect density per size group)

The Columbus Assembly data given by Hatton [13] is plotted in Fig. 2 along with the fitted curve given by our model. The defect density drops sharply until a module size of about 400 and then starts rising gradually. The data fits the model very well. Fitting a model with three parameters to experimental data can be difficult because often one of the three can be used to compensate for the effect of another one. Depending on the initial estimates, the estimated parameter values can converge to different combinations of values. In this case, that can be avoided by initially setting the parameter c to zero while the other two are adjusted. After a and b have converged to specific values, c can be made non-zero for fitting.

Figure 1: Defect density variation for Basili data (observed and fitted values versus module size).

Figure 2: Defect density variation for Columbus data (observed and fitted values versus module size).

Table 3 gives the values of the parameters obtained. The second column gives the approximate value for smin, the module size corresponding to the minimum defect density. Since the available data only gives ranges, it should only be regarded as an approximate round number. The parameter a is controlled by the defect density of small modules. The parameter c accounts for the rise in defect density in larger modules. Its value is found to be quite small for the Columbus and Withrow data, and for the Basili data there were no sufficiently large modules. The parameter b is largely influenced by the minimum defect density observed, as we would expect. In these three data sets, most of the available data points correspond to the declining defect density, where parameter c plays little role. The opposite is true for the data presented by Fenton and Ohlsson [15]. In their Table 5, the first data point groups all the modules with sizes less than 500 LOC.
They did not observe the initial declining trend, which is not surprising since the trend reverses around size 200-300 lines. Most of their modules were significantly larger than those in other studies. Thus they had very little data from region B. For their project, the data for release n shows a slowly rising trend, as in the Columbus and Withrow data sets. For release (n+1), the data does not show a clear trend. It should be noted that a very accurate fit is not required since in any given project there will a range of module sizes. For Withrow data we note that the model does not fit with the sharp minimum. However overestimation of the defect density in some modules with be compensated by underestimation for slightly smaller and larger modules. It is possible to obtain a better fit by using a model with more parameters however generally fewer parameters provide better interpretation of the process. 8 Table 3: Parameter values for the three data sets Defect density Parameter values Smin a Observed Fitted Fig.4 shows the distribution of module sizes for the Basili data. Small sized modules are the most common. There are only a few modules with large sizes. The distribution curve drops exponentially with increasing module sizes. Unlike Basili data, Withrow data includes many larger modules. Still as we see from Fig. 5, it has a similar module size distribution. The plots by Shen et al. suggest the same thing. We also examined module size distribution for Gnu C library with 792 modules and again found the same distribution. This surprising preference for smaller modules may either be due to programming practices or a natural tendency of the programming problems to be divisible into segments with such a distribution. We can use an exponential function to arrive at a simple model for such a distribution. Let the density function for the module size distribution be given by this equation. f s ( s ) = g.e − gs Thus the module size distribution plots are described by msd ( s ) = M . g.e − gs Module size Figure 3: Defect density variation for Withrow data. 300 250 Number of modules 4 DISTRIBUTION OF MODULE SIZES To know the impact of module size variation within a project, we not only need to know the module-size defect-density relation, but also the distribution of module sizes for the project. One might think that there is a preferred module size and thus we may see a cluster of size values around the average with a Gaussian-like distribution. Surprisingly, there is evidence that it is usually not so. M odule size Figure 4: Module size distribution for Basili data. In this paper, we will use some rule-of-thumb approximations to obtain some simplified expressions. These approximations are not necessary when dealing with an actual data set since a closed form algebraic expression and numerical values can always be obtained. However the simplified expressions allow us to interpret the results, which can be used for rule-of-thumb calculations. In an actual case we will know the size of the smallest and the largest module. However to obtain simple results we will sometimes take the minimum size to be one and the maximum size to be infinity. We have numerically verified that the approximations are Note that the parameter M represents the total number of modules since ∞ − gs ∫ msd ( s) = ∫ M ge ≈ M 1 Module count The available data is all in the form of grouped data, which gives the number of modules mi that lie in the range (si, si+1). 
We can estimate the value of msd(si ) using Module Size msd ( si ) = mi ( si − si +1 ) The Table 4 gives the values of the parameters M and g. The value of M is taken directly to be the total number of modules. If the value of M is obtained by using curve fitting, it will be slightly different. The value of parameter g is within the same range of magnitude; a larger value implies fewer large modules. Figure 5: Gnu C Library size distribution For Fenton data [15], the module size distribution appears exponential for all the data points except for the first one in their Table 1 with LOC <1000. Having very few small modules was perhaps a good choice since it reduced the number of very small modules that can exhibit high defect density. The exponential distribution is not dependant on the language used. Our observation that the module-size is exponentially distributed for these projects has a significant implication. It allows a way of estimating the total number of defects for a project with different sized modules. Why the exponential distribution arises requires further investigation. Number of modules Table 4 includes a row for the Gnu C Library which includes a wide range of common functions. The size distribution of functions, shown in Figure 5 serves as a good indicator of the naturally occurring size distribution. Thus it is not surprising that we see the same distribution for Withrow data in Fig. 6. Module size Table 4: Module size distribution parameters Data M (total modules) Parameter g Gnu C Library Figure 6: Module size distribution for Withrow data. 5 TOTAL DEFECT CONTENT The total number of defects in a software system is found by adding up the defects in different modules. Since we know both the module size distribution and the dependence of defect density on module size, we can calculate the total number of defects N given by the following equation. ∫ 1 a Mge − gs ( + b + cs ).10 −3.s.ds s where smax is the size of the largest module. Because of the exponential function, the number of large modules will be small. An approximate value can be obtained by setting smax to be infinity. Because of the decaying exponential term, the result is not very sensitive to variation of smax. The factor 10-3 is needed because the defect density is generally stated in terms of defects per 1000 lines of code. The overall defect density is then given by smax − gs a ( + b + cs ).10 −3. s.ds s ST M = g .S T Substituting for M in (11), we have ∫ 1 a g 2 e − gs ( + b + cs ).10 −3.s.ds s This expression can be approximated to D ≈ 0.001( ag + b + 2 (11) where ST the total size of the project with all the modules. Equation (11) can be solved easily to get a closed form expression. Since the resulting expression is quite long, it is given in the Appendix. Example 1: For a software system, there are 400 modules. The module size is exponentially distributed with g=0.004 in Eq. (7). The defect density is related to module sizes as given by (3), with a=120, b=1.8 and c=0.006. The largest module size is 2000 lines. c ) g This provides an optimal value of the parameter g given by g opt = 2c a Note that from (12) we note that 1/g represents the size of an average module. If all the modules were of equal sizes, the minimum defect density would occur when each of them has the size given by smin from (4). 
On the other hand with a realistic exponential distribution, the optimal size sopt of an average module is obtained using (14), For this system the module size that will have the minimum defect density is obtained using (4), s min = 141.42 sopt = The total number of instructions is given by S tot = ∫ Mge − gs . s.ds = 100,000 lines The total number of defects given by (10) is N = 941 and the overall defect density is found by (11) D = 7.09 per KLOC 6 VARIATION OF MODULE SIZE DISTRIBUTION For exponential module size distribution, the parameter g may vary due to either process variation or due to decisions deliberately made. Assuming that the overall size of the system is the same, how will the variation in g influence defect density? Since overall size of the project ST is fixed, we have a s = min 2c 2 Equation (15) represents a surprising result. If modules of size 250 have the minimum defect density, the lowest overall defect density would occur when the average module size is about 177. That is because the asymmetric distribution of module sizes results in smaller modules having more impact on the overall defect density. Example 2: If we allow the value of the parameter g to vary in Example 1, the optimal value of g is found from (13) to be g opt = 0.01 which yields a defect density of 4.2 per KLOC. Note that this is significantly less than the overall defect density 7.09 when the usual exponential distribution is present. This suggests that defect density may be reduced by breaking modules larger modules and combining smaller modules so that resulting modules have sizes close to sopt. 7 CHARACTERIZING MODULE SIZE DISTRIBUTION The values of the parameters a, b, and c depend on the programmers’ capabilities, maturity of the process and the extent of testing in prior phases. The effect of the module size variation is reflected in the parameter g above, where an exponential distribution is assumed. The exponential distribution was observed in most of the data set we examined. It arises due to natural reasons that need to be explored further. The total defect density is influenced significantly as the plot in Fig. 7 shows. At g=0.01, the overall defect density is about 4.2 per KLOC compared with 7.1 at g=0.002. This behavior is dependent on the parameter values a, b and c as given in (14). This allows us provides us with a model to take into account the module size distribution. The multiplicative factor Fms that takes into account module size distribution can be written as Fms = ( Ag + B + C ) g The parameters A, B and B will need to be estimated from a similar project, such that for a default value of g, Fms is unity. Fms = ( 25 g + 0 .375 + 2 .5 .10 − 3 ) g When g is unknown, the default value of Fms will be unity, as required [7]. An interesting possibility is provided by the fact that there is an optimal module size. It is a common recommendation to break very large modules into smaller ones. If there is a magic module size, say 200 LOC, at which inherently the defect density is likely to be lower, that would reduce the overall defect density. This would approximately correspond to the HP policy reported by Grady [16] that a cyclomatic complexity greater than 16 is undesirable. In many projects there can be a number of modules on the lesser side of the magic size. It would make sense to minimize the number of very small modules, say those smaller than 100 LOC. A possible approach can be to examine very small modules and attempt to coalesce them into larger modules. 
It can potentially reduce the overall defect density significantly provided the newly created modules contain fairly cohesive code. The Ericsson Telecom data reported by Fenton and Ohlsson [15] suggests that there were very few small modules among those randomly chosen for the study. For their releases n and (n+1), the smallest modules were 37 and 196 lines of code. Reducing the number of very small modules would minimize the number of surface defects. Specifically adjusting the module size distribution will require the exponential distribution assumption to be modified. A possible approach to use Weibull distribution, which generalizes the exponential distribution. In cases where extensive module resizing is done, a discrete module size distribution may need to be used. Total defect density Example 3: If the values of a, b and c are as used in the above examples above, and if the typical value of g is 0.005, the model of (16) will be Parameter g Figure 7: Variation of defect density with parameter g. All the data sets used in this and previous studies came from actual industrial or space projects where objective was to produce a working system, rather than to collect data. The number of defects in a module could have been influenced by a number of factors. Some modules could have gone through more careful inspection and testing. Modules having been reused from previous releases with little modification would have lower defect density than new modules or those, which have been extensively modified. It would be desirable to collect data where such variations are carefully controlled. However since the data sets come from different projects and different organizations, they support the observations of the researchers. We can see that some of the differences in observations for different data sets are explained by the fact that some data sets cover only region A and some only region B. A clear trend may not be seen if the number of modules is small, one needs to use grouped data to observe a pattern. 8 CONCLUSIONS The paper presents a model giving influence of module size on defect density based on data that has been reported. It provides an interpretation for both declining defect density for smaller modules and gradually rising defect density for larger modules. We observe that for several projects, distribution of module sizes is given by an exponential expression. We analyze the combination of the two to address how the overall defect density for a project with many modules can vary. We identify the condition for optimal distribution. A model for characterizing variation of defect density due to module size variation has been obtained which can be used as a sub-model for a multi-factor defect density model. The exponential distribution occurs naturally in many software projects, for reasons that are yet to be studied. When module are specifically broken or coalesced to bring them closer to the size that is expected to give the minimal defect density, the exponential distribution may no longer be applicable. If small modules can be combined into optimal sized modules without reducing cohesion significantly, than the inherent defect density may be significantly reduced. REFERENCES [1] J. Musa, Software Reliability Engineering, McGraw-Hill, 1999. [2] J. C. Munson and T. M. Khoshgoftar, “Software metrics in reliability assessment,” in Handbook of Software Reliability Engineering, Ed. M.R. Lyu, IEEE-CS Press/McGraw-Hill, 1996. M. Takahashi and Y. 
Kamayachi, " An empirical study of a model for program error prediction," Proc. of 8th International IEEE Conference on Software Engineering, pp. 330-33, Aug. 1985. Methodology for software reliability prediction and assessment. Technical Report RL-TR-95-52, Vol. 1 and 2, Rome Labs, 1992. W. Farr, “Software reliability modeling survey,” in Handbook of Software Reliability Engineering, Ed. M.R. Lyu, IEEE-CS Press/McGraw-Hill, 1996. N. Li and Y.K. Malaiya, “ROBUST: A Next Generation Software Reliability Engineering Tool” Proc. IEEE Int. Symposium on Software Reliability Engineering, pp. 375-380, Oct. 1995. Y.K. Malaiya and J.A. Denton, “What do software reliability parameters represent?,” Proc. International Symposium on Software Reliability Engineering, pp. 124-135, Nov. 1997. Y.K. Malaiya and J.A. Denton, “Requirement volatility and defect density,” Proc. International Symposium on Software Reliability Engineering, pp. 285-294, Nov. 1999. V. R. Basili and B. R. Perricone, "Software errors and complexity," Comm. ACM, vol. 27, pp. 4252, Jan. 1984. V.Y. Shen, T. Yu, S. M. Thebut, “Identifying error-prone software-An empirical study,” IEEE Trans. Software engineering, vol. SE-11, pp. 317324, April 1985. R. D. Banker and C. F. Kemerer, "Scale Economies in new software development," IEEE Trans. Software Engineering, pp. 1199-1205, Oct. 1989. C. Withrow, "Error density and size in Ada software," IEEE Software, pp. 26-30, Jan. 1990. L. Hatton, "Reexamining the fault densitycomponent size connection," IEEE Software, pp. 89-97, March 1997. J. Rosenberg, “Some misconceptions about lines of code,” Proc. Int. Software Metrics Symposium, pp. 137-142, Nov. 1997. N.E. Fenton and N. Ohlsson, ‘Quantitative analysis of faults and failures in a complex software system,” IEEE Trans. Software Engineering, to appear. R.B. Grady, Practical software metrics for project management and process improvement, PrenticeHall, 1992. 9 APPENDIX Equation (11) above gives an expression for the overall defect density as smax ∫ 1 − gs a ( + b + cs ).10 −3. s.ds s ST This is easily solved although the resulting expression is complex. The numerator is M 10 −3 sg e (cg 2 s 2 + sg 2 b + 2 sgc + ag 2 + 2 g bg + 2c)]1Smax and the denominator is given by ( sg + 1) Me − sg ]1Smax g The approximations mentioned above have been verified using numerical values.
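As a rough numerical check of the relations above, the following sketch (our own illustrative code, not part of the original study) evaluates the defect density model D(s) = a/s + b + c·s of Eq. (3), the single-module optimum smin = sqrt(a/c) of Eq. (4), the optimal distribution parameter gopt = sqrt(2c/a), and the approximate overall density D ≈ 0.001·(a·g + b + 2c/g), using the parameter values of Example 1 (a = 120, b = 1.8, c = 0.006). Because it uses the simplified approximation rather than the closed-form expression of the Appendix, the overall densities it prints will not exactly match the figures quoted in the examples.

public class DefectDensityModel {
    // Eq. (3): defect density (defects per KLOC) as a function of module size s (LOC)
    static double density(double s, double a, double b, double c) {
        return a / s + b + c * s;
    }

    // Approximate overall defect density, in defects per line: D ~ 0.001*(a*g + b + 2*c/g)
    static double overallDensityApprox(double g, double a, double b, double c) {
        return 0.001 * (a * g + b + 2.0 * c / g);
    }

    public static void main(String[] args) {
        double a = 120, b = 1.8, c = 0.006;    // parameter values of Example 1
        double sMin = Math.sqrt(a / c);         // Eq. (4): module size with minimum defect density
        double gOpt = Math.sqrt(2 * c / a);     // optimal exponential-distribution parameter
        double sOpt = Math.sqrt(a / (2 * c));   // optimal average module size = sMin / sqrt(2)

        System.out.printf("s_min = %.1f LOC, defect density there = %.2f per KLOC%n",
                sMin, density(sMin, a, b, c));
        System.out.printf("s_opt = %.1f LOC, g_opt = %.4f%n", sOpt, gOpt);
        // approximate overall densities (per KLOC) for g = 0.004 (Example 1) and for g_opt (Example 2)
        System.out.printf("D(g = 0.004) ~ %.2f per KLOC, D(g_opt) ~ %.2f per KLOC%n",
                1000 * overallDensityApprox(0.004, a, b, c),
                1000 * overallDensityApprox(gOpt, a, b, c));
    }
}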
{"url":"https://p.pdfkul.com/module-size-distribution-and-defect-density_5a1dd2571723ddc5232dc6c3.html","timestamp":"2024-11-14T04:29:34Z","content_type":"text/html","content_length":"88319","record_id":"<urn:uuid:6454d940-777e-49cc-9d23-1c8d349dee68>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00122.warc.gz"}
If the volume of one cube is 8 times as great as another, then ... | Filo

Question: If the volume of one cube is 8 times as great as another, then the ratio of the area of a face of the larger cube to the area of a face of the smaller cube is

Text solution: If the volume ratio is 8 : 1, the linear (edge) ratio is 2 : 1, and the area ratio is the square of this, or 4 : 1.
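Spelling the scaling argument out symbolically (our notation): let the smaller cube have edge \(s\), so the larger has edge \(2s\). Then
\[ \frac{(2s)^3}{s^3} = 8, \qquad \frac{(2s)^2}{s^2} = 4, \]
so a volume ratio of 8 : 1 forces an edge ratio of 2 : 1 and a face-area ratio of 4 : 1. More generally, a volume ratio of \(k\) gives a face-area ratio of \(k^{2/3}\).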
{"url":"https://askfilo.com/mathematics-question-answers/if-the-volume-of-one-cube-is-8-times-as-great-as-another-then-the-ratio-of-the","timestamp":"2024-11-05T01:02:37Z","content_type":"text/html","content_length":"208353","record_id":"<urn:uuid:fe5a14aa-5ddb-41f0-b5a0-bc334313b458>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00115.warc.gz"}
seminars - Approximation Theory in Data Science

Data science has become a fourth approach to scientific discovery, in addition to experimentation, theory, and simulation. From a mathematical perspective, a fundamental problem in data science is to approximate an unknown target function using its data. In this talk, I will give an overview of some of the fundamental issues in data science.

Modern machine learning has had tremendous success in a wide range of applications. However, its theoretical understanding remains elusive. The first part of this talk will be focused on recent theoretical progress on neural network-based machine learning. It has been widely known that the deeper a neural network is, the harder it is to train. Although there are many empirical and heuristic explanations, little is known from a theoretical standpoint. A rigorous answer will be given by showing that a deep ReLU network will eventually die in probability as the depth goes to infinity.

The second part of this talk will be devoted to approximation under different data collection scenarios. Depending on how data are collected, different approximation methods need to be applied in order to utilize the data properly. Two scenarios will be discussed: one is for big data, and the other is for corrupted data.
{"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&document_srl=800107&sort_index=date&order_type=desc","timestamp":"2024-11-03T19:07:11Z","content_type":"text/html","content_length":"46884","record_id":"<urn:uuid:da762315-7002-4f69-a870-a89afd7f3edc>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00509.warc.gz"}
Interval Notation - Definition, Examples, Types of Intervals - Grade Potential Burbank, CA Interval Notation - Definition, Examples, Types of Intervals Interval Notation - Definition, Examples, Types of Intervals Interval notation is a fundamental principle that pupils are required learn because it becomes more critical as you progress to more difficult math. If you see more complex math, something like integral and differential calculus, on your horizon, then being knowledgeable of interval notation can save you hours in understanding these concepts. This article will discuss what interval notation is, what it’s used for, and how you can decipher it. What Is Interval Notation? The interval notation is merely a method to express a subset of all real numbers across the number line. An interval means the values between two other numbers at any point in the number line, from -∞ to +∞. (The symbol ∞ means infinity.) Fundamental difficulties you encounter primarily composed of single positive or negative numbers, so it can be difficult to see the benefit of the interval notation from such straightforward However, intervals are usually employed to denote domains and ranges of functions in higher math. Expressing these intervals can progressively become difficult as the functions become more complex. Let’s take a straightforward compound inequality notation as an example. • x is higher than negative 4 but less than two Up till now we understand, this inequality notation can be expressed as: {x | -4 < x < 2} in set builder notation. Despite that, it can also be written with interval notation (-4, 2), denoted by values a and b separated by a comma. So far we understand, interval notation is a method of writing intervals concisely and elegantly, using set principles that help writing and understanding intervals on the number line easier. The following sections will tell us more about the rules of expressing a subset in a set of all real numbers with interval notation. Types of Intervals Various types of intervals place the base for denoting the interval notation. These interval types are necessary to get to know due to the fact they underpin the complete notation process. Open intervals are applied when the expression does not comprise the endpoints of the interval. The previous notation is a great example of this. The inequality notation {x | -4 < x < 2} express x as being higher than -4 but less than 2, which means that it does not contain neither of the two numbers referred to. As such, this is an open interval expressed with parentheses or a round bracket, such as the following. (-4, 2) This means that in a given set of real numbers, such as the interval between -4 and 2, those two values are excluded. On the number line, an unshaded circle denotes an open value. A closed interval is the contrary of the previous type of interval. Where the open interval does not include the values mentioned, a closed interval does. In text form, a closed interval is written as any value “greater than or equal to” or “less than or equal to.” For example, if the previous example was a closed interval, it would read, “x is greater than or equal to -4 and less than or equal to two.” In an inequality notation, this would be expressed as {x | -4 < x < 2}. In an interval notation, this is stated with brackets, or [-4, 2]. This implies that the interval contains those two boundary values: -4 and 2. On the number line, a shaded circle is employed to represent an included open value. 
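For readers who like to see the notation operationally, here is a small self-contained sketch (not part of the original lesson) that models an interval with open or closed endpoints and checks membership exactly the way the parentheses and brackets indicate:

public class Interval {
    final double lo, hi;
    final boolean loClosed, hiClosed;   // closed endpoint = square bracket, open endpoint = parenthesis

    Interval(double lo, boolean loClosed, double hi, boolean hiClosed) {
        this.lo = lo; this.loClosed = loClosed;
        this.hi = hi; this.hiClosed = hiClosed;
    }

    // true when x lies inside the interval, honoring open vs. closed endpoints
    boolean contains(double x) {
        boolean left  = loClosed ? x >= lo : x > lo;
        boolean right = hiClosed ? x <= hi : x < hi;
        return left && right;
    }

    @Override
    public String toString() {
        return (loClosed ? "[" : "(") + lo + ", " + hi + (hiClosed ? "]" : ")");
    }

    public static void main(String[] args) {
        Interval open   = new Interval(-4, false, 2, false);  // (-4, 2): both endpoints excluded
        Interval closed = new Interval(-4, true, 2, true);    // [-4, 2]: both endpoints included
        System.out.println(open + " contains -4? " + open.contains(-4));     // false
        System.out.println(closed + " contains -4? " + closed.contains(-4)); // true
    }
}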
A half-open interval is a blend of previous types of intervals. Of the two points on the line, one is included, and the other isn’t. Using the prior example as a guide, if the interval were half-open, it would read as “x is greater than or equal to -4 and less than two.” This states that x could be the value -4 but couldn’t possibly be equal to the value two. In an inequality notation, this would be denoted as {x | -4 < x < 2}. A half-open interval notation is written with both a bracket and a parenthesis, or [-4, 2). On the number line, the shaded circle denotes the number present in the interval, and the unshaded circle signifies the value which are not included from the subset. Symbols for Interval Notation and Types of Intervals In brief, there are different types of interval notations; open, closed, and half-open. An open interval doesn’t include the endpoints on the real number line, while a closed interval does. A half-open interval includes one value on the line but excludes the other value. As seen in the prior example, there are different symbols for these types subjected to interval notation. These symbols build the actual interval notation you develop when stating points on a number line. • ( ): The parentheses are employed when the interval is open, or when the two endpoints on the number line are excluded from the subset. • [ ]: The square brackets are used when the interval is closed, or when the two points on the number line are included in the subset of real numbers. • ( ]: Both the parenthesis and the square bracket are employed when the interval is half-open, or when only the left endpoint is excluded in the set, and the right endpoint is included. Also known as a left open interval. • [ ): This is also a half-open notation when there are both included and excluded values within the two. In this case, the left endpoint is not excluded in the set, while the right endpoint is not included. This is also called a right-open interval. Number Line Representations for the Different Interval Types Aside from being written with symbols, the different interval types can also be represented in the number line utilizing both shaded and open circles, relying on the interval type. The table below will display all the different types of intervals as they are described in the number line. Practice Examples for Interval Notation Now that you know everything you need to know about writing things in interval notations, you’re prepared for a few practice problems and their accompanying solution set. Example 1 Transform the following inequality into an interval notation: {x | -6 < x < 9} This sample problem is a easy conversion; simply utilize the equivalent symbols when writing the inequality into an interval notation. In this inequality, the a-value (-6) is an open interval, while the b value (9) is a closed one. Thus, it’s going to be expressed as (-6, 9]. Example 2 For a school to participate in a debate competition, they need at least three teams. Represent this equation in interval notation. In this word problem, let x be the minimum number of teams. Since the number of teams required is “three and above,” the value 3 is consisted in the set, which means that 3 is a closed value. Plus, because no maximum number was mentioned with concern to the number of maximum teams a school can send to the debate competition, this value should be positive to infinity. Therefore, the interval notation should be denoted as [3, ∞). 
These types of intervals, when one side of the interval that stretches to either positive or negative infinity, are also known as unbounded intervals. Example 3 A friend wants to undertake a diet program limiting their regular calorie intake. For the diet to be a success, they should have minimum of 1800 calories every day, but maximum intake restricted to 2000. How do you write this range in interval notation? In this word problem, the number 1800 is the minimum while the number 2000 is the highest value. The problem implies that both 1800 and 2000 are included in the range, so the equation is a close interval, written with the inequality 1800 ≤ x ≤ 2000. Therefore, the interval notation is described as [1800, 2000]. When the subset of real numbers is restricted to a variation between two values, and doesn’t stretch to either positive or negative infinity, it is also known as a bounded interval. Interval Notation FAQs How Do You Graph an Interval Notation? An interval notation is simply a way of describing inequalities on the number line. There are laws of expressing an interval notation to the number line: a closed interval is written with a filled circle, and an open integral is denoted with an unfilled circle. This way, you can promptly see on a number line if the point is excluded or included from the interval. How Do You Convert Inequality to Interval Notation? An interval notation is basically a different way of describing an inequality or a combination of real numbers. If x is greater than or less a value (not equal to), then the number should be written with parentheses () in the notation. If x is higher than or equal to, or less than or equal to, then the interval is denoted with closed brackets [ ] in the notation. See the examples of interval notation above to check how these symbols are used. How To Rule Out Numbers in Interval Notation? Values ruled out from the interval can be written with parenthesis in the notation. A parenthesis means that you’re expressing an open interval, which states that the value is ruled out from the set. Grade Potential Could Help You Get a Grip on Math Writing interval notations can get complex fast. There are many difficult topics within this concentration, such as those dealing with the union of intervals, fractions, absolute value equations, inequalities with an upper bound, and more. If you want to conquer these concepts fast, you need to review them with the professional help and study materials that the expert teachers of Grade Potential delivers. Unlock your math skills with Grade Potential. Book a call now!
{"url":"https://www.burbankinhometutors.com/blog/interval-notation-definition-examples-types-of-intervals","timestamp":"2024-11-01T22:24:37Z","content_type":"text/html","content_length":"110950","record_id":"<urn:uuid:d1c29453-6c50-46fa-be57-15e74e127920>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00475.warc.gz"}
4.3: Momenta of Systems
Using Momentum Conservation

When we examined the work-energy theorem, we found that it was not much more than a reformulation of Newton's 2nd Law for cases where we are only interested in speed (not direction) changes. As such, it had only limited usefulness. But when we went a little deeper, we found that this theorem spawned a very useful "shortcut" (the principle of energy conservation) that allowed us to solve certain types of problems much more easily than we could otherwise. We have already expressed a conservation principle for momentum, but let's do so again here, comparing it to the familiar counterpart in energy:

Energy: In an isolated system (one where no external work is being done on any of the objects in it), the total energy of the system remains constant.

\[ \underbrace{KE+PE+E_{thermal}}_\text{before} = \underbrace{KE+PE+E_{thermal}}_\text{after} \]

Momentum: In an isolated system (one where no external impulse is delivered to any of the objects in it), the total (vector) momentum of the system remains constant.

\[ \vec p_{cm}\left(\text{before}\right) = \vec p_{cm}\left(\text{after}\right) \;\;\;\Rightarrow\;\;\; \underbrace{\vec p_1+\vec p_2+\dots}_{\text{before}} = \underbrace{\vec p_1+\vec p_2+\dots}_{\text{after}} \]

The sum on each side is over the several objects in the system. So adding up the momentum vectors of all the objects before some event, and then doing it again after the event, gives the same vector. This of course assumes that the "event" does not involve an external impulse, though it can include as many internal impulses as you like. It is important to remember that this equation does not mean that each of the terms remains unchanged. Rather, they change in such a way that the changes all compensate for each other, and the vector sum of the all-new momentum vectors comes out the same as before.

A child sits on the rear end of a sled (whose mass is uniformly distributed along its length) with a block of frozen snow at rest in her lap. The sled is sliding forward on the horizontal, frictionless snow at a constant speed, when the child suddenly shoves the block forward in the sled (she remains firmly planted on the sled). After a period of time, the block comes to rest in the front of the sled.

The forces between the girl, the block of snow, and the sled are all internal to the system of those three objects. With no friction coming from the snow, this means that there are no external forces on this system, and its total momentum remains unchanged. This means that the center of mass of the system of the child, the sled, and the ice continues sliding at the same constant rate as before.
This does not mean that the sled+child combination slides at the same rate throughout this process, because the increased speed of the ice means that the remaining mass of the system must change speed to keep the center of mass speed unchanged. Once the ice reaches the front of the sled, however, the whole system is moving at the same speed again, which means that it returns to the speed it had before the ice was pushed. Without knowing the masses of the parts of this system, we don't know the specific effect of the sliding ice – it could just slow the sled+child, stop the sled+child, or even cause them to move backward. The graph of speeds as a function of time below expresses this well: A few things to note here: □ The graph assumes that the mass of the sled+child is greater than the mass of the ice, because the internal force between them delivers the same impulse to both, which means that they change their momenta by the same amount. The graph shows the speed of the ice changing more, so to experience the same change of momentum, it must have less mass. □ Even though the ice and sled+child have different speeds for a short period of time, even during that time, the speed of the center of mass doesn't change (depicted by the dotted purple □ We don't know the actual speeds, so we can't place the time axis on the graph. If it happens to coincide with the horizontal red line segment, then the sled+child come to rest while the ice slides forward. If the time axis is above this horizontal red line (it must be below the purple line, as we have defined the starting velocity to be in the positive direction), then the sled+child actually moves backward while the ice slides forward. Using Center of Mass Let's look at an example of how we can use what we know about center of mass the analyze a case of two blocks of different masses that squeeze a (massless) spring between them until they are released from rest. Figure 4.3.1 – Repelling Masses Intuitively one can probably tell that for this situation \(m_2>m_1\). When a light object pushes off a heavy one (a flea jumping off a dog, a bullet leaving a gun, etc.), the lighter object's motion is always affected more. With our physics training, we can explain it with Newton's second and third laws: The blocks push on each other with equal forces (third law), and with equal forces, the block with less mass will accelerate more. They both start from rest and are pushed for equal periods of time, so the one with the greater acceleration will be going faster when they separate, sending it a greater distance in the same time period. Okay, now let's look at it from the perspective of momentum conservation. Treating the two blocks as a single system, the spring force produces only internal impulses, which means that the momentum of the system is conserved. The momentum before the spring unloads is zero, so it must be zero afterward. If \(v_1\) and \(v_2\) are the speeds of the two blocks (i.e. these are positive numbers), then we have for our conservation equation: \[ \text{momentum before} = \text{momentum after} \;\;\; \Rightarrow \;\;\; 0 = m_1 v_1 \left(- \widehat i \right) + m_2 v_2 \left(+ \widehat i \right) \;\;\; \Rightarrow \;\;\; v_1 = \dfrac{m_2} {m_1} v_2 \]Since it's clear from the diagram that \(v_1>v_2\), it must be that \(m_2>m_1\). We can also use what we know about center of mass here. The system experiences no external net impulse and its center of mass is stationary, so it must remain stationary! 
We don't know exactly where the center of mass is before the repulsion, but since it stays put, we can draw a vertical line down into the second diagram to find where it is after the repulsion. This clearly results in the center of mass being closer to \(m_2\), which means that is the larger mass.Center of Mass Acceleration Let's see if we can incorporate what we have learned about center of mass to make sense of Newton's second law. Consider the two systems shown in Figure 4.3.2. Each consists of a collection of 8 identical particles in close proximity to each other (the boxes shown are just used as a reference for later motion – they are not physical objects). In the left system, the particles are floating freely (there is no gravity or other forces), while in the right diagram, the particles are bound together with rigid, massless rods. The two systems are identical in every way except for the presence of these rods – the particle all have the same positions and masses as their counterparts, and are all at rest. Now for the experiment: Suppose we exert the same force on the same particle in both systems. Clearly the reaction is different in the two cases – in the left case, only the particle given the push accelerates away, while in the right cases the entire group of particles accelerates. The question is, in which case does the center of mass of the system of particles accelerate more? Figure 4.3.2 – Forces on Free and Rigid Systems Here is the short answer: The forces that are (or are not) between the particles defining the system are internal, and therefore have no effect on the velocity of the system's center of mass. The only external force on each system is \(\overrightarrow F\), and each system has the same mass, so Newton's second law says that both systems should react with the same acceleration of their center of mass. But that is unconvincing when we see only one particle move in one case, and the whole conglomerate move in the other! Let's suppose the forces act for some small period of time. The acceleration of the single particle will be eight times greater than that of the conglomerate, so in the same time interval it will move eight times as far as the conglomerate. Let's call the initial position of the center of mass the origin. The seven particles left behind experience no change in their position relative to this origin, and the one particle's position relative to the origin travels eight units of distance, while all eight of the particles in the other system travel just one unit from their original positions relative to the origin. Treating the direction of motion as the \(+x\) direction, and plugging the masses and distances into Equation 4.2.1, it should be immediately clear that both centers of mass move by the same amount. As strange as it sounds, Newton's second law works for any system of particles, whether they bond together to form a solid object, or are completely independent of each other, like particles in a gas. Conceptual Question A system of four balls of varying relative masses is shown in the left diagram below, and there is a force exerted on one of the balls as indicated. In the right diagram are a few options for other forces that can be exerted on balls in this system. Which of these forces will assure that the center of mass of this system does not accelerate? The forces shown are the only forces present (i.e. there is no gravity or other forces to worry about here). 1. D only 2. B or D only 3. B, C, or D only 4. A, B, or D only 5. 
There needs to be a force \(F\) exerted to the right on every ball at at the same time. For the center of mass to not accelerate, the net force on the system must be zero. This means that a force must be applied to the system in the opposite direction. It doesn’t matter where in the system this force is applied. Two different particles are confined by the same potential, shown in the diagram. Both particles have the same total energy, also depicted in the diagram. At one moment the particles pass each other precisely at the origin, with one particle moving in the \(-x\)-direction and the other moving in the \(+x\)-direction. There is a lot unpack here. We'll start by labeling the particles: The particle moving in the \(-x\) direction we'll call "particle A", and the one moving the other way "particle B". Clearly at the moment that they cross each other at the origin, the center of mass is at the origin. But does it remain there as they continue moving away from each other? To answer this question, we consider the net force on the two particle system. If it is zero, then the center of mass does not accelerate. The system experiences two forces, one on each particle. These forces can be computed from the slope of the potential energy function \(U\left(x\right)\). The left side of the potential curve affects particle A (pushing it in the \(+x\)-direction), and the right side affects particle B (pushing it in the \(-x\)-direction). The slopes of the two sides are not equal, so the forces on the particles are not equal, which means that there is in fact a net force on the two-particle system, and the center of mass is accelerating. With more force being applied in the \(-x\)-direction, the center of mass is accelerating in that direction. We even have enough information to determine the ratio of these two forces, thanks to the grid lines: \[\left.\begin{array}{l}\text{force on particle A}=F_A=-\text{slope of left segment} = 1\;unit \\ \text{force on particle B}=F_B=-\text{slope of right segment} = -2\;units \end{array}\right\}\;\; Does this mean that a short time later the center of mass is on the \(-x\) side of the origin? No! We don't know which way the center of mass is moving when it is at the origin. If it is stationary or moving in the \(-x\)-direction, then of course the center of mass speed is increasing in the \(-x\)-direction and the center of mass will later be on the \(-x\) side. But the center of mass may be moving in the \(+x\)-direction, which would mean that it is slowing down, but will still be on the \(+x\) side a short time later. Is there any way to know the direction of the center of mass motion when both particles are at the origin? Another way to ask this is, in which direction is the momentum of the system? One thing we do know is that both particles have the same total energy, and when they are both at the origin, they have the same potential energy as well. This means that at the moment they pass each other, they have equal kinetic energies. We know a relationship between kinetic energy (Equation 4.1.6): With both particles having the same kinetic energy, the particle with more mass is the one with more momentum, and when these momenta are summed, the direction of motion of the particle with more mass is the direction in which the center of mass is moving. Center of Mass Frame Sometimes analysis of problems that involve multiple objects interacting with each other is simplified by using what is called the center of mass frame of reference. Here’s an example. 
A child's toy called a "hot potato" consists of two hemispherical shells that close on a spring and are held together by a latch on a timer. When the time expires, the latch is released and the spring is allowed to expand, shooting the two shells in opposite directions, exposing the toy company to a product liability lawsuit from the family of the child that holds the hot potato when it goes off. Let's suppose a child throws this hot potato through the air, and the peak of its projectile motion, it explodes so that the two shells are propelled horizontally, as shown in Figure 4.3.3. The landing point of the shell that lands closest to where the toy was thrown is noted, but the other shell flies off into some tall grass and is lost. Naturally the child knows the starting speed they gave the toy as well as the exit angle, and she can easily measure the distance that the closer shell travels form the launch point. From this information and her vast knowledge of physics, she conceives of a plan to find the other shell that is far more elegant than searching for it in the tall grass. Figure 4.3.3 – Exploding Projectile The forces on the shells by the spring are internal to the two-shell system, so assuming air resistance is negligible, the center of mass of the system will behave exactly as it would if the internal forces didn't exist. With the starting angle and speed known, the child can use the range equation (see Example 1.7.4) to calculate the landing point of the center of mass of the system. Then with the actual landing point of one piece of the toy, she can use the center of mass formula to compute the landing point of the other piece. You might think we can do the same even if the spring unloads in an orientation that is other than horizontal, but this is not the case. The center of mass motion still follows the same parabolic trajectory, but naturally the center of mass is always between the two shells. In the case above, the shells land simultaneously (they both start with zero vertical component of velocity when the explosion occurs), so the center of mass lands at the same time, between the shells. When the explosion is not horizontal, one shell lands before the other, then friction stops is horizontal motion while the other shell keeps moving horizontally. This makes calculating the landing point of the center of mass using the usual range equation impossible. A block slides along a frictionless horizontal surface at a speed \(v\), starting at position \(x = 0\) and time \(t = 0\). An identical block dropped from rest lands directly on top of it. The surfaces of the blocks are sticky, so the top block adheres to the bottom block when it lands on it, and they continue along together. The blocks slide together into a curtained-off area, during which a spring noise and a “thud” are heard. At a later time, the bottom block emerges from the curtain without the top block on it, after apparently having its top lid sprung open from within. The collision of the falling block and the sliding block is an example of a case where the momentum of the a system is only conserved for one component. The blocks experience internal forces (normal force and static friction), and these have no effect on the two-block system momentum, but the horizontal surface pushing up on the bottom block is external, so although the two-block system had downward momentum just before the landing, the external force provides an impulse to take it away. 
However, the surface is frictionless, so there is no external force along the \(x\) -direction, and momentum is conserved along that direction. We can apply momentum conservation along that direction to write their combined speed \(v_1\) in terms of the bottom block's initial speed \(v_o\): \[ mv_o + m\left(0\right) = 2mv_1 \;\;\; \Rightarrow \;\;\; v_1= \frac{1}{2} v_o\nonumber \] We don't know the details of what happens behind the curtain – we don't know when the lid of the lower box sprung open, or where ((x\) position) it happened. But assuming that there are no external forces occurring behind the curtain, we can assume that the velocity of the center of mass of the two-block system along the \(x\)-direction is unchanged – the internal force from the spring-loaded box does not affect the center of mass motion. We actually know the speed of the center of mass of the system behind the curtain, since both blocks are moving at the same speed as they enter. It is \(v_1\), computed in terms of \(v_o\) above: \[v_{cm}=\frac{1}{2} v_o\nonumber\] If we happen to know the speed \(v_2\) at which the bottom block emerges from the curtain, then we can use the known center of mass speed and speed of the bottom block to derive information about the other block. If we are given more information like the times \(t_1\) and \(t_2\) and the positions \(x_1\) and \(x_2\), we can derive more than just relationships between speeds. For example, with the whole system located at \(x_1\) at \(t_1\), and knowing its center of mass speed, we know where the center of mass is at the later time \(t_2\). And combining this knowledge with the position of the bottom block (\(x_2\)) allows us to locate the position of the top block, even though it is hidden behind the curtain. While we are on the topic of two parts of a system going their separate ways by pushed off each other, this brings us to the topic of rocketry. A rocket that is stationary in space somehow is able to accelerate itself by firing its engines. How can the center of mass of the rocket system accelerate without any external forces acting on it? Well, it can't of course, but the rocket (or rather, its fuselage) is not an isolated system. It expels fuel (in the form of very hot gas) backward. If we include the fuel as part of the system, then the center of mass of the system doesn't accelerate at all! All that matters in the end is that the fuselage of the rocket is propelled forward. Note also that the rocket has more mass than the fuel, but the ignited fuel sends particles away at very high speeds, and this momentum balances the momentum of the fuselage in the opposite direction (which has more mass and lower velocity).
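To make the "find the lost shell" strategy of the exploding-projectile example concrete, here is a small numerical sketch. The launch speed, angle, and measured landing point are made-up illustrative values, and it assumes the two shells have equal mass, so the center of mass lands midway between their landing points.

public class LostShell {
    public static void main(String[] args) {
        double g = 9.8;               // m/s^2
        double v0 = 20.0;             // assumed launch speed, m/s
        double theta = Math.toRadians(40.0);   // assumed launch angle

        // Range equation: where the center of mass lands (level ground, no air resistance)
        double xCm = v0 * v0 * Math.sin(2 * theta) / g;

        // Measured landing point of the shell that was found (assumed value), m
        double xNear = 30.0;

        // Equal-mass shells: x_cm = (x_near + x_far) / 2  =>  x_far = 2*x_cm - x_near
        double xFar = 2 * xCm - xNear;

        System.out.printf("center of mass lands at %.1f m%n", xCm);
        System.out.printf("the lost shell should be near %.1f m from the launch point%n", xFar);
    }
}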
{"url":"https://phys.libretexts.org/Courses/University_of_California_Davis/UCD%3A_Classical_Mechanics/4%3A_Linear_Momentum/4.3%3A_Momenta_of_Systems","timestamp":"2024-11-12T14:07:02Z","content_type":"text/html","content_length":"158659","record_id":"<urn:uuid:a9ff7c8b-d309-4a46-9d3b-7987bef723cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00232.warc.gz"}
How to design modular DC-DC systems, Part 2: Filter DesignHow to design modular DC-DC systems, Part 2: Filter Design The previous tutorial in this series highlighted the performance, flexibility, and speed benefits of using power modules to design power systems then provided an overview of the modular design process. A modular design strategy is powerful, but it requires support circuitry in the power delivery network (PDN) to form a complete power system. This second part of the tutorial series addresses the first PDN issue: filtering electrical noise on a switching DC-DC module’s input and output sides. Figure 1: Complete power system filtering elements required to mitigate switching converter noise. Input noise sources The switching action of DC-DC converters and parasitic circuit elements distributed throughout the PDN cause two types of noise currents that must be filtered: common-mode and differential-mode. Common-mode noise originates from the high-voltage switching nodes present inside the converter and coupling through parasitic capacitance to an EMI ground reference. This noise travels in-phase out of the converter’s positive and negative input terminals and closes through the system’s ground reference. The converters’ switching action also generates Differential-mode noise, but its travel is limited to the circulation between the converter’s input terminals. EMI input interference problems and solutions If left unchecked, this noise can cause a host of problems within the power system. Figure 2 shows two DC-DC converters that share a point of common coupling with a DC source, as well as control and communication systems, which are typically sensitive to noise. DC-DC converter-generated noise circulating throughout the PDN can cause erratic system operation and negatively influence adjacent systems’ overall behavior that share an electrical connection. Figure 2: Noise from the converter propagates out of the module and can cause significant problems for control and communication systems. Input filters added to the system bypass switching converter noise locally so that the noise circulates only between the filter and the DC-DC converter itself, reducing interference with other systems connected to the same source. The filter also works in the opposite direction, decreasing the DC-DC converters’ susceptibility to noise that comes from external sources. To begin designing a filter into the system, first note that the filtering for a given application may have to meet specific electromagnetic compatibility standards set forth by various international bodies for both electromagnetic compatibility (EMC) and EMI. Standards can vary significantly by industry and application; defense application compliance is significantly different from automotive, for example. Noise-mitigation techniques require several discrete components to filter both common-mode and differential-mode noise currents effectively. Common-mode filtering is generally achieved with a common-mode choke, which forms a high-impedance series path for common-mode currents that flow out of the converter along with both the positive and negative input terminals. The common-mode choke works alongside Y-capacitors that form a shunt path for the common-mode noise to the EMI ground. (Figure 3) FIgure 3: Noise from the converter propagates out of the module and can cause significant problems for control and communication systems. 
Differential-mode elements include X-capacitors and series differential inductors to perform similar functions. This ensures that there is a high-impedance series path for differential-mode noise from the converter, and a low-impedance shunt path for the noise current to close locally to the converter. When routing noise currents to ground, it is possible to couple noise from the power components into the control components by connecting the signal and power grounds incorrectly. When coupled by trace parasitics into signal components, the high-frequency noise propagated by the DC-DC converter can impact low-power control signals and, in turn, cause erratic operation. To prevent power current flowing through signal grounds, connect the signal ground and the power ground at a single point only. Filter Topologies While there are several different methods for constructing filter topologies, this tutorial will illustrate only the more common second-order responses for filters, the first of which is a simple inductor and capacitor. Figure 4 shows an undamped LC filter response with a second-order roll-off at -40dB per decade above the cut-off frequency. Undamped LC filters are generally not suitable solutions due to their characteristic resonance at the corner frequency. Without proper damping, this filter topology will amplify noise in the range of frequencies around the resonance. There are several damping strategies to consider—first, a simplified series-damped filter circuit with a resistor in parallel with the inductor. The damping at the corner frequency is much better, but it comes at the cost of decreased high-frequency attenuation due to the addition of a zero in the filter’s frequency response. Figure 4: three damping approaches compared to an undamped LC filter in a system that exhibits second-order roll-off at -40dB. A second, better approach adds an inductor that selectively applies damping resistance into the circuit, maintaining the second-order filter response with improved damping around the resonant frequency. However, this approach shifts the corner frequency slightly. A third alternative is adding a parallel R-C damping branch, which significantly improves damping around the corner frequency of the filter. Output filter design To design an output filter, first define the output voltage ripple magnitude that the application can tolerate. Next, consider the dynamics of the load current, including high di/dt load transients. Of the several pieces of a DC-DC system design that handle high dynamic loads, the converter’s output filter supplying that load has the most direct impact because of its series inductance. With the maximum di/dt of the load defined, it is possible to set a constraint on the inductor’s maximum value for use in the system. The following equation to determine the inductor value factors in the di/ dt of the load and the maximum voltage drop allowed across that inductor during a transient, i.e., the low line input operating voltage at the load. L = (V[OUT-CONVERTER] – V[OUT-LOAD]) / (di[LOAD]/dt) Next, after selecting the appropriate inductor, determine the cut-off frequency for the filter based on the ripple and how much attenuation is needed at the output. 
From this information, it is then possible to work out the capacitance value using this equation: f[c] = 1/(2π √(L[DM] • C[DM])) Physical properties and implementation considerations DC-DC switching converters today operate at very high frequencies — high enough that parasitic capacitances and inductances in the design layout can significantly impact the filter’s overall behavior in the converter system. In general, EMI filters should be located physically close to the converter itself. Figure 5 shows a DC-DC converter with filtering capacitances placed directly at the input. Due to the proximity to the converter, the noise currents circulate locally. If the noise currents were allowed to circulate in a wider space, the loop path could very easily become an antenna at high frequency, radiating noise to other parts of the circuit and completely negating the benefit of adding a filter. Therefore, both the series and shunt filter elements should be placed as close as possible to the DC-DC converter to limit the size of the loop that will experience high frequency and high di/dt currents. Figure 5: PCB layout considerations include placing filter components in close proximity to the DC-DC converter (left). Separating copper planes to avoid capacitive coupling allows noise currents to bypass the high-impedance filter components (right). The layout of the PCB is also important. Pay careful attention to the traces that will be carrying noise currents to minimize their overall inductance and resistance so that at high frequency, they will not form significant voltage as a result of their impedance. Also, lay out planes on the PCB to avoid forming parasitic capacitances that allow coupling effects to bypass the filtering elements. For example, if the copper planes that define the connection to a filter inductor’s terminals are too close together, a parasitic capacitance will allow high-frequency currents to bypass the high-impedance inductor. Best practices separate the copper planes to minimize parasitic capacitance around high-series-impedance components. The same concept applies to common-mode chokes. Maintain a keep-out area around the common-mode choke windings to prevent parasitic capacitance that would bypass the filter component. Filtering component performance can vary significantly under fluctuating environmental or application related conditions. For example, the effective value of Class 2 dielectric ceramic capacitor exhibits significant variation with applied DC bias voltage. Figure 6a illustrates this effect on an example 1206 size MLCC component; at 50V applied bias, the effective capacitance is reduced by 74%. This variation in effective capacitance will increase the filter’s corner frequency, reducing the attenuation achieved at high frequencies. Figure 6 a and b: The effects of voltage and temperature on effective capacitance are important considerations for filter design. DC bias characteristics significantly change the capacitance of a Murata Class 2 dielectric ceramic capacitor (Figure 6a), and significant reductions in capacitance are evident at the extremes of the rated temperature range (Figure 6b). Source: Murata Manufacturing Co., Ltd. ] Operating temperature can also have a significant impact on effective capacitance. Figure 6b shows that the same example capacitor exhibits up to a 20% reduction in effective capacitance when operated to the extremes of its rated operating temperature range. 
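To put the two sizing relations above into numbers, the sketch below (with assumed example values, not taken from the article) first picks the series inductance from the allowed transient voltage drop and the load di/dt, then solves the cut-off-frequency equation for the required capacitance, and finally shows how a DC-bias-derated capacitor shifts the corner frequency upward.

public class OutputFilterSizing {
    public static void main(String[] args) {
        // Assumed example values -- substitute your own application numbers.
        double vConverter = 12.0;      // converter output voltage, V
        double vLoadMin   = 11.7;      // minimum allowed voltage at the load during a transient, V
        double diDt       = 1.0e6;     // load current slew rate, 1 A/us = 1e6 A/s

        // L = (V_out-converter - V_out-load) / (di_load/dt)
        double L = (vConverter - vLoadMin) / diDt;           // henries

        // Choose a target cut-off frequency, then solve f_c = 1/(2*pi*sqrt(L*C)) for C
        double fc = 20e3;                                     // 20 kHz target corner
        double C  = 1.0 / (Math.pow(2 * Math.PI * fc, 2) * L);

        System.out.printf("L = %.2f uH%n", L * 1e6);
        System.out.printf("C = %.1f uF for a %.0f kHz corner%n", C * 1e6, fc / 1e3);

        // Effective capacitance drops under DC bias (e.g., -74% in the article's MLCC example):
        double cDerated  = 0.26 * C;
        double fcDerated = 1.0 / (2 * Math.PI * Math.sqrt(L * cDerated));
        System.out.printf("with 74%% capacitance loss the corner moves to %.0f kHz%n", fcDerated / 1e3);
    }
}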
A robust filtering solution for DC-DC modules must account for the impact of expected component variation over the system’s full expected operating range. Integrating the input filter into the system poses additional challenges for overall system stability. The next tutorial in this series will take up that area of system design.
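As a rough worked example of the two sizing equations above (all of the numbers below are assumed for illustration and are not taken from this article), the arithmetic can be sketched in a few lines of Python:
import math

# Assumed example values (not from the article):
dv_drop = 0.1      # V, allowed drop across the series inductor during a load transient
di_dt = 1e6        # A/s, maximum load di/dt (here 1 A per microsecond)
f_c = 50e3         # Hz, desired filter cut-off frequency

# L = (V_OUT-CONVERTER - V_OUT-LOAD) / (di_LOAD/dt)
L = dv_drop / di_dt                          # 100 nH upper bound on the series inductance

# f_c = 1 / (2*pi*sqrt(L*C))  ->  C = 1 / ((2*pi*f_c)**2 * L)
C = 1.0 / ((2 * math.pi * f_c) ** 2 * L)     # roughly 101 uF

print(f"L = {L * 1e9:.0f} nH, C = {C * 1e6:.0f} uF for f_c = {f_c / 1e3:.0f} kHz")
In practice the capacitance found this way would then be revisited against the DC-bias and temperature derating effects discussed above.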
{"url":"https://www.eeworldonline.com/how-to-design-modular-dc-dc-systems-part-2-filter-design-faq/","timestamp":"2024-11-06T18:48:24Z","content_type":"text/html","content_length":"129857","record_id":"<urn:uuid:457506cd-c9bd-4b4b-a5c4-ee5f8db335b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00176.warc.gz"}
3. Longest Substring Without Repeating Characters
Problem Statement: Given a string s, find the length of the longest substring without repeating characters.
Input: s = "abcabcbb"
Output: 3
Explanation: The answer is "abc", with the length of 3.
Input: s = "bbbbb"
Output: 1
Explanation: The answer is "b", with the length of 1.
Input: s = "pwwkew"
Output: 3
Explanation: The answer is "wke", with the length of 3. Note that the answer must be a substring; "pwke" is a subsequence and not a substring.
To solve this problem, we can use the sliding window technique combined with a set to keep track of the characters in the current window. The idea is to expand the window by moving the right pointer (j) until a duplicate character is found. Then, move the left pointer (i) to remove characters until the duplicate is eliminated, thus maintaining a substring without repeating characters.
1. Edge Cases: First, handle cases where the string length is 0 or 1, as these can be returned immediately.
2. Sliding Window Technique:
□ Use a Set to store characters in the current window.
□ Start with both pointers (i and j) at the beginning of the string.
□ Expand the window by moving j and adding characters to the set.
□ If a duplicate character is found, move i to shrink the window until the duplicate is removed.
□ Keep track of the maximum length of the substring without repeating characters.
import java.util.HashSet;
import java.util.Set;

class Solution {
    public int lengthOfLongestSubstring(String s) {
        if (s.length() == 1) return 1;
        if (s.length() == 0) return 0;
        Set<Character> set = new HashSet<>();
        int i = 0, j = 0, max = 0;
        while (j < s.length()) {
            if (set.add(s.charAt(j))) {
                // New character: grow the window to the right.
                max = Math.max(set.size(), max);
                j++;
            } else {
                // Duplicate found: shrink the window from the left.
                set.remove(s.charAt(i));
                i++;
            }
        }
        return max;
    }
}
• Time Complexity: O(N), where N is the length of the string. Both i and j traverse the string once, so the algorithm runs in linear time.
• Space Complexity: O(min(N, M)), where N is the length of the string and M is the size of the character set. The space is used by the set to store characters of the substring.
This solution efficiently finds the length of the longest substring without repeating characters using the sliding window technique. It handles edge cases and ensures that the window is adjusted dynamically to maintain the uniqueness of characters. The algorithm runs in linear time and uses minimal extra space, making it an optimal solution for this problem.
You can find all my solutions at Github
Thank you for reading! Stay tuned for the next article in this series, where we'll tackle another exciting LeetCode problem.
{"url":"https://blog.mohammedsalah.online/leetcode-3-longest-substring-without-repeating-characters","timestamp":"2024-11-15T03:02:30Z","content_type":"text/html","content_length":"121196","record_id":"<urn:uuid:c05456cc-2320-467d-9d51-425308ef0d04>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00229.warc.gz"}
Milliliters and Liters Converter - CalculatorBox
Milliliters and Liters Converter
Figures rounded to max decimal places.
Milliliters and liters are units of volume in the metric system that are commonly used to measure the capacity of containers, liquids, and other substances. A milliliter (ml) is one-thousandth of a liter (L), which means that there are 1,000 milliliters in a single liter. This Milliliters and Liters Converter is a handy tool designed to make it easy for you to convert between these two units of volume quickly and accurately. In this article, we will discuss the formulas necessary to complete the conversion, as well as provide a worked example calculation for your reference.
Formulas for Milliliters and Liters Conversion
There are two main formulas that you need to know in order to convert between milliliters and liters:
1. To Convert Milliliters to Liters: liters (L) = milliliters (ml) / 1,000
2. To Convert Liters to Milliliters: milliliters (ml) = liters (L) * 1,000
These formulas are relatively simple and easy to remember, as the only factor required for conversion is 1,000. This is due to the fact that there are 1,000 milliliters in a single liter. Now, let's look at a worked example calculation to help you understand how to use these formulas in practice.
Worked Example Calculation
Suppose you have a container that can hold 5,000 milliliters of liquid, and you want to know how many liters this is equivalent to. To find out, you can use the formula for converting milliliters to liters: liters (L) = milliliters (ml) / 1,000
In this case, you would plug in the value of 5,000 milliliters into the formula: liters (L) = 5,000 ml / 1,000
By performing the division, you can find the equivalent volume in liters: liters (L) = 5
So, a 5,000-milliliter container is equivalent to a 5-liter container.
Now, let's reverse the process and convert liters back into milliliters. Suppose you have a 2-liter bottle of soda and you want to know how many milliliters this is equivalent to. To find out, you can use the formula for converting liters to milliliters: milliliters (ml) = liters (L) * 1,000
In this case, you would plug in the value of 2 liters into the formula: milliliters (ml) = 2 L * 1,000
By performing the multiplication, you can find the equivalent volume in milliliters: milliliters (ml) = 2,000
So, a 2-liter bottle of soda is equivalent to a 2,000-milliliter bottle of soda.
Using the Milliliters and Liters Converter
This Milliliters and Liters Converter makes it incredibly easy to perform conversions between these two units of volume. All you need to do is enter the value you want to convert and select the appropriate unit (either milliliters or liters). The converter will then automatically calculate the equivalent volume in the other unit using the formulas discussed above.
In addition to saving you time and effort, this converter also ensures that your conversions are accurate and free from errors. This can be particularly useful in situations where precise measurements are important, such as in cooking recipes or scientific experiments.
Whether you're measuring the capacity of a container, the volume of a liquid, or any other substance, understanding how to convert between milliliters and liters is essential. With the help of this Milliliters and Liters Converter and the formulas provided in this article, you can easily and accurately perform these conversions in no time.
So, the next time you find yourself in need of converting milliliters to liters or vice versa, make sure to use this handy tool and ensure precise measurements every time.
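Because the conversion factor is fixed at 1,000, the same conversions are easy to script. The short Python sketch below simply restates the two formulas from this article (the function names are our own):
def ml_to_liters(ml):
    # liters (L) = milliliters (ml) / 1,000
    return ml / 1000.0

def liters_to_ml(liters):
    # milliliters (ml) = liters (L) * 1,000
    return liters * 1000.0

print(ml_to_liters(5000))   # 5.0    -> a 5,000 ml container holds 5 L
print(liters_to_ml(2))      # 2000.0 -> a 2 L bottle holds 2,000 ml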
{"url":"https://calculatorbox.com/calculator/milliliters-and-liters-converter/","timestamp":"2024-11-10T20:52:00Z","content_type":"text/html","content_length":"147170","record_id":"<urn:uuid:812d2406-7b92-4222-b8b9-6c9c09c6056f>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00748.warc.gz"}
The Algorithm for Inserting Sequences into Sequences | HackerNoon In software development, ordered data sequences are needed practically everywhere, from displaying products in an online store and tracking orders to managing messages in a chat or handling task priorities in project management tools. Now, the challenge arises when data needs modifications. You need to maintain the correct order, and keeping it that way when inserting, deleting, or rearranging elements can become complex, particularly when using traditional numbering systems. One problem I have seen time and time again is the need to insert an ordered sequence of elements between two existing ones in a dataset. Traditional approaches don't quite cut it, especially when dealing with enormous datasets, and I’d like to offer you a solution. I’ve spent a while perfecting an approach that would help avoid the performance bottlenecks that come with recalculating sequence numbers – and here it is, an alternative algorithm that simplifies this task by leveraging a string-based ordering system. Recap of existing solutions and why they don’t work Typically, to maintain order in a sequence, a special parameter is assigned to the elements of a sequence, often called a "sequence number," which determines the order of a specific component – be it a product, document, message in a chat, order, etc. – in the final sequence. It’s one thing to simply display elements in a given order, but it’s another to maintain that order when inserting, deleting, or modifying elements. A usual scenario is when we need to insert an ordered sequence of elements into an already ordered sequence. For example, we might want to insert ten elements between the fifth and sixth elements. If the sequence uses natural numbers for ordering (i.e., the first element has a sequence number of 1, the second has 2, the third has 3, and so on), then to insert a sequence of ten elements between the fifth and sixth elements, we would need to somehow "spread" the natural number range. This contradicts the definition of natural numbers, as there can’t be anything between two consecutive numbers in a natural number series. There are two possible solutions to this problem: Option 1: We assign the new elements the numbers 6, 7, 8, …, 15, and for the elements that were originally numbered 6 and higher, we simply add 10 to their numbers, making them 16, 17, 18, and so on. This shifts everything after the insertion point. Option 2: We move from natural numbers to real numbers. In this case, we can insert fractional numbers between the fifth and sixth elements, like this: 5, 5.1, 5.2, 5.3, …, 5.10, 6. Both of these have their downsides, which can often be significant enough to make you reconsider your choice. For example, switching from natural numbers to real numbers comes with the problem of limited precision: we can only make a finite number of insertions before running into the limits of floating-point accuracy. A number like 5.127956 has a finite sequence of digits after the decimal point, meaning we can’t keep inserting numbers indefinitely. The approach of using natural numbers as sequence numbers and shifting the entire sequence when inserting another ordered sequence also has its own set of problems. The first issue is that such an insertion will require recalculating all previously assigned sequence numbers. 
If we’re dealing with large datasets (thousands, hundreds of thousands, or even millions of items) and we make an insertion near the beginning of the list, we’ll have to shift all (and I mean, all) of the sequence numbers, which can affect performance in a way we certainly don’t want it to. Plus, we would need to lock the database during the process of updating such a large volume of data, as we’re essentially performing a massive update operation. And before you say anything, yes, there are ways to handle the challenges of large-scale operations. However, I want to discuss an approach that avoids the need for such mass updates altogether. Reserve those heavy-duty tools for more complex tasks where you won't be able to avoid mass operations. Exploring our options Before we rush to a solution, let's think this through: how can we insert 10 elements between the numbers 5 and 6, using natural numbers? One initial approach could be to always insert new elements with some spacing between them, so we never run into a situation where we need to insert elements between, say, 5 and 6. For example, when we first insert elements (let’s say 5 of them), instead of numbering them 1, 2, 3, 4, 5, we could number them 100, 200, 300, 400, and 500. This creates a "gap" that allows us to insert elements between the fifth and sixth positions, which now have sequence numbers 500 and 600. This way, we could easily insert up to 100 elements between 500 and 600 without any issues. However, this comes with three main problems: 1. We need to predict how large the gap should be. 2. Hitting the maximum value for integers happens much quicker than you would expect. If we keep creating large gaps between numbers, we will eventually run into a situation where even after inserting a small number of elements, we reach an excessively large number, beyond the range that modern data types can store. 3. Limited space. As we continue making insertions, we are gradually filling in our initially spaced-out sequence, making it denser and denser. Eventually, we’ll end up with two natural numbers (e.g., 500 and 501) between which nothing can be inserted. Approaching strings So, we come to realize that neither natural numbers nor real numbers can fully solve this problem for us. However, numbers – whether natural or real – are fortunately not the only way or data type that can be ordered. One such type is strings. Most modern and popular programming languages and database management systems can compare strings, allowing us to order them using lexicographic order based on ASCII character codes. Now let's get to the point. For simplicity, we’ll take only the characters from A to Z and a to z from the ASCII set. Then, we choose a word length, say 3, and call a word with three characters a "domain". For example, AAA would be the first domain (because the character A has the smallest ASCII code), and zzz would be the last. This gives us a sequence of domains: AAA, AAB, …, ZZZ, aaa, aab, …, zzz. This sequence contains (26 uppercase letters + 26 lowercase letters)³ = 140,608 words, which allows us to arrange 140,608 elements in order. Then, we need to calculate the "left" and "right" words, when inserting new elements. These correspond to the words of the elements on the left and right (in the examples earlier, these were the fifth and sixth elements). 
If there is no left or right word (as in the case of the first insertion, or an insertion at the beginning or end), the smallest domain (AAA) is used as the left word, and the largest domain (zzz) is used as the right word. After that, elements are inserted one by one, with their ordinal number assigned based on the formula: left word + 1. The "+1" refers to the next word in lexicographical order, which is calculated as the next permutation with repetition. The insertion proceeds if the left word + 1 is still less than the right word. Otherwise, the ordinal number is calculated using the formula: left word + the smallest domain. After this, the left word is updated to the word that was just inserted.
To clear this up, let's look at an example of inserting the first 3 elements into an initially empty sequence. Determine the left and right words. Since the sequence is empty, the left and right words are AAA and zzz, respectively.
1. Insert the first element. Its ordinal number is AAA + 1 = AAB. Is AAB less than zzz? Yes, so we go on. The left word now becomes AAB.
2. Insert the second element. Its ordinal number is AAB + 1 = AAC. Is AAC less than zzz? Yep, good. The left word now becomes AAC.
3. Insert the third element. Its ordinal number is AAC + 1 = AAD. Is AAD less than zzz? Yes? Great, this works out.
After this, your sequence will be AAB, AAC, AAD. Now you can insert two elements between AAB and AAC. This is where things get interesting. When we insert the first element after the left word (AAB), AAB + 1 gives us AAC, which is equal to the right word (AAC). This is exactly why we introduced the concept of a "domain" – instead of adding +1, we add the smallest domain, which in our case is AAA. So, the result would be AAB_AAA (I've added the underscore here for readability in the article, but you could also use it to enhance readability when debugging the algorithm). Next, when inserting the second element, we add +1 to AAB_AAA, which gives us AAB_AAB, and this is definitely less than AAC. This means we now have space to insert a whole 140,608 more elements!
The final sequence is now this: AAB, AAB_AAA, AAB_AAB, AAC, AAD. It maintains the order when sorted and is also derived from the previous one without changing the ordinal numbers of the elements that were already in the sequence before the insertion. Do you know what this means? We've solved the main issue of inserting into an ordered sequence!
In-field application
At first glance, it might seem similar to the approach with spaced-out natural numbers, where we need to decide what domain length will work for us. However, in this case, even if we choose a domain length of one, we would simply concatenate domains more frequently. This doesn't affect the complexity of the algorithm or the final result. But if we want "cleaner" or "prettier" strings in our database, we can increase the initial domain length to reduce the number of concatenations. So, if we increase the domain length to 4, we could order up to 7,311,616 elements using just a single domain.
However, our most attentive readers might have noticed that the above algorithm would break down if we inserted an element between AAB and AAB_AAA. If we add +1 to AAB, we get AAC, which is greater than AAB_AAA. And if we add the smallest domain, we get AAB_AAA, but this element already exists and is the "right" word. No worries, this can be solved too. Instead of inserting the smallest domain, insert the "middle" domain, the one that lies in the middle of the domain sequence.
In our case, the sequence is even, so we can choose either ZZZ or aaa (which is what we'll do moving forward). Alternatively, we could choose a character from the ASCII table that comes after Z but before a, such as the symbol ^. This way, each new domain insertion opens the possibility to insert elements around it, allowing us to use half of the domain's capacity at the given length K (in our case, K equals 3, and with an alphabet of 52 symbols, the domain capacity is 140,608 elements, so half of that would be 70,304). And after inserting such a domain, elements are added either from the left word or from the right ("backwards") as follows: 1. Insert an element between AAB and AAC. 2. Check if there are available spots without introducing a new domain — either to the right of the left word or to the left of the right word. If there’s space to add on the right side of the left word without creating a new domain, we increment the left word by +1 and continue adding from the right of the left word. If space is available to add on the left side of the right word without creating a domain, we choose it and decrement the right word by -1. If neither side has space, we create a new domain. 3. In this case, since neither the right side of AAB nor the left side of AAC has space, we add a domain: AAB + "middle domain" = AAB_aaa. This gives us AAB, AAB_aaa, AAC. 4. Now, in this setup, we can insert up to 70,304 words on either side of AAB_aaa. However, it’s essential to avoid getting too close to AAB_AAA or AAB_zzz, as this would again prevent any further insertions between AAB and AAB_AAA or between AAB_zzz and AAC. If you reach the last available space (when no further +-1 operations are possible), instead of filling that space, just create a new domain straight away. So, instead of inserting the word AAB_AAA, insert AAB_AAA_aaa. Lastly, we can also make the very first insertion relative to the middle of the domain (aaa) instead of AAA/ZZZ. This makes our algorithm even more generic, as we can treat an empty sequence as something where no insertions are possible without first adding a domain. This, in turn, leads to the creation of the first domain – aaa. Congratulations! We've found a solution to one of the most common challenges in handling ordered sequences. Now you can organize sequences, change the order of their elements, and perform insertions and deletions without the need to recalculate the entire sequence. Only the new elements will receive new ordinal numbers (represented as strings) in the database. This means that any insertion into any part of the sequence will not trigger a recalculation of all the ordinal numbers, as would be required if we used the classic approach with natural numbers. You won’t have to worry about performance drops anytime soon, even when handling large volumes of ordered data. The problem is solved, and now you can take a breather before tackling the next seemingly endless dataset. The best part? You’ve got a new tool in your toolkit!
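To make the scheme above concrete, here is a small Python sketch of the core insertion rule, following the refined version that opens a new level with the middle domain (the identifiers are ours, and the further edge-case refinements just discussed, such as never filling the very last slot next to an existing key, are deliberately left out, so treat it as an illustration rather than a drop-in implementation):
ALPHABET = [chr(c) for c in range(ord('A'), ord('Z') + 1)] + \
           [chr(c) for c in range(ord('a'), ord('z') + 1)]
K = 3                                          # domain length
MIN_DOMAIN = ALPHABET[0] * K                   # "AAA"
MAX_DOMAIN = ALPHABET[-1] * K                  # "zzz"
MID_DOMAIN = ALPHABET[len(ALPHABET) // 2] * K  # "aaa"

def plus_one(word):
    """The next word of the same length in lexicographic order ("left word + 1")."""
    chars = list(word)
    i = len(chars) - 1
    while i >= 0:
        pos = ALPHABET.index(chars[i])
        if pos + 1 < len(ALPHABET):
            chars[i] = ALPHABET[pos + 1]
            return "".join(chars)
        chars[i] = ALPHABET[0]                 # carry over ('z' wraps back to 'A')
        i -= 1
    raise ValueError("word overflow")          # not expected while left < right <= MAX_DOMAIN

def key_between(left, right):
    """Return a new ordinal string strictly between left and right (None = no neighbour)."""
    left = left or MIN_DOMAIN
    right = right or MAX_DOMAIN
    candidate = plus_one(left)
    if candidate < right:
        return candidate
    # No room at this depth: open a deeper domain under the left word, starting
    # from the middle domain so that both sides of the new key stay insertable.
    deeper = left + MID_DOMAIN
    return deeper if deeper < right else key_between(left + MIN_DOMAIN, right)

keys, left = [], None
for _ in range(3):
    left = key_between(left, None)
    keys.append(left)
print(keys)                                    # ['AAB', 'AAC', 'AAD']
print(key_between('AAB', 'AAC'))               # 'AABaaa'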
{"url":"https://hackernoon.com/the-algorithm-for-inserting-sequences-into-sequences","timestamp":"2024-11-05T13:11:01Z","content_type":"text/html","content_length":"308059","record_id":"<urn:uuid:3b563d27-8cb5-454d-b8c0-ee7bc417bbf5>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00080.warc.gz"}
class Diagonal
Diagonal (Jacobi) preconditioner for iterative solvers of linear systems. More...
class Diagonal< Matrices::DistributedMatrix< Matrix > >
Specialization of the diagonal preconditioner for distributed matrices. More...
class ILU0
Implementation of a preconditioner based on Incomplete LU. More...
class ILU0_impl
class ILU0_impl< Matrix, Real, Devices::Cuda, Index >
class ILU0_impl< Matrix, Real, Devices::Host, Index >
Implementation of a preconditioner based on Incomplete LU - specialization for CPU. More...
class ILU0_impl< Matrix, Real, Devices::Sequential, Index >
class ILUT
Implementation of a preconditioner based on Incomplete LU with thresholding. More...
class ILUT_impl
class ILUT_impl< Matrix, Real, Devices::Cuda, Index >
class ILUT_impl< Matrix, Real, Devices::Host, Index >
class ILUT_impl< Matrix, Real, Devices::Sequential, Index >
class Preconditioner
Base class for preconditioners of iterative solvers of linear systems. More...
Namespace for preconditioners of linear system solvers.
This namespace contains the following preconditioners for iterative solvers of linear systems.
1. Diagonal - the diagonal or Jacobi preconditioner - see Netlib
2. ILU0 - an Incomplete LU preconditioner with the same sparsity pattern as the original matrix - see Wikipedia
3. ILUT - an Incomplete LU preconditioner with thresholding - see the paper by Y. Saad
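For readers less familiar with what the simplest of these options does, the following NumPy sketch illustrates the idea behind the diagonal (Jacobi) preconditioner on a toy system; it is purely conceptual and does not use or mirror TNL's C++ API:
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])

# Diagonal (Jacobi) preconditioning: M = diag(A), so applying M^{-1} to a
# residual is just an element-wise division by the diagonal of A.
d = np.diag(A)

x = np.zeros_like(b)
for _ in range(50):          # preconditioned Richardson iteration
    r = b - A @ x            # residual
    x = x + r / d            # x <- x + M^{-1} r
print(x, np.linalg.norm(b - A @ x))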
{"url":"https://tnl-project.gitlab.io/tnl/namespaceTNL_1_1Solvers_1_1Linear_1_1Preconditioners.html","timestamp":"2024-11-14T23:54:08Z","content_type":"application/xhtml+xml","content_length":"12726","record_id":"<urn:uuid:1fe65811-c703-4d6f-ab32-253229a6dabc>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00794.warc.gz"}
Lesson 4 Money and Debts Let's apply what we know about signed numbers to money. Problem 1 The table shows five transactions and the resulting account balance in a bank account, except some numbers are missing. Fill in the missing numbers. │ │transaction amount │account balance │ │transaction 1│200 │200 │ │transaction 2│-147 │53 │ │transaction 3│90 │ │ │transaction 4│-229 │ │ │transaction 5│ │0 │ Problem 2 1. Clare has $54 in her bank account. A store credits her account with a $10 refund. How much does she now have in the bank? 2. Mai's bank account is overdrawn by $60, which means her balance is -$60. She gets $85 for her birthday and deposits it into her account. How much does she now have in the bank? 3. Tyler is overdrawn at the bank by $180. He gets $70 for his birthday and deposits it. What is his account balance now? 4. Andre has $37 in his bank account and writes a check for $87. After the check has been cashed, what will the bank balance show? Problem 3 Last week, it rained \(g\) inches. This week, the amount of rain decreased by 5%. Which expressions represent the amount of rain that fell this week? Select all that apply. (From Unit 4, Lesson 8.) Problem 4 Decide whether or not each equation represents a proportional relationship. 1. Volume measured in cups (\(c\)) vs. the same volume measured in ounces (\(z\)): \(c = \frac18 z\) 2. Area of a square (\(A\)) vs. the side length of the square (\(s\)): \(A = s^2\) 3. Perimeter of an equilateral triangle (\(P\)) vs. the side length of the triangle (\(s\)): \(3s = P\) 4. Length (\(L\)) vs. width (\(w\)) for a rectangle whose area is 60 square units: \(L = \frac{60}{w}\) (From Unit 2, Lesson 8.) Problem 5 1. \(5\frac34 + (\text{-}\frac {1}{4})\) 2. \(\text {-}\frac {2}{3} + \frac16\) 3. \(\text{-}\frac {8}{5} + (\text{-}\frac {3}{4})\) (From Unit 5, Lesson 3.) Problem 6 In each diagram, \(x\) represents a different value. For each diagram, 1. What is something that is definitely true about the value of \(x\)? 2. What is something that could be true about the value of \(x\)? (From Unit 5, Lesson 1.)
{"url":"https://curriculum.illustrativemathematics.org/MS/students/2/5/4/practice.html","timestamp":"2024-11-03T07:43:59Z","content_type":"text/html","content_length":"73436","record_id":"<urn:uuid:8d6ab2b1-6352-41d9-b0aa-d4d1bce2c35f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00108.warc.gz"}
Causal signal transmission by quantum fields. VI: The Lorentz condition and Maxwell's equations for fluctuations of the electromagnetic field
The general structure of electromagnetic interactions in the so-called response representation of quantum electrodynamics (QED) is analysed. A formal solution to the general quantum problem of the electromagnetic field interacting with matter is found. Independently, a formal solution to the corresponding problem in classical stochastic electrodynamics (CSED) is constructed. CSED and QED differ only in the replacement of stochastic averages of c-number fields and currents by time-normal averages of the corresponding Heisenberg operators. All relations of QED connecting quantum field to quantum current lack Planck's constant, and thus coincide with their counterparts in CSED. In Feynman's terms, one encounters complete disentanglement of the potential and current operators in the response picture.
• Macroscopic quantum electrodynamics
• Phase-space methods
• Quantum field theory
• Quantum fluctuations
• Quantum-statistical response problem
Dive into the research topics of 'Causal signal transmission by quantum fields. VI: The Lorentz condition and Maxwell's equations for fluctuations of the electromagnetic field'. Together they form a unique fingerprint.
{"url":"https://research.aalto.fi/en/publications/causal-signal-transmission-by-quantum-fields-vi-the-lorentz-condi","timestamp":"2024-11-10T22:07:32Z","content_type":"text/html","content_length":"58396","record_id":"<urn:uuid:7baf496f-23c5-47a2-ab70-243af87b3ed9>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00164.warc.gz"}
Doubling Experiment with O(n) Analysis Project Purpose Bellow the is output from running different commands in a (linux) terminal. The commands are explained in depth bellow the sections of terminal output. poetry run de --filename tests/benchmarkable_functions.py --funcname bubble_sort Benchmarking Tool for Sorting Algorithms Filepath: tests/benchmarkable_functions.py Function: bubble_sort Data to sort: ints Number of runs: 5 Minimum execution time: 0.0010673040 seconds for run 1 with size 100 Maximum execution time: 0.2696081410 seconds for run 5 with size 1600 Average execution time: 0.0721370716 seconds across runs 1 through 5 Average doubling ratio: 3.9996190149 across runs 1 through 5 Estimated time complexity for tests/benchmarkable_functions.py -> bubble_sort: poetry run de Benchmarking Tool for Sorting Algorithms Estimated time complexity for tests/benchmarkable_functions.py -> bubble_sort: Estimated time complexity for tests/benchmarkable_functions.py -> bubble_sort_str: O(n²) Estimated time complexity for tests/benchmarkable_functions.py -> selection_sort: O(n²) Estimated time complexity for tests/benchmarkable_functions.py -> insertion_sort: O(n²) Estimated time complexity for tests/benchmarkable_functions.py -> heap_sort: O(n log(n)) Estimated time complexity for tests/benchmarkable_functions.py -> quick_sort: O(n log(n)) Estimated time complexity for tests/benchmarkable_functions.py -> merge_sort: O(n log(n)) Our program has commands to show the user both the empirical analysis and the theoretical analysis of running the six functions in the benchmakarble_functions.py file. poetry run de --filename tests/benchmarkable_functions.py --funcname bubble_sort is the command that returns empirical results. This command performs a doubling experiment similar to that of Algorithm Analysis Project 4. However, one difference is that our tool will only print the minimum, maximum, and average run times. For this example this command calls the file benchmakarble_functions.py which contains six different sorting functions. We have tested the different functions and have recorded the expected Big O notation based on analysis of the code. Therefore, as we test our empirical results, they should confirm the expected Big O notation. The last element of the command line is the function name, which determines what function in the file will run. This command shows the running time/empirical results of running different functions in a certain file which for this example is the function bubble_sort in the file benchmakarble_functions.py. This baseline file, is just a baseline. We are using it to test our program, and it is expected that a user will create their own file to pass ink through the command line In addition to showing the user empirical results, our product also gives the user options to display the run time results from running all the commands in benchmakarble_functions.py. The computer calculates the possible Big O notation based on the empirical results. However, a point to note is that this means the Big O notation is not always completely accurate. Sometimes the empirical results change when, for example, different factors of the computer’s environment change. Then the ratio will not exactly represent the expected ratio for certain Big O notations, though it will be very close. The command to show the empirical results is poetry run de. 
This command will run all the functions in the benchmarkable_functions.py file and print the possible Big O notation for each function in the terminal. Our product gives the user multiple options in the output and result they would like to see and analyze.
Project Code
This project allows for the user to input a file name, function name, data type, start size, and number of runs. If the user does not supply any arguments, the running times of six sample algorithms are tested and reported. An example of the default program being run is in the following code block:
$ de
Benchmarking Tool for Sorting Algorithms
Estimated time complexity for tests/benchmarkable_functions.py -> bubble_sort: O(n²)
Estimated time complexity for tests/benchmarkable_functions.py -> bubble_sort_str: O(n²)
Estimated time complexity for tests/benchmarkable_functions.py -> selection_sort: O(n²)
Estimated time complexity for tests/benchmarkable_functions.py -> insertion_sort: O(n²)
Estimated time complexity for tests/benchmarkable_functions.py -> heap_sort: O(n)
Estimated time complexity for tests/benchmarkable_functions.py -> quick_sort: O(n log(n))
Estimated time complexity for tests/benchmarkable_functions.py -> merge_sort: O(n log(n))
It is worth noting that there is sometimes variance in the results, so multiple runs may produce slight variations.
Dynamically Loading Python Files
In order for de to dynamically load Python files, we make use of Python's compile and exec functions:
# path.py
# ...
with open(filename, 'r') as file:
    code = compile(file.read(), filename, 'exec')
namespace = {}
exec(code, namespace)
if funcname not in namespace:
    raise AttributeError(f"Function '{funcname}' not found in '{filename}'")
if not callable(namespace[funcname]):
    raise ValueError(f"'{funcname}' was not found to be a function.")
# ...
This code loads the symbols from filename into the AST, making the funcname available under the current namespace. The function's parameters are then counted to determine if the function only needs a list as input or if it needs a list and the list length.
Generating Input Data
de has the ability to generate random input data for ints, floats, and strings. In general, each generation procedure creates a list of a specified size with randomly-populated input data.
Benchmarking Sorting Algorithms
To benchmark the sorting algorithms, de exercises the use of a doubling experiment to double the size of input data for each run. Each run uses the time.perf_counter method to measure the execution time of running functions, as seen in the following code:
# benchmark.py
# ...
start = time.perf_counter()
# (the timed call to the sorting function under test is elided in this excerpt)
stop = time.perf_counter()
times_list.append((i + 1, size, stop - start))
# ...
In this case, funcname is a Callable. After timing the function's execution, we append its result to the list of data, which is used to analyze results.
Analyzing Benchmark Results
To analyze the benchmarking results, we calculate the average doubling ratio between runs.
This is done in the following code: def compute_average_doubling_ratio(times_list: List[Tuple[int, int, float]]) -> float: times = [item[2] for item in times_list] # iterate through times, calculating doubling ratios between runs doubling_ratios = [times[i+1] / times[i] for i in range(len(times) - 1)] # calculate average doubling ratio return sum(doubling_ratios) / len(doubling_ratios) By calculating the average ratio between execution times in our doubling experiment, we can develop an approximation of the worst-case time complexity of the sorting algorithms we are testing. After computing the average doubling ratio, we make a conjecture of the worst-case time complexity based on the doubling ratio, as seen in the following image: The doubling ratios and their respective time complexities (left). Time complexities compared graphically (right). Doubling Ratios As established in prior sections, the tool discussed here presents the estimated worst-case time complexities of the sorting functions passed in. The above section outlines the calculation of the doubling ratio in particular, which is the primary value in calculating the worst-case time complexity. The tool uses six values to describe the worst case time complexity, which are as follows: constant = "1" linear = "n" quadratic = "n²" logarithmic = "log(n)" linearithmic = "n log(n)" notsure = "not sure" These are common worst-case time complexities, with an option for if the doubling ratio is not one of these accounted for. They are shown using O(n) notation, and are those that are included in the output after the doubling ratio is run and processed. Below is an example of the code which processes the doubling ratios. def estimate_time_complexity(average_doubling_ratio: float) -> enumerations.TimeComplexity: """Estimate the time complexity given the average doubling ratio.""" average_doubling_ratio_rounded = round(average_doubling_ratio) if average_doubling_ratio >= 1.75 and average_doubling_ratio <= 2.25: return enumerations.TimeComplexity.linear elif average_doubling_ratio > 2.25 and average_doubling_ratio < 3.75: return enumerations.TimeComplexity.linearithmic elif average_doubling_ratio >= 3.75 and average_doubling_ratio_rounded <= 4.25: return enumerations.TimeComplexity.quadratic elif average_doubling_ratio > 1.25 and average_doubling_ratio < 1.75: return enumerations.TimeComplexity.logarithmic elif average_doubling_ratio_rounded == 1: return enumerations.TimeComplexity.constant # indicate that it does not match any of our predefined values return enumerations.TimeComplexity.notsure This function, estimate_time_complexity, takes in the average doubling ratio calculated from the benchmarking data, and assesses where it falls in these ranges to determine if it is equivalent to that of one of the time complexity defaults defined previously, and if not, returns the enum representing that it is outside the tool’s scope of comprehension. Thus, by examining the ratio of the increase in time, the tool can estimate which, if any, known base case of worst-case time complexity the sorting function has. The code gives us a clear understanding of what is the worst-case time complexity for each function in the code. The creation of this leads to a clear understanding of the code of finding a path and reading a generated input and processing the data and displaying the worst case time complexity. The analysis of the code allows us to be able to calculate the average and furthermore examine roughly the worst-case time complexity. 
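As a quick sanity check of this helper, feeding it made-up timings that quadruple with every doubling of the input size (hypothetical numbers, not taken from our benchmark runs) yields the ratio expected for a quadratic algorithm:
# Hypothetical (run, size, time-in-seconds) tuples: each doubling of the input
# size roughly quadruples the execution time.
fake_times = [(1, 100, 0.001), (2, 200, 0.004), (3, 400, 0.016), (4, 800, 0.064)]
print(compute_average_doubling_ratio(fake_times))   # 4.0 -> consistent with O(n^2)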
Each component of the code reads and understands the next as it processes the function and gives results showing how the code has it’s worst case time complexities as well as what the worst case time complexity is. Back to top
{"url":"https://algorithmology.netlify.app/allhands/weekthirteen/teamfour/index.html","timestamp":"2024-11-04T23:45:51Z","content_type":"application/xhtml+xml","content_length":"47880","record_id":"<urn:uuid:8bec4a15-55cf-4793-8535-a65a197fa676>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00259.warc.gz"}
Check if a symbolic polynomial is positive
Is there a way to check if a symbolic polynomial has all positive coefficients? Something like,
(x^2+y+2xy).is_positive() = True
(x+y-xy).is_positive() = False?
1 Answer
How did you define x and y? If you simply use
sage: x, y = var('x, y')
then Sage has no way of knowing what is a variable and what is a coefficient in your polynomial. Still, for one variable it is possible to do:
sage: p = 4*x^2+2*x+1
sage: all([coef>=0 for coef, degree in p.coefficients(x)])
For multivariate polynomials you should use PolynomialRing:
sage: PXY.<x, y> = PolynomialRing(QQ)
sage: p = x^2+y+2*x*y
sage: all([coef>=0 for coef in p.coefficients()])
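For what it's worth, the same test applied to the second polynomial from the question should come back negative (a sketch, assuming the PolynomialRing declaration above is still in effect):
sage: q = x + y - x*y
sage: all([coef >= 0 for coef in q.coefficients()])
False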
{"url":"https://ask.sagemath.org/question/52700/check-if-a-symbolic-polynomial-is-positive/","timestamp":"2024-11-07T11:30:14Z","content_type":"application/xhtml+xml","content_length":"51903","record_id":"<urn:uuid:d2070436-b6e4-48e3-813f-396f60e47a56>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00070.warc.gz"}
Transactions Online Kazuhiro OGATA, Kokichi FUTATSUGI, "Proof Score Approach to Verification of Liveness Properties" in IEICE TRANSACTIONS on Information, vol. E91-D, no. 12, pp. 2804-2817, December 2008, doi: 10.1093/ Abstract: Proofs written in algebraic specification languages are called proof scores. The proof score approach to design verification is attractive because it provides a flexible way to prove that designs for systems satisfy properties. Thus far, however, the approach has focused on safety properties. In this paper, we describe a way to verify that designs for systems satisfy liveness properties with the approach. A mutual exclusion protocol using a queue is used as an example. We describe the design verification and explain how it is verified that the protocol satisfies the lockout freedom property. URL: https://global.ieice.org/en_transactions/information/10.1093/ietisy/e91-d.12.2804/_p author={Kazuhiro OGATA, Kokichi FUTATSUGI, }, journal={IEICE TRANSACTIONS on Information}, title={Proof Score Approach to Verification of Liveness Properties}, abstract={Proofs written in algebraic specification languages are called proof scores. The proof score approach to design verification is attractive because it provides a flexible way to prove that designs for systems satisfy properties. Thus far, however, the approach has focused on safety properties. In this paper, we describe a way to verify that designs for systems satisfy liveness properties with the approach. A mutual exclusion protocol using a queue is used as an example. We describe the design verification and explain how it is verified that the protocol satisfies the lockout freedom property.}, TY - JOUR TI - Proof Score Approach to Verification of Liveness Properties T2 - IEICE TRANSACTIONS on Information SP - 2804 EP - 2817 AU - Kazuhiro OGATA AU - Kokichi FUTATSUGI PY - 2008 DO - 10.1093/ietisy/e91-d.12.2804 JO - IEICE TRANSACTIONS on Information SN - 1745-1361 VL - E91-D IS - 12 JA - IEICE TRANSACTIONS on Information Y1 - December 2008 AB - Proofs written in algebraic specification languages are called proof scores. The proof score approach to design verification is attractive because it provides a flexible way to prove that designs for systems satisfy properties. Thus far, however, the approach has focused on safety properties. In this paper, we describe a way to verify that designs for systems satisfy liveness properties with the approach. A mutual exclusion protocol using a queue is used as an example. We describe the design verification and explain how it is verified that the protocol satisfies the lockout freedom property. ER -
{"url":"https://global.ieice.org/en_transactions/information/10.1093/ietisy/e91-d.12.2804/_p","timestamp":"2024-11-10T19:11:32Z","content_type":"text/html","content_length":"58384","record_id":"<urn:uuid:e8c246fa-e215-40a7-a7ee-8bebad307d3f>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00300.warc.gz"}
Dr Natalia Timofeyuk | University of Surrey
Dr Natalia Timofeyuk
Senior Research Fellow
Research interests
Nuclear Reaction Theory
Hyperspherical Harmonics formalism
Research projects
Nonlocality in (d,p) reactions
Usually cross sections of (d,p) reactions are calculated with local nucleon-nucleus optical potentials. We have extended the (d,p) reaction theory for the case of nonlocal nucleon-nucleus potentials.
Manifestation of 3N force in (d,p) reactions
The 3N force creates a novel three-body interaction involving the neutron and proton in the deuteron and one of the nucleons in the target. We investigated this effect in the adiabatic model and in the continuum-discretized coupled-channel method. The 3N force also creates an additional term in the (d,p) T-matrix. This has been studied in the plane-wave Born approximation only.
Hyperspherical Cluster model
To describe the long-range behaviour of one particle removed from a few- or a many-body system, a hyperspherical cluster model has been developed. It has been applied to the ground and first excited states of helium drops with five, six, eight and ten atoms interacting via a two-body soft Gaussian potential.
Research collaborations
Universite Libre de Bruxelles
University of Pisa
University of Seville
University de Santiago de Compostela
{"url":"https://www.surrey.ac.uk/people/natalia-timofeyuk","timestamp":"2024-11-05T14:19:52Z","content_type":"text/html","content_length":"171747","record_id":"<urn:uuid:f0c507f1-0c77-447b-8dda-0f11d7525066>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00830.warc.gz"}
SAT-math - www.thattutorguy.com
SAT & PSAT Math Prep (from someone who can actually explain it)
Chris is a Stanford-educated tutor with over 10 years experience tutoring math to students of all abilities and levels, from pre-algebra and middle school all the way through college algebra, calculus and the SAT. In that time he got a lot of experience learning how to explain this stuff in a way that actually makes sense to normal people, and he brings that experience to his videos, whether you are in class now or preparing for SAT & PSAT Math.
How to prep for SAT & PSAT Math using ThatTutorGuy.com -- The table below contains a list of all the topics you'll find on the SAT & PSAT Math test. In the right-hand column of the table are links to all the videos on our site that you'll want to watch to cover those topics.
SAT & PSAT Math Test Prep
SAT: Math
SAT is the registered trademark of The College Board. ThatTutorGuy.com has no affiliation with The College Board, and the ThatTutorGuy.com SAT test prep course is not approved or endorsed by The College Board.
{"url":"https://www.thattutorguy.com/test-prep/sat-math/","timestamp":"2024-11-12T10:11:31Z","content_type":"text/html","content_length":"32934","record_id":"<urn:uuid:5e57fc4c-5c29-4d9d-8255-0557f111f132>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00864.warc.gz"}
DSpace Angular :: Browsing by Author "Vulpe, Nicolae"
Browsing by Author "Vulpe, Nicolae"
Now showing 1 - 3 of 3
• Classification of the family of quadratic differential systems possessing invariant ellipses. (2019-04) Oliveira, Regilene Delazari dos Santos; Rezende, Alex C.; Schlomiuk, Dana; Vulpe, Nicolae
Consider the class QS of all non-degenerate quadratic systems. Note that each quadratic polynomial differential system can be identified with a point of R^12 through its coefficients. In this paper we provide necessary and sufficient conditions for a system in QS, in terms of its coefficients, to have at least one invariant ellipse. Let QSE be the whole class of non-degenerate planar quadratic differential systems possessing at least one invariant ellipse. For the class QSE, we give the global "bifurcation" diagram which indicates where an ellipse is present or absent and, in case it is present, the diagram indicates whether or not the ellipse is a limit cycle. The diagram is expressed in terms of affine invariant polynomials and it is done in the 12-dimensional space of parameters. This diagram is also an algorithm for determining for each quadratic system if it possesses an invariant ellipse and whether or not this ellipse is a limit cycle.
• Geometric analysis of quadratic differential systems with invariant ellipses. (2019-10) Mota, Marcos C.; Oliveira, Regilene Delazari dos Santos; Rezende, Alex C.; Schlomiuk, Dana; Vulpe, Nicolae
In this article we study the whole class QSE of non-degenerate planar quadratic differential systems possessing at least one invariant ellipse. We classify this family of systems according to their geometric properties encoded in the configurations of invariant ellipses and invariant straight lines which these systems could possess. The classification, which is taken modulo the action of the group of real affine transformations and time rescaling, is given in terms of algebraic geometric invariants and also in terms of invariant polynomials, and it yields a total of 35 distinct such configurations. This classification is also an algorithm which makes it possible to verify for any given real quadratic differential system if it has invariant ellipses or not and to specify its configuration of invariant ellipses and straight lines.
{"url":"http://repositorio.icmc.usp.br/browse/author?value=Vulpe,%20Nicolae","timestamp":"2024-11-02T23:55:28Z","content_type":"text/html","content_length":"384702","record_id":"<urn:uuid:f69f929e-0e62-4156-bdf6-01e92a75ab46>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00682.warc.gz"}
Lecture on 12 November 2019
Prof. K. Ramasubramanian
Department of Humanities and Social Sciences
IIT Bombay
A short history of the evolution of the sine function in Indian mathematics
Starting from the simplest model for conceiving the planetary motion, the sine function pervades all of the physical, engineering, mathematical, medical and biological sciences. So much so that, today, it would be almost impossible to conceive of any discipline of science without this sine function. Briefly touching upon how the sine function was conceived in India, we shall highlight the various techniques that got invented over centuries to determine its value for different arguments. Aryabhata, by the end of the 5th century, had proposed an interesting geometric approach and an analytic one involving an interesting recursive relation. For those who are mathematically inclined, it may be mentioned that this relation essentially happens to be the discrete analogue of what is today referred to as the harmonic equation. At this stage, we may also point out how distinct this conception is from the way it has been characterized by the Greek civilization. Yet another interesting recursive relation, which is very different in its structure and nature from the one given by Aryabhata, has been proposed by Kerala astronomers, in connection with their study of the properties of cyclic quadrilaterals. During the talk, we shall attempt to take the audience through this fascinating journey that the Indian mathematicians seem to have taken in trying to evaluate the sine function in multiple ways!
{"url":"http://videnskabshistorisk.dk/index.php/2019-11-12/","timestamp":"2024-11-10T04:41:36Z","content_type":"text/html","content_length":"35391","record_id":"<urn:uuid:de575f1d-a57e-47fe-9170-ef08d65c04b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00428.warc.gz"}
5 Best Ways to Find the Largest Perfect Subtree in a Given Binary Tree in Python π ‘ Problem Formulation: In the realm of binary trees, a perfect subtree is a subtrees that is both full and complete, meaning all its internal nodes have two children and all leaves are at the same depth. The challenge is to find the largest such subtree within a given binary tree. For instance, if we input a binary tree with varying levels of completeness, we want to output the root node and the size of its largest perfect subtree. Method 1: Recursive Depth Counting This approach uses a recursive function to count the depth of the left and right children of a binary tree. If the depths are equal, it implies the subtree rooted at that node is perfect. Function specification involves calculating the depth of subtrees and returning the maximum size of a perfect subtree found. Here’s an example: class TreeNode: def __init__(self, val=0, left=None, right=None): self.val = val self.left = left self.right = right def findLargestPerfectSubtree(root): def isPerfect(node): if not node: return 0 left_depth = isPerfect(node.left) right_depth = isPerfect(node.right) if left_depth == right_depth: return left_depth + 1 return 0 if not root: return 0 return max(isPerfect(root), findLargestPerfectSubtree(root.left), findLargestPerfectSubtree(root.right)) # Example binary tree root = TreeNode(1, TreeNode(2, TreeNode(4), TreeNode(5)), TreeNode(3, None, TreeNode(7))) Output: 2 This code defines a TreeNode to construct the binary tree and a recursive function to compute the largest perfect subtree. The isPerfect helper function returns the depth of perfect subtrees or 0 otherwise. The main function compares the size of the perfect subtrees found in the current root and its left and right children recursively. Method 2: Bottom-Up Recursive Approach A bottom-up recursion checks from the leaves upwards to identify perfect subtrees, returning both the depth and validity of a subtree. This reduces redundant checks on nodes that have already been Here’s an example: def findLargestPerfectSubtreeBottomUp(root): def checkSubtree(node): if not node: return 0, True left_depth, is_left_perfect = checkSubtree(node.left) right_depth, is_right_perfect = checkSubtree(node.right) if is_left_perfect and is_right_perfect and left_depth == right_depth: return left_depth + 1, True return max(left_depth, right_depth), False max_depth, _ = checkSubtree(root) return max_depth Output: 2 In this method, checkSubtree function returns a tuple containing the depth and a boolean flag that indicates if the subtree is perfect. The main function findLargestPerfectSubtreeBottomUp calls checkSubtree and extracts the maximum depth recorded from a perfect subtree. Method 3: Level Order Traversal By traversing the tree level by level, one can detect when a level is not completely filled, which signifies the end of a perfect subtree. This method uses a queue to implement a breadth-first Here’s an example: from collections import deque def findLargestPerfectSubtreeLevelOrder(root): if not root: return 0 queue = deque([(root, 1)]) perfect_depth = 0 while queue: node, depth = queue.popleft() if node.left and node.right: queue.append((node.left, depth + 1)) queue.append((node.right, depth + 1)) perfect_depth = depth return perfect_depth Output: 2 This code uses a queue to implement a level order traversal. Nodes along with their depths are added to the queue. 
If both children are present, the perfect depth is updated; otherwise, it indicates an imperfect level, and the traversal ends. Method 4: Optimized Space Usage This variant of the depth counting method aims to save space by using a single variable to track the maximum perfect subtree depth encountered during traversal. Here’s an example: def findLargestPerfectSubtreeOptimized(root): max_perfect_depth = 0 def checkSubtree(node, depth): nonlocal max_perfect_depth if not node: return depth left_depth = checkSubtree(node.left, depth + 1) right_depth = checkSubtree(node.right, depth + 1) if left_depth == right_depth: max_perfect_depth = max(max_perfect_depth, left_depth) return max(left_depth, right_depth) checkSubtree(root, 0) return max_perfect_depth Output: 2 In this method, a helper function checkSubtree is called recursively. It returns the depth of the subtree, updating a nonlocal variable max_perfect_depth when a perfect subtree is detected at the current node level. Bonus One-Liner Method 5: Recursion with Built-in Maximum Function The Python one-liner takes advantage of the built-in max() function to condense the recursive approach into a single return statement. Here’s an example: def findLargestPerfectSubtreeOneLiner(root): return max(isPerfect(root), findLargestPerfectSubtree(root.left), findLargestPerfectSubtree(root.right)) if root else 0 # Assuming `isPerfect` function is already defined as in Method 1. Output: 2 This concise one-liner combines a check for the root being None and calls to the recursive isPerfect function on the left and right children using the max() function. • Method 1: Recursive Depth Counting. Intuitive approach but may lead to redundant checks. Efficient with balanced trees. • Method 2: Bottom-Up Recursive Approach. Optimizes by eliminating redundant subtree depth calculations. More complex to understand but efficient in all cases. • Method 3: Level Order Traversal. Easy to visualize and implement, but can be inefficient due to queue operations for large trees. • Method 4: Optimized Space Usage. Reduces space complexity but may still perform redundant operations for imbalanced trees. • Method 5: Recursion with Built-in Maximum Function. Elegantly simple, yet abstracts away from the actual processing logic making it less instructive.
{"url":"https://blog.finxter.com/5-best-ways-to-find-the-largest-perfect-subtree-in-a-given-binary-tree-in-python/","timestamp":"2024-11-12T05:19:18Z","content_type":"text/html","content_length":"72534","record_id":"<urn:uuid:e655f6f9-b2fb-4ede-b837-0e654a4fb9c2>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00116.warc.gz"}
Methods to study complex systems related to PIK research on Time Series Analysis and Complex Networks
Recurrence plots
Recurrence plots (RPs) provide an alternative way to study various aspects of complex systems, such as regime transitions, classification, detection of time-scales, synchronisation, and coupling detection (RP bibliography). Main contributions have been in bivariate extensions (cross RPs) and coupling analysis, new measures of complexity, significance assessments of the RP based results, spatial extensions, parameter selection, RPs for irregularly sampled data and for extreme events data, or complex network based quantification.
Complex networks
Complex networks provide a powerful approach to investigate extended and spatio-temporal systems, such as the climate by climate networks. Moreover, they offer an alternative way for a recurrence based time-series analysis by recurrence networks.
Special time-series analysis methods for special problems
Special problems require specially adapted methods of time-series analysis. For example, proxy records in Earth sciences are often irregularly sampled and come with uncertainties in the dating points. Approaches for considering such dating uncertainties in the subsequent analysis and methods for correlation analysis of irregularly sampled time series have been developed. Such approaches can be helpful for the reconstruction of palaeoclimate complex networks.
Complexity in applications
Climate and palaeoclimate
The study of palaeoclimate from proxy records is helpful for a better understanding of the climate system. Information based on lake sediments or speleothems can be used to study complex interrelationships or past climate transitions. We are also participating in the coordinated scientific research in the Blessberg Cave, Thuringia.
Cardiovascular systems
Besides the main focus on climate related problems, recurrence properties of the cardiovascular system are studied, e.g., for the early detection of ventricular tachycardia or preeclampsia, or to investigate the coupling mechanisms in the cardio-respiratory system. Further interest in life science is related to EEG analysis, aiming at the detection of event related potentials or early signatures of epileptic seizures, or identifying pathological changes in brain connectivity due to diseases.
3D image analysis
Methods to investigate complexity in 3D have been applied to study structural changes in trabecular bone, such as those occurring during osteoporosis or space flights.
Cave research
Scientific research in caves is performed to explore and survey newly discovered cave parts, but also to collect data for the palaeoclimate studies (samples, monitoring). Cave research is focused on caves in Switzerland (research with isaak), but also in India, the Caucasus, Kosovo, and Germany.
• N. Marwan: Kalzit-Sinter in Sandsteinhöhlen des Elbsandsteingebirges, Die Höhle, 51(1), 19–20 (2000).
• N. Marwan: Cave Blisters in der Oberländerhöhle (M3)/ Découverte de blisters dans la Oberländerhöhle (M3), Stalactite, 50(2), 103-105 (2000).
• N. Marwan: Das Karstgebiet des Bol'soj Thac, Abhandlungen und Berichte des Naturkundemuseums Görlitz, 79(1), 55-84 (2007).
• S. Breitenbach, N. Marwan, G. Wibbelt: Weißnasensyndrom in Nordamerika – Pilzbesiedlung in Europa, Nyctalus, 16(3), 172-179 (2011).
• N. Marwan: Der digitale Sägistal-Kataster, Stalactite, 73, 24–33 (2023).
• S. Breitenbach, N. Marwan: Using Low-Cost Software to Obtain and Study Stalagmite Greyscale Data, CREG Journal, 125, 7–10 (2024).
• one of the first web presentations of speleology was the speleo server east
{"url":"https://tocsy.pik-potsdam.de/~marwan/special.php","timestamp":"2024-11-08T01:01:13Z","content_type":"text/html","content_length":"30693","record_id":"<urn:uuid:f7249a2d-2b1a-440a-9c12-511bab7402b2>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00865.warc.gz"}
103.3.2 Descriptive Statistics | Statinfer
In the previous section, we studied Basic Statistics, Graphs and Reports; now we will be studying Descriptive Statistics. As soon as we get some data, we can carry out descriptive statistics on it. Basic descriptive statistics give an idea of the variables and their distribution, give us an overall picture of the dataset, and also help us to create a report on the data. There are 2 types of basic descriptive statistics: Central tendencies and Dispersion. Central tendencies deal with the mean, median and mode, whereas the measures of Dispersion are range, variance and standard deviation.
Central tendencies: mean, median
Mean is nothing but the arithmetic mean or the average, i.e., the sum of the values divided by the count of values. It helps us to understand and evaluate the data. The mean is a good measure to calculate the average of the variables, but it is not recommended when there are outliers in the data. Outliers are a few data elements in the dataset which are very much different from the rest of the data elements.
For example, let us consider this data. Here, 90% of the values are below 2, but when we calculate the mean, we get the value as 2. This is because there is a value (i.e., 9) which is very much different from the rest of the values. This is called an outlier. So in such cases, where there are outliers, we need a better approach which gives a more accurate or true middle value. Hence the median can be considered in such cases.
For calculating the median, the given data is sorted in either ascending or descending order, and then the middle value is taken, which becomes the median; this can be a true average value in such cases.
For example, consider the same data in ascending order: here the middle value is 1.4, which becomes the median. Therefore, we can say that even if there are outliers present in the data, we can get a true middle value using the median, as the sorting shifts the outliers to the extreme ends.
Let us see how to calculate the mean and median in R. We consider the Income data.
Income<-read.csv("C:\\Amrita\\Datavedi\\Census Income Data\\Income_data.csv")
From this dataset we calculate the mean and median of the variable "capital.gain".
## [1] 1077.649
## [1] 0
We get the mean as 1077.649 and the median as 0. As there is a vast difference between the two, we can say that there are outliers in the data. If there are no outliers, there will not be much difference between the mean and median values. So if there are outliers we must always consider the median.
Lab: Mean & Median
Now let us consider the dataset, Online Retail Sales Data.
Online_Retail<-read.csv("C:\\Amrita\\Datavedi\\Online Retail Sales Data\\Online Retail.csv")
Calculate the mean and median of the variable "UnitPrice" and let us see if there are any outliers in the data.
## [1] 4.611114
## [1] 2.08
So here the mean is 4.611114 and the median is 2.08, which means the mean and median are very close. However, we still cannot conclude on the absence of an outlier, because if there are balancing outliers on either side of the median, then the mean and median can also be close.
Now also find the mean and median of the variable "Quantity".
## [1] 9.55225
## [1] 3
Here we can see that the mean is 9.55225 and the median is 3. In this case, as there is some difference in the mean and the median values, there can be outliers in the data, but we cannot be sure. Outliers can be detected using a box plot, which will be covered in further sessions.
In the next section, we will be studying Percentile and Quartile.
{"url":"https://statinfer.com/103-3-2-descriptive-statistics/","timestamp":"2024-11-08T01:54:20Z","content_type":"text/html","content_length":"206507","record_id":"<urn:uuid:fb68cc2e-276a-405b-80e9-ad8d82a953b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00695.warc.gz"}
25 Pi Day Activities Happy pi day, or 22/3/14! On this day, people celebrate the irrational and infinitely long number by eating pies and solving equations. So to get in the spirit, I’ve compiled a list of some of my favorite pi day activities. There are some great ideas for how to incorporate mathematics into your daily routine, like measuring how many times you blink in a day or calculating how much time it takes for math to become an integral part of your life. But there are also some other great ideas for fun pi day activities that have nothing to do with math, like trying to get a hula hoop around your waist or measuring the circumference of your head. 25 Pi Day Activities So whether you’re actually celebrating pi day for math or if you’re just looking for something fun to do this Saturday, here are some great activities to try! Top 25 Pi Day Activities 1. Assemble a Pi Day Paper Chain Cut a paper into 1 inch wide strips and tape them together. Start with a loop of paper and use scissors to make a small slit in the center, then add more loops. Tie each end together securely with a piece of ribbon or string. Assembling a pi day paper chain is one of the games you can play. Assemble a Pi Day Paper Chain 2. Celebrate with a Pizza Pi Party A fun way to celebrate Pi Day is to host a pizza pi party. Ovens are unnecessary for this celebration, so get a slab of pizza and cut out three circles using the diameter of the pie as your guide. Staple each circle to the wall in your kitchen, leaving one free so that you can walk around. Celebrate with a Pizza Pi Party 3. Do the Math The math of pi is actually a lot easier the more you do it. Try the following for an easy way to keep track of pi. Write it on a post-it or index card and stick it to your fridge, write it on your calendar, or use a chalkboard in your kitchen. Do the Math 4. Roll Pi Digits with Dice Roll a standard six sided die. Then, divide the number by two and add that number to pi. (If you are rolling one, then you have dropped the first digit of pi.) For example, if you roll a 3 on your d6, then divide 3 by 2 and add 1 to get 1.59. Write down the digit 3 and subtract your starting number (3) to get 1.571428… Roll Pi Digits with Dice 5. Memorize Those Pi Digits A great way to celebrate pi day is by memorizing the first 50 digits of pi. While it might seem like an impossible task, this computation was accomplished by a 9 year-old, who then won $100,000 on Memorize Those Pi Digits 6. Celebrate Albert Einstein In 1853, German mathematician Johann Lambert calculated pi to 35 places by hand. On March 14, 2012, a Google Doodle celebrated this accomplishment. Why this day? It was Albert Einstein’s 139th Celebrate Albert Einstein 7. Play Pi in a Circle Play pi in a circle by placing digits of pi around the outside edge of a circle. Read each digit clockwise starting at three o’clock, and use the center point as zero (or as an optional starting point if you are having trouble with your memory). Play Pi in a Circle 8. Play a Card Game Pi is the same in all parts of the world, so it makes sense that the card game Pi mal Daube is fun no matter where you are! This game requires a 52-card deck and a single joker. Deal nine cards to each player, and then have each player draw one card to create three piles. The first pile is four cards (3-6), the second pile is five cards (7-10), and the third pile has six cards (jack, queen, king, ace). Play a Card Game 9. 
Make Paper Plate Pies Put a pizza on a paper plate, add tomatoes and olives, and then have fun making random circles with the remainder of your ingredients. Also, use extra dough to make a circle around each pie. If you need more time to finish your pizza while watching the game, just let it sit at room temperature until you are ready to eat it. Make Paper Plate Pies 10. Craft Paper Pie Gift Boxes Using origami, paper plates, and pizza dough, you can make a gift box that celebrates Pi Day. You can also put candy inside your box or use the box to hold things like tiny note cards, or even coins. Craft Paper Pie Gift Boxes 11. Introduce Sir Cumference and the Dragon of Pi Sir Cumference and the Dragon of Pi is a great book for families that like to read math-related stories and for younger students who are studying math. Sir Cumference is a character from the amusing children’s book called The Breadth of the Briny Ball, by Cindy Neuschwander. In this lovely and humorous book, readers learn about pi in a way that is fun and clear. Introduce Sir Cumference and the Dragon of Pi 12. Tell a Math Joke Math jokes aren’t as bad as they sound, okay? You can even get your kids involved by asking them to crack pi puns. If no one laughs at your pun, then try again later. If you want to tell a math joke, use the following joke, which is a variation of the well-known “Ceci n’est pas une pipe” joke. Tell a Math Joke 13. Share some Pi Puns As with all jokes, there is a fine line between funny and awful. But, if you want to share some pi puns on Pi Day, then feel free to do so. Just don’t assume that everyone will get your pun or even understand it! Share some Pi Puns 14. Draw a Pi Map If you can grab a few friends and the right paper and pencils, you can easily draw a pi map of your own locale. Fill in the top of the map with numbers from 1 to 30. The mathematicians who originally calculated pi assumed that pi was an irrational number, meaning that it had no exact value. However, by using a compass and paper, you can make a map of your locale where the highest integer is 8. Draw a Pi Map 15. Write Pi – Ku Poems Pi (π) is a symbol for a circle on the keyboard. It is also the symbol for poetry in Japan, where it is called ‘ku’ or ‘kupo’. Pi poems are usually about numbers or letters, but you can use them to celebrate Pi Day as well. You can make your poems fun by using animals or objects with pi names and symbols. Write Pi – Ku Poems 16. Bake some Pie Pi is an irrational number, it has infinite digits and can never be written completely. However, you can still celebrate Pi Day by eating pie on March 14. You might also be interested in other irrational numbers like √2 (1.414213…). Bake some Pie 17. Conduct a Pi Symphony If you want to conduct a pi symphony, then make sure that your conductor’s baton is 30 cm in length. A pi symphony can be conducted by using a computer, or even by playing a piano and observing the length of the strings. Conduct a Pi Symphony 18. Create Your Own Pi Puzzle Lastly, no Pi Day celebration is complete without a pi puzzle! You can make Pi Day a little more fun by making your own puzzle using a 5×7 photo or printout of pi. This can be done by folding the page, then cutting the paper and opening it. You can also write pi on the inside with a marker so that you’ll be able to see it in the completed puzzle. Create Your Own Pi Puzzle 19. 
Graph a Pi – Line Skyline On March 14, the first of the month, the sun rises directly over the point in space that is closest to the north point of a standard (right angle) ruler. Therefore, this day is also a great time to graph a pi-line skyline. Watch as your points line up along your series of points marking this event! Graph a Pi – Line Skyline 20. Plot out Pi – Inspired Art While we have mentioned pi a few times in this article, it’s also worth pointing out that pi-inspired art is hot right now. You can make your own art by using materials like ink and paint or even by just drawing with a compass. Plot out Pi – Inspired Art 21. Create Punny Pi – Lentines While you’re at it, why not also make punny pi-lentines? You can also make them into cupid cards, silly limericks, and other amusing ways. Also, there are a lot of cards and books that you can use to celebrate this holiday. Create Punny Pi – Lentines 22. Dress the Part It is also a great idea to dress the part for Pi Day. You can wear anything from a t-shirt that says π, to a pi symbol necklace or other jewelry, or even a pink shirt to the party! You can even make your own pins or clothing if you want to get fancy. Dress the Part 23. Have a Pi Word Challenge For those who are math-obsessed, you can also have a pi word challenge. All you need is a copy of a time and date sheet and some paper that has pi on it. The challenge is to read through the sheet and find as many words that have π in them as possible. Have a Pi Word Challenge 24. Play Pi Bingo Bingo is a classic game that can be played in a number of ways, from creating a bingo sheet or using numbers to simply playing by calling out the numbers in order. With Pi Day, you can also create a bingo sheet with things and items that have pi in their names. Be sure to track your bingo card carefully so you can win! Play Pi Bingo 25. Plan a Pi Day Run With Pi Day on 3/14, you can celebrate by planning a run. You can then collect donations for your local animal shelter, or even plan a fun run to help raise awareness for a charity of your choosing. Plan a Pi Day Run
{"url":"https://www.aneverydaystory.com/pi-day-activities/","timestamp":"2024-11-14T16:50:28Z","content_type":"text/html","content_length":"85248","record_id":"<urn:uuid:e03f5c7d-6655-4fb7-9cb8-bd418d61b76c>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00317.warc.gz"}
Deep Neural Networks | Principal Components of a Neural Network » Sirf Padhai
What do you understand by deep neural networks?
Neural networks (NN) are powerful statistical learning models that can solve complex classification and regression problems. A neural network is composed of interconnected layers of computation units, mimicking the structure of the brain's neurons and their connectivity. Each neuron applies an activation function (i.e., a non-linear transformation) to a weighted sum of input values with a bias. The predicted output of a neural network is calculated by computing the outputs of each neuron through the network layers in a feed-forward manner. In fact, deep neural networks, which are the backbone of deep learning, use a cascade of multiple hidden layers to increase exponentially the learning capacity of neural networks.
Deep neural networks are built using differentiable model fitting, which is an iterative process during which the model trains itself on input data through gradient-based optimization routines, making small adjustments iteratively, with the aim of refining the model until it predicts mostly the right outputs.
The principal components of a neural network are as follows:
(i) Parameters: These represent the trained weights and biases used by neurons to make their internal calculations.
(ii) Activations: These represent non-linear functions that add non-linearity to the neuron output, fundamentally indicating whether a neuron should be activated or not.
(iii) Loss Function: This is a mathematical function which estimates the distance between predicted and actual outcomes. If the DNN's predictions are perfect, the loss is zero; otherwise, the loss is greater than zero.
(iv) Regularization: This consists of techniques that penalize the model's complexity to prevent overfitting, such as L1-L2 regularization or dropout.
(v) Optimizer: It adjusts the parameters of the model iteratively (reducing the objective function) in order to (a) build the best-fitted model, i.e., lowest loss; and (b) keep the model as simple as possible, i.e., strong regularization. The most used optimizers are based on gradient descent algorithms.
(vi) Hyperparameters: These are the model's parameters that are constant during the training phase and which can be fixed before running the fitting process, such as the number of layers or the learning rate.
When training a deep neural network, machine learning developers set hyperparameters and choose loss functions, regularization techniques, and gradient-based optimizers. After training, the best-fitted model is evaluated on a testing dataset (which should be different from the training dataset), using error rate and accuracy measures.
The occurrence of errors in the training program of a DNN often translates into poor model performance. Therefore, it is important to ensure that DNN program implementations are bug-free. Given the large size of the testing space of a DNN, systematic debugging and testing techniques are required to assist developers in error detection and correction activities.
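To make these components concrete, here is a minimal, self-contained sketch (my own illustration, not taken from the article above) that maps items (i) to (vi) onto a tiny gradient-descent training loop. The synthetic data, learning rate, and regularization strength are arbitrary assumptions chosen only for demonstration; a deep network simply stacks more parameterized layers and activations, but the roles of loss, regularizer, optimizer, and hyperparameters stay the same.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                      # toy inputs (assumed data)
y = (X[:, 0] + X[:, 1] > 0).astype(float)          # toy labels

w, b = np.zeros(2), 0.0                            # (i) parameters: weights and bias
lr, epochs, lam = 0.1, 100, 1e-2                   # (vi) hyperparameters (illustrative values)

def sigmoid(z):                                    # (ii) activation: a non-linear transformation
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(epochs):                            # (v) optimizer: plain gradient descent
    p = sigmoid(X @ w + b)
    loss = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12)) \
           + lam * np.sum(w ** 2)                  # (iii) loss plus (iv) L2 regularization
    grad_w = X.T @ (p - y) / len(y) + 2 * lam * w  # gradient of the regularized loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w                               # small iterative adjustments to the parameters
    b -= lr * grad_b

print("final training loss:", round(float(loss), 3))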
{"url":"https://sirfpadhai.in/deep-neural-networks/","timestamp":"2024-11-14T20:11:59Z","content_type":"text/html","content_length":"320930","record_id":"<urn:uuid:b6e091c5-a266-4a55-925e-b9127ed2fcce>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00702.warc.gz"}
Rolle's Theorem Equal endpoints + differentiable = derivative is zero somewhere in between. Hi Scholars! This week we review a result many of us briefly saw in calculus: Rolle’s Theorem: If f is continuous on [a,b], f is differentiable on (a,b), and f(a) = f(b), then there is c in (a,b) such that f ’(c) = 0. Upon drawing a picture, we can often intuit that this result holds. And, it is perhaps one of the more intuitive results to verify formally. If the maximum and minimum of f are at the endpoints, then f is constant and the result follows. Otherwise, by the extreme value theorem, we know there is a point somewhere in (a,b) where f attains either a maximum or a minimum. In the picture, we assume there is a maximum at a point c. At any point x < c, the slope of a secant line from (x, f(x)) to (c, f(c)) is nonnegative. Thus, letting x approach c reveals the left hand limit is nonnegative too. But, this limit is precisely the derivative, and so f ‘(c ) ≥ 0. Similar argument applies with the right hand limit to deduce f'(c) ≤ 0. Combining these inequalities, we conclude f ‘(c) = 0, as desired. That outlines all the key argument steps! Stay Awesome.
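For a concrete sanity check (this worked example is mine, not part of the newsletter), take f(x) = x^2 - x on [0, 1]: the endpoint values are equal, f is differentiable everywhere, and the derivative vanishes at c = 1/2, exactly as the theorem guarantees. A few lines of Python confirm it numerically.

f = lambda x: x**2 - x        # continuous on [0, 1], differentiable on (0, 1)
df = lambda x: 2*x - 1        # derivative of f
print(f(0), f(1))             # 0 0   -> equal endpoint values
print(df(0.5))                # 0.0   -> horizontal tangent at c = 1/2 in (0, 1)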
{"url":"https://www.typalacademy.com/p/rolles-theorem","timestamp":"2024-11-05T09:22:56Z","content_type":"text/html","content_length":"150283","record_id":"<urn:uuid:0c65e7c5-0065-4208-99ad-a084ce44a38a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00548.warc.gz"}
Absolute Value Equations
The section on Advanced Algebra delves into Absolute Value Equations, offering a comprehensive guide on solving these equations, including the necessity to isolate the absolute value, split the equation, and verify solutions to eliminate extraneous roots.
• Absolute value equations can have two solutions, akin to quadratic equations, necessitating a split into two scenarios: the expression within the absolute value being equal to a positive or a negative value.
• When the absolute value is not isolated, it must be isolated before the equation can be split into the 'or' equations for solving.
• Solving absolute value equations involves setting the inside expression equal to both the positive and negative values of the other side, solving these equations separately, and then checking each solution in the original equation to eliminate extraneous roots.
• Extraneous solutions, which may arise from the algebraic manipulation, must be checked against the original equation to ensure they are valid solutions.
Introduction to Absolute Value Equations
Solving Basic Absolute Value Equations
Generalizing Absolute Value Equation Solutions
Addressing Extraneous Solutions
Practice and Verification
Q: Why can't the negative values be considered as possible answers to the last practice problem?
A: The key here is that x can be negative, and the expression inside the absolute value can be negative, but the absolute value itself can never be negative. Let's take a look at how this plays out in the two equations you mentioned!
For |1 + 2x| = 4 - x we said that 1 + 2x = 4 - x or 1 + 2x = -(4 - x), which gave us x = 1 or x = -5.
So there are some negatives involved up to this point. But when we plug those values into the original right side of the equation, 4 - x, we get that |1 + 2x| = 3 or |1 + 2x| = 9 ---> So you see that the absolute value is positive in either case.
When we go through the same process with |2x + 5| = x + 1, we get that x = -4 or x = -2. And when we plug those values back into the original, x + 1, we get |2x + 5| = -3 or |2x + 5| = -1 ---> but an absolute value can never be negative!
Q: What's an extraneous solution and when do we need to check for them?
A: Extraneous solutions are invalid and do not solve the original equation. On the GRE, you must check your answers on algebra problems involving squaring or taking roots. Extraneous roots are not considered solutions on the GRE. Squaring both sides of an equation with radicals makes it possible to introduce extraneous roots as solutions. It is essential to check the answers you find to figure out if they are extraneous. To check if any of your roots are extraneous, plug each of the roots back into the original equation. If the root does not solve the original problem, then it is extraneous and is not one of the solutions.
Here is a link to a Magoosh blog about extraneous solutions that you may find helpful! It is written for the GMAT, but is equally applicable to the GRE:
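To see the split-and-check recipe end to end, here is a small sketch (added here for illustration, not part of the Magoosh lesson) that reproduces the second practice equation above with SymPy; the variable names are arbitrary. Both candidate roots fail the final check because the right-hand side is negative there, matching the explanation above.

from sympy import symbols, solve, Abs, Eq

x = symbols('x', real=True)
inside, rhs = 2*x + 5, x + 1                 # the equation |2x + 5| = x + 1

# split into the two 'or' equations, then keep only roots that satisfy the original
candidates = solve(Eq(inside, rhs), x) + solve(Eq(inside, -rhs), x)
valid = [c for c in candidates if Abs(inside.subs(x, c)) == rhs.subs(x, c)]

print("candidates:", candidates)   # [-4, -2]
print("valid:", valid)             # []  -> both roots are extraneous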
{"url":"https://gre.magoosh.com/lessons/52-absolute-value-equations?study_item=1614?utm_source=greblog&utm_medium=blog&utm_campaign=grestudyschedule&utm_content=90-day-gre-study-plan-for-advanced-students","timestamp":"2024-11-10T12:58:09Z","content_type":"text/html","content_length":"106708","record_id":"<urn:uuid:9036676d-26e8-42f8-b91e-2e7692c66134>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00297.warc.gz"}
Edge Classifications | Graph | myMusing
Edge Types
With DFS traversal of a Graph, only some edges will be traversed. These edges will form a tree, called the depth-first-search tree, starting at the given root, and the edges in this tree are called tree edges.
We can classify the various edges of the graph based on the color of the node reached when DFS follows the edge. Classification of the edges depends on what node we start from and in what order the algorithm happens to select successors to visit.
Edges of a graph can be divided into the categories below:
• Tree edges belong to the spanning tree.
• Back edges point from a node to one of its ancestors in the DFS tree.
• Forward edges point from a node to one of its descendants.
• Cross edges point from a node to a previously visited node that is neither an ancestor nor a descendant.
When the destination node of a followed edge is white, this is when the algorithm performs a recursive call. These edges are called tree edges. Tree edges also show the precise sequence of recursive calls performed during the traversal.
When the destination of the followed edge is gray, it is a back edge, shown in red. Because there is only a single path of gray nodes, a back edge is looping back to an earlier gray node, creating a cycle. A graph has a cycle if and only if it contains a back edge when traversed from some node.
When the destination of the followed edge is colored black, it is a forward edge or a cross edge. If there is a path from the source node to the destination node through tree edges, it is a forward edge. Otherwise, it is a cross edge.
Different Types of Edge in Graph
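As an illustration of the colouring rules above, here is a short sketch (added here, not part of the original post) that classifies the edges of a small directed graph during DFS; the example graph is made up.

WHITE, GRAY, BLACK = 0, 1, 2

def classify_edges(graph):
    color = {u: WHITE for u in graph}
    disc, time, kinds = {}, [0], {}

    def dfs(u):
        color[u] = GRAY
        disc[u] = time[0]; time[0] += 1
        for v in graph[u]:
            if color[v] == WHITE:
                kinds[(u, v)] = "tree"       # recursive call happens only here
                dfs(v)
            elif color[v] == GRAY:
                kinds[(u, v)] = "back"       # v is a gray ancestor: a cycle exists
            elif disc[u] < disc[v]:
                kinds[(u, v)] = "forward"    # v is an already finished descendant
            else:
                kinds[(u, v)] = "cross"      # neither ancestor nor descendant
        color[u] = BLACK

    for u in graph:
        if color[u] == WHITE:
            dfs(u)
    return kinds

g = {1: [2, 3], 2: [3], 3: [1], 4: [2]}
print(classify_edges(g))
# {(1, 2): 'tree', (2, 3): 'tree', (3, 1): 'back', (1, 3): 'forward', (4, 2): 'cross'}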
{"url":"https://mymusing.co/edge-classifications-graph/","timestamp":"2024-11-03T19:21:45Z","content_type":"text/html","content_length":"162342","record_id":"<urn:uuid:c5700ced-8a4b-41b6-8c73-dee0f15b2561>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00717.warc.gz"}
Section 1: Sequences
Math Thematics 1st Ed. Book 3, Module 4 - Patterns and Discoveries, Section 1: Sequences
Lesson: Introduces students to arithmetic and geometric sequences. Students explore further through producing sequences by varying the starting number, multiplier, and add-on.
Activity: Students work step-by-step through the generation of a different Hilbert-like Curve (a fractal made from deforming a line by bending it), allowing them to explore number patterns in sequences and geometric properties of fractals.
Hilbert Curve Generator Activity: Step through the generation of a Hilbert Curve -- a fractal made from deforming a line by bending it, and explore number patterns in sequences and geometric properties of fractals.
Koch's Snowflake Activity: Step through the generation of the Koch Snowflake -- a fractal made from deforming the sides of a triangle, and explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Carpet Activity: Step through the generation of Sierpinski's Carpet -- a fractal made from subdividing a square into nine smaller squares and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.
Sierpinski's Triangle Activity: Step through the generation of Sierpinski's Triangle -- a fractal made from subdividing a triangle into four smaller triangles and cutting the middle one out. Explore number patterns in sequences and geometric properties of fractals.
Activity: Enter two complex numbers (z and c) as ordered pairs of real numbers, then click a button to iterate step by step. The iterates are graphed in the x-y plane and printed out in table form. This is an introduction to the idea of prisoners/escapees in iterated functions and the calculation of fractal Julia sets.
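As a quick illustration of the "starting number, multiplier, and add-on" recipe mentioned in the lesson description (this snippet is my own, not part of the Shodor materials), the same three knobs generate both arithmetic and geometric sequences.

def sequence(start, multiplier, add_on, terms):
    values, x = [], start
    for _ in range(terms):
        values.append(x)
        x = x * multiplier + add_on      # next term = current * multiplier + add-on
    return values

print(sequence(3, 1, 4, 6))   # arithmetic: [3, 7, 11, 15, 19, 23]
print(sequence(3, 2, 0, 6))   # geometric:  [3, 6, 12, 24, 48, 96]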
{"url":"http://www.shodor.org/interactivate/textbooks/section/190/","timestamp":"2024-11-12T12:07:40Z","content_type":"application/xhtml+xml","content_length":"16270","record_id":"<urn:uuid:d3f23af0-bb95-4f0b-95e8-84fc7030f58c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00387.warc.gz"}
Lesson 2: Adding FDK Features Lesson 2: Adding FDK Features# Old tutorial: This tutorial has not yet been updated to ver. 7 of the AnyBody Modeling System. Some concepts may have changed. What we have now is a standard inverse dynamics AnyBody model capable of computing forces in a knee joint that is presumed to be a simple hinge. Real knees are unfortunately not as simple as that. Mechanically speaking, the difference between an idealized revolute knee and a real knee lies in the source of the forces which hold the joint together. An idealized knee joint does not allow any deviations from its hinge motion, like if the tibia and femur were to start sliding on one another. This is accomplished by joint reaction forces which will enforce the zero sliding constraint, regardless of how large the required forces may be. The real knee does not work like that. The cartilage cushioning the contact between the femoral condyles and the tibial plateau is elastic, and so are the ligaments and menisci stabilizing the knee against free sliding. Since the forces in these passive structures depend on their deformation, zero deformation implies the absence of any stabilizing force. In other words, the knee MUST deform a little to invoke these stabilizing forces and retain its integrity. Let us begin the steps that will allow AnyBody to compute this deformation. We zoom in on the definition of the knee joint and change the definition of its reaction forces: AnyRevoluteJoint KneeJoint = { AnyRefFrame &Shank = .Shank.KneeCenter; AnyRefFrame &Thigh = .Thigh.KneeCenter; // Prepare the joint for FDK: Define the reaction types in x and y // directions to be FDK-dependent. These reaction forces must then // be switched off and provided by some elastic element that we // define explicitly below. Constraints = { CType = {ForceDep, ForceDep, Hard, Hard, Hard}; Here we redefine one of the default properties of the joint: the definition of constraints. As mentioned in “Getting Started: AnyScript Programming, Lesson 3: Connecting segments by joints”, connecting two completely independent rigid segments with a joint arrests some or all of the six degrees of relative motion freedom that existed between the two. In this manner, a revolute joint imposes five constraints of which the first three are translational constraints (preventing relative sliding) whose violation is resisted by the joint reaction forces and the latter two are rotational constraints (preventing relative, out of plane rotation) enforced by reaction moments. The shank’s KneeCenter node - which is the joint’s default coordinate system - has the y axis pointing proximally along the shank’s length axis and the x axis pointing forward. These are the two directions in which we’d like to introduce elastic stabilization of tibio-femoral translation, so the first two components of the CType vector are changed to the value ForceDep, which means that rather than being ‘Hard’ constraints, the forces are now defined by some elastic element, which we shall introduce later. We are thus switching off the usual reaction forces in those directions by setting the Reaction.Type vector. Now let us add the necessary elasticity to the joint. This can be done anywhere in the model, but we might as well place it just below the joint: // Knee joint. Notice that this is only going to be the nominal joint. // The actual position of the knee joint center will depend on the forces // acting upon it. Notice that we list the shank before the thigh. 
This // defines the knee joint in the shank coordinate system and we can // relate the reaction forces to the direction of the tibial plateau. AnyRevoluteJoint KneeJoint = { AnyRefFrame &Shank = .Shank.KneeCenter; AnyRefFrame &Thigh = .Thigh.KneeCenter; // Prepare the joint for FDK: Define the reaction types in x and y // directions to be FDK-dependent. These reaction forces must then // be switched off and provided by some elastic element that we // define explicitly below. Constraints = { CType = {ForceDep, ForceDep, Hard, Hard, Hard}; // Define springs in the knee, simulating the effect of cartilage // and ligaments. AnyForce KneeStiffness = { AnyKinLinear &lin = Main.MyModel.KneeJoint.Linear; F = {-1000*lin.Pos[0], -5000*lin.Pos[1], 0}; We are using the AnyForce class for this purpose. AnyForce in an abstract force that works on any kinematic measure we define inside it. In this case, we simply refer to the linear measure which tracks the distance between the two joint nodes on each segment. In an idealized joint, this measure will always be zero as long as AnyBody can successfully enforce all the translational constraints, however since the first two components of CType are set to ‘ForceDep’, they can now vary and become non-zero. The x corresponds to sliding of the condyle along the tibial plateau. In this direction, we can perceive the elasticity as primarily being provided by the rim of the meniscus and the cruciate The y direction is along the shank’s long axis and in this direction, the elasticity is provided by the layer of cartilage between the tibial plateau and the femoral condyles. The z axis points laterally but since we are building a planar model of the knee, we leave it to be a conventional hard constraint. It is therefore likely that the stiffness in the y direction is somewhat larger than in the x direction. We are going to define it that way and also choose values that are much smaller than in the real knee to get some nice, large deformations that are visually perceivable. So, the definition of the actual force inside the AnyForce object looks like this: F = {-1000 * lin.Pos[0], -5000 * lin.Pos[1], 0}; As you can see, we simply specify the forces in the different directions as mathematical functions of the Pos property of the lin measure. Pos contains the actual linear displacements, and when we multiply those with -1000 and -5000 respectively, we are generating spring forces that are proportional and opposite to the translational deformation of the joint. As discussed earlier, we have made the y direction stiffness five times larger than the value for the x direction. One of the beauties of the AnyScript language is that these expressions can be as complicated as you want. So if you happen to know more complex, realistic stiffness properties of the knee from a cadaver study or from a detailed finite element model, then you could just as well input those. Let’s get the final part of the definition finalized. All that is remaining is to tell the solver in AnyBody that it should apply force-dependent kinematics to solve the problem. This is of course done in the study section: AnyBodyStudy Study = { AnyFolder &Model = .MyModel; Gravity = {0.0, -9.81, 0.0}; tStart = 1; tEnd = 10; nStep = 100; That is all there is to it. The usual InverseDynamics operation will now compute elastic deformations in the knee joint resulting from the deformation of soft tissues in response to internal and external forces. Go ahead and try it out. 
If something does not work, you can download a functional model here. TROUBLESHOOTING HELP: Inverse dynamics arrives at values of the force dependent degrees of freedom (corresponding to the flexible joint constraints) where the resulting passive stabilizing forces and computed muscle forces, place those degrees of freedom in static equilibrium. This is achieved by a gradient sensing optimizer which iteratively tries out different combinations of joint deformation and muscle force magnitudes which fulfil the equilibrium and optimization criteria. It may therefore be necessary to adjust optimization settings of the AnyBodyStudy class such as “InverseDynamics.ForceDepKin.MaxNewtonStep” and “InverseDynamics.ForceDepKin.Perturbation”. For example, a large perturbation size implies a large finite-difference step for the knee translation values when the optimizer computes gradients numerically. If the knee stiffness was extremely non-linear, this gradient might not reflect the local behaviour of the functions which the optimizer is working with. When using more anatomically realistic body models containing passive spring-like ligaments, it is good to ensure that the ligaments are calibrated to ensure that their resting length isn’t too short or long. You can read more on calibration in “Muscle Modeling, Lesson 7: Ligaments” and “Inverse Dynamics of Muscle Systems, Lesson 7: Calibration”. In the next lesson we investigate the results in more detail.
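As a conceptual aside (this is a toy Python illustration of the idea, not AnyScript and not part of the tutorial), the force-dependent-kinematics step can be pictured as searching for the joint translation at which the elastic forces balance the load acting along the ForceDep directions. The stiffness values below mirror the KneeStiffness definition above, while the load vector is a made-up example.

import numpy as np

KX, KY = 1000.0, 5000.0                    # spring stiffnesses, as in the AnyForce above
load = np.array([30.0, -400.0])            # hypothetical A-P and axial joint load in N (assumed)

def residual(pos):
    spring = np.array([-KX * pos[0], -KY * pos[1]])
    return spring + load                   # equilibrium when this residual is ~0

pos = np.zeros(2)                          # start from the undeformed joint position
J = np.diag([-KX, -KY])                    # Jacobian of the spring force (analytic here)
for _ in range(20):                        # Newton-style iterations, like the FDK solver
    pos = pos - np.linalg.solve(J, residual(pos))

print(pos)                                 # [ 0.03 -0.08 ]: the deformed knee translation in m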
{"url":"https://anyscript.org/tutorials/ForceDependentKinematics/lesson2.html","timestamp":"2024-11-13T11:46:34Z","content_type":"text/html","content_length":"45218","record_id":"<urn:uuid:7c94ebbf-9e94-4bc5-ad11-f72b2bc5b9d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00816.warc.gz"}
seminars - Demazure modules and $0$-Hecke modules of type $A$
In this talk, I will explain the construction of the $0$-Hecke module arising from a certain tableau that comes from the Demazure module of type $A$. More precisely, I first give Mason's tableau model of Demazure atoms as a component of the decomposition of the Demazure module. Then, a basis of the ring of quasisymmetric functions, called quasisymmetric Schur functions, defined by using this model, will be presented. Finally, I will describe the action of the $0$-Hecke algebra due to Tewari and van Willigenburg, and describe the $0$-Hecke modules whose quasisymmetric characteristics are quasisymmetric Schur functions.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=Time&order_type=desc&l=en&page=41&document_srl=815337","timestamp":"2024-11-09T04:28:04Z","content_type":"text/html","content_length":"46074","record_id":"<urn:uuid:bbecc517-b09c-4eda-bee0-b607e75486f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00361.warc.gz"}
If Average Total Cost Is Increasing As Output Increases, A. Average Fixed Costs Will Also Be Increasing. B. Average Variable Costs Must Be Decreasing. C. Marginal Cost Must Be Greater Than Average Total Cost. D. All Of The Above Are True?
2 Answers
The only answer that must be true is C: Marginal cost must be greater than average total cost. The only way to make an average go up by adding numbers is to add numbers that are higher than the average. Each additional unit's cost (marginal cost) must be greater than the average total cost for the average total cost to increase by producing more.
Example
I have produced 10 units at an average total cost of $5 per unit. My total cost must be 10*$5 = $50.
If my marginal cost is now $6 (higher than the average), then my total cost for 11 units will be $50 + $6 = $56
And my new average total cost will be $56/11 = $5.09 (higher than before)
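A quick numeric restatement of the worked example (added for illustration; the numbers are the example's own):

units, total_cost = 10, 50.0            # ATC = $5 per unit
marginal_cost = 6.0                     # greater than the current average
total_cost += marginal_cost
units += 1
print(round(total_cost / units, 2))     # 5.09 -> the average total cost rises, as claimed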
{"url":"https://education.blurtit.com/1208895/if-average-total-cost-is-increasing-as-output-increases-a-average-fixed-costs-will-also","timestamp":"2024-11-11T10:50:50Z","content_type":"text/html","content_length":"57477","record_id":"<urn:uuid:dc275611-321a-43f3-84e4-d5998c50fa66>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00805.warc.gz"}
Ancient History Part 4: Hair, Blightmare Ancient History Part 4: Hair Chris - Tuesday, February 11th, 2020 Comments Off on Ancient History Part 4: Hair Hello once again! Welcome back to the dev blog of Blightmare. Today we’re going to do a little bit of a deeper dive into one specific feature of Blissa: her hair. We have tried a couple of different approaches to animation of Blissa’s hair, but so far we haven’t settled on the final version. In this post I will present two of those methods: authored animations and physics based. The current state of hair is actually worse than either of these attempts because we decided to focus on the level design and mechanic tuning rather than the visual effects and somewhere along the way the hair simulation broke down. Eventually we will shift focus back on the visuals and I’ll be returning to this area which will be really exciting. I think the primary reason I got into games is the visual aspect and I always enjoy fiddling around with different visual effects. Let’s jump right in. Authored Animations The simplest way to do hair as a programmer is to not do it at all. In this case, we rely entirely on our artists to make something that looks good and bake the effects into the regular character animations. This works just like the rest of the character animations that we discussed last week. Often there’s not even extra states to add to the state machine so this can just be dropped in without changing the code. I slowed down the video below to half speed so that you can see the hair animation a little better. It’s pretty subtle, but you can see a wind effect when Blissa is When Blissa jumps, you can see the hair react, but when she falls off the edge at the end there isn’t quite the same reaction. This illustrates one of the big issues with relying on direct animation. It can be very difficult for an artist to anticipate all the things that the character model might be doing especially as new mechanics are being added. Even when the set of motions can be predicted, there’s an explosion of combinations that can happen when the various mechanics can interact with each other. For example, we have a “hang” pose for when Blissa is hanging from a cocoon. Her hair is still here of course because she isn’t moving. However, sometimes the thing she is hanging from is moving, like in the case of a caterpillar on the wall. This would mean that we would need multiple hanging animations. Additionally, the direction that a caterpillar is moving should affect the direction her hair moves, so we need many animations just for hanging and some complex logic now to figure out which one to play and how to blend between them. Yuck. On a team as small as this one, we don’t have the time to be building so many animations. Additionally, it just makes everything more complicated: the amount of assets, the animation names, the animation code, etc. Maintaining such a system eventually becomes a significant burden on multiple team members which means we have less time to spend doing other things. Can we do better? Procedural Animations Of course we can! I wouldn’t have picked this topic to post about if we weren’t going to get to a solution where the programmers save the day. Let’s think for a second about what the problem that we have is. What we want is for Blissa’s hair to react in a natural way to her movements. Furthermore, we want this to happen every time she moves, and we don’t want to have to tell the system that a movement happened. 
It should “Just Work.” Generating an animation at runtime is called “procedural animation” and it’s pretty common even in games that aren’t doing lots of things procedurally like Minecraft or No Man’s Sky. Our task is to build a system that we can hook into the character animation to simulate hair. It turns out that hair is a really complex problem to solve correctly because of a number of issues like self-collision – you don’t want hair strands to intersect each other – and the sheer number of strands of hair to make a nice dense visual effect. For our purposes then, we will simplify things greatly and rely on our artists to help fill in the gaps. In particular, we’re only going to simulate a couple of regions of hair so that we can get the high impact effects from starting/stopping running or jumping and falling. In order to do this, we’re going to turn to a little physics for help. These are some of the conditions that we want to be able to handle. The hair sections should be in some base position when Blissa is completely idle, they should move up and away from the direction she is moving in, but more when she’s going faster, and if there’s an abrupt stop, there should be some sway before coming back to idle. It turns out that this is exactly what might happen if you attached a slinky to the back of your head and then ran around. The term for such a system is “mass and spring” and that’s what we’re going to try to simulate here. Our case is a little more complex though because we have constraints to where our mass – or the end of the hair – can go. We don’t want really stretchy hair because that would look silly, and we don’t want the hair to fall straight down when we’re stopped. Additionally, the strand needs to stay connected to Blissa’s head at all times. Now that we have a bit of an understanding of the problem, lets figure out how to build something that implements it. The first thing to try is what is built-in to Unity itself. There’s a complete 2D physics system with various constraints and rigid bodies. I took a night and tried to build a really simple test configuration to see if I could get something that looked like it could work. I ended up being really frustrated and didn’t get anywhere close to what I wanted. Being a programmer, I decided to take matters into my own fingers. Implementing a physics simulation seems like it should be rather easy initially. We know the basic equation of motion: Where t is time, a is acceleration, v is velocity, and p0 is the starting position. Implementing this into a game requires breaking it down into individual time steps – these are our frames – and pretending that nothing happens in between these points. This process is called discretization, and it’s the most common way to use computers to simulate physics. The obvious way to implement this is the direct way. Store a velocity and acceleration and then calculate a new position directly given some time step. The problem with this is that it introduces pretty significant error over time. I won’t go into the math here, but this implementation follows what is called the forward Euler method which is a first-order approximation and therefore has error proportional to the step size. This means that we can reduce the error by taking smaller steps, but that means we have to run the simulation more times for the same amount of simulation time and that doesn’t scale very well. We don’t want to do that, so we will use a slightly better method. 
This method is called Verlet Integration and it has second-order error, meaning the error is proportional to the square of the step size. The chart below shows what this means in practice. Red is the correct trajectory, Green is the approximation using Verlet, and Blue is the approximation using Forward Euler. As you can see, we’re going to be a lot closer this way. Let’s take a closer look at what Verlet Integration actually is. The formulation that we’re going to use doesn’t actually store the velocity directly at all! Instead, we store the current position and the previous position only. The velocity is then inferred by the distance between these locations divided by time. In programmer art math this looks like: And that’s it! We don’t need to worry about storing (or maintaining) any velocity term which is actually pretty handy by itself, just the previous position along with the current one. Both methods need the acceleration term to be handled roughly the same, so we’ll ignore that for now. Okay, now that we can model the movement we have to take a look at the constraints we talked about earlier. This is where the Verlet formulation really shines for us. To see this, we first need to consider what a constraint actually is. For our purposes, we want to keep the length of a piece of hair about the same, and we want it to stay attached to Blissa’s head. Lastly, we want it to return to some initial shape. All of these can be thought of as specifying a distance that we want to maintain between pairs of points. For the length, this is easy to see because we’re already talking about the length in the constraint! Keeping something attached is just a distance of zero, and the shape case is slightly more complex, but it can be modeled by introducing anchor points that have distance constraints with the simulation points. The next step then is to figure out how we would satisfy these constraints in our simulation. We can easily get the distance between two points with the Pythagorean Theorem, but that only tells us which points are violating their constraints. We want to just move the points then to make them satisfy their constraints. For simplicity, we will always move points along the line that the constraint forms. This is pretty easy so far. But now what do we do in our physics simulation to make this happen? Nothing extra! That’s the beauty of this equation. By moving the point’s current position, we automatically correct the velocity in the simulation to account for the constraint taking effect. But what if we have lots of constraints? Maybe solving one violates another? This happens all the time actually, and it’s really difficult to solve all of them at once. This problem is actually a system of equations with one equation for each constraint. As we remember from algebra class, it’s really annoying to try to solve more than a couple of these equations at the same time. This is also true when dealing with computer simulations if we try to solve the equations directly. Instead we can pretty easily take an iterative approach: solve each equation independently and then do it again and again until we aren’t moving the points very much on average. The number of times we iterate this system can be thought of as a convergence factor – the more times we iterate, the closer we will be to an actual solution. Being at the solution means that the points will be essentially at their rest position. So here’s the last trick: we do a single iteration each frame only. 
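To make the scheme concrete, here is a compact, runnable Python sketch of the idea (my own illustration; Blightmare itself runs on Unity and this is not the game's actual code): a Verlet position update followed by a single pass of distance-constraint relaxation per frame, exactly the trick described above.

class Point:
    def __init__(self, x, y, pinned=False):
        self.pos = [x, y]
        self.prev = [x, y]
        self.pinned = pinned                         # e.g. the point attached to Blissa's head

def verlet_step(points, accel, dt):
    for p in points:
        if p.pinned:
            continue
        for i in range(2):
            vel = p.pos[i] - p.prev[i]               # velocity is implied, never stored
            p.prev[i] = p.pos[i]
            p.pos[i] = p.pos[i] + vel + accel[i] * dt * dt

def relax_constraints(constraints):
    # one iteration per frame, as in the post; more iterations would converge harder
    for a, b, rest_length in constraints:
        dx = b.pos[0] - a.pos[0]
        dy = b.pos[1] - a.pos[1]
        dist = (dx * dx + dy * dy) ** 0.5 or 1e-9
        correction = (dist - rest_length) / dist * 0.5
        for p, sign in ((a, 1.0), (b, -1.0)):
            if not p.pinned:
                p.pos[0] += sign * dx * correction   # move each free point along the constraint line
                p.pos[1] += sign * dy * correction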
This means that for many frames, the points will not be in their constrained positions, but that’s actually okay! Solving the system over many frames actually gives us pretty nice visual results because the points appear to fall into place. Enough words, let’s see this in action. This is the most basic test setup that I had when trying to build out the system for Blissa. You can see here the constraints (in green). There’s distance and angle constraints in this video because I thought I needed both. It took quite some time for me to realize that this isn’t the case, and it greatly simplifies the code to not have to try to solve angular constraints, but it’s fun to do the So now that we have something working, let’s see what happens when we integrate it with Blissa. Note that in this next video, the manual animations are in play, not the procedural ones. The white balls are the setup that is being driven by the physics simulation. Now we shall see what it looks like when we override the manual animations with our procedural ones. It’s pretty easy to see how much tuning is required. Here’s a tuning pass. Oops. Yet another tuning pass. Maybe this is progress? Things are settling down a bit now. Getting pretty close maybe? In the end this system got rebuilt for one of the demos that we had. It was during that time that I realized there were some bugs in the implementation shown in these videos that made it harder to tune than it should have been. Such is the life of programming. Here’s a sneak peek ahead at what it looked like while I was tuning the fixed version – including some of the helpful debug drawing I had to add. Whew. That was a long one. Thanks for sticking with me all the way! As always, if you like what you see here or are interested in the game, Wishlist Blightmare on Steam and follow us on Twitter to stay up to date!
{"url":"https://blightmare.com/2020/02/11/ancient-history-hair/","timestamp":"2024-11-11T09:35:40Z","content_type":"text/html","content_length":"58837","record_id":"<urn:uuid:e84a12a8-3527-45ba-9437-50b85d2f103b>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00643.warc.gz"}
Study Model assumptions Screening Screening data used Underlying incidence of cancer Assumed Lead time Confidence Test sensitivity (estimated Tumor regression modeled round for estimation (estimated within the model, distribution(s) of modeled (yes, interval or within the model, assumed (yes, no), estimates Author, year (first and/ (screen-detected and observed from the control group, the preclinical no) and its standard error 100%, observed from the corrected for length time or /or interval observed from registry data or not detectable phase assumed around estimate literature, or not bias and/or overdiagnosis subsequent) cancers) included) duration distribution (yes, no) included) Prevalence to incidence Hutchinson, 1968 [13] First Screen-detected Observed from the control group Constant Yes, constant No Assumed 100% No, no cancer data Zelen, 1969 [6] First Screen-detected Observed from the control group Exponential Yes, No Assumed 100% Yes, length time cancer data exponential correction Shapiro, 1974 [14] First Screen-detected and Observed from the control group Exponential No, NA No Assumed 100% No, no interval cancer data (corrected for self-selection) Albert, 1978 [15,16] First Screen-detected Observed from the control group Exponential, gamma Yes, No Not included Yes, length time and cancer data exponential overdiagnosis correction Louis, 1978 [17] Launoy, 1997 [18]^1 First Screen-detected and Observed from registry data Not reported No, NA Yes Estimated within the model No, no interval cancer Brenner, 2011 [19] First Screen-detected Observed from registry data Exponential No, NA Yes Assumed 100% No, no cancer data Maximum likelihood Walter, 1983 [20] First and Screen-detected and Observed from the control group Exponential, Yes, NA Yes Estimated within the model No, length time subsequent interval cancer data log-normal, step correction Day, 1984 [7] First and Screen-detected and Observed from the control group Exponential Yes, Yes Estimated within the model No, length time subsequent interval cancer data exponential correction Brookmeyer, 1986 [24] First and Screen-detected and Not included (canceled out the Exponential, No, NA Yes Estimated within the model No, no subsequent interval cancer data model) piecewise Brookmeyer, 1987 [25] First and Screen-detected and Not included (canceled out the Exponential No, NA Yes Estimated within the model Yes, overdiagnosis subsequent interval cancer data model) correction Alexander, 1989 [21] First and Screen-detected and Observed from the control group Exponential Yes, No Estimated within the model No, no subsequent interval cancer data (corrected for self-selection) exponential Launoy, 1997 [18]^1 First Screen-detected and Observed from registry data Not reported No, NA Yes Estimated within the model No, no interval cancer data Straatman, 1997 [26] First and Screen-detected Estimated within the model Exponential Yes, No Estimated within the model No, no subsequent cancer data exponential Shen, 1999 [27] First and Screen-detected and Estimated within the model Exponential No, NA Yes Estimated within the model No, no subsequent interval cancer data Pinsky, 2001 [28] First Screen-detected and Observed from registry data Exponential, gamma Yes, not Yes Estimated within the model No, length time and interval cancer data & Weibull reported overdiagnosis correction Hsieh, 2002 [29] First and Screen-detected Estimated within the model Weibull and No, NA Yes Assumed 100% No, no subsequent cancer data piecewise Pinsky, 2004 [22] First and 
Screen-detected and Estimated within the model Exponential, Yes, not Yes Estimated within the model No, overdiagnosis subsequent interval cancer data Weibull reported correction Shen, 2005 [23] First and Screen-detected and Observed from the control group Piecewise-constant No, NA No Estimated within the model No, no subsequent interval cancer data Wu, 2005 [30]1 First and Screen-detected and Estimated within the model Log-logistic No, NA No Estimated within the model No, no subsequent interval cancer data Cong, 2005 [31] First and Screen-detected and Estimated within the model Exponential No, NA Yes Estimated within the model No, no subsequent interval cancer data Jiang, 2016 [12] First and Screen-detected and Not included (assumed constant) Exponential No, NA Yes Estimated within the model No, no subsequent interval cancer data Shen, 2019 [32] First and Screen-detected and Estimated within the model Exponential No, NA Yes Observed from literature Yes, no subsequent interval cancer data Etzioni, 1997 [33] First and Screen-detected and Observed from the control group NA No, NA No Estimated within the model No, no subsequent interval cancer data Regression of observed on expected Paci, 1991 [8] First and Interval cancer data Estimated within the model Exponential Yes, Yes Estimated within the model No, no subsequent exponential Duffy, 1995 [11] First and Interval cancer data Estimated within the model Exponential No, NA Yes Assumed 100% No, no Chen, 1996 [34] First and Screen-detected and Observed from the control group Exponential No, NA Yes Estimated within the model No, no subsequent interval cancer data Chen, 1997 [35] First and Screen-detected and Observed from the control group Exponential No, NA Yes Assumed 100% or observed No, no subsequent interval cancer data from the literature Duffy, 1997 [36] First and Screen-detected and Estimated within the model Exponential No, NA Yes Estimated within the model No, no subsequent interval cancer data Chen, 2000 [37] First and Screen-detected Estimated within the model Not reported No, NA Yes Assumed 100% or estimated No, no subsequent cancer data within the model Bayesian Markov-chain Monte Carlo simulation Launoy, 1997 [18]^1 First Screen-detected and Observed from registry data Not reported No, NA Yes Estimated within the model No, no interval cancer data Myles, 2003 [39] First and Screen-detected and Observed from the control group Poisson No, NA Yes Estimated within the model No, no subsequent interval cancer data Wu, 2005 [30]1 First and Screen-detected and Estimated within the model Non-parametric No, NA Yes Estimated within the model No, no subsequent interval cancer data Kim, 2015 [40] First and Screen-detected Estimated within the model Log-logistic No, NA Yes Estimated within the model No, no subsequent interval cancer data Shen, 2017 [38] First and Screen-detected and Estimated within the model Exponential No, NA Yes Assumed 100% or estimated Yes, overdiagnosis subsequent interval cancer data within the model correction
{"url":"https://e-epih.org/journal/view.php?number=1256","timestamp":"2024-11-04T01:03:14Z","content_type":"application/xhtml+xml","content_length":"238071","record_id":"<urn:uuid:673c7869-caf6-4c95-b3b6-93267b0e971b>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00149.warc.gz"}
Challenge: Euclidean Algorithm
Calculate the greatest common divisor using the Euclidean algorithm.

Euclidean algorithm
The Euclidean algorithm is a technique used to compute the greatest common divisor (GCD) of two numbers, i.e., the largest number that divides both of them without leaving a remainder.

Problem statement
Given two integers, x and y, calculate the largest number that divides both of them without leaving a remainder.

Input: two integers, x and y.
Output: an integer that is the GCD of x and y.

Sample input
x = 1071; y = 462;

Sample output
result = 21;

Coding challenge
First, take a close look at this problem and design a step-by-step algorithm before jumping to the implementation. This problem is designed for your practice, so try to solve it on your own first. If you get stuck, you can always refer to the solution provided in the solution section. Good luck!
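The course's official solution (written in C#) is not reproduced here. As a rough sketch of the idea only, the algorithm can be expressed in a few lines of Python:

```python
def gcd(x: int, y: int) -> int:
    """Euclidean algorithm: repeatedly replace (x, y) with (y, x % y);
    when the remainder reaches zero, the other value is the GCD."""
    while y != 0:
        x, y = y, x % y
    return abs(x)

# Matches the sample: gcd(1071, 462) == 21
print(gcd(1071, 462))
```

The loop works because any common divisor of x and y also divides x % y, so each step preserves the GCD while shrinking the second argument.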
{"url":"https://www.educative.io/courses/algorithms-for-coding-interviews-in-csharp/challenge-euclidean-algorithm","timestamp":"2024-11-03T01:03:12Z","content_type":"text/html","content_length":"788735","record_id":"<urn:uuid:2bd6f854-bf55-4766-bfb2-5fe4f029e9b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00552.warc.gz"}
A theory and methodology to quantify knowledge | Royal Society Open Science This article proposes quantitative answers to meta-scientific questions including ‘how much knowledge is attained by a research field?’, ‘how rapidly is a field making progress?’, ‘what is the expected reproducibility of a result?’, ‘how much knowledge is lost from scientific bias and misconduct?’, ‘what do we mean by soft science?’, and ‘what demarcates a pseudoscience?’. Knowledge is suggested to be a system-specific property measured by K, a quantity determined by how much of the information contained in an explanandum is compressed by an explanans, which is composed of an information ‘input’ and a ‘theory/methodology’ conditioning factor. This approach is justified on three grounds: (i) K is derived from postulating that information is finite and knowledge is information compression; (ii) K is compatible and convertible to ordinary measures of effect size and algorithmic complexity; (iii) K is physically interpretable as a measure of entropic efficiency. Moreover, the K function has useful properties that support its potential as a measure of knowledge. Examples given to illustrate the possible uses of K include: the knowledge value of proving Fermat’s last theorem; the accuracy of measurements of the mass of the electron; the half life of predictions of solar eclipses; the usefulness of evolutionary models of reproductive skew; the significance of gender differences in personality; the sources of irreproducibility in psychology; the impact of scientific misconduct and questionable research practices; the knowledge value of astrology. Furthermore, measures derived from K may complement ordinary meta-analysis and may give rise to a universal classification of sciences and pseudosciences. Simple and memorable mathematical formulae that summarize the theory’s key results may find practical uses in meta-research, philosophy and research policy. 1. Introduction A science of science is flourishing in all disciplines and promises to boost discovery on all research fronts [1]. Commonly branded ‘meta-science’ or ‘meta-research’, this rapidly expanding literature of empirical studies, experiments, interventions and theoretical models explicitly aims to take a ‘bird’s eye view’ of science and a decidedly cross-disciplinary approach to studying the scientific method, which is dissected and experimented upon as any other topic of academic inquiry. To fully mature into an independent field, meta-research needs a fully cross-disciplinary, quantitative and operationalizable theory of scientific knowledge—a unifying paradigm that, in simple words, can help tell apart ‘good’ from ‘bad’ science. This article proposes such a meta-scientific theory and methodology. By means of analyses and practical examples, it suggests that a system-specific quantity named ‘K’ can help answer meta-scientific questions including ‘how much knowledge is attained by a research field?’, ‘how rapidly is a field making progress?’, ‘what is the expected reproducibility of a result?’, ‘how much knowledge is lost from scientific bias and misconduct?’, ‘what do we mean by soft science?’, and ‘what demarcates a pseudoscience?’. The theoretical and methodological framework proposed in this article is built upon basic notions of classic and algorithmic information theory, which have been rarely used in a meta-research context. 
The key innovation introduced is a function that, it will be argued, quantifies the essential phenomenology of knowledge, scientific or otherwise. This approach rests upon a long history of advances made in combining epistemology and information theory. The concept that scientific knowledge consists in pattern encoding can be traced back at least to the polymath and father of positive philosophy August Comte (1798–1857) [2], and the connection between knowledge and information compression ante litteram to the writings of Ernst Mach (1838–1916) and his concept of 'economy of thought' [3]. Claude Shannon's theory of communication gave a mathematical language to quantify information [4], whose applications to physical science were soon examined by Léon Brillouin (1889–1969) [5]. The independent works of Solomonoff, Kolmogorov and Chaitin gave rise to algorithmic information theory, which dispenses with the notion of probability in favour of that of complexity and compressibility of strings [6]. The notion of learning as information compression was formalized in Rissanen's minimum description length principle [7], which has fruitful and expanding applications in statistical inference and machine learning [8,9]. From a philosophical perspective, the relation between knowledge and information was explored by Fred Dretske [10], and a computational philosophy of science was elaborated by Paul Thagard [11]. To the best of the author's knowledge, however, the main ideas and formulae presented in this article were never proposed before (see Discussion for further details).

The article is organized as follows. In §2, the core mathematical approach is presented. This verges on a single equation, the K function, whose terms are described in §2.1, and whose derivation and justification are described in §2.2 by a theoretical, a statistical and a physical argument. Section 2.3 explains and discusses properties of the K function. These properties further support the claim that K is a universal quantifier of knowledge, and they lay out the bases for developing a methodology. The methodology is illustrated in §3, which offers practical examples of how the theory may help answer typical meta-research questions. These questions include: how to quantify theoretical and empirical knowledge (§3.1 and 3.2, respectively), how to quantify scientific progress within or across fields (§3.3), how to forecast reproducibility (§3.4), how to estimate the knowledge value of null and negative results (§3.5), how to compare the knowledge costs of bias, misconduct and QRP (§3.6) and how to define a 'soft' science (§3.8) and a pseudoscience (§3.7). These results are expressed in simple and memorable formulae (table 1), and are further summarized in §4, where the theory's predictions, limitations and testability are discussed. The essay's sections make cross-reference to each other but can be read in any order with little loss of comprehensibility.

Table 1.

| question | formula | interpretation | section |
| --- | --- | --- | --- |
| How much knowledge is contained in a theoretical system? | $K = h$ | Logico-deductive knowledge is a lossless compression of noise-free systems. Its value is inversely related to complexity and directly related to the extent of domain of application. | 3.1 |
| How much knowledge is contained in an empirical system? | $K = k \times h$ | Empirical knowledge is lossy compression. It is encoded in a theory/methodology whose predictions have a non-zero error. It follows that $K_{\text{empirical}} < K_{\text{theoretical}}$. | 3.2 |
| How much progress is a field making? | $m\,\Delta_X + \Delta_\tau < \dfrac{n_Y\,\Delta_k}{K}$ | Progress occurs to the extent that explanandum and/or explanatory power expand more than the explanans. This is the essence of consilience. | 3.3 |
| How reproducible is a research finding? | $K_r = K \cdot A^{-\lambda\cdot d}$ | The ratio between the K of a study and that of its replication, $K_r$, is an exponentially declining function of the distance between their systems and/or methodologies. | 3.4 |
| What is the value of a null or negative result? | $K_{\text{null}} \le h_Y \log\dfrac{|T|}{|T|-1}$ | The knowledge yielded by a single conclusive negative result is an exponentially declining function of the total number of hypotheses (theories, methods, explanations or outcomes) $|T|$ that remain untested. | 3.5 |
| What is the cost of research fabrication, falsification, bias and QRP? | $K_{\text{corr}} = K - \dfrac{h_u}{h_b}B$ | The K corrected for a questioned methodology is inversely proportional to the methodology's relative description length times the bias it generates (B). | 3.6 |
| When is a field a pseudoscience? | $K < \dfrac{h_u}{h_b}B$ | A pseudoscience results from a hyper-biased theory/methodology that produces net negative knowledge. Conversely, a science has $K > \dfrac{h_u}{h_b}B$. | 3.7 |
| What makes a science 'soft'? | $\dfrac{k_H}{k_S} > \dfrac{h_S}{h_H}$ | Compared to a harder science (H), a softer science (S) yields relatively lower knowledge at the cost of relatively more complex theories and methods. | 3.8 |

2. Analysis

2.1. The quantity of knowledge

At the core of the theory and methodology proposed, which will henceforth be called 'K-theory', is the claim that knowledge is a system-specific property measured by a quantity symbolized by a 'K' and given by the function

$$K(Y^{n_Y}; X^{n_X}, \tau) = \frac{n_Y H(Y) - n_Y H(Y|X,\tau)}{n_Y H(Y) + n_X H(X) - \log p(\tau)} \qquad (2.1)$$

in which each term represents a quantity of information. What is information? In a very general and intuitive sense, information consists in questions we do not have answers to, or, equivalently, it consists in answers to those questions. Any object or event y that has a probability p(y) carries a quantity of information equal to

$$-\log_A p(y) = \log_A \frac{1}{p(y)} \qquad (2.2)$$

which quantifies the number of questions with A possible answers that we would need to ask to determine y. The logarithm's base, A, could have any value, but we will always assume that A = 2 and therefore that information is measured in 'bits', i.e. in binary questions. Shannon's entropy is the expected value of the information in a random variable Y,

$$H(Y) = \sum_{y} p(y)\,\log \frac{1}{p(y)}$$

A sequence of events, objects or random variables, for example, a string of bits 101100011…, is of course just another object, event or random variable, and therefore is quantifiable by the same logic.

The three terms in function (2.1) are defined as follows:

- Y constitutes the explanandum, Latin for 'what is to be explained'. Examples of explananda include: response variables in regression analysis, physical properties to be measured, experimental outcomes, unknown answers to questions.
- X and τ together constitute the explanans, Latin for 'what does the explaining'. In particular,
  (a) X will be referred to as the 'input', and it will represent information acquired externally. Examples of inputs include: results of any measurement, explanatory variables in regression analysis, physical constants, arbitrary methodological decisions and all other factors that are not 'rigidly' encoded in the theory or methodology.
  (b) τ will be referred to as the 'theory' or 'methodology'. A typical τ is likely to contain both a description of the relation between Y and X, as well as a specification of all other conditions that allow the relationship between X and Y to manifest.
Examples of τ include: an algorithm to reproduce Y, a description of a physical law relating Y to X, a description of the methodology of a study or a field (i.e. description of how subjects are selected, how measurements are made, etc.). Specific examples of all of these terms will be offered repeatedly throughout the essay. Mathematically, all three terms ultimately consist of sequences, produced by random variables and therefore characterized by a specific quantity of information. In the cases most typically discussed in this essay, explanandum and input will be assumed to be sequences of lengths , respectively, resulting from a series of independent identically distributed random variables, , with discrete alphabets , probability distributions and therefore Shannon entropy ) and The object representing the theory or methodology τ will be typically more complex than Y and X, because it will consist in a sequence of independent random variables (henceforth, RVs) that have distinctive alphabets (are non-identical) and are all uniformly distributed. This sequence of RVs represents the sequence of choices that define a theory and/or methodology. Indicating with T a RV with uniform probability distribution P[T], resulting from a sequence of l RVs T[i] ∈ {T[1], T[2] … T[l]} each with a probability distribution $PTi$, we have $log⁡1pT(τ)=log⁡1Pr{T1=τ1,T2=τ2,…Tl=τl}=∑i≤llog⁡1PTi(Ti=τi) .$2.4 The alphabet of each individual RV composing τ may have size greater than or equal to 2, with equality corresponding to a binary choice. For example, let τ correspond to the description of three components of a study’s method: τ = (‘randomized’, ‘human subject’, ‘female’). In the simplest possible condition, this sequence represents a draw from three independent binary choices: 1 = ‘randomized vs not’, 2 = ‘human vs not’, 3 = ‘female vs not’. Representing each choice as a binary RV T[i], the probability of τ is Pr{T[1] = τ[1]} × Pr{T[2] = τ[2]} × Pr{T[3] = τ[3]} = 0.5^3 = 0.125 and its information content is 3 bits. Equivalent and useful formulations of equation (2.1) are in which will be referred to as the ‘effect’ component, because it embodies what is often quantified by ordinary measures of effect size (§ ), and will be referred to as the ‘hardness’ component, because it quantifies the informational costs of a methodology, which is connected to the concept of ‘soft science’, as will be explained in § 2.2. Why K is a measure of knowledge Why do we claim that equation (2.1) quantifies the essence of knowledge? This section will offer three different arguments. First, a theoretical argument, which illustrates the logic by which the K function was originally derived, i.e. following two postulates about the nature of information and knowledge. Second, a statistical argument, which illustrates how the K function includes the quantities that are typically computed in ordinary measures of effect size. Third, a physical argument, which explains how the K function, unlike ordinary measures of effect size or information compression, has a direct physical interpretation in terms of negentropic efficiency. 2.2.1. Theoretical argument: K as a measure of pattern encoding Equation (2.1) is the mathematical translation of two postulates concerning the nature of the phenomenon we call knowledge: (i) Information is finite. Whatever its ultimate nature may be, reality is knowable only to the extent that it can be represented as a set of discrete, distinguishable states. 
Although in theory the number of states could be infinite (countably infinite, that is), physical limitations ensure that the number of states that are actually represented and processed never is or can be infinite. (ii) Knowledge is information compression. Knowledge is manifested as an encoding of patterns that connect states, thereby permitting the anticipation of states not yet presented, based on states that are presented. All forms of biological adaptation consist in the encoding of patterns and regularities by means of natural selection. Human cognition and science are merely highly derived manifestations of this process. Physical, biological and philosophical arguments in support of these two postulates are offered in appendix A. The most general quantification of patterns between finite states is given by Shannon’s mutual information function in which is Shannon’s entropy (equation ( )). The mutual information function is completely free from any assumption concerning the random variables involved ( figure 1 ). In order to turn equation ( ) into an operationalizable quantity of knowledge, we formalize the following properties: (i) The pattern between Y and X is explicitly expressed by a conditioning. We therefore posit the existence of a third random variable, T, with alphabet $T={τa,τb…}$, such that H(Y, X|T) = H(Y|T) + H(Y|X, T), or H(Y, X|T) = H(Y) + H(X) if $T=∅$. Unlike Y and X, T is assumed to be uniformly distributed, and therefore the size of its alphabet is $z=|T|=2n$, where n is the minimum number of bits required to describe each τ in the set. The uniform distribution of T also implies that H(T) = −logPr{T = τ} = n. (ii) The mutual information expressing the pattern as described above is standardized (i.e. divided by the total information content of its own terms), in order to allow comparisons between different The two requirement above lead us to formulate knowledge as resulting from the contextual, system-specific connection of the quantities, defined by the following equation: in which, to simplify the notation, we will typically use ) in place of ) and ) in place of Note how, at this stage, the value computed by equation (2.10) is potentially very low, because $H(Y|X,T)=∑τi∈TP(T=τi)H(Y|X,T=τi)$ is the average value of the conditional entropy for every possible theory of description length −log p(τ). The more complex is the average $τ∈T$, the larger is the number of possible theories of equivalent description length, and therefore the smaller is the proportion of theories τ[i] that yield H(Y|X, T = τ[i]) < H(Y) (because most realizable theories are likely to be nonsensical). Knowledge is realized because, from all possible theories, only a specific theory (or possibly a subset of theories) is selected (figure 2). This selection is not merely a mathematical fiction, but is typically the result of Darwinian natural selection and/or other analogous neurological, memetic and computational processes. The details of how a τ is arrived at, however, need not concern us because, in mathematical terms, the result of a selection process is the same: the selection ‘fixes’ the random variable T in equation (2.10) on a particular realization $τ∈T$, with two consequences. On the one hand, the entropy of T goes to zero (because there is no longer any uncertainty about T), but on the other hand, the selection itself entails a non-zero amount of information. 
Since T has a uniform distribution, the information necessary to identify this realization of T is simply −logP(T = τ) = log 2^l(τ) = l(τ), which is the shortest description length of τ (e.g. the minimum number of binary questions needed to identify τ in the alphabet of T). This quantity constitutes an informational cost that needs to be computed in the standardized equation (2.10). Therefore, we get Equation ( ) is arrived at by generalizing ( ) to the case in which the knowledge encoded by is applied to multiple realizations of explanandum and/or input, which are counted by the terms, respectively. 2.2.2. Statistical argument: K as a universal measure of effect size Despite having been derived theoretically and being potentially applicable to phenomena of any kind, i.e. not merely statistical ones, equation (2.1) bears structural similarities with ordinary measures of statistical effect size. Such similarities ought not to be surprising, in retrospect. Statistical measures of effect size are intended to quantify knowledge about patterns between variables, and so K would be expected to reflect them. Indeed, structural analogies between the K function and other measures of effect size offer further support for the theoretical argument made above that K is a general quantifier of knowledge. To illustrate such similarities, it is useful to point out that the value of the K function can be approximated from the quantization of any continuous probability distribution. For information to be finite as required by the K function, the entropy of a normally distributed quantized random variable X^Δ can be approximated by $H(XΔ)=log⁡2πeσ$, in which σ is the standard deviation rescaled to a lowest decimal (for example, from σ = 0.123 to σ = 123, further details in appendix B). There is a clear structural similarity between the k component of equation (2.6) and the coefficient of determination R^2. Since the entropy of a random variable is a monotonically increasing function of the variable’s dispersion (e.g. its variance), this measure is directly related to K. For example, if Y and Y|X are continuous normally distributed RVs with variance σ[Y] and σ[Y|X], respectively, then R^2 is a function of K, in which is the total sum of squares, is the sum of squared errors, is the sample size and (·) represents an undefined function. The adjusted coefficient of determination is also directly related to = ( − 1)/( − 1). From this relation follows that multiple ordinary measures of statistical effects size used in meta-analysis are also functions of K. For example, for any two continuous random variables, R^2 = r^2, with r the correlation coefficient. And since most popular measures of effect size used in meta-analysis, including Cohen’s d and odds ratios, are approximately convertible to and from r [13], they are also convertible to K. The direct connection between K and measures of effect size like Cohen’s d implies that K is also related to the t and the F distributions, which are constructed as ratios between the amount of what is explained and what remains to be explained, and are therefore constructed similarly to an ‘odds’ transformation of K Other more general tests, such as the Chi-squared test, can be shown to be an approximation of the Kullback–Leibler distance between the probability distributions of observed and expected frequencies ]. Therefore, they are a measure of the mutual information between two random variables, i.e. the same measure on which the function is built. 
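As a rough numerical illustration of this correspondence (an added sketch, not part of the original analysis; the simulated linear model, the variable names and the quantization resolution are illustrative assumptions), the effect component k can be computed from quantized-normal entropies alongside the ordinary coefficient of determination:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian system: Y = 2*X + noise (assumed, for illustration only)
n = 100_000
x = rng.normal(0.0, 1.0, n)
y = 2.0 * x + rng.normal(0.0, 1.0, n)
resid = y - 2.0 * x                      # Y given X under the (known) linear theory tau

def h_quantized_normal(sd, resolution=0.01):
    """Entropy (bits) of a normal variable quantized at the given resolution:
    H ~ log2(sqrt(2*pi*e) * sigma / resolution)."""
    return np.log2(np.sqrt(2 * np.pi * np.e) * sd / resolution)

H_y  = h_quantized_normal(y.std())
H_yx = h_quantized_normal(resid.std())

k  = (H_y - H_yx) / H_y                  # 'effect' component of K
r2 = 1.0 - resid.var() / y.var()         # ordinary coefficient of determination

print(f"k (information) = {k:.3f}, R^2 = {r2:.3f}")
# The two quantities rise and fall together; k additionally depends on the
# chosen quantization resolution, unlike R^2.
```

The point of the sketch is only that the same residual-variance reduction captured by R² reappears, in bits, in the k component, while k also registers factors (such as resolution) that ordinary effect sizes ignore.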
Figure 3 illustrates how these are not merely structural analogies, because K can be approximately or exactly converted to ordinary measures of effect size. As the figure illustrates, K stands in one-to-one correspondence with ordinary measures of effect sizes, but its specific value is modulated by additional variables that are critical to knowledge and that are ignored by ordinary measures of effect size. Such variables include the size of the theory or methodology describing the pattern, which is always non-zero, the number of repetitions (which, depending on analyses, may correspond to the sample size or to the intended total number of uses of a τ); the resolution (e.g. accuracy of measurement, §2.3.6); distance in time and space and methods (§2.3.5) and Ockham’s razor (§2.3.1). The latter property also makes K conceptually analogous to measures of minimum description length, discussed below. Minimum description length principle. The minimum description length (MDL) principle is a formalization of the principle of inductive inference and of Ockham’s razor that has many potential applications in statistical inference, particularly with regard to the problem of model selection [8]. In its most basic formulation, the MDL principle states that the best model to explain a dataset is the one that minimizes the quantity in which ) is the description length of the hypothesis (i.e. a candidate model for the data) and ) is the description length of the data given the model. The equation has equivalent properties to equation ( ), with ) ≡ −log ) and ) ≡ ). Therefore, the values that minimize equation ( ) maximize the The reader may question why, if K is equivalent to existing statistical measures of effect size and MDL, we could not just use the latter to quantify knowledge. There are at least three reasons. The first reason is that only K is a universal measure of effect size. The quantity measured by K is completely free from any distributional assumptions about the subject matter being assessed. It can be applied not only to quantitative data with any distribution (e.g. figure 1), but also to any other explanandum that has a finite description length (although this potential application will not be examined in detail in this essay). In essence, K can be applied to anything that is quantifiable in terms of information, which means any phenomenon that is the object of cognition—any phenomenon amenable to being ‘known’. The second reason is that, as illustrated above, K takes into account factors that are overlooked by ordinary measures of effect size or model fit, and therefore is a more complete representation of knowledge phenomena (figure 3). The third reason is that, unlike any of the statistical and algorithmic approaches mentioned above, K has a straightforward physical interpretation, which is presented in the next section. 2.2.3. Physical argument: K as a measure of negentropic efficiency The physical interpretation of equation (2.1) follows from the physical interpretation of information, which was revealed by the solution to the famous paradox known as Maxwell’s Demon. In the most general formulation of this Gedankenexperiment, the demon is an organism or a machine that is able to manipulate molecules of a gas, for example, by operating a trap door, and is thus able to segregate molecules that move at higher speed from those that move at lower speed, seemingly without dissipation. 
This created a theoretical paradox as it would contradict the second law of thermodynamics, according to which no process can have as its only result the transfer of heat from a cooler to a warmer body. In one variant of this paradox, called the ‘pressure demon’, a cylinder is immersed in a heat bath and has a single ‘gas’ molecule moving randomly inside it. The demon inserts a partition right in the middle of the cylinder, thereby trapping the molecule in one half of the cylinder’s volume. It then operates a measurement to assess in which half of the cylinder the molecule is, and pushes down, with a reversible process, a piston in the half that is empty. The demon could then remove the partition, allowing the gas molecule to push the piston up, and thus extract work from the system, apparently without dissipating any energy. Objections to the paradox that involve the energetic costs of operating the machine or of measuring the position of the particle [5] were proven to be invalid, at least from a theoretical point of view [6,14]. The conclusive solution to the paradox was given in 1982 by Charles Bennett, who showed that dissipation in the process occurred as a byproduct of the demon’s need to process information [15]. In order to know which piston to lower, the demon must memorize the position of the molecule, storing one bit of information, and it must eventually re-set its memory to prepare it for the next measurement. The recording of information can occur with no dissipation, but the erasure of it is an irreversible process that will produce heat that is at least equivalent to the work extracted from the system, i.e kTln2 joules, in which k is Boltzmann’s constant. This solution to the paradox proved that information is a measurable physical quantity. Figure 4 illustrates how the K function relates to Maxwell’s pressure demon. The explanandum H(Y) (which is a shorthand for H(Y|τ), as explained previously) quantifies the entropy, i.e. the amount of uncertainty about the molecule’s position relative to the partition in the cylinder. The input H(X) is the external information obtained by a measurement. The input corresponds to the colloquial notion of ‘information’ as something that is acquired and ‘gives form’ (to subsequent choices, actions, etc.). Since this latter notion of information is a counterpart to the physical notion of information as entropy, it may be perhaps more correctly defined as negentropy [5]. The theory τ contains a description of the information-processing structure that allows the Pressure Demon to operate. The extent of this description will depend in part on how the system is defined. A minimal description will include at least an encoding of the identity relation between the state of X and that of Y, i.e. ‘X = Y’ as distinguished from its alternative, ‘X ≠ Y’. This theory requires at least a binary alphabet and therefore one bit of memory storage. A more comprehensive description will include a description of the algorithm that enables the negentropy in X to be exploited—something like ‘if X = left, press down right piston, else, press left piston’. Multiple other aspects of the system may be included in τ. The amount of information contained in the explanandum, for example, is a function of where the partition is laid down, a variable that a truly complete algorithm would need to specify. The broadest possible physical description of the pressure demon ought to encode instructions to set up the entire system, i.e. the heat bath, the partition etc. 
In other words, a complete τ contains the genetic code to reproduce pressure demons. The description length of τ will, intuitively, also depend on the language used to describe it. Moreover, some descriptions might be less succinct than others and contain redundancies, unnecessary complexities, etc. From a physical point of view, however, it is well understood that each τ would be characterized by its own specific minimum amount of information, a quantity known as Kolmogorov complexity [6]. This is defined as the shortest program that, if fed into a universal Turing machine, would output the τ and then halt. Mathematical theorems prove that this quantity cannot be computed directly—at least in the sense that one can never be sure to have found the shortest possible program. In practice, however, the Kolmogorov complexity of an object is approximated, by excess, by any information compression algorithm and is independent of the encoding language used, up to a constant. This means that, even though we cannot measure the Kolmogorov complexity in absolute terms, we can measure it rather reliably in relative terms. A τ that is more complex, and/or more redundant than another τ will necessarily have, all else being equal, a longer description Whether we take τ to represent the theoretical shortest possible description length for the demon (in which case −log p(τ) quantifies its Kolmogorov complexity), or whether we assume that it is a realistic, suboptimal description (in which case the description length −log p(τ) is best interpreted in relative terms), the K function expresses the efficiency with which the demon converts information into work. At the start of the cycle, the demon’s K is zero. After measuring the particle’s position, the demon has stored one bit of information (or less, if the partition is not placed in the middle of the cylinder, but we will here assume that it is), and has knowledge K > 0, with the magnitude of K inversely related to the description length of τ. By setting the piston and removing the partition, the demon puts its knowledge to use and extracts k ln 2 of work from it. Once the piston is fully pushed out, the demon no longer knows where the molecule is (K = 0) and yet still has one bit stored in memory, a trace of its last experience. The demon has now two possible options. First, as in Bennett’s solution to the paradox, it can simply erase that bit, re-setting X to the initial state H(X) = 0 and releasing k ln 2 in the environment. At each cycle, the negentropy is renewed via a new measurement, whereas the fixed τ component remains unaltered. Since the position of the molecule at each cycle is independent of previous positions, the total cumulative explanandum (the total entropy that the demon has reduced) grows by one bit, whereas the theory component remains unaltered. For n cycles, the total K is therefore which to the limit of infinite cycles is The value of K = 1/2 constitutes the absolute limit for knowledge that requires a direct measurement and/or a complete and direct description of the explanandum. Alternatively, the demon could keep the value of X in memory and allocate new memory space for the information to be gathered in the next cycle ([6]). As Bennett also pointed out, in practice it could not do so forever. In any physical implementation of the experiment, the demon would eventually run out of memory space and would be forced to erase some of it, releasing the entropy locked in it. 
If, ad absurdum, the demon stored an infinite amount of information, then at each cycle the input would grow by one bit yielding which to the limit of infinite cycles is again independent of . This is a further argument to illustrate how information is necessarily finite, as we postulated (§ , see also § for another mathematical argument and appendix A for philosophical and scientific arguments). More realistically, we can imagine that the number of physical bits available to the demon is finite. As cycles progress, the demon could try to allocate as many resources as possible to the memory X , for example, by reducing the space occupied by τ. This is why knowledge entails compression and pattern encoding (see also §2.3.1). Elaborations on the pressure demon experiment shed further light on the meaning of K and its implications for knowledge. First, let us imagine that the movement of the gas molecule is not actually random, but that, acted upon by some external force, the molecule periodically and regularly finds itself alternatively on the right and left side of the cylinder, and expands from there. If the demon kept a sufficiently long record of past measurements, say a number z of bits, it might be able to discover the pattern. Its τ could then store a new, slightly expanded algorithm, such as ‘if last position was left, new position is right, else, new position is left’. With this new theory, and one bit of input to determine the initial position of the molecule, the demon could extract unlimited amounts of energy from the heat bath. In this case, which to the limit of infinite cycles is Therefore, the maximum amount of knowledge expressed in a system asymptotically approaches 1. As we would expect, it is higher than the maximum value of 1/2 attained by mere descriptions. Note, however, that can never actually be equal to 1, since is never actually infinite and cannot be 0. Intermediate cases are also easy to imagine, in which the behaviour of the molecule is predictable only for a limited number of cycles, say c. In such case, K would increase as the number of necessary measurements n[X] is reduced to n[X]/c. At any rate, this example illustrated how the demon’s ability to implement knowledge (in order to extract work, create order, etc.) is determined by the presence of regularities in the explanandum as well as the efficiency with which the demon can identify and encode patterns. Since this ability is higher when the explanans is minimized, the demon (the τ) is selected to be as ‘intelligent’ and ‘informed’ as possible. As a final case, let us imagine instead that the gas molecule moves at random and that its position is measurable only to limited accuracy. A single measurement yields the position of the molecule with an error η. However, each additional measurement reduces η by a fraction a. The demon, in this case, could benefit from increasing the number of measurements. Indicating with m the number of measurements and with τ[m] the corresponding theory we have that to the limit of infinite cycles is The work extracted at each cycle will be k ln 2 (1 − η × a^−m). Therefore, K expresses the efficiency with which work can be extracted from a system, given a certain error rate a and number of measurements m. 2.3. Properties of knowledge This section will illustrate how K possesses properties that a measure of knowledge would be expected to possess. In addition to offering support for the three arguments given above, these properties underlie some of the results presented in §3. 
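Before turning to these properties, the cycle limits derived in §2.2.3 can be checked with a toy computation. The sketch below is an added illustration only; the description length chosen for τ is an arbitrary assumption:

```python
import numpy as np

tau_bits = 16          # assumed description length of the demon's algorithm (arbitrary)
n = np.array([1, 10, 100, 1_000, 100_000])

# Measure-and-erase regime: one bit of explanandum and one bit of input per cycle
K_describe = n / (2 * n + tau_bits)

# Pattern-encoded regime: one bit of explanandum per cycle, a single initial input bit
K_pattern = n / (n + 1 + tau_bits)

for cycles, k1, k2 in zip(n, K_describe, K_pattern):
    print(f"n={cycles:>6}  K(measure)={k1:.3f}  K(pattern)={k2:.3f}")
# K(measure) approaches 1/2, while K(pattern) approaches (but never reaches) 1,
# matching the limits discussed for the pressure demon.
```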
2.3.1. Ockham’s razor is relative. As discussed in §2.2.2, the K function encompasses the MDL principle, and therefore computes a quantification of Ockham’s razor. However, the K formulation of Ockham’s razor highlights a property that other formulations overlook: that Ockham’s razor is relative to the size of the explanandum and the number of times a given theory or explanation can be used. For a given Y and X and two alternative theories τ and τ′ that have the same effect H(Y|X, τ) = H(Y|X, τ′) and that can be applied to a number of repetitions n[Y] and n′[Y], respectively, we have that $−log⁡p(τ′)n′Y<−log⁡p(τ)nY ⟺ K(Yn′Y;X,τ′)>K(YnY;X,τ)$2.24 and similarly for the case in which ′ while n[X] H ) ≠ [X] H $nX′H(X′)n′Y<nXH(X)nY ⟺ K(Yn′Y;XnX′,τ)>K(YnY;XnX,τ).$2.25 Therefore, the relative epistemological value of the simplicity of an explanans, i.e. Ockham’s razor, is modulated by the number of times that the explanans can be applied to the explanandum. 2.3.2. Prediction is more costly than explanation, but preferable to it. The K function can be used to quantify either explanatory or predictive efficiency. The expected (average) explanatory or predictive efficiency of an explanans with regard to an explanandum is measured when the terms of the K function are entropies, i.e. expectation values of uncertainties. If instead the explanandum is an event that has already occurred and that carries information −logP( Y = y), K quantifies the value of an explanation, whose information cost includes the surprisal of explanatory conditions −logP(X = x) and the complexity of the theory linking such conditions to the event, −logP(T = τ). Inference to the best explanation and/or model is, in both these cases, driven by the maximization of K. If instead it is the explanans, that is pre-determined and fixed, then its predictive power is quantified by how divergent its predictions are relative to observations. To any extent that observations do not match predictions, the observed and predicted distributions will have a non-zero informational divergence, which quantifies the extra amount of information that would be needed to ‘adjust’ the predictions to make them match the observations. It follows that, indicating with the tilde sign the predictive theory, we can calculate an ‘adjusted’ K as in which ) is the observed, and is the Kullback–Leibler divergence between the observed and the predicted distribution (proof in appendix C). Since , with equality corresponding to perfect fit between observations and predictions. An analogous formula could be derived for the case in which the explanandum is a sequence, in which case the distance would be calculated following methods suggested in § Now, note that the observed K is the explanatory K, and therefore is always greater or equal to the predictive K for individual observations. When evidence cumulates, then the explanans of an explanatory K is likely to expand, reducing the cumulative K (§3.3). Replacing a ‘flexible’ explanation with a fixed one avoids these latter cumulative costs, allowing a fixed explanans to be applied to a larger number of cases n[Y], with no cumulative increase in its complexity. Therefore, predictive knowledge is simply a more generalized, unchanging form of explanatory knowledge. As intuition would suggests, prediction can never yield more knowledge than a post hoc explanation for a given event (e.g. an experimental outcome). 
However, predictive knowledge becomes cumulatively more valuable to the extent that it allows to explain, with no changes, a larger number of events, backwards or forwards in time. 2.3.3. Causation entails correlation and is preferable to it Properties of the K function also suggests why the knowledge we gain from uncovering a cause–effect relation is often, but not always, more valuable than that derived from a mere correlation. Definitions of causality have a long history of subtle philosophical controversies [16], but no definition of causality can dispense with counterfactuals and/or with assuming that manipulating present causes can change future effects [17]. The difference between a mere correlation and a causal relation can be formalized as the difference between two types of conditional probabilities, P(Y = y|X = x) and P(Y = y|do(X = x)), where ‘do(X = x)’ is a shorthand for ‘X|do(X = x)’ and the ‘do’ function indicates the manipulation of a variable. In general, correlation without causation entails P(Y = y) ≤ P(Y = y|X = x) and P(Y = y) = P(Y = y|do(X = x)) whereas causation entails P(Y = y) ≤ P(Y = y|X = x) ≤ P(Y = y|do(X = x)). If knowledge is exclusively correlational, then K(Y; X = x, τ) > 0 and K(Y; do(X = x), τ) = 0, otherwise K(Y; X = x, τ) > 0 and K(Y; do(X = x), τ) > 0. Hence, all else being equal, the knowledge attainable via causation is larger under a broader set of conditions. Moreover, note that in the correlational case knowledge is only attained once an external input of information is obtained, which has an informational cost n[Y]H(X) > 0. In the causal case, instead, the input has no informational cost, i.e. H(X|do(X = x)) = 0, because there is no uncertainty about the value of X, at least to the extent that the manipulation of the variable is successful. However, the explanans is expanded by an additional τ[do(X=x)], which is the description length of the methodology to manipulate the value of X. Therefore, the value of causal knowledge is defined as $K(Y;τ,τdo(X=x)) =nYH(Y)−nYH(Y|X,τ)nYH(Y)−log⁡p(τ)−log⁡p(τdo(X=x))≡H(Y)−H(Y|X,τ)H(Y)+−log⁡p(τdo(X=x))−log⁡p(τ)nY .$2.27 It follows that there is always an $nY∗∈N$ such that $K(YnY∗;τ,τdo(X=x))>K(YnY∗;XnY∗,τ)$. Specifically, assuming τ to be constant, causal knowledge is superior to correlational knowledge when $nY∗> 2.3.4. Knowledge growth requires lossy information compression Both theoretical and physical arguments suggest that K is maximized when τ is minimized (§2.2). A simple calculation shows that such minimization must eventually consist in the encoding of concisely described patterns, even if such patterns offer an incomplete account of the explanandum, because otherwise knowledge cannot grow indefinitely. Let τ be a theory that is not encoding a relation between RVs X and Y, but merely lists all possible (x, y) pairs of elements from the respective alphabets, i.e. $x∈X$ and $y∈Y$. To take the simplest possible example, let each element $x∈X$ correspond to one element of $y∈Y$. Clearly, such τ would always yield H(Y|X, τ) = 0, but its description length will grow with the factorial of the size of the alphabet. Indicating with s the size of the two alphabets, which in our example have the same length, the size of τ would be proportional to log(s!). As the size of the alphabet grows, knowledge declines because independent of the probability distribution of . Therefore, as the explanandum is expanded (i.e. 
its total information and/or complexity grows), knowledge rapidly decreases, unless is something other than a listing of ( ) pairs. In other words, knowledge cannot grow unless consists in a relatively short description of some pattern that exploits a redundancy. The knowledge cost of a finite level of error or missing information ) > 0 will soon be preferable to an exceedingly complex 2.3.5. Decline with distance in time, space and/or explanans Everyone’s experience of the physical world suggests that our ability to predict future states of empirical phenomena tends to become less accurate the more ‘distant’ the phenomena are from us, in time or space. Perhaps less immediately obvious, the same applies to explanations: the further back we try to go in time, the harder it becomes to connect the present state of phenomena to past events. These experiences suggest that any spatio-temporal notion of ‘distance’ is closely connected to the information-theoretic notion of ‘divergence’. In other words, our perception that a distance in time or space separates us from objects or events is cognitively intertwined, if not indeed equivalent, to our diminished ability to access and process information about those objects or events and, therefore, to our knowledge about them. One of the most remarkable properties of K is that it expresses how knowledge changes with informational distances between systems. It can be shown that, under most conditions in which a system contains knowledge, divergence in any component of the system will lead to a decline of K that can be described by a simple exponential function of the form in which is an arbitrary basis, ′ is a system having an overall distance (i.e. informational divergence) , and defines the decline rate (proof in appendix D). 2.3.6. Knowledge has an optimal resolution Accuracy of measurement is a special case of the general informational concept of resolution, quantifiable as the number of bits that are available to describe explanandum and explanans. It can be shown both analytically and empirically that any system Y, X, τ is characterized by a unique optimal resolution that maximizes K (the full argument is offered in appendix E). We may start by noticing how, even if empirical data is assumed to be measurable to infinite accuracy (against one of the postulates in §2.2.1), the resulting K value will be inversely proportional to measurement accuracy, unless special conditions are met. When K is measured on a continuous, normal and quantized random variable Y^Δ (§2.2.2), to the limit of infinite accuracy only one of two values is possible, representing Shannon’s differential entropy function. The upper limit in equation ( ) occurs if and when ) > 0, i.e. by assumption there is a non-zero residual uncertainty that needs to be measured. When this is the case, then the two information terms brought about by the quantization cancel each other out in the numerator (because the explanandum and the residual error are necessarily measured at the same resolution). This is the typical case of empirical knowledge. The lower limit in equation ( ) presupposes a priori ) = 0, i.e. the explanandum is perfectly known via the explanans and there is no residual error to be quantized. This case is only represented by logico-deductive knowledge. We can define empirical systems as intermediate cases, i.e. cases that have a non-zero conditional entropy and have a finite level of resolution. 
We can show (see appendix E) that all empirical systems have ‘K-optimal’ resolutions $αY∗$ and $αX∗$, such that As the resolution increases, K will increase up to a maximal value and then decline. A system’s optimal resolution is partially determined by the shape of the relation between explanandum and explanans in ways that are likely to be system-specific. Two simulations in figure 5 illustrate how both K and H(Y)K may vary depending on resolution. The dependence of K on resolution reflects its status as a measure of entropic efficiency (§2.2.3) and entails that, to compare systems for which the explanandum is measured to different levels of accuracy, the K value needs to be rescaled. Such rescaling can be attained rather simply, by multiplying the value of K by the entropy of the corresponding explanandum, The resulting product quantifies in absolute terms how many bits are extracted from the explanandum by the explanans. 3. Results This section will illustrate, with practical examples, how the tools developed so far can be used to answer meta-scientific questions. Each of the questions is briefly introduced by a problem statement, followed by the answer, which comprises a mathematical equation, an explanation and one or more examples. Most of the examples are offered as suggestions of potential applications of the theory, and the specific results obtained should not be considered conclusive. 3.1. How much knowledge is contained in a theoretical system? Problem: Unlike empirical knowledge, which is amenable to errors that can be verified against experiences, knowledge derived from logical and deductive processes conveys absolute certainty. It might therefore seem impossible to compare the knowledge yield of two different theories, such as two mathematical theorems. The problem is made even deeper by the fact that any logico-deductive system is effectively a tautology, i.e. a system that derives its own internal truths from a set of a priori axioms. How can we quantify the knowledge contained such a system? Answer: The value of theoretical knowledge is quantified as in which corresponds to equation ( ) and to equation ( Explanation: Logico-deductive knowledge, like all other forms of knowledge, ultimately consists in the encoding of patterns. Mathematical knowledge, for example, is produced by revealing previously unnoticed logical connections between a statement with uncertainty H(Y) and another statement, which may or may not have uncertainty H(X) (depending on whether X has been proven, postulated or conjectured), via a set of passages described in a proof τ. The latter consists in the derivation of identities, creating an error-free chain of connections such that P(Y|X, τ) = 1. When the proof of the theorem is correct, the effect component k in equation (2.6), is always equal to one, yielding equation (3.1). However, when the chain of connections τ is replaced with a τ′ at a distance d[τ] > 0 from it, k is likely to be zero, because even minor modifications of τ (for example, changing a passage in the proof of a theorem) break the chain of identities and invalidate the conclusion. This is equivalent to the case λ[τ] ≈ ∞. Therefore, the reproducibility (§3.4) of mathematical knowledge, as it is embodied in a theorem, is either perfect or null, $Kr=Kif dτ=0,Kr=0otherwise.$3.2 Alternative valid proofs, however, might also occur, and their K value will be inversely proportional to their length, since a shorter proof yields a higher h. 
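As a rough illustration of this point (an added sketch; the proof lengths and usage counts below are arbitrary assumptions, not estimates from the article), the theoretical K of §3.1 can be tabulated for different proof lengths and numbers of uses, taking H(Y) = 1 bit for a binary explanandum:

```python
def K_theoretical(n_uses, proof_len_bits, H_y=1.0):
    """K for logico-deductive knowledge (section 3.1):
    K = n_Y*H(Y) / (n_Y*H(Y) + description length of the proof)."""
    return (n_uses * H_y) / (n_uses * H_y + proof_len_bits)

for proof_len in (100, 10_000, 1_000_000):     # shorter vs longer proofs (bits, assumed)
    for n_uses in (1, 1_000, 10**9):           # how often the result is invoked (assumed)
        print(f"proof={proof_len:>9} bits  uses={n_uses:>12}  "
              f"K={K_theoretical(n_uses, proof_len):.6f}")
# A shorter proof and a larger number of uses both push K towards 1:
# the value of a theorem rises with use and falls with complexity.
```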
Once a theorem is proven, its application will usually not require invoking the entire proof τ. In K, we can formalize this fact by letting τ be replaced by a single symbol encoding the nature of the relationship itself. The entropy of τ will in this case be minimized to that of a small set of symbols, e.g. {=, ≠, >, < · · ·}. In such case, the value of the knowledge obtained will be primarily determined by n[Y], which is the number of times that the theorem will be invoked and used. This leads to the general conclusion that the value of a theory is inversely related to its complexity and directly related to the frequency of its use. 3.1.1. Example: The proof of Fermat’s last theorem Fermat’s last theorem (henceforth, FLT) states that there is no solution to the equation a^n + b^n = c^n when all terms are positive integers and n > 2. The French mathematician Pierre de Fermat (1607–1665) claimed to have proven such statement, but his proof was never found. In 1995, Andrew Wiles published a proof of FLT, winning a challenge that had engaged mathematicians for three centuries [19]. How valuable was Wiles’ contribution? We can describe the explanandum of FLT as a binary question: ‘does a^n + b^n = c^n have a solution’? In absence of any proof τ, the answer can only be obtained by calculating the result for any given set of integers [a, b, c, n]. Let n[Y] be the total plausible number of times that this result could be calculated. Of course, we cannot estimate this number exactly, but we are assured that this number is an integer (because a calculation is either made or not), and that it is finite (because the number of individuals, human or otherwise, who have, will, or might do calculations is finite). Therefore, the explanandum is n[Y]H(Y). For simplicity, we might assume that in absence of any proof, individuals making the calculations are genuinely agnostic about the result, such that H(Y) = 1. Indicating with τ the maximally succinct (i.e. maximally compressed) description of this proof, the knowledge yielded by it is $K(YnY;τ)=nYH(Y)nYH(Y)−log⁡p(τ)≡11−log⁡p(τ)nY .$3.3 Here we assume that any input is contained in the proof τ. The information size of the latter is certainly calculable in principle, since, in its most complete form, it will consist in an algorithm that derives the result from a small set of axioms and operations. Wiles’ proof of FLT is over 100 pages long and is based on highly advanced mathematical concepts that were unknown in Fermat’s times. This suggests that Fermat’s proof (assuming that it existed and was correct) was considerably simpler and shorter than Wiles’. Mathematicians are now engaged in the challenge of discovering such a simple proof. How would a new, simpler proof compare to the one given by Wiles? Indicating this simpler proof with τ′ and ignoring n[Y] because it is constant and independent of the proof, the maximal gain in knowledge is Equation ( ) reflects the maximal gain in knowledge obtained by devising a simpler, shorter proof of a previously proven theorem. Given two theorems addressing different questions, in the more general case, the difference in knowledge yield will depend on the lengths of the respective proofs as well as the number of computations that each theorem allows to be spared. The general formula is, indicating with Y′ and τ′ an explanandum and explanans different from Y and τ, 3.2. How much knowledge is contained in an empirical system? Problem: Science is at once a unitary phenomenon and highly diversified and complex one. 
It is unitary in its fundamental objectives and in general aspects of its procedures, but it takes a myriad different forms when it is realized in individual research fields, whose diversity of theories, methodologies, practices, sociologies and histories mirrors that of the phenomena being investigated. How can we compare the knowledge obtained in different fields, about different subject matters? Answer: The knowledge produced by a study, a research field, and generally a methodology is quantified as in which is given by equation ( by equation ( ) and by equation ( Explanation: Knowledge entails a reduction of uncertainty, attained by the processing of stored information by means of an encoded procedure (an algorithm, a ‘theory’, a ‘methodology’). Equation (3.6 ) quantifies the efficiency with which uncertainty is reduced. This is a scale-free, system-specific property. The system is uniquely defined by a combination of explanandum, explanans and theory, the information content of which is subject to physical constraints. Such physical constraints ensure that, among other properties, every system Y, X, τ has an optimal resolution, non-zero and non-infinite, and therefore a unique identifiable value K (§2.3.6). As discussed in §2.3.6, this quantity can also be rescaled to K × H(Y), which gives the total net number of bits that are extracted from the explanandum by the explanans. Since k ≤ 1, theoretical knowledge is typically, although not necessarily always, larger than empirical knowledge. Equation (3.6) applies to descriptive knowledge as well as correlational or causal knowledge, as examples below illustrate. 3.2.1. Example 1: The mass of the electron Decades of progressively accurate measurements have led to a current estimate of the mass of the electron of m[e] = 9.10938356 ± 11 × 10^−31 kg (based on the NIST recommended value [20]), with the error term representing the standard deviation of normally distributed errors. Since this is a fixed number of 39 significant digits, the explanandum is quantified by the amount of storage required to encode it, i.e. a string of information content −logP(Y = y) = 39 × log(10), and the residual uncertainty is quantified by the entropy of the normal distribution of errors with σ = 11. These measurements are obtained by complex methodologies that are in principle quantifiable as a string of inputs and algorithms, −log p(x) −log p(τ). However, the case of physical constants is similar to that of a mathematical theorem, in that the explanans becomes negligible to the extent that the value obtained can be used in a very large number of subsequent applications. Therefore, we estimate our current knowledge of the mass of the electron to be with the last approximation due to the case that the value can be stored and used for a very large times, yielding ≈ 1. More accurate calculations would require estimating the component, too. In particular, to compare ) to the value of another constant, the relative frequency of use would need to be taken into account. The corresponding rescaled value is ) × 39log 10 ≈ 124 bits. Note that the specific value of K depends on the scale or unit in which m[e] is measured. If it is measured in grams (10^−3 kg), for example, then K(m[e]) = 0.954. This reflects the fact that units of measurement are just another definable component of the system: there is no ‘absolute’ value of K, but solely one that is relative to how the system is defined. 
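A quick back-of-the-envelope computation of this example (an added sketch that simply follows the quantities given above) reproduces the reported values:

```python
import math

def H_normal_bits(sigma):
    """Entropy (bits) of a normal error distribution with rescaled sigma."""
    return math.log2(math.sqrt(2 * math.pi * math.e) * sigma)

sigma = 11                              # standard deviation of the last digits
for digits, unit in ((39, "kg"), (36, "g")):
    H_y = digits * math.log2(10)        # information content of the explanandum
    K = (H_y - H_normal_bits(sigma)) / H_y
    print(f"{unit}: K = {K:.3f}, rescaled K*H(Y) = {K * H_y:.1f} bits")
# kg: K is about 0.957 with a rescaled value of about 124 bits (as reported above);
# g:  K is about 0.954 (the grams figure quoted in the text).
```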
The relativity of K may lead to difficulties when comparing systems that are widely different from each other (§3.8). However, results obtained comparing systems that are adequately similar to each other are coherent and consistent, as illustrated in the next paragraph. We could be tempted to ‘cheat’ by rescaling the value of m[e] to a lower number of digits, in order to ignore the current measurement error. For example, we could quantify knowledge for the mass measured to 36 significant digits only (which is likely to cover over three standard deviations of errors, and therefore over $99%$ of possible values). By doing so, we would obtain K(m[e]) ≈ 1, suggesting that at that level of accuracy, we have virtually perfect knowledge of the mass of the electron. This is indeed the case: we have virtually no uncertainty about the value of m[e] in the first few dozen significant digits. However, note that the rescaled value of K is K(m[e]) × 36 log10 = 119.6 bits. Therefore, by lowering the resolution, our knowledge increased in relative but not in absolute terms. It should be emphasized that we are measuring here the knowledge value of the mass of the electron in the narrowest possible sense, i.e. by restricting the system to the mass itself. However, the knowledge we derive by measuring (describing) phenomena such as a physical constant has value also in a broader context, in its role as an input required to know other phenomena, as the next example 3.2.2. Example 2: Predicting an eclipse The total solar eclipse that occurred in North America on 21 August 2017 (henceforth, E[2017]) was predicted with a spatial accuracy of 1–3 km, at least in publicly accessible calculations [21]. This error is mainly due to irregularities in the Moon’s surface and, to a lesser extent, to irregularities of the shape of the Earth. Both sources of error can be reduced further with additional information and calculations (and thus a longer explanans), but we will limit our analysis to this estimate and therefore assume an average prediction error of 4 km^2. What is the value of the explanans for this knowledge? The theory component of the explanans consists in calculations based on the JPL DE405 solar system ephemeris, obtained via numerical integration of 33 equations of motion, derived from a total of 21 computations [22]. In the words of the authors, these equations are deemed to be ‘correct and complete to the level of accuracy of the observational data’ [22], which means that this τ can be used for an indefinite number n[Y] of computations, suggesting that we can assume −logp(τ)/n[Y] ≈ 0. The input is in this case a defined object of information content H(X) = −logp(x). It contains 98 values of initial conditions, physical constants and parameters, measured to up to 20 significant digits, plus 21 auxiliary constants used to correct previous data, and the radii of 297 asteroids [22]. Assuming for simplicity that on average these inputs take five digits, we estimate the total information of the input to be at least (98 + 21 + 297) × 5 × log10 ≈ 6910 bits. The accuracy of predictions is primarily determined by the accuracy of measurement of these parameters, which moreover are in many cases subject to revision. Therefore, in this case n[X]/n[Y] > 0, and the value of H(X) is less appropriately neglected. Nonetheless, we will again assume for simplicity that n[Y] ≫ n[X] and thus h ≈ 1. 
Therefore, since the surface of the Earth is approximately 510 072 000 km^2, we estimate our astronomical knowledge to be

$K(E_{2017}; X, \tau) \approx \frac{\log(510\,072\,000) - \log(4)}{\log(510\,072\,000)} = 0.931$ (3.8)

and a rescaled value of K(E[2017]; X, τ) × log(510 072 000) ≈ 26.9 bits. Therefore, the value of K for predicting eclipses is smaller than that obtained for physical constants (§3.2.1). However, our analysis is not complete and it still over-estimates the K value of predicting an eclipse for at least two reasons. First, because the assumption of a negligible explanans for eclipse prediction is a coarser approximation than for physical constants, since physical constants are required to predict eclipses, and not vice versa. Secondly, and most importantly, our knowledge about eclipses is susceptible to declining with distance between explanans and explanandum. This is in stark contrast to the case of physical constants, which are, by definition, unchanging in time and space, such that λ[y] ≈ 0. What is λ in the case of eclipses? We will not examine here the possible effects of distance in methods, and we will only estimate the knowledge loss rate over time. We can do so by taking the most distant prediction made using the JPL DE405 ephemeris for a total solar eclipse: the one that will occur on 26 April AD 3000 [21]. The estimated error is approximately 7.8° of longitude, which at the predicted latitude of peak eclipse (21.1° N, 18.4° W) corresponds to an error of approximately 815 km in either direction. Therefore, the estimated K for predicting an eclipse 982 years from now is

$K(E_{3000}; X, \tau) \approx \frac{\log(510\,072\,000) - 2\log(815)}{\log(510\,072\,000)} = 0.331.$ (3.9)

Solving $K(E_{3000}; X, \tau) = K(E_{2017}; X, \tau) \times 2^{-982\lambda}$ yields a knowledge loss rate of λ ≈ 0.0015 per year, which corresponds to a knowledge half-life of 1/λ ≈ 667 years. Therefore, our knowledge about the position of eclipses, based on the JPL DE405 methodology, is halved for every 667 years of time-distance to predictions.

3.3. How much progress is a research field making?

Problem: Knowledge is a dynamic quantity. Research fields are known to be constantly evolving, splitting and merging [23]. As evidence accumulates, theories and methodologies are modified, enlarged or simplified, and may be extended to encompass new explananda and explanantia, or conversely may be re-defined to account more accurately for a narrower set of phenomena. To what extent do these dynamics determine scientific progress?

Answer: Progress occurs if and only if the condition in equation (3.11) is met, in which H(X′) ≡ ΔH(X) and −log p(τ′) ≡ −Δlog p(τ) denote expansions or reductions of the explanantia; the remaining terms are defined in appendix F.

Explanation: Knowledge occurs when progressively larger explananda are accounted for by relatively smaller explanantia. This is the essence of the process of consilience, which has been recognized for a long time as the fundamental goal of the scientific enterprise [24]. Consilience drives progress at all levels of generality of scientific knowledge. At the research frontier, where new research fields are being created by identifying new explananda and/or new combinations of explanandum and explanans, K grows by a process of 'micro-consilience'. A 'macro-consilience' may be said to occur when knowledge-containing systems are extended and unified across fields, disciplines and entire domains. Equation (3.11) quantifies the conditions for consilience to occur both at the micro- and macro-level. The inequality (3.11) is satisfied under several conditions.
First, when the explanantia X′ and/or τ′ produce a sufficiently large improvement in the effect, from k to k′. Second, equation (3.11) is satisfied even when explanatory power is lost, i.e. when k′ ≤ k, if ΔH(X) − Δlog p(τ) is sufficiently negative. This entails that input, theory or methodology are being reduced or simplified. Finally, if ΔH(X) − Δlog p(τ) = 0, condition (3.11) is satisfied provided that k′ > k, which would occur by expansion of the explanandum. In all cases, the conditions for consilience are modulated by the extent of application of the theories themselves, quantified by the n[X] and n[Y] indices. 3.3.1. Example 1: Evolutionary models of reproductive skew Reproductive skew theory is an ambitious attempt to explain reproductive inequalities within animal societies according to simple principles derived from kin selection theory ([25] and references within). In its earliest formulation, reproductive skew was predicted to be determined by a ‘transactional’ dynamic between dominant and subordinate individuals, according to the condition, in which is the minimum proportion of reproduction required by the subordinate to stay, are the number of offspring that the subordinate and dominant, respectively, would produce if breeding independently, is the genetic relatedness between subordinate and dominant and is the productivity of the group. The theory was later expanded to include an alternative ‘compromise’ model approach, in which skew was determined by direct intra-group conflict. Subsequent elaborations of this theory have extended its range of possible conditions and assumptions, leading to a proliferation of models whose overall explanatory value has been increasingly questioned [ We can use equation (3.11) to examine the conditions under which introducing a new parameter or a new model would constitute net progress within reproductive skew theory, using data from a comprehensive review [25]. In particular, we will focus on one of the earliest and most stringent predictions of transactional models, which concerns the correlation between skew and dominant-subordinate genetic relatedness. Contradicting earlier reported success [26], empirical tests in populations of 21 different species failed to support unambiguously transactional models in all but one case (data taken from table 2.2 in [25]). Since this analysis is intended as a mere illustration, we will make several simplifying assumptions. First, we will assume that all parameters in the model are measurable to two significant digits, and that their prior expected distributions are uniform (in other words, any group from any species may exhibit a skew and relatedness ranging from 0.00 to 0.99, and individual and group productivities ranging from 0 to 99). Therefore, we assume that each of these parameters has an information content equal to 2log 10 = 6.64 bits. Second, we will assume that the data reported by [25] are a valid estimate of the average success rate of reproductive skew theory in any non-tested species. Third, we will assume that all of the parameters relevant to the theory are measured with no error. For example, we assume that for any organism in which a ‘success’ for the theory is reported, reproductive skew is explained or predicted exactly. Fourth, we will assume that the extent of applications of skew theory, i.e. 
n[Y], is sufficiently large to make the τ component (which contains a description of equation (3.12) as well as any other condition necessary to make reproductive skew predictions work) negligible. These assumptions make our analysis extremely conservative, leading to an over-estimation of K values. Indicating with Y, X[s], X[d], X[r], X[k] the values of p[min], x[s], x[d], r, k in equation (3.12), we obtain the value corresponding to the K of transactional models Plugging these values in equation (3.11) and re-arranging, we derive the minimal amount of increase in explanatory power that would justify adding a new parameter input X′, This suggests, for example, that if ′ is a new parameter measured to two significant digits, with ′) = 2log 10, adding it to equation ( ) would represent theoretical progress if ′ > 1.2 , in other words if it increased the explanatory power of the theory by 20%. If instead ′ represented the choice between transactional theory and a new model then, assuming conservatively that ′) = 1, we have ′ > 1.03 , suggesting that any improvement above 3% would justify it. Did the introduction of a single ‘compromise’ model represent a valuable extension of transactional theory? The informational cost of expanding transactional theory consists not only in the equations τ′ that need to be added to the theory, but also in the additional binary variable X′ that determines the choice between the two models for each new species to which the theory is applied. We will assume conservatively that the choice equals one bit. According to Nonacs & Hager [25], compromise models were successfully tested in 2 out of the 21 species examined. Therefore, the k = 3/21 = 0.14 attained by adding a compromise model amply compensated for the corresponding increased complexity of reproductive skew theory. The analysis above refers to results for tests of reproductive skew theory across groups within populations. When comparing the average skew of populations, conversely, transactional models were compatible with virtually all of the species tested, especially with regard to the association of relatedness with reproductive skew [25]. In this case, if we interpret these data as suggesting that k ≈ 1, i.e. that transactional models are compatible with every species encountered, then progress within the field (the theory) could only be achieved by simplifying equation (3.12). This could be obtained by removing or recoding the parameters with the lowest predictive power, or by deriving the theory in question from more general theories. The latter is what the authors of the review did, by suggesting that the cross-population success of the theory is explainable more economically in terms of kin selection theory, from which these models are derived [25]. These results are merely preliminary and likely to over-estimate the benefits of expanding skew theory. In addition to the conservative assumptions made above, we have assumed that only one transactional model and one compromise model exist, whereas in reality several variants of these models have been produced, which entails that the choice X′ is not simply binary, and therefore H(X′) is likely to be larger than 1. Moreover, we have assumed that the choice between transactional and compromise models is made a priori, for example based on some measurable property of organisms that tells beforehand which type of model applies. 
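The two thresholds quoted above (20% for a new two-digit parameter, roughly 3% for a binary model choice) can be recovered with a short calculation. The sketch below is our reconstruction under the simplifying assumptions stated earlier (a two-digit explanandum, four two-digit inputs, negligible τ), with the hardness term taken as H(Y)/(H(Y) + H(X)); the function names are ours.

```python
import math

TWO_DIGITS = 2 * math.log2(10)   # information content of a two-digit parameter (~6.64 bits)

def hardness(h_y, h_x):
    """Hardness term with a negligible theory component: H(Y) / (H(Y) + H(X))."""
    return h_y / (h_y + h_x)

h_y = TWO_DIGITS          # explanandum: p_min, measured to two significant digits
h_x = 4 * TWO_DIGITS      # inputs: x_s, x_d, r and k, each to two significant digits
h_old = hardness(h_y, h_x)

for label, extra_bits in [("two-digit parameter X'", TWO_DIGITS), ("binary model choice X'", 1.0)]:
    h_new = hardness(h_y, h_x + extra_bits)
    # Progress (equation (3.11)) requires k' * h_new > k * h_old, hence k'/k > h_old/h_new.
    print(f"adding a {label}: required improvement k'/k > {h_old / h_new:.2f}")
# prints ~1.20 and ~1.03, i.e. the 20% and 3% thresholds discussed above
```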
If the choice is made after the variables are known then the costs of this choice have to be accounted for, with potentially disastrous consequences (§ 3.3.2. Example 2: gender differences in personality factors In 2005, psychologist Janet Hyde proposed a ‘gender similarity hypothesis’, according to which men and women are more similar than different on most (but not all) psychological variables [27]. According to her review of the literature, human males and females exhibit average differences that, for most measured personality factors, are of small magnitude (i.e. Cohen’s d less than or equal to 0.35). Assuming that these traits are normally distributed within each gender, this finding implies that the empirical distributions of male and female personality factors overlap by more than 85% in most cases. The gender similarity hypothesis was challenged by Del Giudice et al. [28], on the basis that, even assuming that the distributions of individual personality factors do overlap substantially, the joint distribution of these factors might not. For example, if Mahalanobis distance D, which is the multivariate equivalent of Cohen’s d, was applied to 15 psychological factors measured on a large sample of adult males and females, the resulting effect was large (D = 1.49), suggesting an overlap of $30%$ or less [28] (figure 6a). The multivariate approach proposed by Del Giudice was criticized by Hyde primarily for being ‘uninterpretable’ [29], because it is based on a distance in 15-dimensional space, calculated from the discriminant function. This suggests that such a measure is intended to maximize the difference between groups. Indeed, Mahalanobis D will always be larger than the largest unidimensional Cohen’s d included in its calculation (figure 6a). The K function offers an alternative approach to examine the gender differences vs similarities controversy, using simple and intuitive calculations. With K, we can quantify directly the amount of knowledge that we gain, on average, about an individual’s personality by knowing their gender. Since most people self-identify as male and female in roughly similar proportions, knowing the gender of an individual corresponds to an input of one bit. In the most informative scenario, males and females would be entirely separated along any given personality factor, and knowing gender would return exactly one bit along any dimension. Therefore, we can test to what extent the gender factor is informative by setting up a one-bit information in each of the explananda: we divide the population in two groups, corresponding to values above and below the median for each dimension. The resulting measure, which we will call ‘multi-dimensional K’ are psychologically realistic and intuitively interpretable and are calculated as in which is the number of dimensions considered and is the theory linking gender to each dimension Note that, whereas the maximum value attainable by the unidimensional K is 1/2, that of K[md] is 15/16 = 0.938. This value illustrates how, as the explanandum is expanded to new dimensions, K[md] could approach indefinitely the value of 1, value that would entail that input about gender yields complete information about personality. Whether it does so, and therefore the extent to which applying the concept of gender to multiple dimensions represents progress, is determined by conditions in (3.11). 
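A minimal simulation sketch of this median-split approach is given below. Because the formula for K[md] is not reproduced in full above, the expression used here, Σ_i I(Y_i; gender)/(m + 1), is our reconstruction from the stated maxima (1/2 for a single dimension, 15/16 for fifteen dimensions); the effect sizes are arbitrary, the simulated factors are uncorrelated and the function names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_mutual_information(a, b):
    """Mutual information (bits) between two binary arrays."""
    mi = 0.0
    for va in (0, 1):
        for vb in (0, 1):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log2(pab / (pa * pb))
    return mi

n = 200_000
gender = rng.integers(0, 2, n)                    # one bit of input
effects_d = np.array([0.1, 0.2, 0.35, 0.5, 0.8])  # arbitrary illustrative Cohen's d values

# Each trait is standard normal, shifted by +/- d/2 depending on gender (no correlations).
traits = rng.normal(0.0, 1.0, (n, effects_d.size)) + np.outer(gender - 0.5, effects_d)
above_median = (traits > np.median(traits, axis=0)).astype(int)  # one bit per dimension

mi = np.array([binary_mutual_information(above_median[:, j], gender)
               for j in range(effects_d.size)])

k_uni = mi / 2                            # unidimensional K: at most 1/2
k_md = mi.sum() / (effects_d.size + 1)    # multi-dimensional K: at most m/(m+1)
print("unidimensional K:", np.round(k_uni, 4))
print("multi-dimensional K:", round(k_md, 4))
```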
To illustrate the potential applications of these measures, the values of K, average K, as well as K[md] were calculated from a dataset (N=10^6) simulated using the variance and covariance of personality factors estimated by [28,30]. All unidimensional personality measures were split in lower and upper 50% percentile, yielding one bit of potentially knowable information. In K[md], these were then recombined, yielding a 15-bit total explanandum. Figure 6b reports results of this analysis. As expected, the unidimensional K values are closely correlated with their corresponding Cohen’s d values (figure 6a,b, black bars). However, the multi-dimensional K value offers a rather different picture from that of Mahalanobis D. K[md] is considerably smaller than the largest unidimensional effect measured, and is in the range of the second-largest effect. Indeed, unlike Mahalanobis D, K[md] is somewhat intermediate in magnitude, although larger than a simple average (given by the orange bar in figure 6b). Therefore, we conclude that the overall knowledge conferred by gender about the 15 personality factors together is comparable to some of the larger, but not the largest, values obtained on individual factors. This is a more directly interpretable comparison of effects, which stems from the unique properties of K. We can also calculate the absolute number of bits that are gained about an individual’s personality by knowing a person’s gender. For the unidimensional variables, where we assumed H(Y) = 1, this is equivalent to the K values shown. For the multi-dimensional K[md], however, we have to multiply by 15, obtaining 0.28 (figure 6b). This value is larger than the largest unidimensional K value of approximately 0.08, and suggests that, at least among the 15 dimensions considered, receiving one bit of input about an individual’s gender allows to save at least one-quarter of a bit in predicting their personality. These results are intended as mere illustrations of the potential utility of the methods proposed. Such potential was under-exploited in this particular case, because the original data were not available, and therefore the analysis was based on a re-simulation of data derived from estimated variances and co-variances. Therefore, this analysis inherited the assumptions of normality and linear covariance that are necessary but limiting components of traditional multivariate analyses, and were a source of criticism for data on gender differences too [29]. Unlike ordinary multivariate analyses, a K analysis requires no distributional assumptions. If it were conducted on a real dataset about gender, the analysis might reveal nonlinear structures in personality factors, and/or identify the optimal level of resolution at which each dimension of personality ought to be measured (§2.3.6). This would yield a more accurate answer concerning how much knowledge about people’s personality is gained by knowing their gender. 3.3.3. Example 3: Does cumulative evidence support a hypothesis? The current tool of choice to assess whether the aggregate evidence of multiple studies supports an empirical hypothesis is meta-analysis, in which effect sizes of primary studies are standardized and pooled in a weighted summary [13]. The K function may offer a complementary tool in the form of a cumulative K, K[cum]. 
This is conceptually analogous to the K[md] described above but, instead of assuming that the various composing explananda lie on orthogonal dimensions and the explanans is fixed, it assumes that both explanandum and explanans lie on single dimensions, and their entropy results from a mixture of different sources. It can be shown that, for a set of RVs Y[1], Y[2] … Y[m] with probability distributions $pY1(⋅),pY2(⋅)…pYm(⋅)$, the entropy of their mixed distribution $∑wipYi$ is given by $H(∑i≤mwipYi)=∑i≤mwiH(Yi)+∑i≤mwiD( pYi∥∑i≤mwipYi)≡H(Y)¯+dY¯,$3.17 where the right-hand terms are a notation introduced for convenience, and represents the Kullback–Leibler divergence between each RV and the mixed distribution. For sequences, and particularly for those representing the theory τ, the mixture operates on an element-by-element basis. For example, if T[i,p] and T[j,p] are the RVs representing choice p in τ[i] and τ[j], respectively, a mixture of τ[i] and τ[j] will lead to choice p now being represented by a RV T[ij,p], say, which has still uniform distribution and whose alphabet is the union set of the mixed alphabets, $Tij,p={Ti,p∪T j,p}$. Remembering that the minimum alphabet size of any element of a τ is 2, it can be shown that, if for example, τ[i] = (τ[i,1], τ[i,2] … τ[i,l]) and τ[j] = (τ[j,1], τ[j,2] … τ[j,m]) are two sequences of length l and m with l > m, their mixture will yield the quantity in which is the size of the alphabet resulting from the mixture. For the mixing of theories { will be equal to the description length of the longest in the set. Indicating the latter with *, we have with the right-hand side equality occurring if the sequences have equal length and are all different from each other. For example, if the methodology τ[i] = (`randomized’, `human’, `female’) is mixed with τ[j] = (`randomized’, `human’, `male + female’), the resulting mixture would have composing RVs T[1] = {` randomized’, `not’}, T[2] = {`human’, `not’}, T[3] = {`female’, `male + female’, `not’}, and its information content would equal − log(1/2) − log(1/2) − log(1/3) = 3.58 or equivalently $τ¯+dτ¯=3+log⁡ Therefore, the value of the cumulative K is given by in which the terms represent the average divergences from the mixed expananda or explanatia. Equation ( ) is subject to the same conditions of equation ( ), which will determine whether the cumulative knowledge (e.g. a cumulative literature) is overall leading to an increase or a decrease of knowledge. The peculiarity of equation (3.20) lies in the presence of additional divergence terms, which allow knowledge to grow or decrease independently of the weighted averages of the measured effects. In particular, ignoring the repetition terms which are constant, $Kcum≥K¯ ⟺ dY|X,τ¯≤(1−K¯)dY¯−K¯(dX¯+dτ¯)$3.21 constituting the value obtained by the simple averages of each term. This property, combined with the presence of a cumulative theory/methodology component that penalizes the cumulation of diverse methodologies, makes behave rather differently from ordinary meta-analytical estimates. Figure 7 illustrates the differences between meta-analysis and K[cum]. Like ordinary meta-analysis, K[cum] depends on the within- and between-study variance of effect sizes. Unlike meta-analysis, however, K[cum] decreases if the methodology of aggregated studies is heterogeneous, independent of the statistical heterogeneity that is observed in the effect sizes (that is, K can decrease even if the effects are statistically homogeneous). 
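The decomposition in equation (3.17) is a standard identity for the entropy of a mixture and can be verified numerically; the snippet below does so for arbitrary, made-up discrete distributions and weights.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def kl(p, q):
    mask = p > 0
    return float((p[mask] * np.log2(p[mask] / q[mask])).sum())

# Arbitrary component distributions over the same four-symbol alphabet, plus mixing weights.
P = np.array([[0.70, 0.10, 0.10, 0.10],
              [0.25, 0.25, 0.25, 0.25],
              [0.05, 0.05, 0.45, 0.45]])
w = np.array([0.5, 0.3, 0.2])

mix = w @ P                                              # mixed distribution sum_i w_i p_i
lhs = entropy(mix)                                       # H(sum_i w_i p_i)
rhs = sum(wi * entropy(pi) for wi, pi in zip(w, P)) + \
      sum(wi * kl(pi, mix) for wi, pi in zip(w, P))      # H(Y)-bar + d_Y-bar in the text's notation
print(lhs, rhs)
assert abs(lhs - rhs) < 1e-12   # the two sides agree up to floating-point error
```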
Moreover, K[cum] can increase even when all included studies report null findings, if the aggregated studies cover different ranges of the explanandum, making the cumulative explanandum larger. Note that we have not specified how the weights underlying the mixture are calculated. These may consist in an inverse-variance weighting, as in ordinary meta-analysis, or could be computed based on other epistemologically relevant variables, such as the relative divergence of studies’ methodologies. The latter approach would offer an alternative to the practice of weighting studies by measures of quality, a practice that used to be common in meta-analysis and has now largely been abandoned due to its inherent subjectivity. 3.4. How reproducible is a research finding? Problem: The concept of ‘reproducibility’ is the subject of growing concerns and expanding research programmes, both of which risk being misled by epistemological confusions of at least two kinds. The first source of confusion is the conflation of the reproducibility of methods and that of results [31]. The reproducibility of methods entails that identical results are reproduced if the same data is used, indicating that data and methods were reported completely and transparently. The reproducibility of results entails that identical results are obtained if the same methods are applied to new data. Whereas the former is a relatively straightforward issue to assess and to address, the latter is a complex phenomenon that has multiple causes that are hard to disentangle. When a study is reproduced using new data, i.e. sampling from a similar but possibly not identical population and using similar but not necessarily identical methods, results may differ for reasons that have nothing to do with flawed methods in the original studies. This is a very intuitive idea, which, however, struggles to be formally included in analyses of reproducibility. The latter typically follow the meta-analytical paradigm of assuming that, in absence of research and publication biases, results of two studies ought to be randomly distributed around a ‘true’ underlying effect. The second source of confusion comes from treating the concept of reproducibility as a dichotomy—either a study is reproducible/reproduced or it is not—even though this is obviously a simplification. A scientific finding may be reproduced to varying degrees, depending on the nature of what is being reproduced (e.g. is it an empirical datum? A relation between two operationalized concepts? A generalized theory?) and contingent upon innumerable characteristics of a research which include not just how the research was conducted and reported, but also by characteristics of the research’s subject matter and general methodology. How can we distinguish the reproducibility of methods and results and define them in a single, continuous measure? Answer: The relation between a scientific study and one that reproduces it is described by the relation in which is the result of a replication study conducted at a study-specific ‘distance’ (information divergence) given by the inner-product of a vector $dd : [dY,dX,dτ1,dτ2⋯]$ of distances and a vector $λλ : [λY,λX,λτ1,λτ2…]$ of corresponding loss rates. Explanation: A study that attempts to reproduce another study is best understood as a new system that is at a certain ‘distance’ from the previous one. 
An identical replication is guaranteed to occur only if the exact same methods and exact same data are used, in which case the divergence between the two systems is likely to be zero on all dimensions, and the resulting K (and corresponding measure of effect size produced by the study’s results) is expected to be identical. Note that even this is an approximation, since the instruments (e.g. hardware and software) used to repeat the analyses may be different, and this could in principle generate some discrepancies. If attainable at all, a divergence of zero is only really likely to characterize the reproducibility of methods and is unlikely to occur in the reproducibility of results (in which new data are being collected). In the latter, different characteristics in the population being sampled (d[Y]), the measurements or interventions made (d[X]) and/or other critical choices made in the conduction of the study ($dτ$) may affect the outcome. Contrary to what is normally assumed in reproducibility studies, these differences cannot be assumed to exert random and symmetric influences on the result. The more likely direction of change is one of reduction: divergences in any element of the system, particularly if not dictated by the objective to increase K, are likely to introduce noise in the system, thus obfuscating the pattern encoded in the original study. Section 2.3.5 showed how the exponential function (3.22) described the decline of a system’s K due to divergences in subject matter or methodology. In practical terms, a divergence vector will consist in classifiable, countable differences in components of the methods used and/or characteristics of subject matter that, based on theory and prior data, are deemed likely to reduce the level of K by some proportional factor. Applications of equation (3.22) to individual cases require measuring study-specific divergences in explanandum and explanans and their corresponding loss rates. However, the universality of the function in equation (3.22) allows us to derive general, population-level predictions about reproducibility, as the example below illustrates. 3.4.1. Example: How reproducible is Psychological Science? The Reproducibility Initiative in Psychology (RIP) was a monumental project in which a consortium of laboratories attempted to replicate 100 studies taken from recent issues of three main psychology journals. Results were widely reported in the literature and mass media as suggesting that less than 40% of studies had been replicated, a figure deemed to be disappointingly low and indicative of significant research and publication biases in the original studies [32]. This conclusion, however, was questioned on various grounds, including: limitations in current statistical approaches used to predict and estimate reproducibility (e.g. [33–35]), methodological differences between original and replication studies [36], variable expertise of the replicators [37] and variable contextual sensitivity of the phenomena studied [38,39]. The common element behind all these concerns is that the replication study was not actually identical to the original but diverged in details that affected the results unidirectionally. This is the phenomenon that equation (3.22) can help to formalize, predict and estimate empirically. In theory, each replication study in the RIP could be examined individually using equation (3.22), but doing so would require field-specific information on the impact that various divergences may have on the results. 
This fine-grained analysis is not achievable, at least presently, because the necessary data are not available. However, we can use equation (3.22) to formulate a general prediction about the shape of the distribution of results of a reproducibility study, under varying frequencies and impacts of errors. Figure 8 simulated the distribution of effect sizes (here shown as correlation coefficients derived from the corresponding K) that would be observed in a set of replication studies, depending on their average distances d and impacts λ from an original or ideal study. Distances were assumed to follow a Poisson distribution, with a mean of 1, 5 and 20, respectively. The impact of these distances was increased moving from the top to the bottom row, by assuming the values of λ illustrated in the top-most panel. The dotted vertical line in each plot reports the initial value of K (i.e. the left-hand side of equation (3.22)), whereas the solid vertical line shows the mean of the distribution of results. The figure can be given different interpretations. The distances simulated in figure 8 may be interpreted as between-study differences in the explanandum or input (e.g. cultural differences in the studied populations), between-study differences in methodological choices, or as study-specific methodological errors and omissions, or a combination of all three. The dotted line may represent either the result of the original study or the effect that would be obtained by an idealized study for which the K is maximal and from which all observed studies are at some distance. Irrespective of what we assume these distances to consist in, and to the extent that they represent a loss of information, their frequency and impact profoundly affect the expected distribution of replication results. The distribution is compact and right-skewed when distances are few and of minor impact (top-left). As the number of such minor-impact distances grows, the distribution tends to be symmetrical and bell-shaped (top-right). Indeed, if the number of distances was increased further, the shape would resemble that of a Gaussian curve (mirroring the behaviour of a Poisson distribution). In such a (special) case, the distribution of replication results would meet the assumption of symmetrical and normally distributed errors that current statistical models of reproducibility make. This condition, however, is a rather extreme case and by no means the most plausible. As the impact of distances increases in magnitude, the distribution tends to become left-skewed, if distances are numerous, or bimodal if they are few (bottom-right and bottom-left, respectively). This suggests that the conditions typically postulated in analyses of reproducibility (i.e. a normal distribution around the ‘true’ or the ‘average’ effect in a population of studies) are only realized under the special condition in which between-studies differences, errors or omissions in methodologies are numerous and of minor impact. However, when important divergences in explanandum or explanans occur (presumably in the form of major discrepancies in methods used, populations examined etc.), the distribution becomes increasingly asymmetrical and concentrated around null results and may either be left-skewed or bimodal, depending on whether the number of elements subject to divergence is large or small. Data from the RIP support these predictions. 
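The qualitative behaviour just described can be reproduced with a short simulation in the spirit of figure 8. The sketch below assumes the decay form K′ = K × 2^(−λd) with Poisson-distributed distances d, which is our reading of equation (3.22); the values of K, λ and the Poisson means are arbitrary, and the conversion of K′ into a correlation coefficient (used in the figure) is omitted.

```python
import numpy as np

rng = np.random.default_rng(1)

def replication_outcomes(k0, lam, mean_distance, n_studies=10_000):
    """Simulate replication results K' = K * 2**(-lambda * d), with d ~ Poisson(mean_distance)."""
    d = rng.poisson(mean_distance, n_studies)
    return k0 * 2.0 ** (-lam * d)

k0 = 0.5                                  # K of the original (or ideal) study; arbitrary
for lam in (0.05, 0.5, 2.0):              # increasing impact of each divergence
    for mu in (1, 5, 20):                 # increasing expected number of divergences
        ks = replication_outcomes(k0, lam, mu)
        q25, q50, q75 = np.percentile(ks, [25, 50, 75])
        print(f"lambda={lam}, mean d={mu}: mean={ks.mean():.3f}, "
              f"quartiles=({q25:.3f}, {q50:.3f}, {q75:.3f})")
# Few low-impact divergences keep most replications close to the original K; many small
# divergences spread them into a roughly bell-shaped cloud; high-impact divergences pile
# results up near zero, bimodally so when the divergences are few.
```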
Before undertaking the replication tests, the authors of the RIP had classified the studies by level of expertise required to replicate them. As figure 9 illustrates, replication results of studies that were deemed to require moderate or higher expertise are highly concentrated around zero, with a small subset of studies exhibiting medium to large effects. This distribution is markedly different from that of studies that required null or minimal expertise, which was unimodal instead. Note how the distribution of original results reported by both categories of studies are, instead, undistinguishable in shape. Additional differences between distributions might be explained by a classification of the stability of the explanandum or explanans (e.g. the contextual sensitivity suggested by Van Bavel et al. [39]). Although preliminary, these results suggest that a significant cause of reproducibility ‘failures’ in the RIP may have been high-impact divergences in the systems or methodologies employed by the replicating studies. These divergences may have occurred despite the fact that many authors of the original studies had contributed to the design of the replication attempts. A significant component of a scientists’ expertise consists in ‘tacit knowledge’ [40], manifested in correct methodological choices that are not codified or described in textbooks and research articles, and that are unconsciously acquired by researchers through practice. Therefore, authors of the original studies might have taken for granted, or unwittingly overlooked, important aspects of their own research design when instructing the RIP replicators. The latter, even if professionally prepared, might have lacked sufficient expertise about the systems that are the object of the replication attempt, and may therefore have made ‘tacit errors’ that neither they or the authors of the original studies were able to document. It may still be the case that p-hacking and selective publication had affected some of the studies examined by RIP. However, if research biases were the sole factor leading to low reproducibility, then the two distributions in figure 9 should look similar. The fact that studies requiring higher level of expertise are harder to reproduce ought, in retrospect, not to surprise us. It simply suggests the very intuitive idea that many scientific experiments cannot be successfully conducted by anyone who simply follows the recipe, but need to be conducted by individuals with high levels of expertise about the methodology and the phenomena being studied. This fact still raises important questions about the generalizability of published results and how to improve it, but such questions should be disentangled as much as possible from questions about the integrity and objectivity of researchers. 3.5. What is the value of a null or negative result? Problem: How scientists should handle ‘null’ and ‘negative’ results is the subject of considerable ambiguity and debate. On the one hand, and contrary to what their names might suggest, ‘null’ and ‘negative’ results undoubtedly play an important role in scientific progress, because it is by cumulation of such results that hypotheses and theories are refuted, allowing progress to be made by ‘theory falsification’, rather than verification, as Karl Popper famously argued [41]. Null and negative results are especially important in contexts in which multiple independent results are aggregated to test a single hypothesis, as is done in meta-analysis [42]. 
On the other hand, as Popper himself had noticed, the falsifiability of a hypothesis is typically suboptimal, because multiple 'auxiliary' assumptions (or, equivalently, auxiliary hypotheses) may not be controlled for. Moreover, it is intuitively clear that a scientific discovery that leads to useful knowledge is made when a new pattern is identified, and not merely when a pattern is proved not to subsist. This is why, if on the one hand there are increasing efforts to counter the 'file-drawer problem', on the other hand there are legitimate concerns that these efforts might generate a 'cluttered office' problem, in which valuable knowledge is drowned in a chaotic sea of uninformative publications of null results [43]. The problem is that the value of null and negative results is context-specific. How can we estimate it?

Answer: The knowledge value of a null or negative result is given by equation (3.23), in which the term log|T| − log(|T| − 1) is the knowledge gained by the conclusive refutation of a hypothesis, and |T| is the size of the set of hypotheses being potentially tested (including all unchecked assumptions) in the study. All else being equal, the maximum value of K declines rapidly as |T| increases (figure 10).

Explanation: Section 2.2.1 described knowledge as resulting from the selection of a $τ∈T$, where $T$ is the set of possible theories (methodologies) determining a pattern between explanandum and input. These theories can, as usual, be described by a uniform random variable T. It can be shown that, because of the symmetry property of the mutual information function, I(T; Y, X) = I(Y, X; T), i.e. the information that the set of theories contains about the data is equivalent to the information that the data contain about the theories (see appendix G). This is indeed how knowledge is attained. A theory τ is selected among available alternatives because it best fits the data $Y^{n_Y}, X^{n_X}$, and ideally maximizes k[adj] − k[obs] (§2.3.2). The data are obtained by experiment (or experiences) and the process is what we call learning, as it is embodied in the logic of Bayes' theorem, the MDL principle and, generally, the objective of any statistical inference method. Since no knowledge (including knowledge about a theory) can be obtained in the absence of a 'background' conditioning theory and methodology, a more accurate representation of an experiment entails the specification of an unvarying component, which we will indicate as m, quantifying the aspects of the theory and methodology of an experiment that are not subject to uncertainty, and of the component for which knowledge is sought, the random variable T, which therefore represents the hypothesis or hypotheses being tested by the experiment. The knowledge attained by the experiment is then given by the reduction in uncertainty about T, H(T) − H(T|Y, X, m). It follows that the experiment is maximally informative when H(T) is as large as possible and H(T|Y, X, m) = 0, that is, when multiple candidate hypotheses are examined and each of them is in one-to-one correspondence with one of the possible states of Y, X. Real-life experiments depart from this ideal condition in two ways. First, they usually retain uncertainty about the result, H(T|Y, X, m) > 0, because multiple alternative hypotheses are compatible with the same experimental outcome. Second, real experiments usually test no more than one hypothesis at a time. This entails that H(T|Y, X, m) rapidly approaches H(T) as the size of the alphabet of T increases (see appendix H).
These limitations suggest that, assuming maximally informative conditions in which all tested hypotheses are equally likely and one hypothesis is conclusively ruled out by the experiment, we have $H(T)−H(T|Y=y,X=x,m)=log⁡|T|−log⁡(|T|−1)$, which gives equation (3.23). As intuition would suggest, even if perfectly conclusive, a null finding is intrinsically less valuable than its corresponding ‘positive’ one. This occurs because a tested hypothesis is ruled out when the result is positive as well as when it is negative, and therefore the value quantified in equation (3.23) is obtained with positive as well as negative results, a condition that we can express formally as K(T; Y, X, m, T = τ[1]) = K(T; Y, X, m, T = τ[0]). Positive results, however, also yield knowledge about a pattern. Therefore, whereas a conclusive rejection of a non-null hypothesis yields at most K(T; Y, X, m, T = τ[0]) = h/H(Y), a conclusive rejection of the null hypothesis in favour of the alternative yields K(T; Y, X, m, T = τ[1]) + K(Y; X, τ[1]) > h/H(Y). Perfect symmetry between ‘negative’ and ‘positive’ results is only attained in the ideal conditions mentioned above, in which H(T|Y, X, m) = 0 and H(T) = H(Y), and therefore each experimental outcome identifies a theory with empirical value and at the same time refutes other theories. This is the scenario in which ‘perfect’ Popperian falsificationism can operate, and real-life experiments depart from this ideal in proportion to the number $log⁡(|T|−1)$ of auxiliary hypotheses that are not addressed by the experiment. The departure from ideal conditions is especially problematic in biological and social studies that are testing a fixed ‘null’ hypothesis τ[0] that predicts K(Y; X, τ[0]) = 0 against a non-specified alternative τ[1] for which K(Y; X, τ[1]) > 0. First of all, due to noise and limited sample size, $K(Y;X,τ0)>0$. This problem can be substantially reduced by increasing statistical power but can never be fully eliminated, especially in fields in which large sample sizes and high accuracy (resolution) are difficult or impossible to obtain. Moreover, and regardless of statistical power, a null result is inherently more likely to be compatible with multiple ‘auxiliary’ hypotheses/assumptions, which real-life experiments may be unable to control. 3.5.1. Example 1: A simulation To offer a practical example of the theoretical argument made above, figure 11 reports a simulation. The value of K(T; X, Y), i.e. how much we know about a hypothesis given data, was first calculated when a single hypothesis h[1] is at stake, and all other conditions are fixed (figure 11a). Subsequently, the alphabet of T (the set of hypotheses in the experiment) was expanded to include a second condition, with two possible states τ[a] or τ[b], the former of which produces a null finding regardless of h[1]. The state of this latter condition (hypothesis/assumption) is not determined in the experiment. The corresponding value of K(T; X, Y) is measurably lower, even if rescaled to account for the greater complexity of the explanandum (i.e. the number of tested hypotheses, figure 11b). This is a simple illustration of how the value of negative results depends on the number of uncontrolled conditions and/or possible hypotheses. If field-specific methods to estimate the number of auxiliary hypotheses are developed, the field-specific and study-specific informativeness of a null result could be estimated and compared. 
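The rate at which the value of a conclusive refutation shrinks as untested hypotheses and assumptions accumulate (the behaviour summarized in figure 10) follows directly from the log|T| − log(|T| − 1) term and is easy to tabulate. The snippet below does only that; it does not attempt to reproduce the full equation (3.23), whose remaining terms are not shown in this excerpt.

```python
import math

def refutation_bits(n_hypotheses):
    """Bits gained by conclusively ruling out one of n equally likely hypotheses."""
    return math.log2(n_hypotheses) - math.log2(n_hypotheses - 1)

for n in (2, 3, 5, 10, 50, 100):
    print(f"|T| = {n:3d}: {refutation_bits(n):.4f} bits")
# A clean two-hypothesis test yields a full bit; with 100 live hypotheses and unchecked
# assumptions, conclusively ruling out one of them yields only ~0.014 bits.
```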
The conclusions reached in this section, combined with the limitations of replication studies discussed in §3.4, may offer new insights into debates over the problem of publication bias and how to solve it. This aspect is briefly discussed in the example below. 3.5.2. Example 2: Should we publish all negative results? Debates on whether publication bias is a bane or boon in disguise recur in the literature of the biological and social sciences. A vivid example was offered by two recent studies that used virtually identical methods and arguments but reached opposite conclusions concerning whether ‘publishing everything is more effective than selective publishing of statistically significant results’ [44,45]. Who is right? Both perspectives may be right or wrong, depending on specific conditions of a field, i.e. of a research question and a methodology. An explicit but rarely discussed assumption made by most analyses of publication bias is that the primary studies subjected to bias are of ‘similar quality’. What this quality specifically consists in is never defined concretely. Nonetheless, it seems plausible to assume that quality, like any other property of studies, will be unequally distributed within a literature, and the level of heterogeneity will vary across fields. This field-specific heterogeneity, however, cannot be overlooked, because it determines the value of H(T|Y, X, m) and |𝒯|, i.e. the falsifiability of the main hypothesis being tested. Therefore, to properly estimate the true prevalence and impact of publication bias and determine cost-effective solutions, the falsifiability of hypotheses needs to be estimated on a case-by-case (i.e. field-specific or methodology-specific) basis. In general, the analysis above suggests that current concerns for publication bias and investments to counter it are most justified in fields in which methodologies are well codified and hypotheses to be tested are simple and clearly defined. This is likely to be the condition of most physical sciences, in which not coincidentally negative results appear to be valued as much or more than positive results [46,47]. It may also reflect the condition of research in clinical medicine, in which clearly identified hypotheses (treatments) are tested with relatively well-codified methods (randomized controlled trials). This would explain why concerns for publication bias have been widespread and most proactively addressed in clinical medicine [42]). However, the value of negative results is likely to be lower in other research fields, and therefore the cost–benefit ratio of interventions to counter publication bias need to be assessed on a case-by-case basis. Methods proposed in this article might help us determine relevant field-specific and study-specific conditions. In particular, the statistical relevance of a null result produced by a study with regard to a specified hypothesis is likely to be inversely related to the expected divergence of the study from a standard (or an ideal) methodology and explanandum $λ⋅d$ (§3.4). This effect is in turn modulated by the complexity and flexibility of a field’s methodological choices and magnitude of effect sizes, both quantifiable in terms of the K function proposed in this study. 3.6. How much knowledge do we lose from questionable research practices? Problem: In addition to relatively well-defined forms of scientific misconduct, studies and policies about research integrity typically address a broader category of ‘questionable research practices’ (QRP). 
This is a class of rather loosely defined behaviours such as 'dropping outliers based on a feeling that they were inaccurate', or 'failing to publish results that contradicted one's previous findings': behaviours that, by definition, may or may not be improper, depending on the context [48]. Since QRP are likely to be more frequent than outright fraud, it has long been argued that their impact on the reliability of the literature may be very high, indeed even higher than that of data fabrication or falsification (e.g. [49]). However, besides obvious difficulties in quantifying the relative frequency of proper versus improper QRP, there is little epistemological or methodological basis for grouping together an extremely heterogeneous set of practices and branding them as equally worrying [50]. Setting aside ethical breaches that do not affect the validity of data or results (which will not be considered here), it is obvious that our concerns for QRP ought to be proportional not simply to the frequency of their use but to the frequency of their use multiplied by their distorting effect on the literature. How can we quantify the impact of misconduct and QRP?

Answer: The impact on knowledge of a questionable research practice is given by a 'bias-corrected' value K[corr] = K − (h[u]/h[b])B (equation (3.26)), in which h[u] = n[Y]H(Y)/(n[Y]H(Y) + n[X]H(X) − log p(τ)) and h[b] = n[Y]H(Y)/(n[Y]H(Y) + n[X]H(X) − log p(τ) − n[β]log p(β)) are the hardness terms for the study without and with bias, respectively, and B is the bias caused by the practice.

Explanation: Equation (3.26) is derived by a similar logic to that of predictive success, discussed in §2.3.2. If a research practice is deemed epistemologically improper, that is because it must introduce a bias in the result. This implies that the claim made using the biased practice β is different from the claim that is declared or intended: K(Y; X, τ, β) ≠ K(Y; X, τ). Just as in the case of prediction costs, therefore, we can adjust the K value by subtracting from it the costs required to derive the claimed result from the observed one, costs that are here quantified by B (equation (3.26)). Unlike the case of prediction, however, in the presence of bias the methods employed differ in size. In particular, the bias introduced in the results has required an additional methodology β. Following our standard approach, we posit that β is an element of the alphabet of a uniform random variable B. Similarly to τ, −log p(β) is the description length of a sequence of choices and n[β] will be the number of times these choices have to be made. For example, a biased research design (that is, an ante hoc bias) will have n[β] = 1, and therefore a cost −log p(β) corresponding to the description length of the additional components to be added to τ. Conversely, if the bias is a post hoc manipulation of some data or variables, then β may be as simple as a binary choice between dropping and retaining data (see example below), and n[β] may be as high as n[Y] or higher. The term h[u]/h[b] quantifies the relative costs of the biased methodology. An important property of equation (3.26) is that the correction applies regardless of the direction of the bias. The term B is always non-negative, independent of how results are shifted. Therefore, a QRP that nullified an otherwise large effect (in other words, a bias against a positive result) would require a downwards correction just as one that magnified it.
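A minimal numerical sketch of this correction, using the reconstruction of the hardness terms given in the answer above (our reading of equation (3.26), not a verbatim reproduction), is shown below; all quantities are arbitrary illustrations.

```python
def hardness(ny_hy, nx_hx, theory_bits, bias_bits=0.0):
    """Hardness term: n_Y H(Y) / (n_Y H(Y) + n_X H(X) + [-log p(tau)] + [bias description bits])."""
    return ny_hy / (ny_hy + nx_hx + theory_bits + bias_bits)

def k_corrected(k_reported, bias, ny_hy, nx_hx, theory_bits, bias_bits):
    """Bias-corrected knowledge: K_corr = K - (h_u / h_b) * B."""
    h_u = hardness(ny_hy, nx_hx, theory_bits)
    h_b = hardness(ny_hy, nx_hx, theory_bits, bias_bits)
    return k_reported - (h_u / h_b) * bias

# Arbitrary study: explanandum 1000 bits, input 500 bits, theory 50 bits,
# reported K = 0.30 of which B = 0.10 is attributable to the questionable practice.
for label, bias_bits in [("cheap post hoc tweak (3 bits)", 3.0),
                         ("bias as complex as the data (1500 bits)", 1500.0)]:
    k_corr = k_corrected(0.30, 0.10, 1000.0, 500.0, 50.0, bias_bits)
    print(f"{label}: K_corr = {k_corr:.3f}")
# The more information it takes to describe the biasing procedure, the larger h_u/h_b
# becomes and the harsher the downward correction.
```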
3.6.1. Example 1: Knowledge cost of data fabrication

The act of fabricating an entire study, its dataset, methods, analysis and results can be considered an extreme form of ante hoc bias, in which the claimed effect was generated entirely by the bias itself. Let β represent the method that fabricated the entire study. By assumption, the effect observed without that method is zero, so the entire claimed effect is equal to the bias B, yielding K[corr] ≤ 0. Hence, an entirely fabricated study yields no positive knowledge and indeed yields negative knowledge. This result suggests a solution to an interesting epistemological conundrum raised by the scenario in which a fabricated study reports a true fact: if independent, genuine studies confirm the made-up finding, then technically the fabricated study did no damage to knowledge. Shall we therefore conclude that data fabrication can help scientific progress? Equation (3.26) may shed new light on this conundrum. We can let K represent the amount of genuine knowledge attained within a field. The fabricated study's K[corr] is then K − (h[u]/h[b])B ≤ 0, because B = K and h[u] > h[b]. The extra information costs of fabricating the entire study generate a net loss of information, even if the underlying claim is correct.

3.6.2. Example 2: Knowledge cost of arbitrarily dropping data points

Let's imagine a researcher who collected a sample of n data points and made a claim K(Y^n; X^n, τ) > 0 without explicitly declaring that during the analysis she had dropped a certain number n[β] of data points which made her results look 'better', i.e. made her K appear larger than it is. How egregious was this behaviour? From equation (3.26), we derive the minimum condition under which a bias is tolerable (K[corr] > 0), namely K > (h[u]/h[b])B. The choice to drop or not a data point is binary, and therefore −log p(β) = 1. In the best-case scenario, the researcher identified possible outliers based on a conventional threshold of 3σ, and was therefore confronted with the choice of dropping only 0.3% of her data points, i.e. n[β] = 0.003n. This leads to h[u]/h[b] ≈ 1 and the simplified condition K > B, in which the bias has to be smaller than the total effect reported. For B ≥ K to occur under these conditions (in other words, to generate the full reported effect by dropping no more than 0.3% of data points), it has to be the case that either the reported effect K was extremely small, and therefore unlikely to be substantively significant, or that the dropped outliers were extremely deviant from the normal range of data. In the latter case, the outliers ought to have been removed and, if naively retained in the dataset, their presence and influence would not go unnoticed by the reader. Therefore, arbitrariness in dropping statistical outliers has a minor impact on knowledge. In the worst-case scenario, however, the researcher has inspected each of the n data points and decided whether to drop them or not based on their values. In this case, n[β] = n, and −log p(β) ≫ 1 because the bias consists in a highly complex procedure in which each value of the data is assessed for its impact on the results, and then retained or dropped accordingly. For the purposes of illustration, we will assume that β is as complex as the dataset, in which case h[u]/h[b] ≈ 2, with the latter approximation derived from assuming that n is large. In this case, therefore, the QRP would be tolerable only if K > 2B, i.e. only if the result produced with the QRP is no more than twice as large as the one that would be obtained without it.
However, if K was very large to begin with, then the researcher would have had little improper reason to drop data points, unless she was biased against producing a result (in which case, again, K[corr] < 0). Therefore, under the most likely conditions in which it occurs, selecting data points in this way would be an extremely damaging practice, leading to K[corr] < 0. The two examples above illustrate how the generic and very ambiguous concept of QRP can be defined more precisely. A similar logic could be applied to all kinds of QRP, to assess their context-specific impact, to distinguish the instances that are innocuous or even positive from the ones of concern, and to rank the latter according to the actual damage they might do to knowledge in different research fields. This logic may also aid in assessing the egregiousness of investigated cases of scientific misconduct.

3.7. What characterizes a pseudoscience?

Problem: Philosophers have proposed a vast and articulated panorama of criteria to demarcate genuine scientific activity from metaphysics or pseudoscience (table 2).

Table 2. Proposed criteria for demarcating science from non-science or pseudoscience.

— Positivism (Comte 1830 [2]): science has reached the positive stage and builds knowledge on empirical data; non-/pseudoscience is still in the theological or metaphysical stages, in which phenomena are explained by recurring to deities or non-observable entities.
— Methodologism (e.g. Pearson 1900 [54], Poincaré 1914 [55]): science follows rigorous methods for selecting hypotheses, acquiring data and drawing conclusions; non-/pseudoscience fails to follow the scientific method.
— Verificationism (Wittgenstein 1922 [56]): science builds upon verified statements; non-/pseudoscience relies on non-verifiable statements.
— Falsificationism (Popper 1959 [41]): science builds upon falsifiable, non-falsified statements; non-/pseudoscience produces explanations devoid of verifiable counterfactuals.
— Methodological falsificationism (Lakatos 1970 [57]): science generates theories of increasing empirical content, which are accepted when surprising predictions are confirmed; non-/pseudoscience protects its theories with a growing belt of auxiliary hypotheses, giving rise to 'degenerate' research programmes.
— Norms (Merton 1942 [58]): science follows four fundamental norms, namely universalism, communism, disinterestedness and organized scepticism; non-/pseudoscience operates on different, if not the opposite, sets of norms.
— Paradigm (Kuhn 1974 [59]): science is post-paradigmatic, meaning it solves puzzles defined and delimited by the rules of an accepted paradigm; non-/pseudoscience is pre-paradigmatic, lacking a unique and unifying intellectual framework or being fragmented into multiple competing paradigms.
— Multi-criterial approaches (e.g. Laudan 1983 [51], Dupre 1993 [52], Pigliucci 2013 [53]): science bears a sufficient 'family resemblance' to other activities we call 'science'; non-/pseudoscience shares too few characteristics with activities that we consider scientific.

However, none of these criteria are accepted as universally valid, and prominent contemporary philosophers of science tend to endorse a 'multi-criteria' approach, in which the sciences share a 'family resemblance' to each other but no single universal trait is common to all of them (e.g. [51–53]). The multi-criterial solution to the demarcation problem is appealing but has limited theoretical and practical utility. In particular, it shifts the question from identifying a single property common to all the sciences to identifying many properties common to some. Proposed lists of criteria typically include normative principles or behavioural standards such as 'rigorously assessing evidence', 'openness to criticism', etc. These standards are unobjectionable but are hard to assess rigorously.
Furthermore, since the minimum number of characteristics that a legitimate science should possess is somewhat arbitrary, virtually any practice may be considered a 'science' according to one scheme or another (e.g. intelligent design [60]). Is there a single distinctive characteristic of pseudosciences and, if so, how can we measure it?

Answer: A pseudoscientific field is characterized by $K_{corr} < 0$, because

$$\bar{K} < \frac{\bar{h}_u}{\bar{h}_b}\,\bar{B}, \qquad (3.31)$$

where the terms are the cumulative equivalent of the terms in equation (3.26).

Explanation: Activities such as palmistry, astrology, homeopathy or psychoanalysis are characterized by having a defined methodology, which contains its own laws, rules and procedures; let us call it ψ. This ψ is what makes these practices appear scientific, and it is believed by its practitioners to produce a K(Y; X, ψ) > 0. However, such activities are deemed epistemically worthless (and have been so, in many cases, for centuries before the concept of science was formalized), because they typically manifest three conditions: (1) they (appear to) produce large amounts of explanatory knowledge but typically little predictive or causal knowledge; (2) any predictive success or causal power that their practitioners attribute to the explanans is more economically explained by well-understood and unrelated phenomena and methodologies; and/or (3) their theories and methodologies are independent from, and often incompatible with, those of well-established and successful theories and methodologies ([53]). All three properties are contained and quantified in equation (3.26).

— Condition 1 implies that a field's observed, as opposed to predicted, K is zero, leading to the condition $K_{adj} < 0$ (§2.3.2) and therefore also to $K_{corr} < 0$ (§3.6).

— Condition 2 entails that, to any extent that a pseudoscientific methodology (appears to) successfully explain, influence or predict an outcome, the same effect can be obtained with a τ that lacks the specific component ψ. Conscious and unconscious biases in study design (e.g. failure to account for the placebo effect) and post hoc biases (e.g. second-guessing one's interpretation) fall into this category of explainable effects. We could also interpret K as being the effect produced by standard methods τ, and B as the (identical) effect produced by the pseudoscience, which, however, has a methodology that is more complex than necessary (the sum $-(\log p(\tau) + \log p(\psi))$), leading to $h_u/h_b > 1$ in equation (3.31).

— Condition 3 can be quantitatively understood as a cost of combining incompatible theories. Let υ be a third theory, which represents the combination of the pseudoscientific theory ψ with other standard theories τ. When the two theories are simply used opportunistically and not unified in a single, coherent theory, then $\log p(\upsilon) = \log p(\tau) + \log p(\psi)$. When the two theories are entirely compatible with each other, indeed one is partially or entirely accounted for by the other, then $-\log p(\upsilon) \ll -\log p(\tau) - \log p(\psi)$. Conversely, to the extent that the two theories are not directly compatible, such that additional theory needs to be added and formulated to attain a coherent and unified account, $-\log p(\upsilon) \gg -\log p(\tau) - \log p(\psi)$, leading to $h_u/h_b \gg 1$ in equation (3.31). Formal methods to quantify theoretical discrepancies may be developed in future work.

3.7.1. Example: How pseudoscientific is Astrology?

Many studies have been conducted to test the predictions of Astrology, but their results were typically rejected by practising astrologers on various methodological grounds.
A notable exception is represented by [61], a study that was designed and conducted with the collaboration and approval of the National Council for Geocosmic Research, a highly prominent organization of astrologers. In the part of the experiment that was deemed most informative, practising astrologers were asked to match an astrological natal chart with one of three personality profiles produced using the California Personality Inventory. If the natal chart contains no useful information about an individual's personality, the success rate is expected to be 33%, giving H(Y) = 1.58. The astrologers predicted that their success rate would be at least 50%, suggesting H(Y|X, ψ) = 1.58/2 = 0.79. The astrologers' explanans includes the production of a natal chart, which requires the input of the subject's birth time (hh:mm), date (dd/mm/yyyy) and location (latitude and longitude, four digits each), for a total information of approximately 50 bits. The theory ψ includes the algorithm to compute the positions of the stars and planets, and the relation between these and the personality of the individual. The size of ψ could be estimated, but we will leave this task to future analyses. This omission may have a significant or a negligible impact on the calculations, in proportion to how large the $n_Y$ is, i.e. in proportion to how unchanging the methods of astrology are. The alternative, scientific hypothesis, according to which there is no effect to be observed, has $h_u = 1$.

Results of the experiment showed that the astrologers did not guess an individual's personality above chance [61]. Therefore, K = 0 and equation (3.31) is satisfied. The K value of astrology from this study is estimated to be at most zero, with the inequality due to the unspecified size of ψ. This analysis is still likely to over-estimate the K of Astrology, because the experiment offered a conservative choice between only three alternatives, whereas astrology's claimed explanandum is likely to be much larger, as it includes multiple personality dimensions.

3.8. What makes a science 'soft'?

Problem: There is extensive evidence that many aspects of scientific practices and literatures vary gradually and almost linearly if disciplines are arranged according to the complexity of their subject matters (i.e. broadly speaking, mathematics, physical, biological, social sciences and humanities) [46,62–64]. This order reflects what people intuitively would consider an order of increasing scientific 'softness', yet this concept has no precise definition and the adjective 'soft science' is mostly considered denigrative. This may be why the notion of a hierarchy of the sciences is nowadays disregarded in favour of a partial or complete epistemological pluralism (e.g. [52]). How can we define and measure scientific softness?

Answer: Given two fields studying systems $Y_A, X_A, \tau_A$ and $Y_B, X_B, \tau_B$, field A is harder than B if

$$\frac{\bar{I}(Y_A; X_A, \tau_A)}{\bar{I}(Y_B; X_B, \tau_B)} > \frac{\bar{h}_A}{\bar{h}_B}, \qquad (3.33)$$

in which the $\bar{I}$ and $\bar{h}$ terms are representatively valid estimates of the fields' bias-adjusted cumulative effects and hardness components, given by properties of their systems as well as the fields' average levels of accuracy, reproducibility and bias.

Explanation: equation (3.33) is a re-arrangement of the condition $K(Y_A; X_A, \tau_A) > K(Y_B; X_B, \tau_B)$, i.e. the condition that field A is more negentropically efficient than field B. As argued below, this condition reflects the intuitive concept of scientific hardness.
The various criteria proposed to distinguish stereotypically 'hard' sciences like physics from stereotypically 'soft' ones like sociology cluster along two relevant dimensions (table 3, and see [ ] for further references):

— Complexity: moving across research fields from the physical to the social sciences, subject matters go from being simple and general to being complex and particular. This increase in complexity corresponds, intuitively, to an increase in the systems' number of relevant variables and the intricacy of their interactions [65].

— Consensus: moving across research fields from the physical to the social sciences, there is a decline in the ability of scientists to reach agreement on the relevance of findings, on the correct methodologies to use, even on the relevant research questions to ask, and therefore ultimately on the validity of any particular theory [66].

Table 3. Properties proposed to distinguish 'hard' from 'soft' sciences.

| principle | property or properties | author, year [ref] |
|---|---|---|
| hierarchy of the sciences | simplicity, generality, quantifiability, recency, human relevance | Comte 1830 [2] |
| consilience | ability to subsume disparate phenomena under general principles | Whewell 1840 [67] |
| lawfulness | nomoteticity, i.e. interest in finding general laws, as opposed to idioteticity, i.e. interest in characterizing individuality | Windelband 1894 [68] |
| data hardness | data that resist the solvent influence of critical reflection | Russell 1914 [69] |
| empiricism | ability to calculate in advance the results of an experiment | Conant 1951 [70] |
| rigour | rigour in relating data to theory | Storer 1967 [71] |
| maturity | ability to produce and test mechanistic hypotheses, as opposed to mere fact collection | Bunge 1967 [72] |
| cumulativity | cumulation of knowledge in tightly integrated structures | Price 1970 [73] |
| codification | consolidation of empirical knowledge into succinct and interdependent theoretical formulations | Zuckerman & Merton 1973 [66] |
| consensus | level of consensus on the significance of new knowledge and the continuing relevance of old | Zuckerman & Merton 1973 [66] |
| core cumulativity | rapidly growing core of unquestioned general knowledge | Cole 1983 [74] |
| invariance | contextual invariance of phenomena | Humphreys 1990 [65] |

Both concepts have a straightforward mathematical interpretation, which points to the same underlying characteristic: having a relatively complex explanans and therefore a low K. A system with many interacting variables is a system for which H(X) and/or H(Y|X, τ) are large. Consequently, progress is slow (§3.3). A system in which consensus is low is one in which the cumulative methodology $\bar{\tau}+\bar{d}_{\tau}$ expands rapidly as the literature grows. Moreover, higher complexity and particularity of subject matter entails that a given piece of knowledge is applicable to a limited number of phenomena, entailing a smaller $n_Y$. Therefore, all the typical traits associated with a 'soft' science lead us to predict a lower value of K.

3.8.1. Example: mapping a hierarchy of the sciences

The idea that the sciences can be ordered by a hierarchy, which reflects the growing complexity of subject matter and, in reverse order, the speed of scientific progress, can be traced back at least to the ideas of Auguste Comte (1798–1857). The K values estimated in previous sections for various disciplines approximately reflect the order expected based on equation (3.33), particularly if the rescaled K values are compared instead (equation (3.34)).
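As an illustration of how such a ranking might be computed in practice, the sketch below compares purely hypothetical fields by a K of the form $(H(Y)-H(Y|X,\tau))/(H(Y)+H(X)-\log p(\tau))$, the form implied by the inequalities of appendix D; the field names and all entropy values are invented for illustration only.

```r
# Sketch: ranking hypothetical fields by K, assuming the form
# K = (H(Y) - H(Y|X,tau)) / (H(Y) + H(X) - log p(tau)).
# All numbers are illustrative, not empirical estimates.
k_value <- function(H_Y, H_Y_given, H_X, tau_bits) {
  (H_Y - H_Y_given) / (H_Y + H_X + tau_bits)   # tau_bits = -log2 p(tau)
}

fields <- data.frame(
  field     = c("field A (simple, lawful system)",
                "field B (complex system)",
                "field C (very complex system)"),
  H_Y       = c(10, 10, 10),   # entropy of the explanandum (bits)
  H_Y_given = c(1,  5,  8),    # residual entropy after explanation
  H_X       = c(2,  6, 12),    # entropy of the input
  tau_bits  = c(5, 20, 60)     # description length of theory/methodology
)

fields$K <- with(fields, k_value(H_Y, H_Y_given, H_X, tau_bits))
fields[order(-fields$K), c("field", "K")]   # 'harder' fields rank first
```

Under equation (3.33), the field with the larger K (or rescaled K) ranks as harder, which is what the ordering above reproduces for the invented values; the actual estimates discussed in the text follow the same logic.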
Mathematics is a partial exception, in that its K value is likely to be in most cases higher than that of any empirical field, but its rescaled K is not (at least, not if we quantify the explanandum as a binary question). Intriguingly, mathematics were considered an exception also in August Comte’s scheme, due to their non-empirical nature. Therefore, the K account of the hierarchy of the sciences mirrors Comte’s original hierarchy rather accurately. However, the hierarchy depicted by results in this essay is merely suggestive, because the examples we used are preliminary. In addition to making frequent simplifying assumptions, the estimates of K derived in this essay were usually based on individual cases (not on cumulative evidence coming from a body of literature) and have overlooked characteristics of a field that may be relevant to determine the hierarchy (for example, the average reproducibility of a literature). Moreover, there may be yet unresolved problems of scaling that impede a direct comparison between widely different systems. Therefore, at present, equation (3.34) can at best be used to rank fields that are relatively similar to each other, whereas methods to compare widely different systems may require further methodological developments. If produced, a K-based hierarchy of the sciences would considerably extend Comte’s vision in at least two respects. Firstly, it would rank not quite ‘the sciences’ but rather scientific ‘fields’, i.e. literatures and/or research communities identified by a common explanandum and/or explanans. Although the average K values of research fields in the physical, biological and social sciences are predicted to reflect Comte’s hierarchy, the variance within each science is likely to be great. It is entirely possible that some fields within the physical sciences may turn out to have lower K values (and therefore to be ‘softer’) than some fields in the biological and social sciences and vice versa. Secondly, as illustrated in §3.7, a K-based hierarchy would encompass not just sciences but also pseudosciences. Whereas the former extend in the positive range of K values, the latter extend in the negative direction. The more negative the value, the more pseudoscientific the field. 4. Discussion This article proposed that K, a quantity derived from a simple function, is a general quantifier of knowledge that could find useful applications in meta-research and beyond. It was shown that, in addition to providing a universal measure of effect size, K theory yields concise and memorable equations that answer meta-scientific questions and may help understand and forecast phenomena of great interest, including reproducibility, bias and misconduct, and scientific progress (table 1). This section will first discuss how K theory may solve limitations of current meta-science (§4.1 and 4.2), then address the most likely sources of criticisms (§4.3), and finally it will suggest how the theory can be tested (§4.4). 4.1. Limitations of current meta-science The growing success and importance of meta-research have made the need for a meta-theory ever more salient and pressing. Growing resources are invested, for example, in ensuring reproducibility [1], but there is little agreement on how reproducibility ought to be predicted, measured and understood in different fields [31,75]. 
Graduate students are trained in courses to avoid scientific misconduct and questionable research practices, and yet the definition, prevalence and impact of questionable behaviours across science are far from well established [50]. Increasing efforts are devoted to measuring and countering well-documented problems such as publication bias, even though inconclusive empirical evidence [42] and past failures of similar initiatives (e.g. the withering and closure of most journals of negative results [76]) suggest that the causes of these problems are incompletely understood. At present, meta-scientific questions are addressed using theoretical models derived from very specific fields. As a consequence, their results are not easily extrapolated to other contexts. The most prominent example is offered by the famous claim that most published research findings are false [77]. This landmark analysis has deservedly inspired meta-studies in all disciplines. However, its predictions are based on an extrapolation of statistical techniques used in genetic epidemiology that have several limiting assumptions. These assumptions include that all findings are generated by stable underlying phenomena, independently of one another, with no information on their individual plausibility or posterior odds, and with low prior odds of any one effect being true. These assumptions are unlikely to be fully met even within genetic studies [78], and the extent to which they apply to any given research field remains to be determined. Similar limiting assumptions are increasingly noted in the application of meta-research methodologies. Reproducibility and bias, for example, are measured using meta-analytical techniques that treat sources of variation between studies as either fixed or random [13,79]. This assumption may be valid when aggregating results of randomized control trials [80], but may be inadequate when comparing results of fields that use varying and evolving methods (e.g. ecology [81]) and that study complex systems that are subject to non-random variation (expressed, for example, in reaction norms [82]). Statistical models can be used to explore the effects of different theoretical assumptions (e.g. [83–86]) as well as other conditions that are believed to conduce to bias and irreproducibility (e.g. [87,88]). However, the plural of ‘model’ is not ‘theory’. A genuine ‘theory of meta-science’ ought to offer a general framework that, from maximally simple and universal assumptions, explains how and why scientific knowledge is shaped by local conditions. 4.2. K theory as a meta-theory of science Why does K theory offer the needed framework? First and foremost, this theory provides a quantitative language to discuss meta-scientific concepts in terms that are general and abstract and yet specific enough to avoid confusing over-simplifications. For example, the concept of bias is often operationalized in meta-research as an excess of statistically significant findings [77] or as an exaggeration of findings due to QRP [89]. Depending on the meta-research question, however, these definitions may be too narrow, because they exclude biases against positive findings and only apply to studies that use null-hypothesis significance testing, or they may be too generic, because they aggregate research practices that differ in relevant ways from each other. Similar difficulties in how reproducibility, negative results and other concepts are used have emerged in the literature as discussed in the Results section. 
As illustrated by the examples offered throughout this essay, K theory avoids these limitations by proposing concepts and measures that are extremely abstract and yet adaptable to reflect field-specific contexts. Beyond the conceptual level, K theory contextualizes meta-research results at an appropriate level of generalization. Current meta-research models and empirical studies face a conundrum: they usually aim to draw general conclusions about phenomena that may occur anywhere in science, but these phenomena find contextual expression in fields that vary widely in characteristics of subject matter, theory, methodology and other aspects. As a result, meta-research studies are forced to choose between under-generalizing their conclusions by restricting them to a specific field or literature and over-generalizing them to an entire field or discipline, or even to the whole of science. One of the unfortunate consequences of this over-generalization of results has been the diffusion of a narrative that ‘science is in crisis’, narrative that has no empirical or pragmatic justification [75]. Excessive under- and over-generalizations may be avoided by systematizing meta-research results with K theory, which offers a mid-level understanding of meta-scientific phenomena that is independent of subject matter and yet measurable in context. An example of the mid-level generalizations permitted by K theory is the hierarchy of sciences and pseudosciences proposed in §3.8. A classification based on this approach, for example, could lead us to abandon traditional disciplinary categories (e.g. ‘physics’ or ‘social psychology’) in favour of epistemologically relevant categories such as ‘high-h’ fields, or ‘low-λ’ systems. Other classifications and theories about science may be derived from K theory. An alternative to the rather ill-defined ‘hard–soft’ dimension, for example, could be a continuum between two strategies. At one end of the spectrum, is what we might call a ‘τ-strategy’, which invests more resources in identifying and encoding regularities and laws that allow general explanations and long-term predictions, at the cost of contingent details. At the other end, is an ‘X-strategy’, which invests greater resources in acquiring large amounts of contingent, descriptive information that enables accurate but proximate explanations and predictions. Depending on characteristics of the explananda and the amount of resources available for the storage and processing of information, each scientific field expresses an optimal balance between τ-strategy and X-strategy. 4.3. Foreseeable criticisms and limitations At least five criticisms of this essay may be expected. The first is a philosophical concern with the notion of knowledge, which in this article is defined as information compression by pattern encoding. Critics might argue that this definition does not correspond to the epistemological notion of knowledge as ‘true, justified belief’ [90]. Even Fred Dretske, whose work extensively explored the connection between knowledge and information [10], maintained that ‘false information’ was not genuine information and that knowledge required the latter [91]. The notion of knowledge proposed in this text, however, is only apparently unorthodox. In the K formalism, a true justified belief corresponds to a system for which K > 0. It can be shown that a ‘false, unjustified’ belief is one in which K ≤ 0. 
Therefore, far from contradicting information-theoretic epistemologies, K theory may give quantitative answers to open epistemological questions such as ‘how much information is enough’? The second criticism may be that the ideas proposed in this essay are too simple and general not to have been proposed before. The claim made by this essay, however, is not that every concept in it is new. Rather to the contrary, the claim is that K theory unifies and synthesizes innumerable previous approaches to combining cognition, philosophy and information theory, and it does so in a formulation that, to the best of the author’s knowledge, is entirely new and original. Earlier ideas that have inspired the K function are found, for example, in Brillouin’s book Science and information theory, which discussed the information value of experiments and calculated the information content of a physical law [5]. Brillouin’s analysis, however, did not include factors that are key to the K function, including the standardization on logarithm space, the decline rate of knowledge, the number n[Y] of potential applications of knowledge and the inclusion of the information costs of the theory τ. The description length of theories (or, at least, of statistical models) is a key component of the minimum description length principle, which was first proposed by Rissanen [7 ] and is finding growing applications in problems of statistical inference and computation (e.g. [6,8]). The methods developed by MDL proponents and by algorithmic information theory are entirely compatible with the K function (and could be used to quantify τ) but differ from it in important theoretical and mathematical aspects (§2.2.2). Within philosophy, Paul Thagard’s Computational philosophy of science [11] offers numerous insights into the nature of scientific theories and methodologies. Thagard’s ideas may be relevant to K theory because, among other things, they illustrate what the τ of a scientific theory might actually contain. However, Thagard’s theory differs from K theory in substantive conceptual and mathematical aspects, and it does not offer a general quantifier of knowledge nor does it produce a meta-scientific methodology. Finally, K theory was developed independently from other recent attempts to give informational accounts of cognitive phenomena, for example, the free-energy principle (e.g. [92]) and the integrated information theory of consciousness (e.g. [93]). Whereas these theories bear little resemblance to that proposed in this essay, they obviously share a common objective with it, and possible connections may be explored in future research. The third criticism might be methodological, because entropy is a difficult quantity to measure. Estimates of entropy based on empirical frequencies can be biased when sample sizes are small, and they can be computationally demanding when data is large and multi-dimensional. Neither of these limitations, however, is critical. With regard to the former problem, as demonstrated in §2.3.6, powerful computational methods to estimate entropy with limited sample size are already available [18]. With regard to the latter problem, we may note that the ‘multi-dimensional’ K[md] used in §3.3 is the most complex measure proposed and yet it is not computationally demanding, because it is derived from computing unidimensional entropies. The ‘cumulative’ K[cum] may also be computationally demanding, as it requires estimating the entropy of mixed distributions. 
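To make the small-sample point concrete, the sketch below contrasts a naive plug-in (maximum-likelihood) entropy estimate with the simple Miller–Madow correction. This is offered only as a minimal illustration of the kind of corrections available, not as the specific method of [18]; the sample size and source distribution are arbitrary.

```r
# Sketch: plug-in entropy estimate vs. the Miller-Madow correction.
# Illustrative only; more sophisticated small-sample estimators exist.
set.seed(1)

plugin_entropy <- function(x) {
  p <- table(x) / length(x)
  -sum(p * log2(p))
}

miller_madow <- function(x) {
  m <- length(unique(x))                        # number of observed categories
  plugin_entropy(x) + (m - 1) / (2 * length(x) * log(2))   # correction in bits
}

true_p  <- rep(1/8, 8)                          # uniform source: H = 3 bits
x_small <- sample(1:8, size = 30, replace = TRUE, prob = true_p)

c(plug_in      = plugin_entropy(x_small),       # biased downwards
  miller_madow = miller_madow(x_small),         # partially corrected
  true_H       = -sum(true_p * log2(true_p)))   # 3 bits
```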
However, analytical approaches to estimate the entropy of mixed distributions and other complex data structures are already available and are likely to be developed further (e.g. [94,95]).

The fourth criticism may regard the empirical validity of the measures proposed. As was emphasized throughout the text, all the practical examples offered were merely illustrative and preliminary, because they generally relied on incomplete data and simplifying assumptions. In particular, it appears to be difficult to quantify exactly the information content of τ, especially for what concerns the description of a methodology. This limitation, however, is often avoidable. In most contexts of interest, it will suffice to estimate τ with some approximation and/or in relative terms. It may be a common objective within studies using K theory, for example, to estimate the divergence between two methodologies. Even if complete information about a methodology is unavailable (if only because it is likely to include 'tacit' components that are by definition hidden), relative differences documented in the methods' description are simple to identify and therefore to quantify by K methods. These relative quantifications could become remarkably accurate and extend across research fields, if they were based on a reliable taxonomy of methods that provided a fixed 'alphabet' $T$ of methodological choices characterizing scientific studies. Taxonomies for research methods are already being developed in many fields to improve reporting standards (e.g. [96]) and could be extended by meta-scientists for meta-research purposes.

The fifth criticism that may be levelled at K theory is that it is naively reductionist, because it appears to overlook the preponderant role that historical, economic, sociological and psychological conditions play in shaping scientific practices. Quite to the contrary, K theory is not proposed as an alternative to historical and social analyses of science, but as a useful complement to them, which is necessary to fully understand the history and sociology of a research field. A parallel may be drawn with evolutionary biology: to explain why a particular species evolved a certain phenotype or to forecast its risk of extinction, we need to combine contingent facts about the species' natural history with general theories about fitness dynamics; similarly, to better understand and forecast the trajectory taken by a field we need to combine contingent and historical information with general principles about knowledge dynamics.

4.4. Testable predictions and conclusion

We can summarize the overall prediction of K theory in a generalized rule: An activity will exhibit the epistemological, historical, sociological and psychological properties associated with a science if and to the extent that

$$K_{corr} = K - \frac{h_u}{h_b}\,B > 0,$$

in which $K_{corr}$ is the knowledge, corrected for biases, and $h_u/h_b$ and B are, respectively, the costs and impacts of biases internal or external to the system. If biases are absent or not easily separable from the system, and indicating with K the overall knowledge yield of the activity, the rule simplifies to

$$K(Y; X, \tau) > 0.$$

This overall prediction finds specific expression in the relations reported in table 1, each of which leads to predictions of observable phenomena in the history and sociology of science. These predictions include the following:

— Scientific theories and fields fail or thrive in proportion to their rate of consilience, measured at all levels—from the micro ($K_{cum}$) to the macro ($K_{md}$, and see inequality (3.11)).
For example, we predict that discredited theories, such as that of phlogiston or phrenology, were characterized by a K that was steadily declining and were abandoned when K ≤ 0. Conversely, fields and theories that grow in size and importance are predicted to exhibit a positive growth rate of K. When the rate of growth of K slows down and/or when it reaches a plateau, K is 're-set' to zero by the splitting into sub-fields and/or the expansion to new explananda or explanantia.

— The expected reproducibility of published results is less than 100% for most if not all fields, and is inversely related to the average informational divergence, of explanandum and/or explanans, between the original study and its replications. In some instances, the divergence of methods might reflect the differential presence of bias. However, the prediction is independent of the presence of bias.

— The value of null and contradictory findings is smaller than or equal to that of 'positive' findings, and is directly related to the level of a field's theoretical and methodological codification ($|T|$) and explanatory power (k). This value may be reflected, for example, in the rate of citations to null results, their rate of publication and the space such results are given in articles with multiple findings.

— In functional sciences, the prevalence of questionable, problematic and openly egregious research practices is inversely related to their knowledge cost. Therefore, their prevalence will vary depending on details of the practice (e.g. how it is defined) as well as the level of codification and explanatory power of the field.

— The relative prestige and influence of a field is directly related to its K (scaled and/or not scaled). All else being equal, activities that can account for greater explananda with smaller explanantia are granted a higher status, reflected in symbolic and/or material investments (e.g. societal recognition and/or public research funds).

— The relative popularity and influence of a pseudoscience is inversely related to its K. An activity that yields (or pretends to yield) knowledge will acquire relatively more prestige to the extent that it promises to explain a wider range of phenomena using methods that appear to be highly codified and very complex.

The testability of these predictions is limited by the need to keep 'all else equal'. As discussed above, there is no denying that contingent and idiosyncratic factors shape the observable phenomena of science to a significant, possibly preponderant extent. Indeed, if empirical studies using K theory cumulate, we may eventually be able to apply K theory to itself, and it may turn out that the empirical K of the theory is relatively small and that, to any extent that external confounding effects are not accounted for, the τ of the theory is large, leading to low falsifiability. The testability of K theory, however, extends beyond the cases examined in this essay. On the one hand, within meta-science, more contextualized analyses about a field or a theory will lead to more specific and localized predictions. These localized predictions will be more accurately testable, because most irrelevant factors will be controlled for more easily. On the other hand, and most importantly, the theory can in principle apply to phenomena outside the contexts of science. The focus of this article has been quantitative scientific research, mainly because this is the subject matter that inspired the theory and that represents the manifestation of knowledge that is easier to conceptualize and quantify.
However, the theory and methods proposed in this essay could be adapted to measure qualitative research and other forms of knowledge. Indeed, with further development, the K function could be used to quantify any expression of cognition and learning, including humour, art, biological evolution or artificial intelligence (see appendix A), generating new explanations and predictions that may be explored in future analyses. This research does not involve the use of animal or human subject, nor the handling of sensitive information. No ethical approval and no permission to carry out fieldwork was required. Data accessibility The R code and datasets used to generate all analyses and figures are included as electronic supplementary material. Any other empirical dataset used in the analyses was obtained from publications and repositories that are publicly accessible and indicated in the text. Competing interests I declare I have no competing interests. I received no funding for this study. Marco del Giudice gave helpful comments about the analysis of gender differences in personality. Appendix A A.1. Postulates underlying K theory A.1.1. Postulate 1: information is finite The first postulate appears to reflect a simple but easily overlooked fact of nature. The universe—at least, the portion of it that we can see and have causal connection to—contains finite amounts of matter and energy, and therefore cannot contain infinite amounts of information. If each quantum state represents a bit, and each transition between (orthogonal) states represents an operation, then the universe has performed circa 10^120 operations on 10^90 bits since the Big Bang [97]. Advances in quantum information theory suggest that our universe may have access to unlimited amounts of information, or at least of information processing capabilities [98] (but see [99] for a critique). However, even if this were the case, there would still be little doubt that information is finite as it pertains to knowledge attainable by organisms. Sensory organs, brains, genomes and all other pattern-encoding structures that underlie learning are finite. The sense of vision is constructed from a limited number of cone and rod cells; the sense of hearing uses information from a limited number of hair cells, each of which responds to a narrow band of acoustic frequencies; brains contain a limited number of connections; genomes a countable number of bases, etc. The finitude of all biological structures is one of the considerations that has led cognitive scientists and biologists to assume information is finite when attempting, for example, to model the evolution of animal cognitive abilities [100]. Even mathematicians have been looking with suspicion to the notion of infinity for a long time [101]. For example, it has been repeatedly and independently shown that, if rational numbers were actually infinite, then infinite information could be stored in them and this would lead to insurmountable contradictions [102]. Independent of physical, biological and mathematical considerations, the postulate that information is finite is justifiable on instrumentalist grounds, because it is the most realistic assumption to make when analysing scientific knowledge. Quantitative empirical knowledge is based on measurements, which are technically defined as partitionings of attributes in sets of mutually exclusive categories [103]. In principle, this partitioning could recur an infinite number of times, but in practice it never does. 
Measurement scales used by researchers to quantify empirical phenomena might be idealized as extending to infinity, but in practice they always consist in a range of plausible values that is delimited at one or both ends. Values beyond these ends can be imagined as constituting a single set of extreme values that may occur with very small but finite probability. Therefore, following either theoretical or instrumentalist arguments, we are compelled to postulate that information, i.e. the source of knowledge, is a finite quantity. Its fundamental unit of measurement is discrete and is called the bit, i.e. the ‘difference that makes a difference’, according to Gregory Bateson’s famous definition [104]. For this difference to make any difference, it must be perceivable. Hence, information presupposes the capacity to dichotomize signals into ‘same’ and ‘not same’. This dichotomization can occur recursively and we can picture the process by which information is generated as a progressive subdivision (quantization) of a unidimensional attribute. This quantization operates ‘from the inside out’, so to speak, and by necessity always entails two ‘open ends’ of finite probability. A.1.2. Postulate 2: knowledge is information compression The second postulate claims that the essence of any manifestation of what we call ‘knowledge’ consists in the encoding of a pattern, which reduces the amount of information required to navigate the world successfully. By ‘pattern’ we intend here simply a dependency between attributes—in other words, a relationship that makes one event more or less likely, from the point of view of an organism, depending on another event. By encoding patterns, an organism reduces the uncertainty it confronts about its environment—in other words, it adapts. Therefore, postulate 2, just like postulate 1, is likely to reflect an elementary fact of nature; a fact that arguably underlies not just human knowledge but all manifestations of life. The idea that knowledge, or at least scientific knowledge, is information compression is far from new. For example, in the late 1800s, physicist and philosopher Ernst Mach argued that the value of physical laws lay in the ‘economy of thought’ that they permitted [3]. Other prominent scientists and philosophers of the time, such as mathematician Henri Poincaré, expressed similar ideas [55]. Following the development of information theory, scientific knowledge and other cognitive activities have been examined in quantitative terms (e.g. [5,105]). Nonetheless, the equivalence between scientific knowledge and information compression has been presented as a principle of secondary importance by later philosophers (including for example Popper [41]), and today does not appear to occupy the foundational role that it arguably deserves [106]. The reluctance to equate science with information compression might be partially explained by two common misconceptions. The first one is an apparent conflation of lossless compression, which allows data to be reconstructed exactly, with lossy compression, in which instead information from the original source is partially lost. Some proponents of the compression hypothesis adopt exclusively a lossless compression model, and therefore debate whether empirical data are truly compressible in this sense (e.g. [107]). However, science is clearly a lossy form of compression: the laws and relations that scientists discover typically include error terms and tolerate large portions of unexplained variance. 
The second, and most important, source of scepticism seems to lie in an insufficient appreciation for the fundamental role that information compression plays not only in science, but also knowledge and all other manifestations of biological adaptation. Even scientists who equate information compression with learning appear to under-estimate the fundamental role that pattern-encoding and information compression play in all manifestations of life. In their seminal introductory text to Kolmogorov complexity [6], for example, Li and Vitányi unhesitatingly claim that ‘science may be regarded as the art of data compression’ [6, p. 713], that ‘learning, in general, appears to involve compression of observed data or the results of experiments’, and that ‘in everyday life, we continuously compress information that is presented to us by the environment’, but then appear cautious and conservative in extending this principle to non-human species, by merely suggesting that ‘perhaps animals do this as well’, and citing results of studies on tactile information transmission in ants [6, p. 711]. It seems that even the most prominent experts and proponents of information compression methodologies can be disinclined to apply their favoured principle beyond the realm of human cognition and animal behaviour. This essay takes instead the view that information compression by pattern encoding is the quintessence of biological adaptation, in all of its manifestations. Changes in a population’s genetic frequencies in response to environmental pressures can be seen as a form of adaptive learning, in which natural selection reinforces a certain phenotypic response to a certain environment and weakens other responses, thereby allowing a population’s genetic codes to ‘remember’ fruitful responses and ‘forget’ erroneous (i.e. non-adaptive) ones. For these reinforcement processes to occur at all, environmental conditions must be heterogeneous and yet partially predictable. Natural selection, in other words, allows regularities in the environment to be genetically encoded. This process gives rise to biodiversity that may mirror environmental heterogeneity at multiple levels (populations, varieties, species, etc.). Such environmental heterogeneity is not exclusively spatial (geographical). Temporal heterogeneity in environmental conditions gives rise to various forms of phenotypic plasticity, in which identical genomes express different phenotypes depending on cues and signals received from the environment [108]. Whether genetic or phenotypic, adaptation will be measurable as a correlation between possible environmental conditions and alternative genotypes or phenotypes. This correlation is in itself a measurable pattern. As environments are increasingly shaped by biological processes, they become more complex and heterogeneous, and they therefore select for ever more efficient adaptive capabilities—ever more rapid and accurate ways to detect and process environmental cues and signals. Immune systems, for example, allow large multicellular plants and animals to protect themselves from infective agents and other biological threats whose rate of change far out-competes their own speed of genetic adaptation; endocrine systems allow the various parts of an organism to communicate or coordinate their internal activities in order to respond more rapidly to changes in external conditions. Similar selective pressures have favoured organisms with nervous systems of increasing size and complexity. 
Animal behaviour and cognition, in other words, are simply higher-order manifestations of phenotypic plasticity, which allow an organism to respond to environmental challenges on shorter temporal scales. Behavioural responses may be hard-wired in a genome or acquired during an organism’s lifetime, but in either case they entail ‘learning’ in the more conventional sense of encoding, processing and storing memories of patterns and regularities abstracted from environmental cues and signals. Human cognition, therefore, may be best understood as just another manifestation of biological adaptation by pattern encoding. At the core of human cognition, as with all other forms of biological adaptation, lies the ability to anticipate events and thus minimize error. When we say that we ‘know’ something, we are claiming that we have fewer uncertainties about it because, given an input, we can predict above chance what will come next. We ‘know a city’, for example, in proportion to how well we are able to find our way around it, by going purposely from one street to the next and/or navigating it by means of a simplified representation of it (i.e. a mental map). This ability embodies the kind of information we may communicate to a stranger when asked for directions: if we ‘know the place’, we can provide them with a series of ‘if-then’ statements about what direction to take once identifiable points are reached. In another example, we ‘know a song’ in proportion to how accurately we can reproduce its specific sequence of words and intonations with no error or hesitation, or in proportion to how readily we can recognize it when we hear a few notes from it. Similarly, we ‘know a person’ in proportion to how many patterns about them we have encoded: at first, we might only be able to recognize their facial features; after making superficial acquaintance with them, we will be able to connect these features to their name; when we know them better, we can tell how they will respond to simple questions such as ‘where are you from?’; eventually we might ‘know them well’ enough to predict their behaviour rather accurately and foretell, for example, the conditions that will make them feel happy, interested, angry, etc. The examples above aim to illustrate how the concept of ‘prediction’ underlies all forms of knowledge, not just scientific knowledge, and applies to both time (e.g. knowing a song) and space (e.g. knowing a city). Memory and recognition, too, can be qualified as forms of prediction and therefore as manifestations of information compression, whereby sequences of sensory impressions are encoded and recalled (i.e. memorized) or matched to new experiences (i.e. recognized) in response to endogenous or exogenous signals. Language is also a pattern-encoding, information compression tool. A typical sentence, which constitutes the fundamental structure of human language and thought, expresses the connection between one entity, the subject, and another entity or property, via a relation condition encoded in a verb. It is not a coincidence that the most elementary verb of all—one that is fundamental to all human languages—is the verb ‘to be’. This verb conveys a direct relation between two entities, and thus represents the simplest pattern that can be encoded: ‘same’ versus ‘not same’, as discussed in relation to Postulate 1. Even a seemingly abstract process like logical deduction and inference can be understood as resulting from pattern encoding. 
According to some analyses, computing itself and all other manifestations of artificial and biological intelligence may result from a simple process of pattern matching [109]. Scientific knowledge, therefore, is most naturally characterized as just one manifestation of human cognition among many and, therefore, as nothing more than a pattern-encoding activity that reduces uncertainty about one phenomenon by relating it to information about other phenomena. The knowledge produced by all fields of scientific research is structured in this way:

— Mathematical theorems uncover logical connections between two seemingly unrelated theoretical constructs, proving that the two are one and the same.

— Research in the physical sciences typically aims at uncovering mathematical laws, which are rather explicitly encoding patterns (i.e. relationships between quantities). Even when purely descriptive, however, physical research actually consists in the encoding of patterns and relations between phenomena—for example, measuring the atomic weight of a known substance might appear to be a purely descriptive activity, but the substance itself is identified by its reactive properties. Therefore, such research is about drawing connections between properties.

— Most biological and biomedical research consists in identifying correlations or causes and/or in describing properties of natural phenomena, all of which are pattern-encoding activities. Research in taxonomy and systematics might appear to be an exception, but it is not: organizing the traits of a multitude of species into a succinct taxonomical tree is the most elementary form of data compression.

— Quantitative social and behavioural sciences operate in a similar manner to the biological sciences. Even qualitative, ethnographic, purely descriptive social and historical research consists in data compression, because it presupposes that there are general facts about human experiences, individuals, or groups that can be communicated, entailing that they can be described, connected to each other and/or summarized in a finite amount of text.

— The humanities aim to improve our understanding of complex and often unique human experiences, and might therefore appear to have fundamentally different objectives from the natural and social sciences. To any extent that they offer knowledge and understanding, however, these come in the form of information compression. Research in History, for example, is guided by the reconstruction and succinct description of events, which is based on logic, inference and drawing connections to other events, and therefore it follows the principles of economy of thought and compression. The study of literary works, to take another example, produces knowledge by drawing connections and similarities between texts, identifying general schemata and/or uncovering new meaning in texts by recurring to similes and metaphors [110]. Similarities, connections, schemata, similes and metaphors arguably constitute the basis of human cognition [110] and are all manifestations of information compression by pattern encoding.

Other non-academic manifestations of human cognition, creativity and communication can be understood as stemming from a process of information compression, too.
The sensual and intellectual pleasure that humans gain from music and art, for example, seems to derive from an optimal balance between perception of structure (pattern that generates predictions and expectations) and perception of novelty (which stimulates interest by presenting new and knowable information) [111]. The sense of humour similarly seems to arise from the sudden and unexpected overturning of the predicted pattern, which occurs when an initially plausible explanation of a condition is suddenly replaced by an alternative, unusual and yet equally valid one [112]. The intellectual and artistic value of a work of art lies in its ability to reveal previously unnoticed connections between events or phenomena in the world (thereby revealing a pattern) and/or in its capacity to synthesize and communicate effectively what are otherwise highly individual, complex and ineffable human experiences—thereby lossy-compressing and transmitting the experience.

Appendix B

B.1. Relation with continuous distribution

Indicating with f(x) a probability density function and with $h(X) = -\int f(x)\log f(x)\,\mathrm{d}x$ the corresponding differential entropy, we have

$$H(X^{\Delta}) \approx h(X) + \log\frac{1}{\Delta} = h(X) + n, \qquad (B\ 1)$$

in which $\Delta = 2^{-n}$ is the length of the bin in which f(x) is quantized, and n corresponds to the number of bits required to describe the function to accuracy Δ. Evidently, we can always rescale X in order to have Δ = 1. Equation (B 1) applies to any probability density function. Here we will consider in particular the case of the normal distribution, the differential entropy of which is simply $h(y) = \log(\sqrt{2\pi e}\,\sigma_y)$. Therefore, if y is a continuous RV, quantized to n bits, for a given x and τ we have

$$K(y;x,\tau) = \frac{\log(\sqrt{2\pi e}\,\sigma_y)+n-\log(\sqrt{2\pi e}\,\sigma_{y|x,\tau})-n}{\log(\sqrt{2\pi e}\,\sigma_y)+n+x+\tau} = \frac{\log\sigma_y-\log\sigma_{y|x,\tau}}{\log\sigma_y+x+\tau+\log\sqrt{2\pi e}+n} \to \frac{\log\sigma'_y-\log\sigma'_{y|x,\tau}}{\log\sigma'_y+x+\tau+\log\sqrt{2\pi e}} = \frac{\log\sigma'_y-\log\sigma'_{y|x,\tau}}{\log\sigma'_y+x+\tau+C}, \qquad (B\ 2)$$

in which σ′ corresponds to σ rescaled to a common lowest significant digit (e.g. from σ = 0.123 to σ′ = 123).

Appendix C

$$K_{adj} \equiv h\!\left(\frac{H(Y)-\sum p(y,x|\tau)\log\frac{1}{p(y|x,\hat{\tau})}}{H(Y)}\right) = h\!\left(\frac{H(Y)-\sum p(y,x|\tau)\log\frac{1}{p(y|x,\tau)}-\sum p(y,x|\tau)\log\frac{p(y|x,\tau)}{p(y|x,\hat{\tau})}}{H(Y)}\right) = K(Y;X,\tau)-\frac{D(Y|X,\tau\,\|\,Y|X,\hat{\tau})\,h}{H(Y)} \equiv K_{obs}-\frac{D(Y|X,\tau\,\|\,Y|X,\hat{\tau})\,h}{H(Y)}. \qquad (C\ 1)$$

Appendix D

Firstly note that, independently of the size of the vectors $\boldsymbol{\lambda}$ and $\boldsymbol{d}$ in equation (2.29), their inner product yields a number. Therefore, for the purposes of our discussion we can assume λ and d to be single numbers. Equation (2.29) claims that there exists a $\lambda \in \mathbb{R}$ such that

$$\lambda = \frac{1}{d}\log_A\frac{K(Y;X,\tau)}{K(Y';X',\tau')}, \qquad (D\ 1)$$

in which d > 0 expresses the divergence between systems, and A is an arbitrary base. This statement is self-evidently true, as long as K(Y′; X′, τ′) ≠ 0 and K(Y; X, τ) ≠ 0 or, equivalently, if we allow λ to be approximately infinite in the case that K goes to zero in one step d = 1. However, two rather useful conclusions can be derived about this equation:

(i) Under most conditions, K is a non-increasing function of divergence. That is, K(Y′; X′, τ′) ≤ K(Y; X, τ) and therefore λ ≥ 0.

(ii) The larger the divergence, the larger the decline of K, such that under typical conditions we have $K(Y_{d+1}; X, \tau) = K(Y_d; X, \tau)A^{-\lambda} = K(Y; X, \tau)A^{-\lambda(d+1)}$ for distances in the explanandum, and similarly for distances in the explanans.

We will review each argument separately.

D.1. Statement (i)

From equation (D 1), if λd ≥ 0, and regardless of the base A chosen for the logarithm, we have

$$\log\frac{H(Y')+H(X')-\log p(\tau')}{H(Y)+H(X)-\log p(\tau)} \ge \log\frac{H(Y')-H(Y'|X',\tau')}{H(Y)-H(Y|X,\tau)} \equiv \log\frac{I(Y';X',\tau')}{I(Y;X,\tau)}, \qquad (D\ 2)$$

in which $I(Y;X,\tau) = H(Y) - H(Y|X,\tau)$ is the mutual information function.
Claiming that the explanandum $Y_d$ is at a divergence d from Y implies that not all information about $Y_d$ may be contained in Y. This condition is typically described mathematically as a Markov chain (MC). An MC is said to be formed by random variables (RVs) X, Y, Z in that order, and is indicated as X → Y → Z, when the distribution of Z is conditionally independent of X. In other words, the best predictor of Z is Y, and if Y is known, X adds nothing. In entropy terms, this entails that H(Z|Y, X) = H(Z|Y), and it formalizes our intuition that information transmitted along a noisy channel tends to be lost. Markov chains are used to model a variety of systems in the physical, biological and social sciences. An isolated physical system, for example, would be represented as an MC, in which the transition probabilities from one state of the system to the next are determined by the laws of physics. In the K formalism, the laws of physics would be encoded in a τ, whereas a Markov chain may consist in the input X and subsequent states of Y, i.e. $X \to Y \to Y_d \to Y_{d+1}\ldots$. Other representations are possible. For example, if no input is present, then the MC would consist in $Y \to Y_d \to Y_{d+1}\ldots$ or, if the state of both input and explanandum is allowed to change, then the MC is $(X, Y) \to (X_d, Y_d) \to (X_{d+1}, Y_{d+1})\ldots$.

Regardless of how it is formalized in K, a system describable by an MC is subject to a central result of information theory, the data processing inequality (DPI), which states that the mutual information between explanandum and explanans will be non-increasing. We will repeat here the proof of the DPI assuming a constant τ and a Markov chain $X \to Y \to Y_d$. We consider the mutual information between the input and two states of the explanandum, and note that it can be expressed in two different ways:

$$I(Y, Y_d; X) = I(Y; X) + I(Y_d; X|Y) = I(Y_d; X) + I(Y; X|Y_d). \qquad (D\ 3)$$

Since, by Markovity, $I(Y_d; X|Y) = 0$, and remembering that the mutual information is always non-negative, we re-arrange and conclude that

$$I(Y_d; X) = I(Y; X) - I(Y; X|Y_d) \le I(Y; X), \qquad (D\ 4)$$

which proves the DPI. Applying this result to inequality (D 2), we obtain

$$\log\frac{H(Y_d)+H(X)-\log p(\tau)}{H(Y)+H(X)-\log p(\tau)} \ge \log\frac{I(Y_d;X,\tau)}{I(Y;X,\tau)} \le 0. \qquad (D\ 5)$$

Therefore, inequality (D 2) is always satisfied when $H(Y_d) \ge H(Y)$ (which makes the left-hand side of the inequality larger than or equal to 0). In other words, K will always be non-increasing, as long as the entropy in the explanandum is stable or increasing. A stable or increasing entropy is the most probable condition of physical phenomena.

Although a less likely occurrence, it may be the case that the entropy of the explanandum actually declines with divergence, in which case inequality (D 1) may or may not be satisfied. To examine this case, let $H(Y_d) < H(Y) = H(Y_d) + d_Y$, with $d_Y > 0$ quantifying the divergence. And, similarly, let $H(Y|X, \tau) = H(Y_d|X, \tau) + d_{Y|X}$. Then inequality (D 1) can be arranged as

$$\frac{H(Y_d)+d_Y+H(X)-\log p(\tau)}{H(Y_d)+H(X)-\log p(\tau)} \le \frac{H(Y_d)+d_Y-H(Y_d|X,\tau)-d_{Y|X}}{H(Y_d)-H(Y_d|X,\tau)}, \qquad (D\ 6)$$

which with a few re-arrangements leads to the condition

$$K(Y_d;X,\tau) \le \frac{d_Y - d_{Y|X}}{d_Y}, \qquad (D\ 7)$$

which is not guaranteed to be true, but can in principle always be met. This follows because, by definition, either $d_{Y|X} > 0$ and $d_{Y|X} \le d_Y$ (otherwise we would have $I(Y_d;X,\tau) > I(Y;X,\tau)$, contradicting the DPI), or $d_{Y|X} < 0$, and again $d_Y - d_{Y|X} \ge 0$, because $d_Y > 0$. Therefore, the right-hand side is always non-negative, so it could in principle be larger than the value on the left-hand side. However, if $d_{Y|X} = d_Y$, then the inequality is certainly false because in that case $K(Y_d;X,\tau) > 0$.
Therefore, we conclude that K may increase with divergence when the information in (the uncertainty, complexity of) the explanandum decreases, which, however, is a less likely occurrence.

For the case of a theory/methodology $\tau' = \tau_d$ at a divergence d from another τ, the argument is only slightly different. Crucial, in this case, is the assumption that the divergence d represents a random deviation from τ, i.e. one that is independent of τ itself and is not determined by the value of $K(Y; X, \tau_d)$. This assumption is equivalent to that made for a Markov chain, in which the τ is subjected to a level of noise proportional to d. However, the effects on K require a different analysis.

Firstly, note that the two components may have the same description length, $\log p(\tau_d) = \log p(\tau)$, or not. In the former case, τ and τ′ differ solely in some of the symbols that compose them—in other words, they encode the same number and types of choices, but differ in some of the specific choices made. In the latter case, the distance d quantifies the information that is missing—in other words, the choices encoded in τ that are not specified in $\tau_d$—and $-\log p(\tau) = -\log p(\tau_d) + d$.

Starting with the case that $\tau_d$ is not shorter than τ, the consequences of a divergence d can be understood by defining a set $T_d : \{\tau_1, \tau_2 \ldots \tau_d\}$ of all possible (components of) theories of description length $-\log p(\tau)$ that are at an information distance d from the 'original' theory/methodology τ. To avoid confusion, we will henceforth indicate the latter with τ*. Now, let $T_d$ be the uniform RV corresponding to this set, and let $K_d : \{K(Y;X,\tau_i) : \tau_i \in T_d\}$ be the set of K values corresponding to each instantiation of $T_d$. Clearly, $K_d$ has one maximum, except for the special case in which $K(Y;X,\tau_i) = K(Y;X,\tau_j)\ \forall\ \tau_i, \tau_j \in T_d$, and all K have exactly the same value irrespective of the theory. If the latter were the case, then $\tau_i$ would be a redundant element of the theory/methodology, in other words an unnecessary specification. However, such redundancies should not be a common occurrence, if τ is fixed to maximize K. Therefore, excluding the improbable case in which $\tau_i$ is redundant, $K_d$ has a maximum. If τ* is the theory corresponding to the maximum value K(Y; X, τ*) in $K_d$, then for all the remaining $\tau_i \ne \tau^*$, $0 \le K(Y; X, \tau_i) < K(Y; X, \tau^*)$ and therefore $K(Y; X, \tau_d) < K(Y; X, \tau^*)$ or equivalently $H(Y) - H(Y|X, \tau_d) < H(Y) - H(Y|X, \tau^*)$, which satisfies inequality (D 1).

Lastly, if τ* and $\tau_d$ are both elements drawn at random from $T_d$ (in other words, neither was fixed because of its resulting value of K), then their respective effects will both correspond, on the average, to the expected value of the set:

$$H(Y)-H(Y|X,\tau_d) = H(Y)-H(Y|X,\tau^*) = H(Y)-\sum_{\tau_i\in T_d}\Pr\{T_d=\tau_i\}\,H(Y|X,T_d=\tau_i), \qquad (D\ 8)$$

which, on the average, would meet condition (D 1) as it entails equality (no decline in K). In practice, the difference in K between two randomly chosen τ* and $\tau_d$ would be randomly distributed around the value of zero. The case of τ* and $\tau_d$ being random elements, however, is again generally implausible and unrealistic. In the most probable scenario, a τ was selected because it optimized the value of K in specific conditions. If those conditions remain and the τ is altered, then the default assumption must be that the corresponding K will be lower. This assumption of random differences is a rarely questioned standard in statistical modelling.
In meta-analysis, for example, between-study heterogeneity is assumed to be random and normally distributed, which translates into assuming that the variance of effects produced by methodologically heterogeneous studies is symmetrically distributed around a true underlying effect [79]. However, examined from the perspective of how methods are developed to produce knowledge, a random distribution of between-study differences does not appear to be the most likely, indeed the most realistic, The logic above can be extended to the case in which the two τ components do not have the same description length. In particular, let τ[d] represent a theory/methodology of shorter description length, −logp(τ*) = −logp(τ[d]) + d, and let T[d] be an RV with alphabet $Td : {τ1,τ2…τd}$ representing the set of all possible theories that have distance d from τ*. Then inequality (D 1) can be re-arranged as $H(Y)+H(X)−log⁡p(τd)+dH(Y)+H(X)−log⁡p(τd)≤H(Y)−H(Y|X,Td=τ∗)H(Y)−∑τi∈Td Pr{Td=τi}H(Y|X,Td=τi),$D 9 which leads to the condition $d≤E[H(Y|X,Td)]−H(Y|X,Td=τ∗)K(Y;X,Td)$D 10 in which $E[H(Y|X,Td)]=∑τi∈Td Pr{Td=τi}H(Y|X,Td=τi)$ is the expected value of the residual entropy across every possible specification of the . Since > 0, the inequality will not be satisfied if )] ≤ *), i.e. * yields a larger residual entropy than the average element in . However, as argued above, this is the least likely scenario, as it would presuppose that the original and longer theory/methodology * had not been selected because it generated a relatively large D.2. Statement (ii) With regard to divergences in the explanandum, the statement follows from the recursive validity of the DPI. The statement entails that $λ =log⁡H(Yd+1)+H(X)−log⁡p(τ)H(Yd+H(X)−log⁡p(τ)−log⁡H(Yd+1)−H(Yd+1|X,τ)H(Yd)−H(Yd|X,τ) =log⁡H(Yd+1)+H(X)−log⁡p(τ)H(Yd+H(X)−log⁡p(τ)−log⁡H(Yd)−H(Yd|X,τ)−(H(Yd|Yd+1)−H(Yd|Yd+1,X,τ))H(Yd)−H(Yd|X,τ) = log⁡H(Yd+1)+H(X)−log⁡p(τ)H(Yd+H(X)−log⁡p(τ)−log⁡(1−I(Yd;X,τ|Yd+1)I(Yd;X,τ)).$D 11 Therefore, λ is a constant as long as the proportional loss of mutual information and/or the increase in entropy of Y is constant. As before, whereas there may be peculiar circumstances in which this is not the case, in general a proportional change follows from assuming that the loss is due to genuine noise. Indeed, exponential curves describe how a Markov chain reaches a steady state [113]. Exponential curves are also used to model the evolution of chaotic systems. A system is said to be chaotic when it is highly sensitive to initial conditions. Since accuracy of measurement of initial states is limited, future states of the system become rapidly unpredictable even when the system is seemingly simple and deterministic. Paradigmatic chaotic systems, such as the three-body problem or the Lorenz weather equations, share the characteristics of being strikingly simple and yet are extremely sensitive to initial conditions, which make their instability particularly notable [114,115]. In standard chaos theory, the rapidity with which a system diverges from the predicted trajectory is measured by an exponential function in the form: $dNd0≈eλN,$D 12 in which is the relative offset after N steps (i.e. recalculations of the state of the system), and is known as Liapunov exponent, a parameter that quantifies sensitivity of the system to initial conditions. Positive Liapunov exponents correspond to a chaotic system, and negative values correspond to stable systems, i.e. systems that are resilient to perturbation. 
There is a clear analogy between Liapunov exponents and in equation ( ), but the two are not equivalent, and the relationship between chaos theory and theory remains to be explored in future research. The argument for a proportionality between the divergence d in τ and the corresponding decline of K is weaker, although rather intuitive. As already argued when formulating the theoretical argument for K, the larger the set $Tb$ of possible theories, the lower the expected value of K in the set, K(Y, X, T[b]), because most of the theories/methodologies in the set are likely to be nonsensical and yield K ≈ 0. Therefore, at least in very general terms, the relation of equation (2.29) holds for divergences in τ as well. The argument in this case is weaker because the relation between the divergence of τ, d[τ] and K(Y; X, τ[d]) is likely to be complex and idiosyncratic. For any given d, multiple different τ[d] are possible. For example, if one binary choice in τ is missing from τ[d], then d = 1 but the values of K(Y; X, τ[d]) can vary greatly, from being approximately identical to K to being approximately zero, depending on what element of the methodology is missing. Mathematically, this fact can be expressed by allowing different values of λ for any given distance. These values may be specific to a system and may need to be estimated on a case-by-case basis. Therefore, to allow practical applications, the relationship between K and d[τ] is best modelled as the inner product of two vectors, e.g. $λλ⋅dd=dYλY+dτ1λτ1+dτ2λτ2+⋯+dτlλτ1$, in which $λλ= λτ1+λτ2+⋯+λτ1$ contains empirically derived estimates of the impact that distances of specific elements of the theory/methodology have on K. Extending this model to divergences in explanandum and input leads to the general formulation of equation (2.29). Appendix E Let X^α be an RV quantized to resolution (i.e. bin size, or accuracy) α, and let $a∈N$ be the size of the alphabet of X, such that α = 1/a. At no cost to generality, let an increase of resolution consist in the progressive sub-partitioning of α, such that α′ = α/q with $q∈N$, q ≥ 2. Then $0<H(Xα′)−H(Xα)≤log⁡(q).$E 1 If $H(Xα)=−∑1a p(x)log⁡p(x)$, with x representing any one of the a partitions, then $H(Xα′)=−∑1a×q p(x′)log⁡p(x′)=−∑1a∑1q p(a)p(q|a)log⁡[p(a)p(q|a)]≡H(A)+H(Q|A)$, where Q and A are the random variables resulting from the partitions. Known properties of entropy tell us that the entropy produced by the q-partition of α is smaller or equal to the logarithm of the number q of partitions with equality if and only if the q-partitions of α have all the same probability, i.e. H(Q|A) ≤ logq. □ E.1. Definition: maximal resolution Let X^α be a generic quantized random variable with resolution α, and let α′ = α/q represent a higher resolution. The measurement error of X^α is a quantity $e>0, e∈Q$ such that: $H(Xα′)−H(Xα)=log⁡(q),∀α≤e$E 2 E.2. Definition: empirical system A system is said to be empirical if the quantization of explanandum and input has a maximal resolution. Equivalently, a non-empirical, (i.e. logico-deductive) system is a system for which e = 0. The effect that a change in resolution has on K depends on the characteristic of the system, and in particular on the speed with which the entropy of the explanandum and/or explanans increase relative to their joint distribution. 
For every empirical system for which there is a $τ≠∅$ such that K(Y; X, τ) > 0, the system’s quantization $YαY,XαX$ has optimal values of resolution $αy∗$ and $αx∗$ such that: $K(YαY∗;XαX∗,τ)>K(YαY;XαX,τ),∀αY≠αY∗, αX≠αX∗$E 3 If α is the resolution of Y and α′ = α/q is a higher resolution then, assuming for simplicity that τ is constant: $K(Yα′;X,τ)>K(Yα;X,τ) ⟺ H(Yα′)−H(Yα′|X,τ)H(Yα)−H(Yα|X,τ)>H(Yα′)+H(X)+τH(Yα)+H(X)+τ.$E 4 From equation (E 1), we know that H(Y^α′) ≤ H(Y^α) + log (q), assuming equality and re-arranging equation (E 4) we get the condition: $H(Yα′|X,τ)−H(Yα|Xτ)<(1−K(Yα;X,τ))log⁡(q),$E 5 which is only satisfied when K(Y^α; X, τ) is small and H(Y^α′|X, τ) − H(Y^α|X, τ) ≪ log (q). The corresponding condition for X is $H(Y|Xα′,τ)−H(Y|Xα,τ)<−log⁡(q)K(Y;Xα,τ),$E 6 where the left-hand side has a lower bound in −H(Y|X^α, τ), whereas the right-hand side can be arbitrarily negative. Combining equations (E 4) and (E 6) yields the general condition: $K(YαY/qY;XαX/qX,τ)>K(YαY;XαX,τ) ⟺ H(YαY/qY|XαX/qX,τ)−H(YαY|XαX,τ)<(1−K)log⁡qY−Klog⁡qX,$E 7 in which $K≡K(YαY;XαX,τ)$. The left-hand side of equation (E 7) is bounded between $H(YαY)$ and $−H(YαY|XαX,τ)$, whereas the right-hand side is bounded between log q[Y] when K = 0 and −log q[X] when K = 1. The only scenario in which K never ceases to grow with increasing resolution entails e = 0 and thus a non-empirical system (definition (E 2)).□ Appendix F To simplify the notation, we will posit that the explanans is expanded by two positive elements H(X′) and −log p(τ′). $K(YnY;XnX,X′nX,τ,τ′)>K(YnY;XnX,τ) →nYH(Y)−nYH(Y|X,X′,τ,τ′)nYH(Y)+nXH(X)+nXH(X′)−log⁡p(τ)−log⁡p(τ′) >nYH(Y)−nYH(Y|X,τ)nYH(Y)+nXH(X)−log⁡p(τ) →(nYH(Y)+nXH(X)−log⁡p(τ))(nYH(Y)−nYH(Y|X,X′,τ,τ′)) −(nYH (Y)+nXH(X)+nXH(X′)−log⁡p(τ)−log⁡p(τ′)) ×(nYH(Y)−nYH(Y|X,τ))>0 →(nYH(Y)+nXH(X)−log⁡p(τ))(nYH(Y|X,τ)−nYH(Y|X,X′,τ,τ′)) >(nXH(X′)−log⁡p(τ′))(nYH(Y)−nYH(Y|X,τ)) →nYH(Y|X,τ)−nYH(Y|X,X′,τ,τ′)>(nXH(X′) −log⁡p(τ′))K(Y;X,τ) →(nYH(Y)−nYH(Y|X,X′,τ,τ′))−(nYH(Y)−nYH(Y|X,τ)) >(nXH(X′)−log⁡p(τ′))K(Y;X,τ) →k′−k>nXH(X′)−log⁡p(τ′)nYH(Y)kh.$F 1 The same result would be derived if H(X′) ≡ ΔH(X) and −log p(τ′) ≡ −Δlog p(τ) represented any difference in size, positive or negative, between two explanantia. Appendix G $K(Y;X,T) =hH(Y)(H(Y)−H(Y|X,T))=hH(Y)(H(Y)−H(Y,X,T)+H(X,T)) =hH(Y)(H(Y)−H(Y,X,T)+H(X)+H(T))=hH(Y)(H(T)−(H(Y,X,T)−H(Y)−H(X)) =hH(Y)(H(T)−H(T|Y,X))=K(T;Y,X).$G 1 Appendix H Let T be a random variable (RV) of alphabet $T={τ1,τ2…τz}$, probability distribution p(τ) and entropy $H(T)=−∑ip(τi)log⁡p(τi)$. Let T′ be an RV derived from T by removing from its alphabet the element $τ j∈T$ of probability p(τ[j]). Then $H(T′)=11−p(τ j)∑i≠jp(τi)log⁡1p(τi)−log⁡11−p(τ j).$H 1 When $|T|=2$, H(T′) = 0 regardless of the probability distribution of T. Otherwise, the value rapidly approaches H(T) as p(τ[j]) decreases (e.g. as the alphabet of T increases in size). Note that under specific conditions H(T′) > H(T)—for example, if T equals p(τ[j]) = 0.9, p(τ[k]) = 0.05, P(τ[k]) = 0.05. This entails that the uncertainty about a condition might momentarily increase if the most probable case is excluded. However, the effect is circumscribed since, as more elements are removed from the alphabet, H(T′) tends to 0. □ © 2019 The Authors. Published by the Royal Society under the terms of the Creative Commons Attribution License http://creativecommons.org/licenses/by/4.0/, which permits unrestricted use, provided the original author and source are credited.
{"url":"https://royalsocietypublishing.org/doi/full/10.1098/rsos.181055","timestamp":"2024-11-11T18:05:14Z","content_type":"text/html","content_length":"642841","record_id":"<urn:uuid:a5c6c713-4286-41c2-beb4-2e70dfa69571>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00355.warc.gz"}
Cash Game - $40 buy-in/dealer choice cash game starting stacks? I host a monthly game where we play cash for the first 1-2 hours and then we switch to a T10,000 tournament $25 buy-in + 5 dollar bounty. Cash game - 8 players $40 max buy-in cash game where we play dealer choice games. Instead of 25¢ ante we have the dealer put in 1 dollar, the min-bet is 25¢. My current starting stacks are (16 - 25¢), (16 - $1), and (4 - $5). These starting stacks have been working well for us. I think our game plays pretty small compared to others, it really is just social hour while we wait for everybody to show up. We normally have 2-3 rebuys total each night. It is a game among co-workers and friends that has been going on for 10+ years. Here is my question, I am getting a 300 chip set from the design contest here and I am trying to work out the correct breakdown. If I keep my current starting stacks 8 players would need (128 - 25¢), (128 - $1), (32 - $5) for a total of 288 chips, I could then add (12 - $20) chips to round out the 300. I would then add a sample set on top I was wondering if it would be better change starting stacks (12 - 25¢), (17 - $1), (4 - $5). 8 players would need (96 - 25¢), (136 - $1), (32 - $5) for a total of 264 chips, I could then add (16 - $5) chips and (8 - $20) chips and round out the set with 1 12-chip sample set. Would reducing the 25¢ chips from 16 down to 12 make that big of a difference? and is there a better breakdown? Feb 19, 2014 Reaction score My cash game is similar in stakes. I think 12 quarters per player is fine. I use just 1 rack of quarters, no problems. We use lots of $1's and a few fivers. I don't use any bigger denoms as $20 bills Oct 29, 2014 Reaction score I think 12 x quarters are plenty, but I'd go even racks and change your starting stacks slightly to 12-12-5. The set would be: 100 x quarter 100 x $1 80 x $5 20 x $20 Straight Flush Aug 23, 2013 Reaction score I think 12 x quarters are plenty, but I'd go even racks and change your starting stacks slightly to 12-12-5. The set would be: 100 x quarter 100 x $1 80 x $5 20 x $20 ya, for the stakes you're talking about, and total chip count (300), I like this breakdown. I do similar stakes, but prefer full barrels for simplicity. 20/20/3. Reloads are done with Redbirds. A 300 chip set won't support this though. Nov 6, 2014 Reaction score I think 12 x quarters are plenty, but I'd go even racks and change your starting stacks slightly to 12-12-5. The set would be: 100 x quarter 100 x $1 80 x $5 20 x $20 12/12/5 is what I use for my game, works perfectly well. Rebuys are usually 8/8/x, depending on what they rebuy for, until I run out of quarters and $1s. Thanks for the feedback, I think I will go with 12/12/5, and maybe bump it up to 330 chips 120/120/80/10. This would allow me to handle 10 players if needed and have enough bank for rebuys. I know, I know my racks won't be full. Will my OCD kick in? Will I bump it to 400 chips? Feb 2, 2015 Reaction score I know, I know my racks won't be full. Will my OCD kick in? Will I bump it to 400 chips? Yes. Yes it will, and yes you will. Think of it this way - you'd be paying maybe $70 to get the exact 400 chip set you want (that you created, btw) and put your OCD at ease. Small price for custom Yes. Yes it will, and yes you will. Think of it this way - you'd be paying maybe $70 to get the exact 400 chip set you want (that you created, btw) and put your OCD at ease. Small price for custom chippies! 
Yes, time to explain how a free set of chips cost me $70 plus money spent on sample sets to the wife Oct 29, 2014 Reaction score I do similar stakes, but prefer full barrels for simplicity. 20/20/3. Reloads are done with Redbirds. I do this too actually, and if 400 chips it'd be easier to disperse and let players make change after getting all the quarters and $1's on table. 120 x quarter 120 x $1 140 x $5 20 x $20 Just do 20-20-3 for the first six players then get into the $5's. Feb 7, 2015 Reaction score 12-12-5 is fine for NLHE, but for limit games (and seeing as you're playing Dealer's Choice I'm assuming you also play limit games) I'd recommend more quarters. 16-16-4 is better suited for limit and mixed games. 20-20-3 would be even better, but of course you'll never make that work within a 300 chip set. If you can stretch it to 400 chips, you could get 140-140-120 (so you can fill the barrels with the same chips). This will allow up to 10 rebuys (in $5 chips, there will be plenty of low denoms on the table). Apr 25, 2013 Reaction score In cash games, you don't need to give everybody exactly the same starting stack. I like to distribute full barrels as much as possible, until all of the lowest denoms are on the table. With a $40 buy-in and a 100/100/80/20 bank, the first five players get 20 quarters, 20 $1's, and 3 $5's. When the quarters and ones are all on the table, buy-ins and rebuys are $5's. When they run out, rebuys are $20's. Players make change at the table. The primary advantage is easy distribution of chips. ymmv. Thanks for all the advice, I really like the idea of starting stacks of 20/20/3. It is so simple, why didn't I think of that. Right now, my plan is 120/120/70/10 for a total of 320. Our normal group is 6-8 players with 4 or 5 rebuys. This breakdown will handle this group just fine. If we get a larger group, I also have two other cash sets. I have my current custom woody chips (500 custom labelled cheap casino 12g chips) and a new set of 750 CPS chips. Not sure the cheap 12g chips will ever see the felt again, they aren't bad for what they are. Just not in the same league of the other Nov 6, 2014 Reaction score My preference of starting stacks are 12 x .25 17 x 1 8 x 5 Aug 24, 2013 Reaction score In cash games, you don't need to give everybody exactly the same starting stack. I like to distribute full barrels as much as possible, until all of the lowest denoms are on the table. Nov 6, 2014 Reaction score My preference of starting stacks are 12 x .25 17 x 1 8 x 5 My first 8 get this starting stack. Then I'm about out of quarters. I pile on the ones and fives
{"url":"https://www.pokerchipforum.com/threads/40-buy-in-dealer-choice-cash-game-starting-stacks.5233/","timestamp":"2024-11-06T07:51:14Z","content_type":"text/html","content_length":"152911","record_id":"<urn:uuid:0f0765ae-1ae5-4b73-9d91-257b8bf66f8e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00668.warc.gz"}
Is our dark energy future cosmic horizon the Wheeler-Feynman total absorber?

First, a correction: "The escaping member of the photon pair, which has torn apart by the strong gravitational tidal forces near the event horizon" should be changed to "The escaping member of the photon pair, which has been torn apart by the strong gravitational tidal forces near the event horizon."

So, now I need to understand Ibison's argument in detail as part of the question of how to apply the thermodynamics to our future cosmological horizon. Ray Chiao writes: "“Black holes” are “black,” in the sense that they are perfect absorbers of every kind of particle, including photons at all frequencies [1]. Once particles have passed through the event horizon of a BH, they can never get out again, at a classical level of description. For in order for a particle to be able to escape from the black hole, it would need somehow to acquire an escape velocity which effectively exceeds the speed of light at the event horizon."

My basic idea, which I tried to explain to Roger Penrose at Castiglioncello in 2008, is simply to apply the above idea for the black hole to our future cosmological horizon. Therefore, trivially we have the Wheeler-Feynman total absorber final boundary condition in our accelerating universe that is heading for the de Sitter solution. We are not de Sitter in the past - an important Arrow of Time asymmetry there. The only Hawking radiation we can see back-from-the-future is Wheeler-Feynman advanced thermal radiation that may well be the dark energy. I also tried to explain this to Bernard Carr at King's College London - I think he got it, but Penrose did not because it contradicts his current idea of cyclic big bangs. We are outside a black hole horizon, but inside our future horizon, which is also observer dependent.

I don't yet understand Ibison's "There is no conflict however if electromagnetic interactions on the advanced cone are principally negative rather than positive energy interactions. If indeed they were, then the emission of positive energy radiation on the retarded cone of a local source can be re-interpreted as an increment in the magnitude of negative binding energy propagating (in forwards time) along the (here, necessarily) advanced cone of that source. No future sinks or sources are then required. The predominance of retarded radiation as commonly understood then follows from the asymmetry of advanced Greens functions which are the consequence of the boundary condition associated with a future time-like"

Ibison has this idea of a "time mirror" to replace the Wheeler-Feynman total absorber future boundary condition. My original idea was simply this:
1) No thermodynamic difference between a black hole horizon and our future cosmological horizon - other differences of course.
2) In both cases the static LNIF near the horizon sees infinite blue shift/Unruh temperature.
3) i.e. for a black hole, g(r) = (rs/r^2)(1 - 2rs/r)^(-1/2), with the horizon at r = 2rs.
4) For the cosmological future horizon we are at r = 0 and g(r) = c^2 Λ^(1/2)(1 - Λr^2)^(-1/2), with the horizon at r = Λ^(-1/2).
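To see the claimed divergence numerically, one can evaluate the two g(r) expressions above as r approaches the respective horizon. A rough sketch in units with c = 1 and illustrative values rs = Λ = 1 (my choices, not from the post):

import numpy as np

eps = np.array([1e-1, 1e-2, 1e-4, 1e-6])       # fractional distance from the horizon

# Black-hole case in the post's notation (horizon at r = 2*rs), approached from outside:
rs = 1.0
r = 2 * rs * (1 + eps)
print((rs / r**2) / np.sqrt(1 - 2 * rs / r))   # grows without bound as eps -> 0

# de Sitter future-horizon case (horizon at r = Lambda**-0.5, observer at r = 0), from inside:
lam = 1.0
r = lam**-0.5 * (1 - eps)
print(lam**0.5 / np.sqrt(1 - lam * r**2))      # also diverges as eps -> 0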
{"url":"https://stardrive.org/index.php/all-blog-articles/3039-is-our-dark-energy-future-cosmic-horizon-the-wheeler-feynman-total-absorber","timestamp":"2024-11-11T23:31:23Z","content_type":"text/html","content_length":"18620","record_id":"<urn:uuid:aa66b713-a905-49c5-8f72-4ef57dbe62e3>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00075.warc.gz"}
Associative Property in Math: How It Works and Why It Matters Mathematics can sometimes feel a bit abstract, but when we break it down into simple ideas and real-life connections, it becomes a lot easier to understand. One of the fundamental properties that helps make complex calculations simpler is the associative property. In this blog post, we’ll explore what the associative property is, how it works with both addition and multiplication, and how it helps us solve math problems more efficiently. What is the Associative Property? The associative property is a rule in mathematics that states you can change the grouping of numbers when adding or multiplying without changing the result. The key to understanding the associative property is recognizing that grouping does not affect the final answer. The word “associative” comes from the idea of “associating” or grouping numbers together in different ways. Let’s look at an example to understand this better. When adding three numbers, like 2, 3, and 4, you can either group them as (2 + 3) + 4 or as 2 + (3 + 4). Either way, the result will be the same: • (2 + 3) + 4 = 5 + 4 = 9 • 2 + (3 + 4) = 2 + 7 = 9 The outcome is always 9, no matter how you group the numbers. This is the essence of the associative property of addition. Similarly, the associative property applies to multiplication. For instance, if you multiply 2, 3, and 4, you can group them differently: • (2 × 3) × 4 = 6 × 4 = 24 • 2 × (3 × 4) = 2 × 12 = 24 Again, the result remains the same regardless of how the numbers are grouped. This is known as the associative property of multiplication. Associative Property of Addition The associative property of addition makes it easier to add multiple numbers by allowing us to regroup them in a way that simplifies the calculation. This property is particularly useful when dealing with mental math. For example, when adding 5 + 8 + 12, it might be easier to first add 8 and 12 to get 20, and then add 5 to get 25. By regrouping the numbers to simplify the calculation, we make the process faster and less error-prone. Associative Property of Multiplication The associative property of multiplication also helps us perform calculations more efficiently by allowing us to change the grouping of numbers. For instance, when multiplying 2 × 7 × 5, it might be simpler to first calculate (2 × 5) = 10, and then multiply by 7 to get 70. This property is especially helpful when working with larger numbers or when breaking down calculations into smaller, more manageable steps. For example, if you are trying to find the product of 7 × 25 × 4, it might be easier to first calculate 25 × 4, which is 100, and then multiply by 7 to get 700. Does the Associative Property Apply to Subtraction and Division? It is important to note that the associative property only applies to addition and multiplication. This property does not apply to subtraction or division. Changing the grouping of numbers when subtracting or dividing can lead to different results, which means these operations are not associative. For example: • (10 – 5) – 2 = 5 – 2 = 3, but 10 – (5 – 2) = 10 – 3 = 7. The results are different, so subtraction is not associative. • (12 ÷ 3) ÷ 2 = 4 ÷ 2 = 2, but 12 ÷ (3 ÷ 2) = 12 ÷ 1.5 = 8. Again, the results are different, so division is not associative. FAQ about the Associative Property Q: What is the associative property in simple terms? A: The associative property allows you to change the grouping of numbers when adding or multiplying, without changing the result. 
Q: Does the associative property work with subtraction or division? A: No, the associative property only works with addition and multiplication. It does not apply to subtraction or division. Q: Why is the associative property important? A: The associative property is important because it allows flexibility in calculations, making it easier to solve problems, especially in mental math and complex equations. Q: Can the associative property help in solving real-life problems? A: Yes, the associative property can help simplify calculations in everyday situations, such as grouping items or calculating totals. Q: How is the associative property different from the commutative property? A: The associative property is about changing the grouping of numbers, while the commutative property is about changing the order of numbers. Both properties apply to addition and multiplication. By understanding and using the associative property, you can make math more manageable and less intimidating. It’s all about finding simpler ways to solve problems, which makes learning math a more enjoyable experience. Interested in taking your child’s math skills to the next level? Sign up for a FREE trial class with Spark Math by Spark Education today or try our FREE Online Math Assessment for a detailed report on your child’s math skills! Spark Math is the flagship math course under Spark Education, offering small group classes taught by experienced and engaging real-life teachers. Our program is designed to ignite your child’s passion for learning math, providing a rich array of math resources and an immersive learning experience. Come and see how Spark Math can make a difference in your child’s
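Returning to the grouping examples discussed above, here is a short illustrative Python snippet that reproduces the same numbers:

a, b, c = 2, 3, 4
print((a + b) + c == a + (b + c))    # True  -> addition is associative
print((a * b) * c == a * (b * c))    # True  -> multiplication is associative

print((10 - 5) - 2, 10 - (5 - 2))    # 3 7     -> subtraction is not associative
print((12 / 3) / 2, 12 / (3 / 2))    # 2.0 8.0 -> division is not associative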
{"url":"https://blog.sparkedu.com/blog/2024/10/17/associative-property-in-math-how-it-works-and-why-it-matters/","timestamp":"2024-11-02T10:52:22Z","content_type":"text/html","content_length":"96062","record_id":"<urn:uuid:79834d25-3cca-47f6-84ba-bf794ba0c8d8>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00487.warc.gz"}
sum() function in Python
Kodeclik Blog

Python's sum() function
Python's sum function is a very useful function to work with iterables. Recall that iterables are data structures in Python that are capable of returning their elements one by one so that you can use them in a for loop, for instance. Example iterables are lists, dictionaries, sets, and tuples.

Using the Python sum() function
Consider the following simple piece of code:

numbers = [1,3,5,7,9]
print(sum(numbers))

As shown above, numbers is a list containing the first five odd natural numbers. The sum function does what you think it does, namely add up these numbers and produce the output:

25

Note that you can use other iterables in place of a list. Let us adapt the above code to work with lists, sets, and tuples:

numbers_list = [1,3,5,7,9]
numbers_set = {1,3,5,7,9}
numbers_tuple = (1,3,5,7,9)
print(sum(numbers_list))
print(sum(numbers_set))
print(sum(numbers_tuple))

The output is, as expected:

25
25
25

Let us also try it with a dictionary. In the below dictionary, keys are the indices and the values are the numbers that we had before, i.e., the first five odd positive integers.

numbers_dict = {1: 1,2: 3,3: 5,4: 7,5: 9}
print(sum(numbers_dict))

15

Whoa - what happened? Why did we get 15 instead of 25? This is because by default the sum() function applied to a dictionary operates on its keys, not its values. If you wish the sum() function to operate on the values, you must do:

numbers_dict = {1: 1,2: 3,3: 5,4: 7,5: 9}
print(sum(numbers_dict.values()))

This produces the output we are looking for:

25

Using the Python sum() function with an argument
In each of the above pieces of code, you can imagine the sum function taking an optional second argument which is the starting value for the sum. This value is considered to be zero by default when unspecified. You can make this explicit by:

numbers_list = [1,3,5,7,9]
numbers_set = {1,3,5,7,9}
numbers_tuple = (1,3,5,7,9)
numbers_dict = {1: 1,2: 3,3: 5,4: 7,5: 9}
print(sum(numbers_list, 0))
print(sum(numbers_set, 0))
print(sum(numbers_tuple, 0))
print(sum(numbers_dict, 0))

If you update the starting value from 0 to some other number, each of these outputs increases by exactly that amount.

Note that the sum() function will not work if your input contains, for instance, strings, e.g.:

numbers_list = [1,3,5,'hello',7,9]
print(sum(numbers_list))

Traceback (most recent call last):
File "main.py", line 2, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

In this blogpost, we have learnt about Python sum(), a very useful function to reduce a given iterable into a single number. How will you make use of it? Interested in more things Python? See our blogpost on Python's enumerate() capability. Also if you like Python+math content, see our blogpost on Magic Squares. Finally, master the Python print() function.
{"url":"https://www.kodeclik.com/python-sum-function/","timestamp":"2024-11-02T03:06:14Z","content_type":"text/html","content_length":"107827","record_id":"<urn:uuid:5174b752-2253-4910-a972-3284df6cad24>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00825.warc.gz"}
Nnneveryday mathematics grade 4 reference book pdf Billy and ant are back, this time in billy and ant lie, book 4 of the popular billy growing up series. Lesson multiplying fractions by whole numbers 7 12a. Everyday mathematics 4th edition, grade 4, spanish student reference book this hardcover resource contains explanations of key mathematical content, with directions to the everyday mathematics games. Everyday low prices and free delivery on eligible orders. Anyone can use this book globally, although the curriculum may differ slightly from one region to the other. Practice books, grades k5 the math learning center. Everyday mathematics grade 4 student reference book. Everyday mathematics student reference book grade 6 youtube. Everyday mathematics 4th edition, grade 4, spanish student. Everyday math 4 comprehensive student materials set with home links, redbird and arrive math booster, 7years, grade 4. When printing the pdf files for the three math sessions, be sure to set the page scaling dropdown. Mar 19, 20 everyday mathematics, grade 4 student math journal, volume 1 9780076045822 max bell, andy isaacs, john bretzlauf, james mcbride, amy dillard, isbn. Choose from 500 different sets of everyday math unit 3 vocabulary flashcards on quizlet. Learn vocabulary, terms, and more with flashcards, games, and other study tools. Everyday mathematics 4, grade 4, student reference book 9780021436972. Buy everyday mathematics, grade 5, student math journal 2 3rd ed. Everyday mathematics, grade 4 student math journal. Martha ruttle bridges practice books single copy b4pb pack of 10 b4pb10 for pricing or to order please call 1 800 5758. This grade 4 mathematics course was originally implemented in 2008. National instructional materials accessibility center books are available only to students with an iep individual education plan. Everyday mathematics, grade 5, student reference book by. Everyday mathematics, grade 5, student math journal 2. In this case, the student is working in unit 5, lesson 4. Everyday mathematics, teachers reference manual, grades 4. Everyday mathematics 4, grade 4, student reference book. September 2005, all mathematics programs for grades 1 to 8 will be based on. Student reference book for everyday mathematics, grade 3 by bell, et al u. Mathematics grade 4 a curriculum guide 2014 government of. Everyday mathematics, grade 5, student reference book book. A mathematics reference sheet, which students may use for all sessions, is located on. Bridges in mathematics grade 4 practice book blacklines. Student reference book volume 8 of everyday mathematics. Everyday mathematics 4, grade 4, comprehensive student material set, 1 year. Everyday math 4 comprehensive student materials set with arrive math booster, 5years, grade 4. Everyday mathematics is divided into units, which are divided into lessons. School mathematics project, isbn 1570399921, 9781570399923. Billy and ant lie is the fourth book in the billy growing up series, aimed at elementary children to help them understand emotions and teach them important lessons in behavior. Everyday mathematics student reference book grade 5 march. Lesson multiplying fractions by whole numbers 7 12a use the number lines to help you solve the problems. Everyday mathematics student reference book grade 5 by max bell, march 30, 2007, wright group mcgrawhill edition, hardcover in english student edition. This is so because the core content of mathematics is the same around the world. 
Practice books, grades k5 bridges practice books provide activities and. This website is created solely for jee aspirants to download pdf, ebooks, study materials for free. K to 12 learning modulematerial in mathematics for grade 4 quarter 1 to. If you think the materials are useful kindly buy these legally from publishers. The unit number is the first number you see in the icon, and the lesson number is the second number. In the upperleft corner of the home link, you should see an icon like this. This book is intended to be used by children ages 5 to 6. Bridges in mathematics grade 4 practice book blacklines the math learning center, po box 12929, salem, oregon 97309. Learn everyday math unit 3 vocabulary with free interactive flashcards. Everyday mathematics, teachers reference manual, grades 46 common core edition 9780076577217. Student reference book for everyday mathematics, grade 3. If you put this book on a group reading list, students without ieps will not be able to open it.
{"url":"https://sualibourva.web.app/793.html","timestamp":"2024-11-12T12:30:53Z","content_type":"text/html","content_length":"9725","record_id":"<urn:uuid:21dd9370-1025-4a8f-9d01-b46778112876>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00337.warc.gz"}
How do you find the 200th term?
[Embedded video: "Finding the 200th term in an Arithmetic Sequence" (YouTube). The method shown: substitute n = 200 into the nth-term formula a_n = a_1 + (n - 1)d, so with a common difference of 5 you multiply 199 by the common difference and add the result to the first term.]

How do you find the term?
[Embedded video: "Find a Term in a Sequence (Arithmetic)" (YouTube).]

What is the 200th?
200th - the ordinal number of two hundred in counting order; two-hundredth. Ordinal - being or denoting a numerical order in a series; "ordinal numbers"; "held an ordinal rank of seventh". Based on WordNet 3.0, Farlex clipart collection.

What is the number of the 1000th term in the series?
Just count terms using triangular numbers, the partial sums of the series 1 + 2 + 3 + ... . The gap between the previous and the current triangular number is the number of times that number appears in the sequence. Hence the 1000th term is 45.

What is the 1000th term, given that 44 ends at term 990?
Since the number 44 ends at the 990th term, the next value is 45 and it is repeated 45 times, so the 1000th term is ten places after the 990th term. Hence the 1000th term is 45.

What is the 1000th term of the sequence 1, 2, 2, 3, 3, 3, ...?
It's 45. The third term (1 + 2) is 2. The sixth term (1 + 2 + 3) is 3. So what is needed is to sum the first n natural numbers to get a total closest to 1000. That means the 990th term is 44. The next 45 terms after that will be 45. In other words, the 1000th term will be 45, as well as the 35 terms after it.

How do you find the nth term of such a sequence?
If we want a formula for the "nth" term, start by looking at it the other way around: in which term does the number "n" first appear? Taking the first term to be the "0th" term, "1" appears in term 0, "2" appears in term 1, "3" appears in term 1+2=3, "4" appears in term 1+2+3=6, etc.
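Putting the two ideas above into code, here is a small Python sketch. The arithmetic-sequence inputs a_1 = 5 and d = 5 are just sample values (the clip's exact example is not preserved); the second function finds the 1000th term of 1, 2, 2, 3, 3, 3, ... by counting triangular numbers:

# nth term of an arithmetic sequence: a_n = a_1 + (n - 1) * d
def nth_term(a1, d, n):
    return a1 + (n - 1) * d

print(nth_term(5, 5, 200))            # 5 + 199*5 = 1000 (sample inputs)

# Value at position k in the sequence 1, 2, 2, 3, 3, 3, 4, 4, 4, 4, ...
# The number n occupies positions up to the triangular number n*(n+1)/2.
def repeated_sequence_term(k):
    n = 1
    while n * (n + 1) // 2 < k:
        n += 1
    return n

print(repeated_sequence_term(1000))   # 45, since 44*45/2 = 990 < 1000 <= 45*46/2 = 1035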
{"url":"https://profoundadvices.com/how-do-you-find-the-200th-term/","timestamp":"2024-11-03T03:15:40Z","content_type":"text/html","content_length":"52646","record_id":"<urn:uuid:ffa03b16-a3f6-4013-be7c-e51ccbf572f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00697.warc.gz"}
Hexagonal chessboards – Green Chess
Hexagonal chessboards
This page contains detailed information about the nature of hexagonal chess boards and the movement of pieces on them. Pawn movement is not described here because that differs among different variants.
See affected games at Hexagonal chess variants, and some others at Three-player chess variants.

About the board
The boards consist of hexagonal fields. Usually everything is one and a half times as many as on the rectangular board: each field has six sides instead of four, there are three field colours instead of two, and there are six natural directions instead of four. Pieces can usually move in one and a half times as many directions as on the rectangular board.

Movement of the pieces
The rooks can move in six natural directions, in straight lines. The bishops can move along diagonal lines. Here diagonal adjacency means two same-coloured fields whose vertices are connected by a short edge. Therefore bishops always move on fields with the same colour. Three bishops are needed to access every field of the board.
The queen combines the power of the rook and the bishop, so it can move in 12 directions.
The king can move not only to the six adjacent fields (in rook directions), but also to the six diagonally adjacent fields (in bishop directions). Therefore the king can move in the same directions as the queen, but only one field.
The knights can move to 12 fields. Their movement can be described as moving one field diagonally and then one field outwards straight. The 12 destination fields lie in a circular shape. Knights, as in chess, always end up on a differently coloured field, but here that can be two different colours. This property makes the knight more powerful than in chess; for example, it can make three consecutive moves and arrive back at its original position (this is called triangulation).

Value of pieces
As pieces move differently on a hexagonal board than on a rectangular one, their relative values also change. The following table shows estimations about average piece values on a hexagonal board and, for comparison, on a rectangular (8x8) board.

Hexagonal:   1, 3, 4, 7, 10
Rectangular: 1, 3, 3, 5, 9

See also: Value of pieces.

Basic mates
On a hexagonal board different basic checkmates work than on a rectangular board (which you can see at: Basic endgames).

Pieces    Result   Notes
(icon)    Win
(icon)    Win
(icon)    Draw
(icon)    Draw
(icon)    Win      Unlike chess, here the two knights are strong enough to force checkmate.
(icon)    Draw     Mate exists but cannot be forced – unlike in chess.
(icon)    Draw     Mate exists but cannot be forced – unlike in chess.
(icon)    Win      Requirement: the three bishops move on three different field colours.
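One convenient way to check the direction counts above (6 rook directions, 6 bishop diagonals, 12 knight destinations) is to model the hexagonal board with cube coordinates (x, y, z) satisfying x + y + z = 0. This is a common representation that the page itself does not use, so treat the specific offsets as an assumption. A small Python sketch:

from itertools import permutations

# Cube coordinates: every field has six edge-adjacent neighbours.
rook_dirs = set(permutations((1, -1, 0)))                                      # 6 straight directions
bishop_dirs = set(permutations((2, -1, -1))) | set(permutations((-2, 1, 1)))   # 6 diagonal directions
knight_moves = set(permutations((1, 2, -3))) | set(permutations((-1, -2, 3)))  # 12 destination offsets

print(len(rook_dirs), len(bishop_dirs), len(knight_moves))   # 6 6 12

# "One field diagonally and then one field outwards straight" reproduces the knight set:
derived = set()
for d in bishop_dirs:
    for r in rook_dirs:
        v = tuple(a + b for a, b in zip(d, r))
        if sorted(abs(c) for c in v) == [1, 2, 3]:
            derived.add(v)
print(derived == knight_moves)   # True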
{"url":"https://greenchess.net/rules.php?type=hex-board","timestamp":"2024-11-13T09:31:59Z","content_type":"text/html","content_length":"23200","record_id":"<urn:uuid:12855e56-98f9-4d25-8603-5b8ee160c6de>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00694.warc.gz"}
Fitting a manifold of large reach to noisy data

Let ℳ ⊂ ℝⁿ be a C²-smooth compact submanifold of dimension d. Assume that the volume of ℳ is at most V and the reach (i.e. the normal injectivity radius) of ℳ is greater than τ. Moreover, let μ be a probability measure on ℳ whose density on ℳ is a strictly positive Lipschitz-smooth function. Let xⱼ ∈ ℳ, j = 1, 2, ⋯, N, be N independent random samples from distribution μ. Also, let ζⱼ, j = 1, 2, ⋯, N, be independent random samples from a Gaussian random variable in ℝⁿ having covariance σ²I, where σ is less than a certain specified function of d, V and τ. We assume that we are given the data points yⱼ = xⱼ + ζⱼ, j = 1, 2, ⋯, N, modeling random points of ℳ with measurement noise. We develop an algorithm which produces from these data, with high probability, a d-dimensional submanifold ℳₒ ⊂ ℝⁿ whose Hausdorff distance to ℳ is less than Δ for Δ > Cdσ²/τ and whose reach is greater than cτ/d⁶, with universal constants C, c > 0. The number N of random samples required depends almost linearly on n, polynomially on Δ⁻¹ and exponentially on d.

All Science Journal Classification (ASJC) codes
• Analysis
• Geometry and Topology
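As a toy illustration of the data model in the abstract (yⱼ = xⱼ + ζⱼ), the sketch below samples a one-dimensional manifold — here a unit circle, which is my choice rather than anything from the paper — embedded in ℝ³ and adds isotropic Gaussian noise:

import numpy as np

rng = np.random.default_rng(0)

n, N, sigma = 3, 2000, 0.05                      # ambient dimension, sample size, noise level
theta = rng.uniform(0, 2 * np.pi, N)
x = np.zeros((N, n))
x[:, 0], x[:, 1] = np.cos(theta), np.sin(theta)  # points x_j on the manifold (unit circle, d = 1)
y = x + sigma * rng.standard_normal((N, n))      # observed points y_j = x_j + zeta_j

# Mean squared distance from y_j to the circle is roughly (n - d) * sigma^2 for small sigma.
dist_sq = (np.linalg.norm(y[:, :2], axis=1) - 1) ** 2 + y[:, 2] ** 2
print(dist_sq.mean(), (n - 1) * sigma**2)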
{"url":"https://collaborate.princeton.edu/en/publications/fitting-a-manifold-of-large-reach-to-noisy-data","timestamp":"2024-11-07T10:43:09Z","content_type":"text/html","content_length":"49510","record_id":"<urn:uuid:8c5e6020-cb59-4b45-b53a-3fdce28df606>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00409.warc.gz"}
Competency Based Questions for Class 10 Maths Chapter 7 Coordinate Geometry - Rankers Study TutorialCompetency Based Questions for Class 10 Maths Chapter 7 Coordinate Geometry Competency Based Questions for Class 10 Maths Chapter 7 Coordinate Geometry Competency Based Questions are new type of questions asked in CBSE Board exam for class 10. Practising the following Competency Based Questions will help the students in facing Board Questions. Hint: Identify x and y coordinate in order to plot points on the graph. Question.1. Sheena was asked to plot a point 10 unit on the left of the origin and other point 4 units directly above the origin. Which of the following are the two points? (a) (10,0) and (0,-4) (b) (-10,0) and (4,0) (c) (10,0) and (0,4) (d) (-10, 0) and (0, 4) Answer. (d) (-10, 0) and (0, 4) Question.2. Three points lie on a vertical line. Which of the following could be those points? (a) (0, 4), (4, 0), (0, 0) (b) (4, 3), (5, 3), (-12, 3) (c) (-8, 7), (-8,-8), (-8, -100) (d) (-8,3), (-8, 8), (8,7) Answer. (c) (-8, 7), (-8,-8), (-8, -100) Hint: Apply and derive distance formula in order to determine the distance between two coordinates on the graph. Question.3. On a graph, two-line segments, AB and CD of equal length are drawn. Which of these could be the coordinates of the points, A, B, C and D? (a) A(-3,4) B(-1,2) and C(3,4) D(1,2) (b) A(-3,-4) B(-1,2) and C(3,4) D(1,2) (c) A(-3,4) B(-1,-2) and C(3,4) D(1,2) (d) A(3,4) B(-1,2) and C(3,4) D(1,2) Answer. (a) A(-3,4) B(-1,2) and C(3,4) D(1,2) Question.4. The distance between two points, M and N, on a graph is given as \sqrt{10^{2}+7^{2}}. The coordinates of point M are (–4. 3). Given that the point N lies in the first quadrant, which of the following is true about the all possible x-coordinates of point N? (a) They are multiple of 2. (b) They are multiples of 3. (c) They are multiples of 5. (d) They are multiples of 6. Answer. (b) They are multiples of 3. Hint: Apply distance formula in order to solve various mathematical and real-life situations graphically. Question.5. On a coordinate grid, the location of a bank is (–4, 8) and the location of a post office is (2, 0). The scale used is 1 unit = 50 m. What is the shortest possible distance between the bank and the post office? (a) 200 m (b) 500 m (c) 700 m (d) 800 m Answer. (b) 500 m Question.6. The graph of a circle with centre O with point R on its circumference is shown. (a) 2 \sqrt{41} Units (b) \sqrt{41} Units (c) 3 \sqrt{17} Units (d) 6 \sqrt{17} Units Answer. (a) 2 \sqrt{41} Units Hint: Apply and derive section formula in order to divide the line segment in a given ratio. Question.7. A point G divides a line segment in the ratio 3:7. The segment starts at the origin and ends at a point K having 20 as its abscissa and 40 as its ordinate. Given that G is closer to the origin than to point K, which of the following are the coordinates of point G? (a) (14, 28) (b) (28, 14) (c) (12, 6) (d) (6, 12) Answer. (d) (6, 12) Question.8. Two poles are to be installed on an elevated road as shown in the diagram. (a) Q (10,9) and R(12,8) (b) Q(10,8) and R (12,11) (c) Q (10,9) and R(12,10) (d) Q(-10, 9) and R(0, 11) Answer. (c) Q (10,9) and R(12,10) Hint: Apply distance and section formula in order to determine the vertices/ diagonals/ mid points of given geometrical shapes. Question.9. Which of the following are the coordinates of the intersection points of the diagonals of the rectangle ABCD with vertices A(0,3), B(3,0), C(1,-2) and D(-2,1)? 
(a) (1.5, 1.5) (b) \left(\frac{1}{2},\frac{1}{2} \right) (c) \left(-\frac{1}{2},-\frac{1}{2} \right) (d) (2, -1) Answer. (b) \left(\frac{1}{2},\frac{1}{2} \right) Question.10. The figure shows a parallelogram with one of its vertices intersecting the y-axis at 3 and another vertex intersecting the x-axis at 2. (a) m = 0.5 + n (b) m = n – 0.5 (c) m = 1.50 + n (d) m = n – 1.50 Answer. (a) m = 0.5 + n Hint: Apply and derive the formula of area of triangle geometrically in order to determine the area of quadrilateral/triangle. Question.11. A triangle is drawn on a graph. Two of the vertices of the triangle intersect the y-axis at -3 and x-axis at 5. The third vertex is at (2, 4). What is the area of the triangle? (a) 16 square units (b) 14.5 square units (c) 8 square units (d) 6.5 square units Answer. (b) 14.5 square units Question.12. Observe the triangles PMN and PQR shown below. (a) 3.5 square units (b) 7 square units (c) 2.5 square units (d) 1 square unit Answer. (a) 3.5 square units
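Several of the answers above can be double-checked with a short Python script using the distance formula, the section formula and the triangle-area formula (questions 5, 7 and 11):

from math import dist   # Euclidean distance, available from Python 3.8

# Q5: bank at (-4, 8), post office at (2, 0); 1 unit = 50 m.
print(dist((-4, 8), (2, 0)) * 50)            # 500.0 metres

# Q7: section formula -- point dividing (0, 0) -> (20, 40) in the ratio 3:7.
m, n = 3, 7
x1, y1, x2, y2 = 0, 0, 20, 40
print(((m * x2 + n * x1) / (m + n), (m * y2 + n * y1) / (m + n)))   # (6.0, 12.0)

# Q11: area of the triangle with vertices (0, -3), (5, 0) and (2, 4).
def triangle_area(p, q, r):
    return abs(p[0] * (q[1] - r[1]) + q[0] * (r[1] - p[1]) + r[0] * (p[1] - q[1])) / 2

print(triangle_area((0, -3), (5, 0), (2, 4)))   # 14.5 square units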
{"url":"https://rstudytutorial.com/cbse-class-10-maths-chapter-7-competency-based-questions/","timestamp":"2024-11-10T21:05:20Z","content_type":"text/html","content_length":"190321","record_id":"<urn:uuid:f6933341-71cb-489f-aaf5-d922b04ea7a6>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00392.warc.gz"}
Sum of n Natural Numbers and Their Cubes
Know About Sum of n Natural Numbers and Their Cubes

Mathematics is full of patterns. An example of such a pattern is the sum of the cubes of n natural numbers. Cubing a number means raising it to the power of three; for example, 3^3 = 27. The cubes of larger natural numbers quickly become very large if we keep going. How can we find the sum of the cubes of n natural numbers using less time and energy? In this guide, you will learn a simple formula that eases this calculation.

Sum of Cubes of First n Natural Numbers
A natural number is a number that starts at 1 and continues indefinitely. Add the cubes of a specific number of natural numbers starting from 1 to find the sum of cubes of the first n natural numbers. To illustrate, the sum of cubes of the first 5 natural numbers can be expressed as 1^3 + 2^3 + 3^3 + 4^3 + 5^3, and the sum of cubes of the first 10 natural numbers as 1^3 + 2^3 + 3^3 + 4^3 + 5^3 + 6^3 + 7^3 + 8^3 + 9^3 + 10^3.

Here are some examples of the sum of cubes of n natural numbers.
The sum of the cubes of the first two natural numbers is 1^3 + 2^3 = 1 + 8 = 9.
The sum of the cubes of the first three natural numbers is 1^3 + 2^3 + 3^3 = 1 + 8 + 27 = 36.
The sum of the cubes of the first four natural numbers is 1^3 + 2^3 + 3^3 + 4^3 = 1 + 8 + 27 + 64 = 100.
The sum of the cubes of more natural numbers is becoming increasingly difficult to calculate as we go along. This is where a formula for the sum of cubes of n natural numbers comes into play.

What is the formula?
Here is the formula for the sum of cubes of n natural numbers: if we have n cubes, 1^3 + 2^3 + 3^3 + 4^3 + ... + n^3, the formula is
Sum (S) = [n(n + 1)/2]^2
where n represents the total number of natural numbers, starting from 1.
Now that the formula for finding the sum of cubes of n natural numbers is clear to you, rather than memorizing it without understanding the logic and reasoning behind it, you should also understand the proof of the sum of cubes of n natural numbers. For more details on the derivation, follow your regular curriculum and textbook chapters.

The sum of cubes formula is also used to factor the sum of two cubes, a^3 + b^3. In solving algebraic expressions of various types, this factoring formula comes in handy, and it can be memorized within minutes. It is also very similar to the difference of cubes formula. Among the most important algebraic identities is the sum of cubes formula. Written as a cube plus a cube, it is:
a^3 + b^3 = (a + b)(a^2 - ab + b^2)

So, this was the post about cubes and their sum. Hope you have found it informative.
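Both identities above are easy to verify with a couple of lines of Python (the values of n, a and b are illustrative only):

n = 10
lhs = sum(i**3 for i in range(1, n + 1))
rhs = (n * (n + 1) // 2) ** 2
print(lhs, rhs, lhs == rhs)      # 3025 3025 True

# Factoring identity: a^3 + b^3 = (a + b)(a^2 - a*b + b^2)
a, b = 7, 5
print(a**3 + b**3, (a + b) * (a**2 - a*b + b**2))   # 468 468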
{"url":"https://booksfy.in/blogs/news/sum-of-n-natural-numbers-and-their-cubes","timestamp":"2024-11-04T14:52:53Z","content_type":"text/html","content_length":"368976","record_id":"<urn:uuid:036c860c-82d0-4708-a231-28ceb506037a>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00232.warc.gz"}
Re: [Numpy-discussion] Question about numpy.arange() 2 May 2010 2 May '10 12:57 a.m. On 1 May 2010 16:36, Gökhan Sever <gokhansever@gmail.com> wrote: Is "b" an expected value? I am suspecting another floating point arithmetic issue. I[1]: a = np.arange(1.6, 1.8, 0.1, dtype='float32') I[2]: a O[2]: array([ 1.60000002, 1.70000005], dtype=float32) I[3]: b = np.arange(1.7, 1.8, 0.1, dtype='float32') I[4]: b O[4]: array([ 1.70000005, 1.79999995], dtype=float32) A bit conflicting with the np.arange docstring: " Values are generated within the half-open interval ``[start, stop)`` (in other words, the interval including `start` but excluding `stop`). " This is a floating-point issue; since 1.79999995 does not actually equal 1.8, it is included. This arises because 0.1, 1.7, and 1.8 cannot be exactly represented in floating-point. A good rule to avoid being annoyed by this is: only use arange for integers. Use linspace if you want floating-point. Anne -- Gökhan _______________________________________________ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
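A minimal script illustrating the advice above: arange can overshoot the stop value when the step is a non-representable float, while linspace asks for a count instead of a step and fixes the endpoints exactly.

import numpy as np

# arange with a float step of 0.1: the stop value 1.8 may sneak in as 1.79999995.
print(np.arange(1.7, 1.8, 0.1, dtype='float32'))

# linspace: 3 evenly spaced values from 1.6 to 1.8, endpoints included by construction.
print(np.linspace(1.6, 1.8, 3, dtype='float32'))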
{"url":"https://mail.python.org/archives/list/numpy-discussion@python.org/message/E2GMUZALK474FUYT7DD32V77QNXY3J2H/","timestamp":"2024-11-02T05:01:13Z","content_type":"text/html","content_length":"13685","record_id":"<urn:uuid:5873da65-f13f-49f9-9e1c-cd01f3ca119e>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00401.warc.gz"}
mathabx – Three series of mathematical symbols Mathabx is a set of 3 mathematical symbols font series: matha, mathb and mathx. They are defined by METAFONT code and should be of reasonable quality (bitmap output). Things change from time to time, so there is no claim of stability (encoding, metrics, design). The package includes Plain TeX and LaTeX support macros. A version of the fonts, in Adobe Type 1 format, is also available. Sources /fonts/mathabx Home page http://www-math.univ-poitiers.fr/~phan/ Licenses The LaTeX Project Public License Maintainer Anthony Phan Contained in TeXLive as mathabx MiKTeX as mathabx Topics MF Font Font symbol maths Download the contents of this package in one zip archive (939.5k). Community Comments Maybe you are interested in the following packages as well. Package Links
{"url":"https://ctan.org/pkg/mathabx","timestamp":"2024-11-09T13:01:23Z","content_type":"text/html","content_length":"16685","record_id":"<urn:uuid:64548016-cd6c-448b-8b40-57f8e53b92d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00689.warc.gz"}
Fast Fourier Transform with Perlin Noise Fast Fourier Transform with Perlin Noise# This example shows how to apply a Fast Fourier Transform (FFT) to a pyvista.ImageData using pyvista.ImageDataFilters.fft() filter. Here, we demonstrate FFT usage by first generating Perlin noise using pyvista.sample_function() to sample pyvista.perlin_noise, and then performing FFT of the sampled noise to show the frequency content of that noise. from __future__ import annotations import numpy as np import pyvista as pv Generate Perlin Noise# Start by generating some Perlin Noise as in Sample Function: Perlin Noise in 2D example. Note that we are generating it in a flat plane and using a frequency of 10 in the x direction and 5 in the y direction. The unit of frequency is 1/pixel. Also note that the dimensions of the image are powers of 2. This is because the FFT is much more efficient for arrays sized as a power of 2. freq = [10, 5, 0] noise = pv.perlin_noise(1, freq, (0, 0, 0)) xdim, ydim = (2**9, 2**9) sampled = pv.sample_function(noise, bounds=(0, 10, 0, 10, 0, 10), dim=(xdim, ydim, 1)) # warp and plot the sampled noise warped_noise = sampled.warp_by_scalar() warped_noise.plot(show_scalar_bar=False, text='Perlin Noise', lighting=False) Perform FFT of Perlin Noise# Next, perform an FFT of the noise and plot the frequency content. For the sake of simplicity, we will only plot the content in the first quadrant. Note the usage of numpy.fft.fftfreq() to get the frequencies. sampled_fft = sampled.fft() freq = np.fft.fftfreq(sampled.dimensions[0], sampled.spacing[0]) max_freq = freq.max() # only show the first quadrant subset = sampled_fft.extract_subset((0, xdim // 2, 0, ydim // 2, 0, 0)) Plot the Frequency Domain# Now, plot the noise in the frequency domain. Note how there is more high frequency content in the x direction and this matches the frequencies given to pyvista.perlin_noise. # scale to make the plot viewable subset['scalars'] = np.abs(subset.active_scalars) warped_subset = subset.warp_by_scalar(factor=0.0001) pl = pv.Plotter(lighting='three lights') pl.add_mesh(warped_subset, cmap='blues', show_scalar_bar=False) axes_ranges=(0, max_freq, 0, max_freq, 0, warped_subset.bounds[-1]), xtitle='X Frequency', ytitle='Y Frequency', pl.add_text('Frequency Domain of the Perlin Noise') Low Pass Filter# Let’s perform a low pass filter on the frequency content and then convert it back into the space (pixel) domain by immediately applying a reverse FFT. When converting back, keep only the real content. The imaginary content has no physical meaning in the physical domain. PyVista will drop the imaginary content, but will warn you of it. As expected, we only see low frequency noise. low_pass = sampled_fft.low_pass(1.0, 1.0, 1.0).rfft() low_pass['scalars'] = np.real(low_pass.active_scalars) warped_low_pass = low_pass.warp_by_scalar() warped_low_pass.plot(show_scalar_bar=False, text='Low Pass of the Perlin Noise', lighting=False) High Pass Filter# This time, let’s perform a high pass filter on the frequency content and then convert it back into the space (pixel) domain by immediately applying a reverse FFT. When converting back, keep only the real content. The imaginary content has no physical meaning in the pixel domain. As expected, we only see the high frequency noise content as the low frequency noise has been attenuated. 
high_pass = sampled_fft.high_pass(1.0, 1.0, 1.0).rfft() high_pass['scalars'] = np.real(high_pass.active_scalars) warped_high_pass = high_pass.warp_by_scalar() warped_high_pass.plot(show_scalar_bar=False, text='High Pass of the Perlin Noise', lighting=False) Sum Low and High Pass# Show that the sum of the low and high passes equals the original noise. grid = pv.ImageData(dimensions=sampled.dimensions, spacing=sampled.spacing) grid['scalars'] = high_pass['scalars'] + low_pass['scalars'] 'Low and High Pass identical to the original:', np.allclose(grid['scalars'], sampled['scalars']), pl = pv.Plotter(shape=(1, 2)) pl.add_mesh(sampled.warp_by_scalar(), show_scalar_bar=False, lighting=False) pl.add_text('Original Dataset') pl.subplot(0, 1) pl.add_mesh(grid.warp_by_scalar(), show_scalar_bar=False, lighting=False) pl.add_text('Sum of the Low and High Passes') Low and High Pass identical to the original: True Animate the variation of the cutoff frequency. def warp_low_pass_noise(cfreq, scalar_ptp=None): """Process the sampled FFT and warp by scalars.""" if scalar_ptp is None: scalar_ptp = np.ptp(sampled['scalars']) output = sampled_fft.low_pass(cfreq, cfreq, cfreq).rfft() # on the left: raw FFT magnitude output['scalars'] = output.active_scalars.real warped_raw = output.warp_by_scalar() # on the right: scale to fixed warped height output_scaled = output.translate((-11, 11, 0), inplace=False) output_scaled['scalars_warp'] = output['scalars'] / np.ptp(output['scalars']) * scalar_ptp warped_scaled = output_scaled.warp_by_scalar('scalars_warp') warped_scaled.active_scalars_name = 'scalars' # push center back to xy plane due to peaks near 0 frequency warped_scaled.translate((0, 0, -warped_scaled.center[-1]), inplace=True) return warped_raw + warped_scaled # Initialize the plotter and plot off-screen to save the animation as a GIF. plotter = pv.Plotter(notebook=False, off_screen=True) plotter.open_gif("low_pass.gif", fps=8) # add the initial mesh init_mesh = warp_low_pass_noise(1e-2) plotter.add_mesh(init_mesh, show_scalar_bar=False, lighting=False, n_colors=128) for freq in np.geomspace(1e-2, 10, 25): mesh = warp_low_pass_noise(freq) plotter.add_mesh(mesh, show_scalar_bar=False, lighting=False, n_colors=128) plotter.add_text(f"Cutoff Frequency: {freq:.2f}", color="black") # write the last frame a few times to "pause" the gif for _ in range(10): The left mesh in the above animation warps based on the raw values of the FFT amplitude. This emphasizes how taking into account more and more frequencies as the animation progresses, we recover a gradually larger proportion of the full noise sample. This is why the mesh starts “flat” and grows larger as the frequency cutoff is increased. In contrast, the right mesh is always warped to the same visible height, irrespective of the cutoff frequency. This highlights how the typical wavelength (size of the features) of the Perlin noise decreases as the frequency cutoff is increased since wavelength and frequency are inversely proportional. Total running time of the script: (0 minutes 47.633 seconds)
{"url":"https://docs.pyvista.org/examples/01-filter/image-fft-perlin-noise","timestamp":"2024-11-06T18:41:30Z","content_type":"text/html","content_length":"67243","record_id":"<urn:uuid:261bd375-9930-4922-a582-10f8ad0cf1c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00873.warc.gz"}
to ellucidate effieciency of ball mill WEBMar 1, 2006 · Choice of the operating parameters for ball milling. Steel balls with a density of 7800 kg/m 3 were used. The total load of balls was calculated by the formal fractional mill volume filled by balls (J), using a bed porosity of fractional filling of voids between the balls (U) can be calculated by U = fc / ; fc is the formal fractional . WhatsApp: +86 18838072829 WEBJul 24, 2023 · An increasing trend of anthropogenic activities such as urbanization and industrialization has resulted in induction and accumulation of various kinds of heavy metals in the environment, which ultimately has disturbed the biogeochemical balance. Therefore, the present study was conducted to probe the efficiency of conocarpus (Conocarpus . WhatsApp: +86 18838072829 WEBApr 8, 2002 · 5. Conclusions. Laboratory batch ball milling of 20×30 mesh quartz feed in water for a slurry concentration range of 20% to 56% solid by volume exhibited an acceleration of specific breakage rate of this size as fines accumulated in the mill. A quantitative measure of this acceleration effect was expressed in terms of the . WhatsApp: +86 18838072829 WEBDec 14, 2015 · BOND BALL MILL GRINDABILITY LABORATORY PROCEDURE. Prepare sample to 6 mesh by stage crushing and screening. Determine Screen Analysis. Determine Bulk Density Lbs/Ft 3. Calculate weight of material charge. Material Charge (gms) = Bulk Density (Lbs/Ft 3) x 700 cc/ Lbs/Ft 3. Material charge = Bulk Wt. (gm/lit.) x 700 . WhatsApp: +86 18838072829 WEBJan 1, 2016 · abrasive and impact wear due to their large. (75 – 100 mm) dia meters. Ball mill balls. experience a greater number of impacts, but at. lower magnitude than SAG mill balls, due t o. the smaller ... WhatsApp: +86 18838072829 WEBStirred mills are primarily used for fine and ultrafine grinding. They dominate these grinding appliions because greater stress intensity can be delivered in stirred mills and they can achieve better energy efficiency than ball mills in fine and ultrafine grinding. Investigations were conducted on whether the greater performance of stirred mills over . WhatsApp: +86 18838072829 WEBThe factors affecting milling efficiency are ball size, type and density, the grinding circuit parameters, mill internals such as the liner profile, etcetera, the mill operating parameters (velocity, percentage of circulating load and pulp density). WhatsApp: +86 18838072829 WEBIf a ball mill uses little or no water during grinding, it is a 'dry' mill. If a ball mill uses water during grinding, it is a 'wet' mill. A typical ball mill will have a drum length that is 1 or times the drum diameter. Ball mills with a drum length to diameter ratio greater than are referred to as tube mills. WhatsApp: +86 18838072829 WEBJul 15, 2013 · The basis for ball mill circuit sizing is still B ond's methodology (Bond, 1962). The Bond ball and rod. mill tests are used to determine specific energy c onsumption (kWh/t) to grind from a ... WhatsApp: +86 18838072829 WEBAll Ball mill or tube mill calculation, Critical speed, Ball Size calculations, Separator efficiency, Mill power cnsumption calculation, production at blain. Optimization; ... Critical Speed (nc) Mill Speed (n) Degree of Filling (%DF) Maximum ball size (MBS) Arm of gravity (a) Net Power Consumption (Pn) Gross Power Consumption (Pg) Go To ... WhatsApp: +86 18838072829 WEB10. Which of the following is the capacity of a roll crusher? a) 1 to 50 T/hr. b) 3 to 120 T/hr. c) 4 to 120 T/hr. d) 5 to 100 T/hr. View Answer. 
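The Bond grindability procedure quoted above fixes the test charge as the mass of 700 cm³ of packed feed. As a rough illustration of that arithmetic (not part of the source; the 1.65 g/cm³ bulk density below is an assumed placeholder, not a value from the excerpt):

# Hedged sketch of the Bond grindability material-charge calculation quoted above:
# the test charge is the mass of 700 cm^3 of packed feed, i.e. bulk density x 700.
def bond_material_charge_g(bulk_density_g_per_cm3: float, volume_cm3: float = 700.0) -> float:
    """Mass of the 700 cm^3 Bond test charge for a given packed bulk density."""
    return bulk_density_g_per_cm3 * volume_cm3

# Example with an assumed bulk density of 1.65 g/cm^3 (placeholder value).
print(bond_material_charge_g(1.65))  # -> 1155.0 g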
Sanfoundry Global Eduion Learning Series – Mechanical Operations. To practice all areas of Mechanical Operations, here is complete set of 1000+ Multiple Choice Questions and Answers. WhatsApp: +86 18838072829 WEBSep 29, 2018 · The article presents the results of laboratoryscale research on the determination of the impact of ball mill parameters and the feed directed to grinding on its effectiveness and comparing it with the efficiency of grinding in a rod mill. The research was carried out for grinding copper ore processed in O/ZWR KGHM PM WhatsApp: +86 18838072829 WEBYou've already forked sbm 0 Code Issues Pull Requests Packages Projects Releases Wiki Activity WhatsApp: +86 18838072829 WEBThis set of Mechanical Operations Multiple Choice Questions Answers (MCQs) focuses on "Ball Mill". 1. What is the average particle size of ultrafine grinders? a) 1 to 20 µm. b) 4 to 10 µm. c) 5 to 200 µm. WhatsApp: +86 18838072829 WEBJun 10, 2011 · The combination of Eqs. (2), (8), (9) allows one to describe or predict the effect of ball size on the selection function. An example of this is shown in Fig. general trend shows that for a given diameter of media, the milling rate increases with particle size, reaches a maximum at the effective particle size x m, and then decreases . WhatsApp: +86 18838072829 WEBNov 1, 2015 · Abstract. Ball size distribution is commonly used to optimise and control the quality of the mill product. A simulation model combining milling circuit and ball size distribution was used to determine the best makeup ball charge. The objective function was to find the ball mix that guarantees maximum production of the floatable size range . WhatsApp: +86 18838072829 WEBSep 1, 2018 · The article presents the results of laboratoryscale research on the determination of the impact of ball mill parameters and the feed directed to grinding on its effectiveness and comparing it with the efficiency of grinding in a rod mill. The research was carried out for grinding copper ore processed in O/ZWR KGHM PM WhatsApp: +86 18838072829 WEBOct 9, 2021 · There is no doubt about the practical interest of Fred Bond's methodology in the field of comminution, not only in tumbling mills design and operation but also in mineral raw materials grindability characterization. Increasing energy efficiency in comminution operations globally is considered a significant challenge involving several Sustainable . WhatsApp: +86 18838072829 WEBJan 31, 2024 · Ceramic ball milling has demonstrated remarkable energysaving efficiency in industrial appliions. However, there is a pressing need to enhance the grinding efficiency for coarse particles. This paper introduces a novel method of combining media primarily using ceramic balls supplemented with an appropriate proportion of steel balls. . WhatsApp: +86 18838072829 WEBMar 1, 2006 · For this purpose, the energy efficiency factor defined by the production of 3500 cm 2 /g surface area per unit of specific grinding energy was quantified under different conditions in a laboratory batch ball mill. WhatsApp: +86 18838072829 WEBNov 1, 2002 · In terms of this concept, the energy efficiency of the tumbling mill is as low as 1%, or less. For example, Lowrison (1974) reported that for a ball mill, the theoretical energy for size reduction (the free energy of the new surface produced during grinding) is % of the total energy supplied to the mill setup. WhatsApp: +86 18838072829 WEBOct 19, 2018 · The formula given above should be used for calculation of SCI. 
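Several of the snippets above list critical speed (Nc) among the basic ball mill calculations. As a hedged sketch of how that quantity is usually computed, the following uses the standard balance of gravity against centrifugal force for a ball at the shell, which gives Nc ≈ 42.3/√D rpm for D in metres; the 3 m diameter and the 70%-of-critical operating fraction are assumptions for illustration, not values taken from the excerpts.

# Hedged sketch of the critical-speed calculation referenced in the mill
# calculation list above. Nc is the speed at which balls just centrifuge
# against the shell: omega_c = sqrt(2g/D), i.e. Nc = 42.3 / sqrt(D) rpm.
import math

def critical_speed_rpm(diameter_m: float) -> float:
    """Critical rotational speed of a tumbling mill of inside diameter D (metres)."""
    g = 9.81                                   # m/s^2
    omega = math.sqrt(2.0 * g / diameter_m)    # rad/s at which balls centrifuge
    return omega * 60.0 / (2.0 * math.pi)      # convert to revolutions per minute

d = 3.0                                        # assumed mill diameter, metres
nc = critical_speed_rpm(d)
print(f"Critical speed: {nc:.1f} rpm; typical operating speed ~{0.70 * nc:.1f} rpm")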
The results of the calculations are shown below. SCI (under normal load) = kgkg. SCI (under tight load) = kgkg. The calculations clearly illustrate that the grinding balls mill load is twice efficient under the tight loading compared to the normal load. WhatsApp: +86 18838072829 WEBJan 26, 2024 · a Schematic illustration of a ball mill process using triboelectric Comparison between piezoelectric and contactelectrifiion (CE) effects, which suggests that alyzing reactions WhatsApp: +86 18838072829 WEBDOI: / Corpus ID: ; Effect of ball size and powder loading on the milling efficiency of a laboratoryscale wet ball mill article{Shin2013EffectOB, title={Effect of ball size and powder loading on the milling efficiency of a laboratoryscale wet ball mill}, author={Hyunho Shin and Sangwook Lee . WhatsApp: +86 18838072829 WEBOct 12, 2016 · The simplest grinding circuit consists of a ball or rod mill in closed circuit with a classifier; the flow sheet is shown in Fig. 25 and the actual layout in Fig. 9. ... On account of the greater efficiency of the bowl classifier the trend of practice is towards its installation in plants grinding as coarse as 65 mesh. WhatsApp: +86 18838072829 WEBCement grinding with our highly efficient ball mill. An inefficient ball mill is a major expense and could even cost you product quality. The best ball mills enable you to achieve the desired fineness quickly and efficiently, with minimum energy expenditure and low maintenance. With more than 4000 references worldwide, the FLSmidth ball mill is ... WhatsApp: +86 18838072829 WEBHere are ten ways to improve the grinding efficiency of ball mill. 1. Change the original grindability. The complexity of grindability is determined by ore hardness, toughness, dissociation and structural defects. Small grindability, the ore is easier to grind, the wear of lining plate and steel ball is lower, and the energy consumption is also ... WhatsApp: +86 18838072829 WEBJul 1, 2017 · The grinding process in ball mills is notoriously known to be highly inefficient: only 1 to 2% of the inputted electrical energy serves for creating new surfaces. There is therefore obvious room for improvement, even considering that the dominant impact mechanism in tumbling mills is a fundamental liability limiting the efficiency. WhatsApp: +86 18838072829 WEBOct 1, 2020 · Fig. 1 a shows the oscillatory ball mill (Retsch® MM400) used in this study and a scheme (Fig. 1 b) representing one of its two 50 mL milling jars. Each jar is initially filled with a mass M of raw material and a single 25 mmdiameter steel ball. The jars vibrate horizontally at a frequency chosen between 3 and 30 Hz. The motion of the jar follows a . WhatsApp: +86 18838072829 WEB1. The document discusses formulas for calculating key performance metrics of ball mills, including power consumption, production rate, and gypsum set point. 2. It provides definitions of symbols used in the formulas along with examples of values for some metrics, like mill power of 899 kW and a production rate of 110 tph. 3. Formulas are given for . WhatsApp: +86 18838072829 WEBApr 7, 2018 · 884/463 = x – meters ( feet) Therefore, use one meter ( foot) diameter inside shell meter ( foot) diameter inside new liners by meter ( foot) long overflow ball mill with a 40 percent by volume ball charge. For rubber liners add 10% or meters (approximately 2 feet) to the length. 
WhatsApp: +86 18838072829 WEBAug 4, 2023 · High throughput: SAG mills are capable of processing large amounts of ore, making them ideal for operations that require high production can handle both coarse and fine grinding, resulting in improved overall efficiency. Energy savings: Compared to traditional ball mills, SAG mills consume less energy, leading to . WhatsApp: +86 18838072829 WEBA ball mill is a type of grinder used to grind and blend materials, and the ball milling method can be applied in mineral dressing, paints, ceramics etc. The ball milling owns the strengths of simple raw materials and high efficiency, and it . WhatsApp: +86 18838072829 WEBIF YOU WORK IN A CEMENT PLANT AND YOU NEED COURSES AND MANUALS LIKE THIS MANUAL AND BOOKS AND EXCEL SHEETS AND NOTES I SPENT 23 YEARS COLLECTING THEM YOU SHOULD CLICK HERE TO DOWNLOAD THEM NOW. (Mill output Vs. Blaine) (Mill output Vs. Residue) (Mill output Vs. Blaine) (sp. power Vs. Blaine) WhatsApp: +86 18838072829
{"url":"https://tresorsdejardin.fr/to/ellucidate/effieciency/of/ball/mill-198.html","timestamp":"2024-11-06T06:04:38Z","content_type":"application/xhtml+xml","content_length":"30856","record_id":"<urn:uuid:9fb795d6-5164-47e1-be90-11124eb1de36>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00338.warc.gz"}
10.3: Estimating the Difference in Two Population Means
Learning Objectives
• Construct a confidence interval to estimate a difference in two population means (when conditions are met). Interpret the confidence interval in context.

Confidence Interval to Estimate μ1 − μ2

In a hypothesis test, when the sample evidence leads us to reject the null hypothesis, we conclude that the population means differ or that one is larger than the other. An obvious next question is how much larger? In practice, when the sample mean difference is statistically significant, our next step is often to calculate a confidence interval to estimate the size of the population mean difference.

The confidence interval gives us a range of reasonable values for the difference in population means μ1 − μ2. We call this the two-sample T-interval or the confidence interval to estimate a difference in two population means. The form of the confidence interval is similar to others we have seen.

$(\mathrm{sample\ statistic}) \pm (\mathrm{margin\ of\ error})$
$(\mathrm{sample\ statistic}) \pm (\mathrm{critical\ T\text{-}value})(\mathrm{standard\ error})$

Sample Statistic
Since we’re estimating the difference between two population means, the sample statistic is the difference between the means of the two independent samples: $\bar{x}_1 - \bar{x}_2$.

Critical T-Value
The critical T-value comes from the T-model, just as it did in “Estimating a Population Mean.” Again, this value depends on the degrees of freedom (df). For two-sample T-tests or two-sample T-intervals, the df value is based on a complicated formula that we do not cover in this course. We either give the df or use technology to find the df.

Standard Error
The estimated standard error for the two-sample T-interval is the same formula we used for the two-sample T-test. (As usual, s1 and s2 denote the sample standard deviations, and n1 and n2 denote the sample sizes.)

$SE = \sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$

Putting all this together gives us the following formula for the two-sample T-interval.

$(\bar{x}_1 - \bar{x}_2) \pm T_c \cdot \sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$

Conditions for Use
The conditions for using this two-sample T-interval are the same as the conditions for using the two-sample T-test.
• The two random samples are independent and representative.
• The variable is normally distributed in both populations. If this is not known, samples of more than 30 will have a difference in sample means that can be modeled adequately by the T-distribution.

As we discussed in “Hypothesis Test for a Population Mean,” T-procedures are robust even when the variable is not normally distributed in the population.
If checking normality in the populations is impossible, then we look at the distribution in the samples. If a histogram or dotplot of the data does not show extreme skew or outliers, we take it as a sign that the variable is not heavily skewed in the populations, and we use the inference procedure.

Confidence Interval for the “Calories and Context” Study

In the preceding few pages, we worked through a two-sample T-test for the “calories and context” example. In this example, we use the sample data to find a two-sample T-interval for μ1 − μ2 at the 95% confidence level.

Recap of the Situation
• Population 1: Let μ1 be the mean number of calories purchased by women eating with other women.
• Population 2: Let μ2 be the mean number of calories purchased by women eating with men.

Sample Statistics

           Size (n)   Mean (x̄)   SD (s)
Sample 1   45         850         252
Sample 2   27         719         322

Standard Error
We found that the standard error of the sampling distribution of all sample differences is approximately 72.47.

$\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}} = \sqrt{\dfrac{252^2}{45} + \dfrac{322^2}{27}} \approx 72.47$

Critical T-value
For these two independent samples, df = 45. We find the critical T-value using the same simulation we used in “Estimating a Population Mean.” Reading from the simulation, we see that the critical T-value is 1.6790.

Confidence Interval
We can now put all this together to compute the confidence interval:

$(\bar{x}_1 - \bar{x}_2) \pm T_c \cdot SE = (850 - 719) \pm (1.6790)(72.47) \approx 131 \pm 121.7$

Expressing this as an interval gives us:

$(9,\ 253)$

We are 95% confident that the true value of μ1 − μ2 is between 9 and 253 calories. We can be more specific about the populations. We are 95% confident that at Indiana University of Pennsylvania, undergraduate women eating with women order between 9.32 and 252.68 more calories than undergraduate women eating with men.

In this next activity, we focus on interpreting confidence intervals and evaluating a statistics project conducted by students in an introductory statistics course.

Try It
Improving Children’s Math Skills
Students in an introductory statistics course at Los Medanos College designed an experiment to study the impact of subliminal messages on improving children’s math skills. The students were inspired by a similar study at City University of New York, as described in David Moore’s textbook The Basic Practice of Statistics (4th ed., W. H. Freeman, 2007). The participants were 11 children who attended an afterschool tutoring program at a local church. The children ranged in age from 8 to 11. All received tutoring in arithmetic skills. At the beginning of each tutoring session, the children watched a short video with a religious message that ended with a promotional message for the church. The statistics students added a slide that said, “I work hard and I am good at math.” This slide flashed quickly during the promotional message, so quickly that no one was aware of the slide. Children who attended the tutoring sessions on Mondays watched the video with the extra slide. Children who attended the tutoring sessions on Wednesdays watched the video without the extra slide. The experiment lasted 4 weeks. The children took a pretest and posttest in arithmetic.
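As a quick check of the arithmetic in this worked example (not part of the original text), the following short script recomputes the standard error and the interval from the summary statistics and the 1.6790 critical value quoted above.

# Verify the worked example: sample 1 has n=45, mean=850, sd=252;
# sample 2 has n=27, mean=719, sd=322; the stated critical value is 1.6790 (df = 45).
import math

n1, x1, s1 = 45, 850.0, 252.0
n2, x2, s2 = 27, 719.0, 322.0
t_star = 1.6790

se = math.sqrt(s1**2 / n1 + s2**2 / n2)      # unpooled standard error
diff = x1 - x2
margin = t_star * se

print(f"SE ≈ {se:.2f}")                                        # ≈ 72.47
print(f"interval: ({diff - margin:.2f}, {diff + margin:.2f})")  # ≈ (9.3, 252.7)

Small rounding differences aside, this matches the interval above. Note that the conventional two-tailed 95% multiplier from scipy.stats.t.ppf(0.975, 45) is about 2.01, so an interval computed with that value would be somewhat wider than the one shown here; check how your course's simulation reports critical values.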
Here are some of the results:

Let’s Summarize

Hypothesis tests and confidence intervals for two means can answer research questions about two populations or two treatments that involve quantitative data.

In “Inference for a Difference between Population Means,” we focused on studies that produced two independent samples. Previously, in “Hypothesis Test for a Population Mean,” we looked at matched-pairs studies in which individual data points in one sample are naturally paired with the individual data points in the other sample.

The hypotheses for two population means are similar to those for two population proportions. The null hypothesis, H0, is a statement of “no effect” or “no difference.”
• H0: μ1 − μ2 = 0, which is the same as H0: μ1 = μ2

The alternative hypothesis, Ha, takes one of the following three forms:
• Ha: μ1 − μ2 < 0, which is the same as Ha: μ1 < μ2
• Ha: μ1 − μ2 > 0, which is the same as Ha: μ1 > μ2
• Ha: μ1 − μ2 ≠ 0, which is the same as Ha: μ1 ≠ μ2

As usual, how we collect the data determines whether we can use it in the inference procedure. We have our usual two requirements for data collection.
• Samples must be random in order to remove or minimize bias.
• Samples must be representative of the populations in question.

We use the two-sample hypothesis test and confidence interval when the following conditions are met:
• The two random samples are independent.
• The variable is normally distributed in both populations. If this is not known, samples of more than 30 will have a difference in sample means that can be modeled adequately by the t-distribution.

As we discussed in “Hypothesis Test for a Population Mean,” t-procedures are robust even when the variable is not normally distributed in the population. Therefore, if checking normality in the populations is impossible, then we look at the distribution in the samples. If a histogram or dotplot of the data does not show extreme skew or outliers, we take it as a sign that the variable is not heavily skewed in the populations, and we use the inference procedure.

The confidence interval for μ1 − μ2 is

$(\bar{x}_1 - \bar{x}_2) \pm T_c \cdot \sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}$

The test statistic for the hypothesis test of H0: μ1 − μ2 = 0 is

$T = \dfrac{(\bar{x}_1 - \bar{x}_2) - 0}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}}$

We use technology to find the degrees of freedom to determine P-values and critical t-values for confidence intervals. (In most problems in this section, we provided the degrees of freedom for you.)
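In practice this test is usually run with software rather than by hand. The sketch below is not from the original text; it shows one common way to run the test with SciPy, reusing the calories summary statistics from the example above. The equal_var=False option requests the unpooled (Welch) version, which matches the standard error used throughout this section.

# Hedged sketch: the two-sample (unpooled/Welch) T-test from summary statistics.
from scipy import stats

t_stat, p_value = stats.ttest_ind_from_stats(
    mean1=850, std1=252, nobs1=45,
    mean2=719, std2=322, nobs2=27,
    equal_var=False,          # Welch's test: do not pool the variances
)
print(f"T ≈ {t_stat:.2f}, two-sided P-value ≈ {p_value:.4f}")
# T = (850 - 719) / 72.47 ≈ 1.81; SciPy computes the degrees of freedom internally.

If you have the raw data rather than summary statistics, scipy.stats.ttest_ind(sample1, sample2, equal_var=False) performs the same test directly on the two arrays.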
{"url":"https://stats.libretexts.org/Courses/Lumen_Learning/Concepts_in_Statistics_(Lumen)/10%3A_Inference_for_Means/10.03%3A_Estimating_the_Difference_in_Two_Population_Means","timestamp":"2024-11-02T05:23:50Z","content_type":"text/html","content_length":"154444","record_id":"<urn:uuid:32c2c2b5-6816-4987-900d-e75578aced4f>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00742.warc.gz"}