content
stringlengths
86
994k
meta
stringlengths
288
619
Queer math camp...without the math A math camp without…math. Sure, if math comes up they’ll talk about it. But really, “queer” kids, that’s up to you! The point is that you “queeries” get to express your identity! Thanks, Mathematical Association of America! One of the sponsorshttps://t.co/vh756kooiE — Resolute Mama (@ResoluteMama) May 12, 2023 The Mathematical Association of America is not some radical group of Left-wing ideologues; or rather, it didn’t use to be. It is almost 110 years old and used to exist to promote the study and use of … mathematics. Now, apparently, it exists as a place to provide grooming for high school kids into the alphabet cult. Let’s take a look at the description of this “math” camp. It is a critical theory tour-de-force. “Camp” of Mathematical Queeries is a five week virtual mathematics enrichment program for LGBTQ+ students entering grades 9-12. The enrichment is designed to tap into the rich funds of knowledge of the LGBTQ+ community and to provide a space in which LGBTQ+ and mathematical identity are affirmed as interconnected entities, central to the teaching and learning of mathematics, in Only two sentences into the description and you can see how steeped this class is in critical theory and total bullsh!t. What, pray tell, do LGBTQ+ people know about math that others don’t? What “rich fund of knowledge” is being “tapped into?” Is there some special queer math witchcraft about which nobody has ever heard? Were Newton and Leibniz secret gay lovers when they “independently” invented calculus?! And how, exactly, are alphabet identities and mathematical identities “interconnected?” Explain, please. This is all critical theory word salad. But wait, it gets much worse. Even the name of the program, “Camp” of Mathematical Queeries, has been designed to tap into the cultural histories of LGBTQ+ individuals, who remain vastly underrepresented in STEM fields. The word “Camp” is in quotation marks to invoke the aesthetic style of camp, which is closely associated with LGBTQ+ culture, especially the practice of drag. First of all, duh! Even I, a stupid normie, got the joke about “camp.” Wordplay is also very much a critical theory trademark because people who practice critical theory believe that words define reality they love playing with words, especially redefining them. Any mental masturbation with words is fine, and in this case, real-world masturbation is likely involved at some point. And drag. You MUST. HAVE. DRAG. Queeries, a queer play on the term query, when used as a verb means to question, often as a form of doubt. In the context of “Camp” the word “Queeries” is meant to honor the traditions of LGBTQ+ individuals, especially those who are Black, Brown, and/or disabled, that have sought to live their lives authentically by exploring routes and questions outside of the dominant, normative Unbelievably precious, isn’t it? We’ve got wordplay, sexual deviance, race consciousness, and rejection of ableism, all in the service of authentic living by undermining normative culture. I wonder how many colors the hair of the professor who wrote this is dyed? Do they wear a bow tie ironically? Do they look androgynous? Remember–this is a description that supposedly exists to entice high school students to join a math camp. It reads like a grant application for a gender studies professor’s sabbatical. Which, likely, it was originally. Throughout the five weeks students will engage in activities of mathematical problem posing and problem solving through group-worthy mathematical tasks centered on LGBTQ+ culture and history. I can’t even begin to imagine what kind of math tasks are involved in things “centered on LGBTQ+ culture and history,” but I fear it has to do with Monkeypox and orgies. “How quickly does an STD spread from Africa if one attends 3 orgies a week vs. 5?” “How many plane trips does it take to have Monkeypox spread to 5 continents?” Is Anthony Fauci involved somehow? (Apologies to my gay readers–I am against orgies no matter what your sexual identity! I love you guys, but monogamy forever! It is much healthier for the body and the soul.) All that critical theory bull is in just one paragraph. But that isn’t even the best part. There is no actual requirement that math takes place in this math camp. There is no math agenda, per se. The goal is to gather a bunch of students who apparently love drag and wordplay and then let them figure out what they want to do. While LGBTQ+ identity is often relegated to the margins and ignored in subjects like mathematics, “Camp” of Mathematical Queeries was created to resist this normative view. Our program was designed to illustrate that students’ LGBTQ+ identities are powerful assets to be utilized in the nurturing of positive mathematical identity. Our program honors the sentiments of Ocean Vuong, who wrote, …[W]hen I look at my life, I saw that queerness demanded an alternative innovation from me. I had to make alternative routes; it made me curious; it made me ask, ‘Is this enough for me?’ Our program taps into LGBTQ+’s folx natural propensity to explore alternative routes and ask questions that others may not. We believe this is exactly the liberatory approach needed to radically transform how we view what counts as mathematics and what it means to be mathematical. The methods and ideas for “Camp” of Mathematical Queeries has been inspired by a number of sources including (but not limited to!): □ Kai Rands’ works in mathematical Inqu[ee]ry Alexander S. Moore’s work on the “Queer Identity Intersection of Mathematics Education” Luis Leyva’s work on mathematics as a white, masculine space Kyne Santos’ (from Canada’s Drag Race!) Tiktok math videos. Rochelle Gutierrez’s work in creative insubordination and rehumanizing mathematics Brown and Walter’s work on mathematical problem posing Lessons from the book High School Mathematics Lessons to Explore, Understand, and Respond to Social Injustice The work of programs such as Indigenous STEAM, Girls STEM Institute, Girls who Code, and Love & LiteraTea Lots of math talk in all that, if you poke around. What as they say, what counts as math anyway? But the kicker is: What Math Will We Learn? That will be up to you! This is not a traditional enrichment camp or school program. Our program is completely student generated. While we may provide a specific context (e.g., mathematics re: queer representation in media) at various points throughout the program, students will be encouraged to pose whatever mathematical questions they find interesting related to that context (as well as other contexts they generate themselves). For this reason, there is no prescribed mathematical content that we will learn in the program. The mathematics we learn will depend entirely upon the questions students choose to explore and the mathematics that might aid them in that exploration. Similarly, we will never have required assignments or anything like a traditional school assignment. The goal of the program is to provide a space for mathematical play, discovery, and joy for LGBTQ+ students and for them to engage with mathematics in ways they find interesting and And there you have it. The Mathematical Association of America has become, apparently, a queer grooming organization focused on spreading critical theory, “centering” identity, equity, and…well, no actual math. Critical theory is a cancer in Western culture, and it has metastasized, destroying the core of our intellectual tradition, including physics and mathematics. People in STEM fields used to joke about how the Liberal Arts were going to hell, but they assumed that the harder sciences would be immune from the cultural Marxist rot. We tried to warn them, but they scoffed. Now? It’s probably too late. Imagine being a white-haired on-the-spectrum mathematics professor trying to object to the hijacking of the Mathematical Society of America. You would be crucified for objecting to this dreck. Academia is hopeless. It is time to wipe the slate clean and start over. Join the conversation as a VIP Member
{"url":"https://hotair.com/david-strom/2023/05/12/queer-math-camp-without-the-math-n550283","timestamp":"2024-11-03T18:31:40Z","content_type":"text/html","content_length":"98166","record_id":"<urn:uuid:166d3877-62b4-41a7-8e3e-f91e51045a80>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00181.warc.gz"}
Bread Pit Problem H Bread Pit New breeds of underground bacteria help in terraforming Mars. The bacteria feed mainly on regular bread loafs which are baked in huge quantities on the Mars surface and subsequently dropped into a specially constructed pit. To distribute the bread evenly in the subsurface, the pit consists of nearly vertical tunnels arranged in a tree-like structure. Each tunnel ends either in a cave filled with bacterial colonies or in a gate which connects into one or more other downward tunnels. The gates are mechanized and each of them keeps open only one downward tunnel at a time. When a loaf falls through a gate into a downward tunnel, the gate closes the tunnel and opens another one, to which it is connected, for the next arriving loaf. Opening the downward tunnels works in a cyclic fashion: When a gate closes the last downward tunnel it again opens the one which was open first. The order of open downward tunnels in each gate is fixed. At most one of the gates is on the surface. All loafs falling through at least one tunnel also fall through this topmost gate. The exception to the described scheme is the situation when the topmost gate is completely closed for maintenace, all tunnels become inaccessible and the loafs remain at the surface. In this scenario, for formal reasons, the surface is considered to be a cave and simutaneously the only node in the whole pit. When the system started operating, before any bread loaf was deposited in it, the first downward tunnel in each gate was open. Both caves and gates are commonly denoted as nodes, each node is labeled by a unique integer. Determine which cave did each bread loaf fall into, as they were subsequently dropped, one after another, into the pit. The first input line contains two integers $N$, $Q$ ($1 \le N, Q \le 3 \cdot 10^5$), the number of all nodes (gates and caves) in the pit and the number of bread loafs dropped into the pit. The nodes are labeled by integers $0, 1, \dots , N - 1$, the surface gate node is labeled by $0$. The second line contains $N - 1$ integers. The value of $i$’th integer on the line is the label of the predecessor of node $i$. The predecessor of a node is the closest gate from which a falling loaf arrives to node $i$. The second input line also encodes the order of open downward tunnels in each gate. When value $X$ appears on the $j$’th and $k$’th positions and $j < k$, the tunnel connecting X to j opens before the tunnel connecting $X$ to $k$ opens. Print $Q$ lines. The $i$’th line should contain the label of the cave that received the $i$’th loaf dropped into the pit. Sample Input 1 Sample Output 1 Sample Input 2 Sample Output 2
{"url":"https://open.kattis.com/contests/ncpc22practice3/problems/breadpit","timestamp":"2024-11-02T20:56:53Z","content_type":"text/html","content_length":"32257","record_id":"<urn:uuid:aa957582-cd75-45a6-91db-3e1b8ce3ec34>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00835.warc.gz"}
MS43 KV6 (FREELANDER NAS) (SM084) Realtime live display of the information the electronic control unit of the selected vehicle system is currently deriving from its input sensors. • Radiator Outlet Coolant Temperature:This shows the current temperature of the engine coolant measured by the radiator coolant temperature sensor at the radiator outlet. Expected values are between -48°C and 143°C • Manifold Pressure: The absolute air pressure at the throttle plate in kiloPascals, as measured by the air temp/MAP sensor. The value will vary if the engine is runing and different loads are • Ambient Air Pressure: The current ambient air pressure in kiloPascals, as seen by the MS43 ECU. • Battery Voltage: The current battery voltage, as seen by the MS43 ECU. Expected values 11V to 14.5V. • Idle Control Setpoint: The current idle control setpoint, possible values between 0 RPM and 2550 RPM. • Sensor Sypply 1: The current sensor supply 1 voltage, as seen by the MS43 ECU. Expected values between 0V and 5V • Sensor Supply 2: The current Sensor Supply 2 voltage, as seen by the MS43 ECU. Expected values between 0V and 5V • Pedal 1 Demand: The current driver demand pedal 1 voltage, as seen by the MS43 ECU. Expected values between 0V and 5V. • Pedal 2 Demand: The current driver demand pedal 2 voltage, as seen by the MS43 ECU. Expected values between 0V and 5V. • Throttle Pot 1: The current position of throttle pot. 1 in volts, as seen by the MS43 ECU. Expected values between 0V and 5V. • Throttle Pot 2: The current position of throttle pot. 2 in volts, as seen by the MS43 ECU. Expected values between 0V and 5V. • Throttle Angle: The current throttle angle as a percentage, as determined by the MS43 ECU. • Driver Demand: The current pedal angle as a percentage, as determined by the MS43 ECU. • Aircon Request: The status of aircon request ON or OFF • Aircon Compressor: The status of aircon compressor ON or OFF • Brake Light Switch: The state of the Brake light switch input. The hall-effect brake switch generates two brake switch status signals which are directly fed to the MS43 ECU. This is to conform with the safety concept of drive-by-wire vehicles. This status is used by the cruise control system to cancel cruise operation when braking is detected and by the body-controller and • Brake Light Test Switch: The state of the brake light test switch input. The brake switch status signal is used by the cruise control system to cancel cruise operation when braking is detected. • A/D Sensor Supply: A/D input to the MS43 ECU for the sensor supply signal. • A.D MAFM: A/D input to the MS43 ECU for the airflow meter signal. • A/D Coolant Temperature:A/D input to the MS43 ECU for the coolant temperature sensor signal. • A/D Inlet Air Temperature: A/D input to the MS43 ECU for the inlet air temperature sensor signal. • A/D Radiator Coolant: A/D input to the MS43 ECU for the radiator outlet coolant temperature sensor signal. • A/D Ignition Key: A/D input to the MS43 ECU for the ignition supply signal. • A/D DTML Pump: A/D input to the MS43 ECU for the DMTL pump current monitoring signal. • A/D Atmospheric Pressure Sensor: A/D input to the MS43 ECU for the atmospheric pressure sensor signal. • Cylinder 1 Ignition Angle: The current ignition angle in degrees, measured at cylinder 1, as seen by the MS43 ECU. • Knock Sensor 1/2: The output from knock sensor 1 (cylinders 1,3,5)/2 (cylinders 2,4,6). The knock sensors work by the piezo-electric effect and generates an output corresponding to the noise level of engine detonation. The engine management reads this signal and adjusts the ignition timing accordingly. Expected values between 0V and 5V • Noise Signal Cylinder 1-6: Noise signal for the cylinder determined from the knock sensor output. Expected values between 0V and 5V. • Time Since Engine Started: The time in seconds since the engine started. • Ignition Switch: The Ignition switch voltage as determined by the MS43 ECU. • Engine Start Temperature: The temperature of the engine when started, as determined by the MS43 ECU. • Engine Fan Control Duty Cycle: The duty ratio of the MS43 engine fan control output. • Gearbox Detection: • Coolant Temperature: This is the temperature measured by the coolant sensor. The value is used to adapt fuelling and timing levels of the engine in order to maintain performance and emissions as the engine temperature varies. It is also used to ensure a good quality of engine start. Expected values are between -48°C and 143°C • Inlet Air Tempereture: This shows the current temperature of the air at the engine air intake, as measured by the air temperature sensor in the inlet manifold. Expected values are between -48°C and 143°C • Injection Time: The current injection time in milliseconds, as seen by the MS43 ECU. • Calculated Load Value: Engine load value calculated from the mass airflow. • Mass Air Flow: This shows the rate of airflow through the manifold in kilograms per hour. Mass airflow is an indication of engine load and can be used as an indicator of air leaks in the • Engine Speed: The current engine speed in rpm • O2 Sensor Heater Upstream 1/2: The Pulse Width Modulation (PWM) of the oxygen sensor heater. The oxygen sensors must operate at high temperatures. In order to achieve this, are fitted with heater elements controlled by the EMS. The heaters are operated when the exhaust gas temperature is not sufficient in order to mentain the sesor temperature (immediately after engine start and low load). The sensor heating is limited during the condensation phase to ensure the ceramic temperature does not exceed the critical temperature value. • O2 Sensor Heater Downstream 1/2: The Pulse Width Modulation (PWM) of the oxygen sensor heater. The oxygen sensors must operate at high temperatures. In order to achieve this, are fitted with heater elements controlled by the EMS. The heaters are operated when the exhaust gas temperature is not sufficient in order to mentain the sesor temperature (immediately after engine start and low load). The sensor heating is limited during the condensation phase to ensure the ceramic temperature does not exceed the critical temperature value. • O2 Sensor Heater Upstream 1/2 A/D Input: The A/D input voltage for lambda sensor upstream bank 1 (cylinders 1,3,5) and bank 2 (cylinders 2,4,6). Exhaust oxygen levels are measured upstream of the catalyst to allow closed loop to control the fuelling for maximum catalyst conversion efficiency. • O2 Sensor Heater Downstream 1/2 A/D Input: The A/D input voltage for lambda sensor downstream bank 1 (cylinders 1,3,5) and bank 2 (cylinders 2,4,6). Exhaust oxygen levels are measured downstream of the catalyst to monitor catalyst efficiency and check performance of the upstream sensors. • Lambda Control Bank 1/2: The status of the lambda control system for bank 1/2. The staus can be: Undefined, Open Loop - Waiting, Closed Loop, Open Loop - Driving Conditions, Open Loop - System Fault. All displayed operating states are acceptable except for Open loop due to system fault. • Lambda Control Factor Bank 1/2: When closed loop fuelling is enabled, indicates the required change from rich-lean or lean-rich AFR as determined from the outputs of the oxygen sensors. The value is used to adjust the injection time accordingly. • Lambda Adaption Factor Bank 1/2:The adaption correction used in the calculation of the injection time. • Lambda Additive Correction Bank 1/2: The additive adaption correction used in the calculation of the injection time. • Lambda Multiplicative Correction Bank 1/2:The multiplicative adaption correction used in the calculation of the injection time.
{"url":"https://blackbox-solutions.com/help/SM084.html","timestamp":"2024-11-04T16:42:17Z","content_type":"text/html","content_length":"18014","record_id":"<urn:uuid:d8e12f9c-3532-4bb0-bf89-9b1421213159>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00586.warc.gz"}
Residual Income Models: Understanding Equity Valuation 3.3.5 Residual Income Models Residual Income Models (RIM) are a cornerstone in the realm of equity valuation, offering a nuanced approach to determining a company’s intrinsic value. Unlike traditional valuation models that focus primarily on dividends or cash flows, RIM emphasizes economic profit, providing a more comprehensive picture of a company’s financial health and value creation capabilities. This section delves into the intricacies of residual income, the mechanics of the Residual Income Model, and its applicability in various financial contexts. Understanding Residual Income Residual income is defined as the net income generated by a company after accounting for the cost of capital employed. It represents the economic profit, which is the surplus income after covering the opportunity cost of capital. This concept is crucial as it highlights the value a company creates beyond the minimum required return expected by investors. Residual Income Formula: $$ \text{Residual Income} = \text{Net Income} - (\text{Equity Capital} \times \text{Cost of Equity}) $$ This formula underscores the importance of considering the cost of equity, which is often overlooked in traditional accounting profit calculations. By deducting the cost of equity from net income, residual income provides a clearer picture of the economic value added by the company. The Residual Income Model (RIM) The Residual Income Model is a valuation method that calculates a company’s intrinsic value by considering both its book value and the present value of expected residual income. This approach is particularly useful for companies that do not pay dividends or have unpredictable cash flows, as it focuses on the value generated from operations rather than cash distributions. RIM Formula: $$ \text{Intrinsic Value} = \text{Book Value per Share} + \text{Present Value of Expected Residual Income} $$ This formula integrates the book value of equity with the future economic profits expected to be generated by the company, discounted back to their present value. The model emphasizes the importance of value creation beyond the required returns, making it a robust tool for equity valuation. Steps to Apply the Residual Income Model Applying the Residual Income Model involves several key steps: 1. Calculate Net Income: Determine the company’s net income from its financial statements. 2. Determine Equity Capital: Assess the total equity capital employed in the business. 3. Estimate Cost of Equity: Calculate the cost of equity using models such as the Capital Asset Pricing Model (CAPM). 4. Compute Residual Income: Use the residual income formula to calculate the economic profit. 5. Forecast Future Residual Income: Project the expected residual income for future periods. 6. Discount Future Residual Income: Calculate the present value of the forecasted residual income using an appropriate discount rate. 7. Calculate Intrinsic Value: Add the present value of expected residual income to the book value per share to determine the intrinsic value. Importance of Economic Profit Over Accounting Profit Economic profit, as measured by residual income, provides a more accurate reflection of a company’s value creation capabilities than traditional accounting profit. While accounting profit focuses on net income, it often ignores the cost of capital, leading to an incomplete assessment of financial performance. Residual income, on the other hand, accounts for the opportunity cost of equity, offering a more comprehensive evaluation of a company’s profitability and value generation. Example: Evaluating Shareholder Value Consider a company with a net income of $500,000, equity capital of $2,000,000, and a cost of equity of 10%. The residual income is calculated as follows: $$ \text{Residual Income} = \$500,000 - (\$2,000,000 \times 0.10) = \$500,000 - \$200,000 = \$300,000 $$ In this example, the company generates a positive residual income, indicating that it is creating value for shareholders. Conversely, if the residual income were negative, it would suggest that the company is destroying shareholder value, even if it reports a positive net income. Applicability and Limitations of Residual Income Models Residual Income Models are particularly useful in scenarios where traditional valuation methods fall short. They are ideal for companies with irregular dividend payments or unpredictable cash flows, as they focus on the value generated from operations. However, RIM also has its limitations: • Dependence on Accounting Data: The model relies heavily on accounting data, which can be subject to manipulation or inaccuracies. • Assumptions About Future Profitability: RIM requires assumptions about future residual income, which can be challenging to estimate accurately. • Complexity: The model involves complex calculations and requires a thorough understanding of financial statements and valuation techniques. Despite these limitations, the Residual Income Model remains a valuable tool for investors and analysts seeking to assess a company’s intrinsic value and economic profit. Residual Income Models offer a sophisticated approach to equity valuation, emphasizing economic profit and value creation beyond required returns. By integrating the cost of equity into the valuation process, RIM provides a more comprehensive assessment of a company’s financial health and potential for growth. While the model has its limitations, its applicability in various financial contexts makes it an essential tool for investors and analysts alike. Quiz Time! 📚✨ Quiz Time! ✨📚 ### What is residual income? - [x] Net income minus a charge for the cost of capital employed - [ ] Net income plus a charge for the cost of capital employed - [ ] Total revenue minus total expenses - [ ] Operating income minus interest expenses > **Explanation:** Residual income is calculated as net income minus a charge for the cost of capital employed, representing economic profit. ### How does the Residual Income Model calculate intrinsic value? - [x] Book Value per Share + Present Value of Expected Residual Income - [ ] Net Income + Present Value of Future Cash Flows - [ ] Book Value per Share + Present Value of Dividends - [ ] Equity Capital + Net Income > **Explanation:** The Residual Income Model calculates intrinsic value by adding the book value per share to the present value of expected residual income. ### Why is economic profit considered more important than accounting profit? - [x] It accounts for the cost of equity and opportunity cost - [ ] It is easier to calculate - [ ] It is always higher than accounting profit - [ ] It is based on cash flows > **Explanation:** Economic profit considers the cost of equity and opportunity cost, providing a more accurate reflection of a company's value creation capabilities. ### What is the first step in applying the Residual Income Model? - [x] Calculate Net Income - [ ] Determine Equity Capital - [ ] Estimate Cost of Equity - [ ] Compute Residual Income > **Explanation:** The first step in applying the Residual Income Model is to calculate the company's net income from its financial statements. # ## Which scenario is the Residual Income Model particularly useful for? - [x] Companies with irregular dividend payments - [ ] Companies with stable cash flows - [ ] Companies with high debt levels - [ ] Companies with high growth rates > **Explanation:** The Residual Income Model is useful for companies with irregular dividend payments or unpredictable cash flows, as it focuses on value generated from operations. ### What is a limitation of the Residual Income Model? - [x] Dependence on accounting data - [ ] It is too simple - [ ] It ignores the cost of equity - [ ] It overestimates intrinsic value > **Explanation:** A limitation of the Residual Income Model is its dependence on accounting data, which can be subject to manipulation or inaccuracies. ### How is residual income calculated? - [x] Net Income - (Equity Capital × Cost of Equity) - [ ] Net Income + (Equity Capital × Cost of Equity) - [ ] Total Revenue - Total Expenses - [ ] Operating Income - Interest Expenses > **Explanation:** Residual income is calculated as net income minus the product of equity capital and cost of equity. ### What does a negative residual income indicate? - [x] The company may be destroying shareholder value - [ ] The company is generating high profits - [ ] The company has a high book value - [ ] The company is undervalued > **Explanation:** A negative residual income indicates that the company may be destroying shareholder value, even if it reports a positive net income. ### What is the role of the cost of equity in the Residual Income Model? - [x] It represents the opportunity cost of capital - [ ] It is used to calculate net income - [ ] It determines the book value per share - [ ] It is ignored in the model > **Explanation:** The cost of equity represents the opportunity cost of capital and is crucial in calculating residual income and intrinsic value. ### True or False: The Residual Income Model is only applicable to companies that pay dividends. - [ ] True - [x] False > **Explanation:** False. The Residual Income Model is particularly useful for companies that do not pay dividends or have unpredictable cash flows.
{"url":"https://csccourse.ca/3/3/5/","timestamp":"2024-11-05T00:55:29Z","content_type":"text/html","content_length":"91158","record_id":"<urn:uuid:e553bb81-ed6a-4b83-aff3-8656e8e2af37>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00487.warc.gz"}
exponential exponential exponential exponential last updated: 2003-07-19 The curve is the exponential of an exponential. Its inverse is the logarithmic logarithm, e.g. y = ln (ln x). This logarithmic logarithm is an approximation of the inverse prime summation. The Gompertz curve ^1) is a linear form of the exponential exponential. 1) the Gompertz relation has the form of:
{"url":"https://www.2dcurves.com/exponential/exponentialee.html","timestamp":"2024-11-13T18:29:17Z","content_type":"text/html","content_length":"3314","record_id":"<urn:uuid:d28c7c1b-972d-4dd0-b216-fad100a202ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00305.warc.gz"}
SURFER Gallery by Bianca Violet The National Institute for Mathematical Sciences (NIMS) presents a very special NIMS-IMAGINARY exhibition in collaboration with the ICM committee and the Mathematisches Forschungsinstitut Oberwolfach (MFO). It will feature the best of all IMAGINARY modules of the last years and a lot of new software, images, films and sculptures. It will be the biggest IMAGINARY exhibition shown so far.
{"url":"https://www.imaginary.org/fr/node/939","timestamp":"2024-11-03T21:46:26Z","content_type":"text/html","content_length":"96787","record_id":"<urn:uuid:fc28a7ea-7e58-40ae-9d23-06918d465136>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00858.warc.gz"}
What is a Sampling Distribution? The sampling distribution is one of the most important concepts in inferential statistics, and often times the most glossed over concept in elementary statistics for social science courses. This article will introduce the basic ideas of a sampling distribution of the sample mean, as well as a few common ways we use the sampling distribution in statistics. When we conduct a study in psychology, this almost always includes taking a sample and measuring some aspect or characteristic about that sample. While we assume that a large enough sample will represent the population enough to make statistical inferences, there can be natural variation between two different samples taken from the same population. This sampling variation is random, allowing means from two different samples to differ. The sampling distribution of the sample mean models this randomness. Definition In statistical jargon, a sampling distribution of the sample mean is a probability distribution of all possible sample means from all possible samples (n). In plain English, the sampling distribution is what you would get if you took a bunch of distinct samples, and plotted their respective means (mean from sample 1, mean from sample 2, etc.) and looked at the distribution. Except a “bunch of” samples is really ALL samples, and this distribution can be used to infer the probability you got a specific mean from any sample. That last sentence was a bit confusing right? Let’s look at an example to clarify. Example Say you are curious about the average height for a college student at College X. You then go to four random classrooms on campus, and measure all of the students’ heights in the class. Each classroom has one sample mean, but they slightly differ. For example, the four means from the four samples could be as follows: Class 1 mean height → 67 inches Class 2 mean height → 70 inches Class 3 mean height → 66 inches Class 4 mean height → 68 inches If you plot these points, it looks like this: Not really much of a distribution, right? Now let’s say we went to a lot more classes, took students’ height measurements, calculated their means and added them to our previous four means. It could look something like this: In fact, if we somehow were able to take ALL samples from the entire population of college students at College X, you would see something like this: This last distribution is the sampling distribution of sample means. It is a distribution created by every possible mean from every possible sample. And if you already noted that it resembles a normal distribution, well then you would be correct! According to the Central Limit Theorem, if the samples used to create each mean of the distribution are large enough, the sampling distribution of the mean of any independent random sample will be normally distributed, even if the population distribution is not perfectly normal. The sampling distribution of the sample mean is very useful because it can tell us the probability of getting any specific mean from a random sample. Put more simply, we can use this distribution to tell us how far off our own sample mean is from all other possible means, and use this to inform decisions and estimates in null hypothesis statistical testing. Standard Error of the Mean One aspect we often use from the sampling distribution in inferential statistics is the standard error of the mean (noted as SE, or SEM). The SEM is a hard concept to grasp for many individuals, but once you understand the sampling distribution it’s actually quite simple. The SEM is the standard deviation of the sampling distribution, calculated by dividing the standard deviation by the square root of the sample size (n) for a given sample. We often use elements of the standard error of the mean when we make inferences in statistics. For example, the t statistic for an independent samples t test uses the SEM as the denominator. In this sense, the numerator of this t statistic is the difference in means between group 1 and group 2, and the denominator is the standard deviation of all possible means from all possible samples. What the t value then represents is how different the means of group 1 and group 2 are in standard units. Further, to get a confidence interval of your mean estimate for an independent samples t test, you also use the SEM. Using the SEM allows you to calculate a confidence interval of the mean estimate because it brings in the element of variability in any given sample mean. The larger the standard deviation of the sampling distribution is, the larger your confidence interval will be. Conclusion The sampling distribution of the sample mean represents the randomness of sampling variation of sample means. This distribution is an integral part to many of the statistics we use in our everyday research, though it doesn’t receive much of the spotlight in traditional introductory statistics for social science classrooms. If you would like to learn more about sampling distributions, visit the follow websites which talk more about sampling distributions. [1] https://www.khanacademy.org/math/probability/statistics-inferential/sampling-distribution/v/sampling-distribution-of-the-sample-mean [2] http://stattrek.com/sampling/sampling-distribution.aspx [3] http://www.psychstat.missouristate.edu/introbook/sbk19.htm [4] http://www.vassarstats.net/textbook/ch6pt1.html
{"url":"https://sites.lifesci.ucla.edu/psych-pia/2016-08-13-what-is-a-sampling-distribution/","timestamp":"2024-11-07T04:09:20Z","content_type":"text/html","content_length":"89435","record_id":"<urn:uuid:ff37875d-cad4-41a9-a783-dd9fabee4cc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00373.warc.gz"}
naive bayes classifier Archives - Intuitive Tutorials Introduction In this post, let’s take a look at the intuition behind Naive Bayes Classifier used in machine learning. Naive Bayes classifier is one of the basic algorithms often encountered in machine learning applications. If linear regression was based on concepts from linear algebra and calculus, naive Bayes classifier mostly backed up by probability theory. … An Intuitive Explanation of Naive Bayes Classifier Read More »
{"url":"https://intuitivetutorial.com/tag/naive-bayes-classifier/","timestamp":"2024-11-07T09:27:37Z","content_type":"text/html","content_length":"61671","record_id":"<urn:uuid:f7b6f625-a31b-48a4-b1b5-252ec7ed25c8>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00701.warc.gz"}
More Cellular Automata: A New Kind of Science | Online by Stephen Wolfram [Page 53] More Cellular Automata The first picture below shows the rules used in the four cellular automata on the facing page. The overall structure of these rules is the same in each case; what differs is the specific choice of new colors for each possible combination of previous colors for a cell and its two neighbors. There turn out to be a total of 256 possible sets of choices that can be made. And following my original work on cellular automata these choices can be numbered from 0 to 255, as in the second picture below. But so how do cellular automata with all these different rules behave? The next page shows a few examples in detail, while the following two pages [55, 56] show what happens in all 256 possible At first, the diversity of what one sees is a little overwhelming. But on closer investigation, definite themes begin to emerge. In the very simplest cases, all the cells in the cellular automaton end up just having the same color after one step. Thus, for example, in The rules used for the four examples of cellular automata on the facing page. In each case, these specify the new color of a cell for each possible combination of colors of that cell and its immediate neighbors on the previous step. The rules are numbered according to the scheme described above. The sequence of 256 possible cellular automaton rules of the kind shown above. As indicated, the rules can conveniently be numbered from 0 to 255. The number assigned is such that when written in base 2, it gives a sequence of 0's and 1's that correspond to the sequence of new colors chosen for each of the eight possible cases covered by the rule.
{"url":"https://www.wolframscience.com/nks/p53--more-cellular-automata/","timestamp":"2024-11-03T01:02:56Z","content_type":"text/html","content_length":"92972","record_id":"<urn:uuid:15868af8-b062-4ca2-a9b0-ff0b418bbbc3>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00556.warc.gz"}
Prealgebra Companion Book Answer Key This answer key companion provides answers to the worksheet and interactive practice questions from our Prealgebra course and includes answers for both Volume 1 and Volume 2 course companion books. Spiral-bound to lie flat while working, this answer key is a handy resource for working offline with the Prealgebra course materials. Take a Look! Flip through a preview of the Answer Key below. (note: actual answers are hidden). Preview of Prealgebra Answer Key The Prealgebra Answer Key book is printed and fulfilled on-demand and therefore is non-refundable. Please allow 10-12 days for fulfillment & shipping. Need a copy of the answer key sooner? Contact us at support@thinkwell.com for access to a complete digital copy of the book. (68 full-color pages; ISBN: 978-1-60538-078-0)
{"url":"https://www.thinkwellhomeschool.com/products/prealgebra-companion-book-answer-key","timestamp":"2024-11-13T09:26:32Z","content_type":"text/html","content_length":"314201","record_id":"<urn:uuid:69ef0251-0f60-4c2b-ac4b-64019c94252b>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00294.warc.gz"}
Understanding Normal Distribution - Finance Train Understanding Normal Distribution The normal distribution is the well-known bell-shaped curve depicted below. The bell-shaped curve comes from a statistical tendency for outcomes to cluster symmetrically around the mean (or average). Deviations from the mean are described in terms of standard deviations. In all normal distributions, 68% of outcomes will fall within 1 standard deviation to either side of the mean. Let's illustrate the concept of mean and standard deviation with a simple example. My New York subway commute every day is 30 minutes on average, with a standard deviation of 5 minutes. Assuming a normal distribution for the time it takes me to get to work, this would imply that: • 68% of the time, I can expect my daily commute to be between 25 minutes and 35 minutes (i.e., the mean of 30 minutes plus or minus 1 standard deviation, or 5 minutes). • 16% of the time, my commute is less than 25 minutes (because the normal distribution is symmetrical around the mean, I expect this event to occur 16% of the time, or (100%-68%)/2). • 16% of the time, my commute is greater than 35 minutes (again, because the normal distribution is symmetrical). Or, in other words, my 84% confidence level worst-case commute is 35 minutes (for example, only 16% of the time I would expect longer commute). From this example, it makes sense that the more standard deviations we move from the mean, the lower the probability is of such an event occurring. For example, a delay of 10 minutes or more (2 standard deviations) only has a 2.5% chance of occurring, compared to a 16% probability of a delay of 5 minutes or more (1 standard deviation). The table below relates standard deviations to lower tail probabilities (lower tail probabilities quantify the chance of an event of that magnitude or greater occurring): │Standard Deviations│Lower Tail Probability│Commuting Example │ │1 │16% │Delay of 5 minutes or more │ │1.28 │10 │Delay of 6.4 minutes or more │ │1.65 │5 │Delay of 8.25 minutes or more │ │2 │2.5 │Delay of 10 minutes or more │ │2.33 │1 │Delay of 11.65 minutes or more │ Data Science in Finance: 9-Book Bundle Master R and Python for financial data science with our comprehensive bundle of 9 ebooks. What's Included: • Getting Started with R • R Programming for Data Science • Data Visualization with R • Financial Time Series Analysis with R • Quantitative Trading Strategies with R • Derivatives with R • Credit Risk Modelling With R • Python for Data Science • Machine Learning in Finance using Python Each book includes PDFs, explanations, instructions, data files, and R code for all examples. Get the Bundle for $39 (Regular $57) JOIN 30,000 DATA PROFESSIONALS Free Guides - Getting Started with R and Python Enter your name and email address below and we will email you the guides for R programming and Python.
{"url":"https://financetrain.com/understanding-normal-distribution","timestamp":"2024-11-02T08:45:07Z","content_type":"text/html","content_length":"101654","record_id":"<urn:uuid:bc054500-bb1d-49cb-8cfa-5b6fe6c4113c>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00575.warc.gz"}
Gravitational Constant 6.67384(80) × 10^-11 m^3 kg^-1 s^-2 Episode #7 of the course “Most important numbers in the world” Gravity is a complex concept that continues to elude scientists in all its subtleties. Since its discovery and the beginnings of theories surrounding the properties and mechanisms of gravity, Newton and others have used a standard measurement known as “G” (or affectionately called “Big G”) to represent the force of gravity as a constant. According to the law of universal gravitation, the force between two bodies depends on their size and their relative distance to one another. The number represented by G is equal to the proportional constant that is exerted between these two bodies, and is called the “gravitational constant.” Newton’s Gravitational Constant is expressed as: F = Gm[1]m[2 ]/ r^2 where “F” is the force, “m[1]” and “m[2]” represent the larger and smaller masses being measured in relation to one another, and “r” is the distance between those masses. Mathematically, the representation of G is expressed in a series of units (time squared, length cubed divided by mass, etc) that are necessary to cancel and compute all the aspects of the units of measurement in the various equations. In addition to its importance in understanding how particles of matter relate to one another on Earth, the concept of G is especially important for understanding the properties and functions of matter in space. Astrophysics could not have expanded in the directions and understandings that allowed the space exploration and atomic energy technologies of the 20th century if it had not been for the enhanced understanding of the constant measurement of the force of gravity. Share with friends
{"url":"https://gohighbrow.com/gravitational-constant/","timestamp":"2024-11-04T11:11:48Z","content_type":"text/html","content_length":"64529","record_id":"<urn:uuid:27de489b-b4f0-40d3-a53a-bc897f5a667e>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00053.warc.gz"}
MCQ Questions Class 11 Economics Measures of Dispersion with Answer MCQs for Economics Class 11 with Answers Chapter 6 Measures of Dispersion Students of class 11 Economics should refer to MCQ Questions Class 11 Economics Measures of Dispersion with answers provided here which is an important chapter in Class 11 Economics NCERT textbook. These MCQ for Class 11 Economics with Answers have been prepared based on the latest CBSE and NCERT syllabus and examination guidelines for Class 11 Economics. The following MCQs can help you to practice and get better marks in the upcoming class 11 Economics examination Chapter 6 Measures of Dispersion MCQ with Answers Class 11 Economics MCQ Questions Class 11 Economics Measures of Dispersion provided below have been prepared by expert teachers of grade 11. These objective questions with solutions are expected to come in the upcoming Standard 11 examinations. Learn the below provided MCQ questions to get better marks in examinations. Question. When it comes to comparing two or more distributions we consider: (a) Absolute measures of dispersion (b) Relative measures of dispersion (c) Both (a) and (b) (d) Either (a) or (b) Question. The most commonly used measure of dispersion is: (a) Range (b) Standard deviation (c) Coefficient of variation (d) Quartile deviation Question. The range of 15, 12, 10, 9, 17, 20 is: (a) 5 (b) 12 (c) 13 (d) 11 Question. The standard deviation of 10, 16, 10, 16, 10, 10, 16, 16 is: (a) 4 (b) 6 (c) 3 (d) 0 Question. Corresponding to first quartile, the cumulative frequency is: (a) N/2 (b) N/4 (c) 3N/4 (d) None of these Question. What is the coefficient of range for the following wages (in `) of 8 workers? 80, 65, 90, 60, 75, 70, 72, 85 (a) 35 (b) 25 (c) 30 (d) 20 Question. ______ divide the total number of observations into 4 equal parts. (a) median (b) deciles (c) quartiles (d) percentiles Question. Above upper quartile, the frequency is equal to: (a) N/4 (b) N/2 (c) 3N/4 (d) None of these Question. What is the value of mean deviation about mean for the following numbers? 5, 8, 6, 3, 4 (a) 5.20 (b) 7.20 (c) 1.44 (d) 2.23 Question. ________ is equal to the value corresponding to cumulative frequency (N + 1)/4 from simple frequency distribution. (a) Median (b) 1st quartile (c) 3rd quartile (d) 1st decile Question. (3rd quartile – 1st quartile)/2 is: (a) coefficient of quartile deviation (b) median (c) quartile deviation (d) inter-quartile range Question. Corresponding to upper quartile, the cumulative frequency is: (a) 3N/4 (b) N/4 (c) 2N/4 (d) None of these Question. ___________ quartile is known as Upper quartile. (a) First (b) Second (c) Third (d) None of these Question. Lower quartile is: (a) first quartile (b) second quartile (c) upper quartile (d) None of these Question. Between second & upper quartile, the frequency is equal to: (a) 3N/4 (b) N/4 (c) N/2 (d) None of these Question. The values which divide the total number of observations into 10 equal parts are: (a) quartiles (b) percentiles (c) deciles (d) None of these Question. Rank of 1st quartile is: (a) (n + 1)/2 (b) (n + 1)/4 (c) 3(n + 1)/4 (d) None of these Question. The values which divide the total number of observations into 100 equal parts is: (a) percentiles (b) quartiles (c) deciles (d) None of these Question. The second quartile is known as _________. (a) median (b) lower quartile (c) upper quartile (d) None of these Question. Between first & second quartile, the frequency is equal to: (a) 3N/4 (b) N/2 (c) N/4 (d) None of these Question. 10th percentile is equal to: (a) 1st decile (b) 10th decile (c) 9th decile (d) None of these Question. Rank of 3rd quartile is: (a) 3(n + 1)/4 (b) (n + 1)/4 (c) (n + 1)/2 (d) None of these Question. To define quartile deviation we use: (a) lower & middle quartiles (b) lower & upper quartiles (c) upper & middle quartiles (d) None of these Question. Rank of median is: (a) (n + 1)/2 (b) (n + 1)/4 (c) 3(n + 1)/4 (d) None of these Fill up the blanks. Question. There are _______ deciles. Question. There are _______ percentiles. Question. Measures of dispersion reflect the quantum of ___________ in values. Question. Higher value of range implies ___________ dispersion. Question. As long as the ________________ and _____________ values remain unaltered, any change in other values does not affect range. Question. The presence of even one extremely high or low value in a distribution can reduce the utility of ___________ as a measure of dispersion. Thus, we may need a measure which is not unduly affected by the outliers or extreme values such as ____________. Range, Quartile Deviation or Standard Deviation or Mean Deviation Question. __________ (Mean/Median/Mode) is not used to calculate mean deviation. Question. Mean deviation is the least when calculated from the ________ and it will be higher if calculated from the ________. (mean/median/mode) Question. Positive square root of the _________ is the standard deviation. Question. Standard Deviation is ________. (absolute measure/relative measure) Question. The amount of variation is designated as _______ (absolute measure/relative measure) of dispersion. absolute measure Question. Standard Deviation can also be calculated from the values directly, i.e., without taking deviations. This amounts to taking deviations from __________. Question. Quartile deviation is a measure of dispersion. Question. Coefficient of standard deviation = (Standard Deviation × 100)/Mean. Question. The value of the standard deviation does not depend upon the choice of the origin. Question. Coefficient of variation is independent of the unit of measurement. We hope the above multiple choice questions for Class 11 Economics for Chapter 6 Measures of Dispersion provided above with answers based on the latest syllabus and examination guidelines issued by CBSE, NCERT and KVS are really useful for you. Measures of Dispersion is an important chapter in Class 11 as it provides very strong understanding about this topic. Students should go through the answers provided for the MCQs after they have themselves solved the questions. All MCQs have been provided with four options for the students to solve. These questions are really useful for benefit of class 11 students. Please go through these and let us know if you have any feedback in the comments section.
{"url":"https://dkgoelsolutions.com/mcqs-for-economics-class-11-with-answers-chapter-6-measures-of-dispersion/","timestamp":"2024-11-12T14:14:11Z","content_type":"text/html","content_length":"143719","record_id":"<urn:uuid:272a0db5-4575-4aa7-a891-6c2e50f12156>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00303.warc.gz"}
6th Grade Georgia Milestones Assessment Math Worksheets: FREE & Printable Don't search any further! Explore our 6th-grade Georgia Milestones Assessment math worksheet collection, where you'll find FREE, subject-categorized math problems specifically designed for the 6th-grade Georgia Milestones Assessment exam! The Readiness Improvement Success Empowerment (Georgia Milestones Assessment) is a test to determine progress. The Readiness Improvement Success Empowerment (Georgia Milestones Assessment) is an exam designed to measure student’s progress in grades 3-8. In this blog post, we’ve prepared 6th-grade Georgia Milestones Assessment math worksheets to help 6th-grade students get ready for the math portion of this test. Our 6th-grade Georgia Milestones Assessment math worksheets feature an optimal number of exercises for each topic, ensuring thorough coverage without overwhelming 6th-grade students. These exercises, found in the 6th-grade Georgia Milestones Assessment math worksheets, are carefully crafted to align with the style of questions on the 6th-grade Georgia Milestones Assessment math test, effectively enhancing students’ readiness to excel. IMPORTANT: COPYRIGHT TERMS: These worksheets are for personal use. Worksheets may not be uploaded to the internet in any form, including classroom/personal websites or network drives. You can download the worksheets and print as many as you need. You have permission to distribute the printed copies to your students, teachers, tutors, and friends. You Do NOT have permission to send these worksheets to anyone in any way (via email, text messages, or other ways). They MUST download the worksheets themselves. You can send the address of this page to your students, tutors, friends, etc. The Absolute Best Book to Ace the Georgia Milestones Assessment Math Test Original price was: $29.99.Current price is: $14.99. 6th Grade Georgia Milestones Assessment Mathematics Concepts Whole Numbers Fractions and Decimals Real Numbers and Integers Proportions, Ratios, and Percent Algebraic Expressions Equations and Inequalities Geometry and Solid Figures Statistics and Probability 6th Grade Georgia Milestones Assessment Math Exercises Fractions and Decimals Real Numbers and Integers Proportions and Ratios Algebraic Expressions Equations and Inequalities Exponents and Radicals Solid Figures Looking for the best resource to help you succeed on the 6th Grade Georgia Milestones Assessment Math test? The Most Comprehensive Review for 6th-Grade Students Related to This Article What people say about "6th Grade Georgia Milestones Assessment Math Worksheets: FREE & Printable - Effortless Math: We Help Students Learn to LOVE Mathematics"? No one replied yet.
{"url":"https://www.effortlessmath.com/blog/6th-grade-georgia-milestones-assessment-math-worksheets/","timestamp":"2024-11-13T09:36:43Z","content_type":"text/html","content_length":"123421","record_id":"<urn:uuid:8773c4d9-f32e-44f3-923f-e6e7373bb149>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00559.warc.gz"}
Jun Liu | Applied Mathematics PhD, University of Waterloo Email: j.liu@uwaterloo.ca Telephone: (519) 888-4567 Ext. 47550 Office: MC 6108 Jun Liu is a Professor of Applied Mathematics and a Canada Research Chair in Hybrid Systems and Control at the University of Waterloo, where he directs the Hybrid Systems Lab. His main research interests are in the theory and applications of hybrid systems and control, including rigorous computational methods for control design with applications in cyber-physical systems and robotics. His research also includes optimization and learning theory, with applications in robotics, data science, and AI. He is a co-author of the books "Formal Methods for Control of Nonlinear Systems" (CRC Press) and "Model-Based Reinforcement Learning: From Data to Continuous Actions with a Python-based Toolbox" (Wiley-IEEE Press). He is an associate editor for the IFAC journal Automatica and the journal Systems & Control Letters. Research interests • Systems and control theory • Formal methods for control design • Learning and optimization theory • Applications in cyber-physical systems, robotics, data science, and AI
{"url":"https://uwaterloo.ca/applied-mathematics/profiles/jun-liu","timestamp":"2024-11-07T23:06:54Z","content_type":"text/html","content_length":"110495","record_id":"<urn:uuid:6c5e036e-b75b-4f80-94b1-6b9b82504678>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00159.warc.gz"}
November 2017 In this paper, we propose and study the iteration complexity of an inexact Douglas-Rachford splitting (DRS) method and a Douglas-Rachford-Tseng’s forward-backward (F-B) splitting method for solving two-operator and four-operator monotone inclusions, respectively. The former method (although based on a slightly different mechanism of iteration) is motivated by the recent work of J. Eckstein and W. … Read more An algorithm for binary chance-constrained problems using IIS We propose an algorithm based on infeasible irreducible subsystems (IIS) to solve general binary chance-constrained problems. By leverag- ing on the problem structure we are able to generate good quality upper bounds to the optimal value early in the algorithm, and the discrete do- main is used to guide us eciently in the search of … Read more Linear Convergence Rate of the Generalized Alternating Direction Method of Multipliers for a Class of Convex Optimization Problems Rencently, the generalized aternating direction method of multipliers (GADMM) proposed by Eckstein and Bertsekas has received intensive attention from a broad spectrum of areas. In this paper, we consider the convergence rate of GADMM when applying to the convex optimization problems that the subdifferentials of the underlying functions are piecewise linear multifunctions, including LASSO, a … Read more Efficient Convex Optimization for Linear MPC Model predictive control (MPC) formulations with linear dynamics and quadratic objectives can be solved efficiently by using a primal-dual interior-point framework, with complexity proportional to the length of the horizon. An alternative, which is more able to exploit the similarity of the problems that are solved at each decision point of linear MPC, is to … Read more On global minimizers of quadratic functions with cubic regularization In this paper, we analyze some theoretical properties of the problem of minimizing a quadratic function with a cubic regularization term, arising in many methods for unconstrained and constrained optimization that have been proposed in the last years. First we show that, given any stationary point that is not a global solution, it is possible … Read more Bootstrap Robust Prescriptive Analytics We address the problem of prescribing an optimal decision in a framework where its cost depends on uncertain problem parameters $Y$ that need to be learned from data. Earlier work by Bertsimas and Kallus (2014) transforms classical machine learning methods that merely predict $Y$ from supervised training data $[(x_1, y_1), \dots, (x_n, y_n)]$ into prescriptive … Read more Analysis of the Gradient Method with an Armijo-Wolfe Line Search on a Class of Nonsmooth Convex Functions It has long been known that the gradient (steepest descent) method may fail on nonsmooth problems, but the examples that have appeared in the literature are either devised specifically to defeat a gradient or subgradient method with an exact line search or are unstable with respect to perturbation of the initial point. We give an … Read more Random Gradient Extrapolation for Distributed and Stochastic Optimization In this paper, we consider a class of finite-sum convex optimization problems defined over a distributed multiagent network with $m$ agents connected to a central server. In particular, the objective function consists of the average of $m$ ($\ge 1$) smooth components associated with each network agent together with a strongly convex term. Our major contribution … Read more Amenable cones: error bounds without constraint qualifications We provide a framework for obtaining error bounds for linear conic problems without assuming constraint qualifications or regularity conditions. The key aspects of our approach are the notions of amenable cones and facial residual functions. For amenable cones, it is shown that error bounds can be expressed as a composition of facial residual functions. The … Read more Crowd-based City Logistics Cities are drivers of economic development, providing infrastructure to support countless activities and services. Today, the world’s 750 biggest cities account for more than 57% of the global GDP and this number is expected to increase to 61% by 2030. More than half of the world’s population lives in cities, or urban areas, and this … Read more
{"url":"https://optimization-online.org/2017/11/","timestamp":"2024-11-06T07:20:16Z","content_type":"text/html","content_length":"105821","record_id":"<urn:uuid:2a31c0dc-faa8-456f-a175-20e2aece279b>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00122.warc.gz"}
Parallel Planes: How It Is Used In Calculus And Geometry? - Calculus Help Parallel Planes: How It Is Used In Calculus And Geometry? by admin_calcPosted on A parallel plane is a flat, two-dimensional surface. If you have two planes that don’t intersect, they’re parallel. Use calculus formulas to determine the distance between two parallel planes. You can solve some problems by finding random points on the plane, and then use an equation to figure the distance. What Are Parallel Planes In Calculus? You probably deal with parallel parking now and then, so you may have a vague idea of the definition of parallelism and how it might be used with figures in calculus and geometry. You’ll encounter parallel planes in your Calculus 3 classes, and focus on equations of planes and other problems. Here’s a look at planes in calculus, and how parallelism relates to them. We’ll also look at parallel postulates, and how parallel lines and planes are used in geometry and calculus. What Is A Parallel Plane? In calculus or geometry, a plane is a two-dimensional, flat surface. Two non-intersecting planes are parallel. You can find three parallel planes in cubes. The planes on opposite sides of the cube are parallel to each other. Parallel lines are mentioned much more than planes that are parallel. They are the lines in a plane that don’t meet. A plane and a line, or two planes in a 3D Euclidean space are parallel if they don’t share a point. Parallelism is used in Euclidean and affine geometry. Hyperbolic geometry may have lines with analogous features that fall under parallelism’s properties. Skew lines are two lines in a 3D space that don’t meet in a common plane. The Parallel Postulate Euclid’s parallel postulate says that for every straight line and a point that’s not on it, there’s only one straight line passing through that point that doesn’t intersect the first line, regardless of how far the line or point are extended. This mathematical postulate corresponds with Euclid’s Fifth Postulate. Euclid didn’t use this postulate until Proposition 29 of the Elements. Many experts didn’t believe the Fifth Postulate was accurate but considered it a theorem taken from Euclid’s first four postulates. The term absolute geometry refers to geometry that is based on Euclid’s first four postulates. Many parallel postulate proofs have been written and discussed by the mathematical community over the centuries. The dissertation of G. S. Klügel in 1763 called Euclid’s parallel postulate a necessary tool to prove mathematical results. The postulate wasn’t intuitive, Klügel wrote, but it was helpful. An axiom proposed by 17^th Century mathematician John Wallis stated that a triangle could be changed to be larger or smaller with any distortion of its angles or proportions. Lobachevsky and Janos Bolyai, in two separate 1823 studies, concluded that you could create non-Euclidean geometry that didn’t adhere to the parallel postulate. The parallel postulate refers to modern day Euclidean geometry. Change the phrase so only one straight line passes or no line which passes exists, or two lines or more pass, you will be describing elliptic (no line passing) or hyperbolic geometry (two or more lines passing). The following theorems and axioms are equal to the parallel postulate: One of Hilbert’s parallel axioms is also equivalent to the parallel postulate. Find The Distance Between Two Parallel Planes Our planes are II[1 ]and II[2. ]These planes are parallel if II[1’]s normal (n[1 ]= a[1 ]b[1 ]c[1]) is the scalar multiple k of II[2’]s normal n_2 = (ka_2, kb_2, kc_2) Let’s look at the planes II2:4x+8y+12z+6=0 and II1:2x+4y+6z+1=0. Get the norms of these planes to arrive at n1=(2,4,6) and n2=(4,8,12) You’ll get 2n1=n2 and finally II1∥II2. Find an arbitrary point on II1 or II2. After choosing your arbitrary point on one of the planes, use the equation for the other plane in the formula for the distance between a plane and point. You’ll now determine the distance between the two planes. The planes II2x+3y+4z−3=0 and II2:−4x−6y−8z+8=0 are the basis of our second problem. Find a random point on the first plane, for example, 0,0, and four over 3, and then use formula D (distance) equals ax0 plus by0 plus cz0 plus d over the square root of a squared plus b squared plus c squared. Now figure out the distance between the two planes using this formula. D equals 4(0) plus negative 6(0) plus negative 8(3/4) plus 8 over the square root of negative 4 to the second power plus negative 6 to the second power plus negative 8 to the second power, followed by D equals negative 6 plus 8 over the square root of 16 plus 36 plus 64, then D equals 2 over the square root of 116. Practicing With Parallel Planes Remember to practice equations involving parallel planes several times, using problems other than those assigned by your teacher. There are plenty of practice questions in textbooks and online. You should also join a study group or contact a tutor if you need more practice with geometry or calculus. Logical Learning In Calculus And Geometry The logical or mathematical style of learning works for any subject, but it works best for algebra, geometry, and calculus. Logical learning enables you to recognize patterns easily and draw connections between content that may seem unrelated. You know how to group information do you arrive at a correct conclusion. A person with a propensity for logical learning remembers the basics of geometry, algebra, and calculus without referring to a textbook. You can perform moderately difficult calculations in your You use a system to work through problems, and apply it to all types of equations and mathematical questions. Setting budgets and numerical goalposts helps you make progress when you solve complicated equations. A scientific thought process allows you to support your arguments with statistics and facts. You should identify and point out flaws in other people’s’ logic, and work out strategies for all types of projects. You may play video games that involve detective work and strategizing simulated war plans. If you have a logical thinking style, you may like science, computer programming or law as well as mathematics. You look for the logical way to solve a calculus or geometry problem. You strive to understand all the details behind why you perform certain steps in solving an equation. You aren’t satisfied with merely memorizing formulas. Explore the logical steps you use to apply to a problem, and keep them in mind when you tackle a new equation. Don’t overanalyze when you work on a difficult problem. Work with the formula and prior knowledge of similar equations. If you don’t get the answer right at first, try again, but avoid developing “analysis paralysis” when figuring out what you did wrong. The video may take a few seconds to load. Having trouble Viewing Video content? Some browsers do not support this version – Try a different browser.
{"url":"https://calculus-help.com/parallel-planes/","timestamp":"2024-11-04T13:52:24Z","content_type":"text/html","content_length":"61340","record_id":"<urn:uuid:a90d67fd-431a-4c39-b91b-bc936fa213e6>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00006.warc.gz"}
1.3: Data, Sampling, and Variation in Data and Sampling (2024) Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vectorC}[1]{\textbf{#1}}\) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}}\) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}}\) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}}}\) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{#1}}} \) Data may come from a population or from a sample. Small letters like \(x\) or \(y\) generally are used to represent data values. Most data can be put into the following categories: Qualitative data are the result of categorizing or describing attributes of a population. Hair color, blood type, ethnic group, the car a person drives, and the street a person lives on are examples of qualitative data. Qualitative data are generally described by words or letters. For instance, hair color might be black, dark brown, light brown, blonde, gray, or red. Blood type might be AB+, O-, or B+. Researchers often prefer to use quantitative data over qualitative data because it lends itself more easily to mathematical analysis. For example, it does not make sense to find an average hair color or blood type. Quantitative data are always numbers. Quantitative data are the result of counting or measuring attributes of a population. Amount of money, pulse rate, weight, number of people living in your town, and number of students who take statistics are examples of quantitative data. Quantitative data may be either discrete or continuous. All data that are the result of counting are called quantitative discrete data. These data take on only certain numerical values. If you count the number of phone calls you receive for each day of the week, you might get values such as zero, one, two, or three. All data that are the result of measuring are quantitative continuous data assuming that we can measure accurately. Measuring angles in radians might result in such numbers as \(\frac{\pi}{6}\), \(\ frac{\pi}{3}\), \(\frac{\pi}{2}\), \(\pi\), \(\frac{3\pi}{4}\), and so on. If you and your friends carry backpacks with books in them to school, the numbers of books in the backpacks are discrete data and the weights of the backpacks are continuous data. Sample of Quantitative Discrete Data The data are the number of books students carry in their backpacks. You sample five students. Two students carry three books, one student carries four books, one student carries two books, and one student carries one book. The numbers of books (three, four, two, and one) are the quantitative discrete data. Exercise \(\PageIndex{1}\) The data are the number of machines in a gym. You sample five gyms. One gym has 12 machines, one gym has 15 machines, one gym has ten machines, one gym has 22 machines, and the other gym has 20 machines. What type of data is this? quantitative discrete data Sample of Quantitative Continuous Data The data are the weights of backpacks with books in them. You sample the same five students. The weights (in pounds) of their backpacks are 6.2, 7, 6.8, 9.1, 4.3. Notice that backpacks carrying three books can have different weights. Weights are quantitative continuous data because weights are measured. Exercise \(\PageIndex{2}\) The data are the areas of lawns in square feet. You sample five houses. The areas of the lawns are 144 sq. feet, 160 sq. feet, 190 sq. feet, 180 sq. feet, and 210 sq. feet. What type of data is this? quantitative continuous data Exercise \(\PageIndex{3}\) You go to the supermarket and purchase three cans of soup (19 ounces) tomato bisque, 14.1 ounces lentil, and 19 ounces Italian wedding), two packages of nuts (walnuts and peanuts), four different kinds of vegetable (broccoli, cauliflower, spinach, and carrots), and two desserts (16 ounces Cherry Garcia ice cream and two pounds (32 ounces chocolate chip cookies). Name data sets that are quantitative discrete, quantitative continuous, and qualitative. One Possible Solution: • The three cans of soup, two packages of nuts, four kinds of vegetables and two desserts are quantitative discrete data because you count them. • The weights of the soups (19 ounces, 14.1 ounces, 19 ounces) are quantitative continuous data because you measure weights as precisely as possible. • Types of soups, nuts, vegetables and desserts are qualitative data because they are categorical. Try to identify additional data sets in this example. Sample of qualitative data The data are the colors of backpacks. Again, you sample the same five students. One student has a red backpack, two students have black backpacks, one student has a green backpack, and one student has a gray backpack. The colors red, black, black, green, and gray are qualitative data. Exercise \(\PageIndex{4}\) The data are the colors of houses. You sample five houses. The colors of the houses are white, yellow, white, red, and white. What type of data is this? qualitative data Collaborative Exercise \(\PageIndex{1}\) Work collaboratively to determine the correct data type (quantitative or qualitative). Indicate whether quantitative data are continuous or discrete. Hint: Data that are discrete often start with the words "the number of." 1. the number of pairs of shoes you own 2. the type of car you drive 3. where you go on vacation 4. the distance it is from your home to the nearest grocery store 5. the number of classes you take per school year. 6. the tuition for your classes 7. the type of calculator you use 8. movie ratings 9. political party preferences 10. weights of sumo wrestlers 11. amount of money (in dollars) won playing poker 12. number of correct answers on a quiz 13. peoples’ attitudes toward the government 14. IQ scores (This may cause some discussion.) Items a, e, f, k, and l are quantitative discrete; items d, j, and n are quantitative continuous; items b, c, g, h, i, and m are qualitative. Exercise \(\PageIndex{5}\) Determine the correct data type (quantitative or qualitative) for the number of cars in a parking lot. Indicate whether quantitative data are continuous or discrete. quantitative discrete Exercise \(\PageIndex{6}\) A statistics professor collects information about the classification of her students as freshmen, sophomores, juniors, or seniors. The data she collects are summarized in the pie chart Figure \(\ PageIndex{1}\). What type of data does this graph show? This pie chart shows the students in each year, which is qualitative data. Exercise \(\PageIndex{7}\) The registrar at State University keeps records of the number of credit hours students complete each semester. The data he collects are summarized in the histogram. The class boundaries are 10 to less than 13, 13 to less than 16, 16 to less than 19, 19 to less than 22, and 22 to less than 25. What type of data does this graph show? A histogram is used to display quantitative data: the numbers of credit hours completed. Because students can complete only a whole number of hours (no fractions of hours allowed), this data is quantitative discrete. Qualitative Data Discussion Below are tables comparing the number of part-time and full-time students at Lemoore College and Coalinga College enrolled for the spring 2024 semester. The tables display counts (frequencies) and percentages or proportions (relative frequencies). The percent columns make comparing the same categories in the colleges easier. Displaying percentages along with the numbers is often helpful, but it is particularly important when comparing sets of data that do not have the same totals, such as the total enrollments for both colleges in this example. Notice how much larger the percentage for part-time students at Lemoore College is compared to Coalinga College. Table \(\PageIndex{1}\): Spring Term 2024 (Census Lemoore College Coalinga College Number Percent Number Percent Full-time 4,688 63.7% Full-time 1,571 54.0%% Part-time 2,671 36.3% Part-time 1,340 46.0% Total 7,359 100% Total 2,911 100% Tables are a good way of organizing and displaying data. But graphs can be even more helpful in understanding the data. There are no strict rules concerning which graphs to use. Two graphs that are used to display qualitative data are pie charts and bar graphs. • In a pie chart, categories of data are represented by wedges in a circle and are proportional in size to the percent of individuals in each category. • In a bar graph, the length of the bar for each category is proportional to the number or percent of individuals in each category. Bars may be vertical or horizontal. • A Pareto chart consists of bars that are sorted into order by category size (largest to smallest). Look at Figures \(\PageIndex{3}\) and \(\PageIndex{4}\) and determine which graph (pie or bar) you think displays the comparisons better. It is a good idea to look at a variety of graphs to see which is the most helpful in displaying the data. We might make different choices of what we think is the “best” graph depending on the data and the context. Our choice also depends on what we are using the data for. Percentages That Add to More (or Less) Than 100% Sometimes percentages add up to be more than 100% (or less than 100%). In the graph, the percentages add to more than 100% because students can be in more than one category. A bar graph is appropriate to compare the relative size of the categories. A pie chart cannot be used. It also could not be used if the percentages added to less than 100%. Table \(\PageIndex{2}\): De Anza College Spring 2010 Characteristic/Category Percent Full-Time Students 40.9% Students who intend to transfer to a 4-year educational institution 48.6% Students under age 25 61.0% TOTAL 150.5% Omitting Categories/Missing Data The table displays Ethnicity of Students but is missing the "Other/Unknown" category. This category contains people who did not feel they fit into any of the ethnicity categories or declined to respond. Notice that the frequencies do not add up to the total number of students. In this situation, create a bar graph and not a pie chart. Table \(\PageIndex{2}\): Ethnicity of Students at De Anza College Fall Term 2007 (Census Day) Frequency Percent Asian 8,794 36.1% Black 1,412 5.8% Filipino 1,298 5.3% Hispanic 4,180 17.1% Native American 146 0.6% Pacific Islander 236 1.0% White 5,978 24.5% TOTAL 22,044 out of 24,382 90.4% out of 100% The following graph is the same as the previous graph but the “Other/Unknown” percent (9.6%) has been included. The “Other/Unknown” category is large compared to some of the other categories (Native American, 0.6%, Pacific Islander 1.0%). This is important to know when we think about what the data are telling us. This particular bar graph in Figure \(\PageIndex{4}\) can be difficult to understand visually. The graph in Figure \(\PageIndex{5}\) is a Pareto chart. The Pareto chart has the bars sorted from largest to smallest and is easier to read and interpret. Pie Charts: No Missing Data The following pie charts have the “Other/Unknown” category included (since the percentages must add to 100%). The chart in Figure \(\PageIndex{6}\) is organized by the size of each wedge, which makes it a more visually informative graph than the unsorted, alphabetical graph in Figure \(\PageIndex{6}\). Gathering information about an entire population often costs too much or is virtually impossible. Instead, we use a sample of the population. A sample should have the same characteristics as the population it is representing. Most statisticians use various methods of random sampling in an attempt to achieve this goal. This section will describe a few of the most common methods. There are several different methods of random sampling. In each form of random sampling, each member of a population initially has an equal chance of being selected for the sample. Each method has pros and cons. The easiest method to describe is called a simple random sample. Any group of n individuals is equally likely to be chosen by any other group of n individuals if the simple random sampling technique is used. In other words, each sample of the same size has an equal chance of being selected. For example, suppose Lisa wants to form a four-person study group (herself and three other people) from her pre-calculus class, which has 31 members not including Lisa. To choose a simple random sample of size three from the other members of her class, Lisa could put all 31 names in a hat, shake the hat, close her eyes, and pick out three names. A more technological way is for Lisa to first list the last names of the members of her class together with a two-digit number, as in Table \ Table \(\PageIndex{3}\): Class Roster ID Name ID Name ID Name 00 Anselmo 11 King 21 Roquero 01 Bautista 12 Legeny 22 Roth 02 Bayani 13 Lundquist 23 Rowell 03 Cheng 14 Macierz 24 Salangsang 04 Cuarismo 15 Motogawa 25 Slade 05 Cuningham 16 Okimoto 26 Stratcher 06 Fontecha 17 Patel 27 Tallai 07 Hong 18 Price 28 Tran 08 Hoobler 19 Quizon 29 Wai 09 Jiao 20 Reyes 30 Wood 10 Khan Lisa can use a table of random numbers (found in many statistics books and mathematical handbooks), a calculator, or a computer to generate random numbers. For this example, suppose Lisa chooses to generate random numbers from a calculator. The numbers generated are as follows: 0.94360; 0.99832; 0.14669; 0.51470; 0.40581; 0.73381; 0.04399 Lisa reads two-digit groups until she has chosen three class members (that is, she reads 0.94360 as the groups 94, 43, 36, 60). Each random number may only contribute one class member. If she needed to, Lisa could have generated more random numbers. The random numbers 0.94360 and 0.99832 do not contain appropriate two digit numbers. However the third random number, 0.14669, contains 14 (the fourth random number also contains 14), the fifth random number contains 05, and the seventh random number contains 04. The two-digit number 14 corresponds to Macierz, 05 corresponds to Cuningham, and 04 corresponds to Cuarismo. Besides herself, Lisa’s group will consist of Marcierz, Cuningham, and Cuarismo. To generate random numbers: • Press MATH • Arrow over to PRB • Press 5:randInt (Enter 0, 30). • Press ENTER for the first random number. • Press ENTER two more times for the other 2 random numbers. If there is a repeat press ENTER again. Note: randInt(0, 30, 3) will generate 3 random numbers. Besides simple random sampling, there are other forms of sampling that involve a chance process for getting the sample. Other well-known random sampling methods are the stratified sample, the cluster sample, and the systematic sample. To choose a stratified sample, divide the population into groups called strata and then take a proportionate number from each stratum. For example, you could stratify (group) your college population by department and then choose a proportionate simple random sample from each stratum (each department) to get a stratified random sample. To choose a simple random sample from each department, number each member of the first department, number each member of the second department, and do the same for the remaining departments. Then use simple random sampling to choose proportionate numbers from the first department and do the same for each of the remaining departments. Those numbers picked from the first department, picked from the second department, and so on represent the members who make up the stratified sample. To choose a cluster sample, divide the population into clusters (groups) and then randomly select some of the clusters. All the members from these clusters are in the cluster sample. For example, if you randomly sample four departments from your college population, the four departments make up the cluster sample. Divide your college faculty by department. The departments are the clusters. Number each department, and then choose four different numbers using simple random sampling. All members of the four departments with those numbers are the cluster sample. To choose a systematic sample, randomly select a starting point and take every n^th piece of data from a listing of the population. For example, suppose you have to do a phone survey. Your phone book contains 20,000 residence listings. You must choose 400 names for the sample. Number the population 1–20,000 and then use a simple random sample to pick a number that represents the first name in the sample. Then choose every fiftieth name thereafter until you have a total of 400 names (you might have to go back to the beginning of your phone list). Systematic sampling is frequently chosen because it is a simple method. A type of sampling that is non-random is convenience sampling. Convenience sampling involves using results that are readily available. For example, a computer software store conducts a marketing study by interviewing potential customers who happen to be in the store browsing through the available software. The results of convenience sampling may be very good in some cases and highly biased (favor certain outcomes) in others. Sampling data should be done very carefully. Collecting data carelessly can have devastating results. Surveys mailed to households and then returned may be very biased (they may favor a certain group). It is better for the person conducting the survey to select the sample respondents. True random sampling is done with replacement. That is, once a member is picked, that member goes back into the population and thus may be chosen more than once. However for practical reasons, in most populations, simple random sampling is done without replacement. Surveys are typically done without replacement. That is, a member of the population may be chosen only once. Most samples are taken from large populations and the sample tends to be small in comparison to the population. Since this is the case, sampling without replacement is approximately the same as sampling with replacement because the chance of picking the same individual more than once with replacement is very low. In a college population of 10,000 people, suppose you want to pick a sample of 1,000 randomly for a survey. For any particular sample of 1,000, if you are sampling with replacement, • the chance of picking the first person is 1,000 out of 10,000 (0.1000); • the chance of picking a different second person for this sample is 999 out of 10,000 (0.0999); • the chance of picking the same person again is 1 out of 10,000 (very low). If you are sampling without replacement, • the chance of picking the first person for any particular sample is 1000 out of 10,000 (0.1000); • the chance of picking a different second person is 999 out of 9,999 (0.0999); • you do not replace the first person before picking the next person. Compare the fractions 999/10,000 and 999/9,999. For accuracy, carry the decimal answers to four decimal places. To four decimal places, these numbers are equivalent (0.0999). Sampling without replacement instead of sampling with replacement becomes a mathematical issue only when the population is small. For example, if the population is 25 people, the sample is ten, and you are sampling with replacement for any particular sample, then the chance of picking the first person is ten out of 25, and the chance of picking a different second person is nine out of 25 (you replace the first person). If you sample without replacement, then the chance of picking the first person is ten out of 25, and then the chance of picking the second person (who is different) is nine out of 24 (you do not replace the first person). Compare the fractions 9/25 and 9/24. To four decimal places, 9/25 = 0.3600 and 9/24 = 0.3750. To four decimal places, these numbers are not equivalent. When you analyze data, it is important to be aware of sampling errors and nonsampling errors. The actual process of sampling causes sampling errors. For example, the sample may not be large enough. Factors not related to the sampling process cause nonsampling errors. A defective counting device can cause a nonsampling error. In reality, a sample will never be exactly representative of the population so there will always be some sampling error. As a rule, the larger the sample, the smaller the sampling error. In statistics, a sampling bias is created when a sample is collected from a population and some members of the population are not as likely to be chosen as others (remember, each member of the population should have an equally likely chance of being chosen). When a sampling bias happens, there can be incorrect conclusions drawn about the population that is being studied. Exercise \(\PageIndex{8}\) A study is done to determine the average tuition that San Jose State undergraduate students pay per semester. Each student in the following samples is asked how much tuition he or she paid for the Fall semester. What is the type of sampling in each case? 1. A sample of 100 undergraduate San Jose State students is taken by organizing the students’ names by classification (freshman, sophomore, junior, or senior), and then selecting 25 students from 2. A random number generator is used to select a student from the alphabetical listing of all undergraduate students in the Fall semester. Starting with that student, every 50th student is chosen until 75 students are included in the sample. 3. A completely random method is used to select 75 students. Each undergraduate student in the fall semester has the same probability of being chosen at any stage of the sampling process. 4. The freshman, sophomore, junior, and senior years are numbered one, two, three, and four, respectively. A random number generator is used to pick two of those years. All students in those two years are in the sample. 5. An administrative assistant is asked to stand in front of the library one Wednesday and to ask the first 100 undergraduate students he encounters what they paid for tuition the Fall semester. Those 100 students are the sample. a. stratified; b. systematic; c. simple random; d. cluster; e. convenience Example \(\PageIndex{9}\): Calculator You are going to use the random number generator to generate different types of samples from the data. This table displays six sets of quiz scores (each quiz counts 10 points) for an elementary statistics class. #1 #2 #3 #4 #5 #6 Instructions: Use the Random Number Generator to pick samples. 1. Create a stratified sample by column. Pick three quiz scores randomly from each column. □ Number each row one through ten. □ On your calculator, press Math and arrow over to PRB. □ For column 1, Press 5:randInt( and enter 1,10). Press ENTER. Record the number. Press ENTER 2 more times (even the repeats). Record these numbers. Record the three quiz scores in column one that correspond to these three numbers. □ Repeat for columns two through six. □ These 18 quiz scores are a stratified sample. 2. Create a cluster sample by picking two of the columns. Use the column numbers: one through six. □ Press MATH and arrow over to PRB. □ Press 5:randInt ( enter 1,6). Press ENTER. Record the number. Press ENTER and record that number. □ The two numbers are for two of the columns. □ The quiz scores (20 of them) in these 2 columns are the cluster sample. 3. Create a simple random sample of 15 quiz scores. □ Use the numbering one through 60. □ Press MATH. Arrow over to PRB. Press 5:randInt( and enter 1, 60). □ Press ENTER 15 times and record the numbers. □ Record the quiz scores that correspond to these numbers. □ These 15 quiz scores are the systematic sample. 4. Create a systematic sample of 12 quiz scores. □ Use the numbering one through 60. □ Press MATH. Arrow over to PRB. Press 5:randInt( and enter 1, 60). □ Press ENTER. Record the number and the first quiz score. From that number, count ten quiz scores and record that quiz score. Keep counting ten quiz scores and recording the quiz score until you have a sample of 12 quiz scores. You may wrap around (go back to the beginning). Example \(\PageIndex{10}\) Determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience). 1. A soccer coach selects six players from a group of boys aged eight to ten, seven players from a group of boys aged 11 to 12, and three players from a group of boys aged 13 to 14 to form a recreational soccer team. 2. A pollster interviews all human resource personnel in five different high-tech companies. 3. A high school educational researcher interviews 50 high school female teachers and 50 high school male teachers. 4. A medical researcher interviews every third cancer patient from a list of cancer patients at a local hospital. 5. A high school counselor uses a computer to generate 50 random numbers and then picks students whose names correspond to the numbers. 6. A student interviews classmates in his algebra class to determine how many pairs of jeans a student owns, on average. a. stratified; b. cluster; c. stratified; d. systematic; e. simple random; f.convenience Exercise \(\PageIndex{11}\) Determine the type of sampling used (simple random, stratified, systematic, cluster, or convenience). A high school principal polls 50 freshmen, 50 sophomores, 50 juniors, and 50 seniors regarding policy changes for after school activities. If we were to examine two samples representing the same population, even if we used random sampling methods for the samples, they would not be exactly the same. Just as there is variation in data, there is variation in samples. As you become accustomed to sampling, the variability will begin to seem natural. Example \(\PageIndex{12}\): Sampling Suppose ABC College has 10,000 part-time students (the population). We are interested in the average amount of money a part-time student spends on books in the fall term. Asking all 10,000 students is an almost impossible task. Suppose we take two different samples. First, we use convenience sampling and survey ten students from a first term organic chemistry class. Many of these students are taking first term calculus in addition to the organic chemistry class. The amount of money they spend on books is as follows: $128; $87; $173; $116; $130; $204; $147; $189; $93; $153 The second sample is taken using a list of senior citizens who take P.E. classes and taking every fifth senior citizen on the list, for a total of ten senior citizens. They spend: $50; $40; $36; $15; $50; $100; $40; $53; $22; $22 a. Do you think that either of these samples is representative of (or is characteristic of) the entire 10,000 part-time student population? a. No. The first sample probably consists of science-oriented students. Besides the chemistry course, some of them are also taking first-term calculus. Books for these classes tend to be expensive. Most of these students are, more than likely, paying more than the average part-time student for their books. The second sample is a group of senior citizens who are, more than likely, taking courses for health and interest. The amount of money they spend on books is probably much less than the average parttime student. Both samples are biased. Also, in both cases, not all students have a chance to be in either sample. b. Since these samples are not representative of the entire population, is it wise to use the results to describe the entire population? b. No. For these samples, each member of the population did not have an equally likely chance of being chosen. Now, suppose we take a third sample. We choose ten different part-time students from the disciplines of chemistry, math, English, psychology, sociology, history, nursing, physical education, art, and early childhood development. (We assume that these are the only disciplines in which part-time students at ABC College are enrolled and that an equal number of part-time students are enrolled in each of the disciplines.) Each student is chosen using simple random sampling. Using a calculator, random numbers are generated and a student from a particular discipline is selected if he or she has a corresponding number. The students spend the following amounts: $180; $50; $150; $85; $260; $75; $180; $200; $200; $150 c. Is the sample biased? Students often ask if it is "good enough" to take a sample, instead of surveying the entire population. If the survey is done well, the answer is yes. Exercise \(\PageIndex{12}\) A local radio station has a fan base of 20,000 listeners. The station wants to know if its audience would prefer more music or more talk shows. Asking all 20,000 listeners is an almost impossible The station uses convenience sampling and surveys the first 200 people they meet at one of the station’s music concert events. 24 people said they’d prefer more talk shows, and 176 people said they’d prefer more music. Do you think that this sample is representative of (or is characteristic of) the entire 20,000 listener population? The sample probably consists more of people who prefer music because it is a concert event. Also, the sample represents only those who showed up to the event earlier than the majority. The sample probably doesn’t represent the entire fan base and is probably biased towards people who would prefer music. Collaborative Exercise \(\PageIndex{8}\) As a class, determine whether or not the following samples are representative. If they are not, discuss the reasons. 1. To find the average GPA of all students in a university, use all honor students at the university as the sample. 2. To find out the most popular cereal among young people under the age of ten, stand outside a large supermarket for three hours and speak to every twentieth child under age ten who enters the 3. To find the average annual income of all adults in the United States, sample U.S. congressmen. Create a cluster sample by considering each state as a stratum (group). By using simple random sampling, select states to be part of the cluster. Then survey every U.S. congressman in the cluster. 4. To determine the proportion of people taking public transportation to work, survey 20 people in New York City. Conduct the survey by sitting in Central Park on a bench and interviewing every person who sits next to you. 5. To determine the average cost of a two-day stay in a hospital in Massachusetts, survey 100 hospitals across the state using simple random sampling. Variation in Data Variation is present in any set of data. For example, 16-ounce cans of beverage may contain more or less than 16 ounces of liquid. In one study, eight 16 ounce cans were measured and produced the following amount (in ounces) of beverage: 15.8; 16.1; 15.2; 14.8; 15.8; 15.9; 16.0; 15.5 Measurements of the amount of beverage in a 16-ounce can may vary because different people make the measurements or because the exact amount, 16 ounces of liquid, was not put into the cans. Manufacturers regularly run tests to determine if the amount of beverage in a 16-ounce can falls within the desired range. Be aware that as you take data, your data may vary somewhat from the data someone else is taking for the same purpose. This is completely natural. However, if two or more of you are taking the same data and get very different results, it is time for you and the others to reevaluate your data-taking methods and your accuracy. Variation in Samples It was mentioned previously that two or more samples from the same population, taken randomly, and having close to the same characteristics of the population will likely be different from each other. Suppose Doreen and Jung both decide to study the average amount of time students at their college sleep each night. Doreen and Jung each take samples of 500 students. Doreen uses systematic sampling and Jung uses cluster sampling. Doreen's sample will be different from Jung's sample. Even if Doreen and Jung used the same sampling method, in all likelihood their samples would be different. Neither would be wrong, however. Think about what contributes to making Doreen’s and Jung’s samples different. If Doreen and Jung took larger samples (i.e. the number of data values is increased), their sample results (the average amount of time a student sleeps) might be closer to the actual population average. But still, their samples would be, in all likelihood, different from each other. This variability in samples cannot be stressed enough. Size of a Sample The size of a sample (often called the number of observations) is important. The examples you have seen in this book so far have been small. Samples of only a few hundred observations, or even smaller, are sufficient for many purposes. In polling, samples that are from 1,200 to 1,500 observations are considered large enough and good enough if the survey is random and is well done. You will learn why when you study confidence intervals. Be aware that many large samples are biased. For example, call-in surveys are invariably biased, because people choose to respond or not. Collaborative Exercise \(\PageIndex{8}\) Divide into groups of two, three, or four. Your instructor will give each group one six-sided die. Try this experiment twice. Roll one fair die (six-sided) 20 times. Record the number of ones, twos, threes, fours, fives, and sixes you get in the following table (“frequency” is the number of times a particular face of the die occurs): First Experiment (20 rolls) Second Experiment (20 rolls) Face on Die Frequency Face on Die Frequency Did the two experiments have the same results? Probably not. If you did the experiment a third time, do you expect the results to be identical to the first or second experiment? Why or why not? Which experiment had the correct results? They both did. The job of the statistician is to see through the variability and draw appropriate conclusions. Critical Evaluation We need to evaluate the statistical studies we read about critically and analyze them before accepting the results of the studies. Common problems to be aware of include • Problems with samples: A sample must be representative of the population. A sample that is not representative of the population is biased. Biased samples that are not representative of the population give results that are inaccurate and not valid. • Self-selected samples: Responses only by people who choose to respond, such as call-in surveys, are often unreliable. • Sample size issues: Samples that are too small may be unreliable. Larger samples are better, if possible. In some situations, having small samples is unavoidable and can still be used to draw conclusions. Examples: crash testing cars or medical testing for rare conditions • Undue influence: collecting data or asking questions in a way that influences the response • Non-response or refusal of subject to participate: The collected responses may no longer be representative of the population. Often, people with strong positive or negative opinions may answer surveys, which can affect the results. • Causality: A relationship between two variables does not mean that one causes the other to occur. They may be related (correlated) because of their relationship through a different variable. • Self-funded or self-interest studies: A study performed by a person or organization in order to support their claim. Is the study impartial? Read the study carefully to evaluate the work. Do not automatically assume that the study is good, but do not automatically assume the study is bad either. Evaluate it on its merits and the work done. • Misleading use of data: improperly displayed graphs, incomplete data, or lack of context • Confounding: When the effects of multiple factors on a response cannot be separated. Confounding makes it difficult or impossible to draw valid conclusions about the effect of each factor. 1. Gallup-Healthways Well-Being Index. http://www.well-beingindex.com/default.asp (accessed May 1, 2013). 2. Gallup-Healthways Well-Being Index. http://www.well-beingindex.com/methodology.asp (accessed May 1, 2013). 3. Gallup-Healthways Well-Being Index. http://www.gallup.com/poll/146822/ga...questions.aspx (accessed May 1, 2013). 4. Data from www.bookofodds.com/Relationsh...-the-President 5. Dominic Lusinchi, “’President’ Landon and the 1936 Literary Digest Poll: Were Automobile and Telephone Owners to Blame?” Social Science History 36, no. 1: 23-54 (2012), ssh.dukejournals.org/ content/36/1/23.abstract (accessed May 1, 2013). 6. “The Literary Digest Poll,” Virtual Laboratories in Probability and Statistics http://www.math.uah.edu/stat/data/LiteraryDigest.html (accessed May 1, 2013). 7. “Gallup Presidential Election Trial-Heat Trends, 1936–2008,” Gallup Politics http://www.gallup.com/poll/110548/ga...9362004.aspx#4 (accessed May 1, 2013). 8. The Data and Story Library, lib.stat.cmu.edu/DASL/Datafiles/USCrime.html (accessed May 1, 2013). 9. LBCC Distance Learning (DL) program data in 2010-2011, http://de.lbcc.edu/reports/2010-11/f...hts.html#focus (accessed May 1, 2013). 10. Data from San Jose Mercury News Data are individual items of information that come from a population or sample. Data may be classified as qualitative, quantitative continuous, or quantitative discrete. Because it is not practical to measure the entire population in a study, researchers use samples to represent the population. A random sample is a representative group from the population chosen by using a method that gives each individual in the population an equal chance of being included in the sample. Random sampling methods include simple random sampling, stratified sampling, cluster sampling, and systematic sampling. Convenience sampling is a nonrandom method of choosing a sample that often produces biased data. Samples that contain different individuals result in different data. This is true even when the samples are well-chosen and representative of the population. When properly selected, larger samples model the population more closely than smaller samples. There are many different potential problems that can affect the reliability of a sample. Statistical data needs to be critically analyzed, not simply accepted. 1. lastbaldeagle. 2013. On Tax Day, House to Call for Firing Federal Workers Who Owe Back Taxes. Opinion poll posted online at: www.youpolls.com/details.aspx?id=12328 (accessed May 1, 2013). 2. Scott Keeter et al., “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey,” Public Opinion Quarterly 70 no. 5 (2006), http://poq.oxfordjournals.org/content /70/5/759.full (accessed May 1, 2013). 3. Frequently Asked Questions, Pew Research Center for the People & the Press, www.people-press.org/methodol...wer-your-polls (accessed May 1, 2013). Cluster Sampling a method for selecting a random sample and dividing the population into groups (clusters); use simple random sampling to select a set of clusters. Every individual in the chosen clusters is included in the sample. Continuous Random Variable a random variable (RV) whose outcomes are measured; the height of trees in the forest is a continuous RV. Convenience Sampling a nonrandom method of selecting a sample; this method selects individuals that are easily accessible and may result in biased data. Discrete Random Variable a random variable (RV) whose outcomes are counted Nonsampling Error an issue that affects the reliability of sampling data other than natural variation; it includes a variety of human errors including poor study design, biased sampling methods, inaccurate information provided by study participants, data entry errors, and poor analysis. Qualitative Data See Data. Quantitative Data See Data. Random Sampling a method of selecting a sample that gives every member of the population an equal chance of being selected. Sampling Bias not all members of the population are equally likely to be selected Sampling Error the natural variation that results from selecting a sample to represent a larger population; this variation decreases as the sample size increases, so selecting larger samples reduces sampling Sampling with Replacement Once a member of the population is selected for inclusion in a sample, that member is returned to the population for the selection of the next individual. Sampling without Replacement A member of the population may be chosen for inclusion in a sample only once. If chosen, the member is not returned to the population before the next selection. Simple Random Sampling a straightforward method for selecting a random sample; give each member of the population a number. Use a random number generator to select a set of labels. These randomly selected labels identify the members of your sample. Stratified Sampling a method for selecting a random sample used to ensure that subgroups of the population are represented adequately; divide the population into groups (strata). Use simple random sampling to identify a proportionate number of individuals from each stratum. Systematic Sampling a method for selecting a random sample; list the members of the population. Use simple random sampling to select a starting point in the population. Let k = (number of individuals in the population)/(number of individuals needed in the sample). Choose every kth individual in the list starting with the one that was randomly selected. If necessary, return to the beginning of the population list to complete your sample.
{"url":"https://fundaciongalindo.com/article/1-3-data-sampling-and-variation-in-data-and-sampling","timestamp":"2024-11-14T14:24:26Z","content_type":"text/html","content_length":"176206","record_id":"<urn:uuid:3a044ff4-9f16-46ff-85de-ffd93a3202e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00373.warc.gz"}
HackerRank 2D Arrays - DS problem solution - Programmingoneonone HackerRank 2D Arrays – DS problem solution In this HackerRank 2D Arrays – DS problem, we need to develop a program that can take a 2-dimensional integer array as input and then calculate the sum of every hourglass that present in that array. what is an hourglass in an array? let’s say we have a 2-dimensional array. just like as shown below. a b c 0 0 0 0 d 0 0 0 0 e f g 0 0 0 so here the one hourglass of an array is that follows the pattern. just like this… a b c e f g so if we have an array of size 6×6 then there is 16 hourglass present in the array. so here in this problem, we need to print the hourglass sum that has a maximum value. and the input array is fixed that is a 6×6 or 2-dimensional array. Problem solution in Python programming. n = 6 m = [] for i in range(n): m.append(list(map(int, input().split()[:n]))) def sum_glass(m, i, j): """Assumes hour-glass is in bounds of m!""" return sum(m[i][j:j+3]) + m[i+1][j+1] + sum(m[i+2][j:j+3]) best = float("-inf") for i in range(4): for j in range(4): s = sum_glass(m, i, j) if s > best: best = s print (best) Here we first define two variable n = 6 and m that has an empty list/array. after that, we take the input using for loop and store the input in the array m using the map function. after that, we find the sum of the hourglass using the sum_glass function. you can see the logic behind it. after that in the best variable we storing the – infinity value and then using for loop for every iteration of the array we are calling the sum_glass function and after that, we are comparing with the best variable to find the maximum value of the hourglass. Problem solution in Java Programming. import java.util.Scanner; public class Intro2dArray { public static void main(String[] args) { Scanner sc = new Scanner(System.in); int multiDimArr[][] = new int[6][6]; for(int row = 0; row < 6; row++){ for(int col = 0; col < 6;col++){ multiDimArr[row][col] = sc.nextInt(); static int Solve(int arr[][]){ int max = Integer.MIN_VALUE; int total = 0; for(int row = 0; row < 4; row++){ for(int col = 0; col < 4; col++ ){ total = arr[row][col] + arr[row][col+1] + arr[row][col+2]; total += arr[row+1][col+1]; total += arr[row+2][col] + arr[row+2][col+1] + arr[row+2][col+2]; max = total>max?total:max; return max; Here in the above code, the logic is that we use a total variable that has an initial value of 0. after that to find the value of an hourglass in the array we are finding the values one row at a time. as you could see in the above code in the last for loop we are finding the value of the first row and then the second and then the third row. and adding it to the total variable and then comparing with the max variable to find the max value of hourglass in the array. remember that in the above code in the first line of last for loop we are not adding it to the total variable instead, we are just initializing the value to the total variable. so we don’t need to follow an extra step to make the value 0 of the total variable every time in the iteration of for loop. Problem solution in C++ programming. #include <cmath> #include <cstdio> #include <vector> #include <iostream> #include <algorithm> using namespace std; int arr[7][7]; int sum(int stx , int sty){ return arr[stx][sty] + arr[stx][sty+1] + arr[stx][sty+2] + arr[stx+1][sty+1] + arr[stx+2][sty] + arr[stx+2][sty+1] + arr[stx+2][sty+2]; int main() { int ans = -100; for(int i = 0 ; i < 6 ; i++){ for(int j = 0 ; j < 6 ; j++){ cin >> arr[i][j]; for(int i = 0 ; i < 4 ; i++){ for(int j = 0 ; j < 4 ; j++){ ans = max(ans , sum(i,j)); cout << ans << endl; return 0; Here in the above code logic is pretty simple we make a sum function and also we are using a fixed size of an array in the program. and in the sum function, we are returning the sum of every hourglass that present in the array and addition all the values in just one line of code. the logic is the same as we did in the python program. Problem solution in C programming. #include <stdio.h> #include <string.h> #include <math.h> #include <stdlib.h> int main() { /* Enter your code here. Read input from STDIN. Print output to STDOUT */ int a[6][6]; int i, j; for (i=0; i<6; i++){ for (j=0; j<6; j++) { scanf("%d", &a[i][j]); int out=-100, sum=0; for (i=0; i<4; i++){ for (j=0; j<4; j++) { sum = 0; sum += a[i][j]; sum += a[i][j+1]; sum += a[i][j+2]; sum += a[i+1][j+1]; sum += a[i+2][j]; sum += a[i+2][j+1]; sum += a[i+2][j+2]; if (sum > out){ out = sum; printf("%d", out); return 0; Here in the above c program, we are using the fixed length of the array. and then using for loop we are scanning the input values and store them in the array. after that, we are using the for loop to find the sum of each hourglass that present in the array. remember here we are following the one additional step in the for a loop. and to optimize it you can also remove the sum = 0 and use the sum = a[i][j]. 5 thoughts on “HackerRank 2D Arrays – DS problem solution” 1. This comment has been removed by the author. 2. why the row in solve method is 4 ? for(int row = 0; row < 4; row++){ 3. why used -100?? 4. because there are 6 rows and column and so that the pairs strats from 3-3 pairs from 0th row and end to 4th number of row. 5. python solution helped in understanding the easiest approach.Thank you… jst one problem it showed on the hackerrank that is for the sum function you have used it showed int is not itterable error
{"url":"https://programmingoneonone.com/hackerrank-2d-arrays-ds-problem-solution.html","timestamp":"2024-11-07T00:37:34Z","content_type":"text/html","content_length":"89838","record_id":"<urn:uuid:35d068ba-cc2c-4421-abad-95a524adbc5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00013.warc.gz"}
SummedOp | IBM Quantum Documentation class qiskit.opflow.list_ops.SummedOp(oplist, coeff=1.0, abelian=False) Bases: ListOp Deprecated: A class for lazily representing sums of Operators. Often Operators cannot be efficiently added to one another, but may be manipulated further so that they can be later. This class holds logic to indicate that the Operators in oplist are meant to be added together, and therefore if they reach a point in which they can be, such as after evaluation or conversion to matrices, they can be reduced by addition. Deprecated since version 0.24.0 The class qiskit.opflow.list_ops.summed_op.SummedOp is deprecated as of qiskit-terra 0.24.0. It will be removed no earlier than 3 months after the release date. For code migration guidelines, visit • oplist (List[OperatorBase]) – The Operators being summed. • coeff (complex |ParameterExpression) – A coefficient multiplying the operator • abelian (bool) – Indicates whether the Operators in oplist are known to mutually commute. Whether the Operators in oplist are known to commute with one another. A bool indicating whether the oplist is Abelian. The scalar coefficient multiplying the Operator. The coefficient. Return a list of the coefficients of the operators listed. Raises exception for nested Listops. The function defining how to combine oplist (or Numbers, or NumPy arrays) to produce the Operator’s underlying function. For example, SummedOp’s combination function is to add all of the Operators in The combination function. The gradient of combo_fn. Return the unique instance id. The list of OperatorBases defining the underlying function of this Operator. The Operators defining the ListOp Return Operator addition of self and other, overloaded by +. This appends other to self.oplist without checking other is already included or not. If you want to simplify them, please use simplify(). other (OperatorBase) – An OperatorBase with the same number of qubits as self, and in the same ‘Operator’, ‘State function’, or ‘Measurement’ category as self (i.e. the same type of underlying A SummedOp equivalent to the sum of self and other. Return type Return Operator by simplifying duplicate operators. E.g., SummedOp([2 * X ^ Y, X ^ Y]).collapse_summands() -> SummedOp([3 * X ^ Y]). A simplified SummedOp equivalent to self. Return type Check if other is equal to self. This is not a mathematical check for equality. If self and other implement the same operation but differ in the representation (e.g. different type of summands) equals will evaluate to False. other (OperatorBase) – The other operator to check for equality. True, if other and self are equal, otherwise False. Return type >>> from qiskit.opflow import X, Z >>> 2 * X == X + X >>> X + Z == Z + X Try collapsing list or trees of sums. Tries to sum up duplicate operators and reduces the operators in the sum. A collapsed version of self, if possible. Return type Returns the quantum circuit, representing the SummedOp. In the first step, the SummedOp is converted to MatrixOp. This is straightforward for most operators, but it is not supported for operators containing parameterized PrimitiveOps (in that case, OpflowError is raised). In the next step, the MatrixOp representation of SummedOp is converted to circuit. In most cases, if the summands themselves are unitary operators, the SummedOp itself is non-unitary and can not be converted to circuit. In that case, ExtensionError is raised in the underlying modules. The circuit representation of the summed operator. • OpflowError – if SummedOp can not be converted to MatrixOp (e.g. SummedOp is composed of • parameterized PrimitiveOps**)****.** – Return type Returns an equivalent Operator composed of only NumPy-based primitives, such as MatrixOp and VectorStateFn. Return type Returns an equivalent Operator composed of only Pauli-based primitives, such as PauliOp. Return type
{"url":"https://docs.quantum.ibm.com/api/qiskit/0.45/qiskit.opflow.list_ops.SummedOp","timestamp":"2024-11-05T00:51:23Z","content_type":"text/html","content_length":"215539","record_id":"<urn:uuid:fe60be2e-a74e-4d4e-9a64-3f09b8ef5cf3>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00072.warc.gz"}
Galilean quantum gravity with cosmological constant and the extended q-Heisenberg algebra Abstract / Description of output We define a theory of Galilean gravity in 2+1 dimensions with cosmological constant as a Chern-Simons gauge theory of the doubly-extended Newton-Hooke group, extending our previous study of classical and quantum gravity in 2+1 dimensions in the Galilean limit. We exhibit an r-matrix which is compatible with our Chern-Simons action ( in a sense to be defined) and show that the associated bi-algebra structure of the Newton-Hooke Lie algebra is that of the classical double of the extended Heisenberg algebra. We deduce that, in the quantisation of the theory according to the combinatorial quantisation programme, much of the quantum theory is determined by the quantum double of the extended q-deformed Heisenberg algebra. Dive into the research topics of 'Galilean quantum gravity with cosmological constant and the extended q-Heisenberg algebra'. Together they form a unique fingerprint.
{"url":"https://www.research.ed.ac.uk/en/publications/galilean-quantum-gravity-with-cosmological-constant-and-the-exten","timestamp":"2024-11-01T20:59:37Z","content_type":"text/html","content_length":"46508","record_id":"<urn:uuid:9e6d4a43-0b9b-4b6e-90d3-deb2dfa10419>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00664.warc.gz"}
X-Plane, UDP, and Visual Basic, for X-Plane Version 8 | jefflewis.net X-Plane, UDP, and Visual Basic, for X-Plane Version 8 Outdated Tutorial This tutorial is for an older version of X-Plane. Newer versions of X-Plane have different network communication requirements. If you're not using X-Plane 8, please go to the index to check for a version of this tutorial for newer versions of X-Plane. X-Plane UDP Index If you don't know what any part of this page's title means, you might not be in the right place. But don't worry, I'll explain it all. X-Plane is a popular computer flight simulator. It has a very accurate flight model, making it very powerful not only for entertainment, but also as an engineering tool. It comes packaged with additional programs that allow you to design your own aircraft and fly it in X-Plane. Besides its accurate flight model, X-Plane has another feature which makes it very powerful- it outputs flight data over a network, and allows certain parameters, such as control positions, to be sent back to it (and if you're using version 8.6 or newer, you only need one computer to do this). The protocol that X-Plane uses to send/receive the data is UDP, hence the UDP in the title above. As for the Visual Basic, well, to be able to make use of the data you have to have a program to do something with it, and Visual Basic is the programming language I use. If you still don't understand any of that, go check out the X-Plane website. From there, you can download a demo version of the simulator, and see what makes it so great. This tutorial was written using X-Plane 8.50. At some point since I wrote the previous version of this tutorial for X-Plane 6.51, the format of the X-Plane UDP packets has been updated. I believe that this tutorial should work with all versions of 8, but I don't know about 7. If you need the previous tutorial, please refer to the X-Plane UDP Index to check for that version. There's already a pretty good site for info on UDP and X-Plane, although it's a little outdated. It is http://www.x-plane.info. Also, decent documentation for UDP now comes included with X-Plane (it didn't when I wrote the first tutorial). I'm not going to try and repeat everything from those two sources. The reason I'm writing this page, well, actually two reasons, are because I didn't see anything on those sources that dealt specifically with Visual Basic (it was mostly C), and because they assumed you already had some network programming experience. Well, when I first started trying to play around with this stuff, I'd never programmed anything that dealt with a network, so I had to figure it all out by scouring web sites looking for the relevant information. Hopefully, by me putting all this info in one spot, someone else in the same situation that I was will read this and save some time. By the way, I'm assuming that you at least know how to program in Visual Basic. The easiest way to find a listing of all the UDP data channels is just to look at the Data Input & Output screen in X-Plane. To figure out what gets sent in each of those channels, just temporarily select them for output to the Cockpit During Flight. One more thing before we get started. You can download some source code to see all of this in context in a program while you're reading. You can also use this source code as your foundation for writing your own applications. Download source code To start off, you'll need to add a WinSock component to your program. WinSock stands for Windows Sockets. It is the interface between programs and the network. We'll use the suppport in Visual Basic for our program. First, go to the Project menu and select Components. In that window, scroll down until you get to Microsoft Winsock Control 6.0, and click the check box. Then click OK. A little icon will appear on your tool bar that looks like two computers connected together. Click it, and then add one anywhere on your form. There are four parameters that we have to set on the Winsock control- RemoteHost, RemotePort, Bind (the local port), and most importantly- the protocol. First, set the protocol to 1 (UDP). The other protocol (0) is TCP, which we won't use. Set remote port to 49000. X-Plane always uses 49000, so you don't have a choice on that. Now, for binding the local port, there are some options. In X-Plane, when setting the IP address and port of the machine you want to communicate with, X-Plane again defaults to port 49000. However, there's nothing saying that it has to be 49000. If you're going to run your VB program on a separate machine from X-Plane, you might as well set your VB program to bind 49000, to give you one less option to set in X-Plane. However, from version 8.60 on, X-Plane allows you to use UDP to communicate with the same computer X-Plane is running on. In other words, you can run X-Plane and this VB program on the same computer. In that case, since X-Plane is already using port 49000, the VB program will need to use a different port. I use 49001, but you could really set it to whatever you wanted. Finally, set the RemoteHost to whatever the IP address is of the machine that will be running X-Plane. For our purposes, there are one event and two methods that we need to be concerned with. The event is _DataArrival. This occurs whenever a UDP packet is sent from the IP address at the port specified. Since I named my winsock control WinsockUDP, the subroutine for this event is: Sub WinsockUDP_DataArrival(ByVal bytesTotal As Long) End Sub Now, whenever a packet of data is sent, this subroutine will be run. The methods that we need to be concerned with are .GetData and .SendData. Their purposes are pretty much self explanatory. They're very easy to use. When you receive data, you need to store it to a variable, so use code that looks about like: WinsockUDP.GetData VariableName If you want to send a variable, use code that looks about like: WinsockUDP.SendData VariableName Just a small note now before I get into the explanation of UDP packets- I declare the variable that I'm going to use to store the data as a byte array. I also do this for the variable that I use to store the data that I'm going to send. I do this because UDP packets can only send bytes. But I'm starting to get ahead of myself. Lets start looking at UDP packets. This is the section of the tutorial that I think will be the most help, because it's the section that I had the most trouble with, myself. UDP stands for Universal Data Packet. It is a method of sending information over a network. We don't really need to be concerned with all the details of how it works, but there are a few things that we need to know. First, unlike other protocols, like TCP, UDP does not do any error checking. If we send a packet, and the other computer gets the wrong data, or doesn't even get any data at all, we have no way of knowing, unless we program our own error checking into the program. But since we can't really change the code of X-Plane, we're kind of stuck on that. This really shouldn't be much of a problem, especially if you're going to be transmitting over a local network, but it could be the cause of a hard to track down problem. The next important thing to know is that UDP packets are composed entirely of bytes, as are all packets sent over networks, and even everything stored on your hard drive. It has to be this way- computers only work with ones and zeros. To get decimals, you have to do a bit of math on the bytes that you've stored. To get letters and other symbols, you have to know the ASCII code for that symbol, to translate the byte into the symbol. X-Plane uses what are known as single precision floating point variables. This means that the number can be stored using four bytes. A double precision floating point variable would require eight bytes. So let's take a look at how to convert those four bytes into a number. Single Precision Floating Point Numbers and Bytes Let me say a couple things before I get into the details of floating point numbers on computers, which will hopefully make it seem a little simpler. First, a floating point number on a computer is basically just the binary equivalent of scientific notation in decimal numbers. This lets the computer maintain significant digits, while at the same time being able to represent really big and really small numbers. So, it's calculated as: Value = Significand * 2^Exponent Second, when I first figured out how to do this, and wrote my algorithms to do these conversions, I didn't know enough about certain aspects of programming to go about it in the most efficient manner. In fact, I still don't really know enough personally, but two people who do have sent me in suggestions for more efficient algorithms. I can't find one of those e-mails for now, but the other I will include at the very end of this page. I'm leaving my code in, because even if you never us it, it still helps explain the theory of how floating point numbers work. Really, I guess you don't need to know the theory, but it's still nice. The reason for putting the more efficient algorithm at the end of the page is, I've included the entire algorithm, which is rather long. I thought putting it at the end would leave the rest of this tutorial more readable. Now, on to the details. A single precision floating point number is stored as 4 bytes. Let's just use this as an example: At this point, something important to bring up is whether the bytes are being stored in big endian or little endian format. Basically, that's the convention used for the order of listing bytes. As an example from everyday life, when we right 524, we assume it to mean (5 x 10^2) + (2 x 10^1) + (4 x 10^0). That's known as big endian, because the largest values are listed first. It's the convention that we use, but if the group of people that invented our number system had done things a little differently, that same value could just as easily have been written as 425, to mean (4 x 10^0) + (2 x 10^1) + (5 x 10^2), which would be little endian. For our case of floating point values, you need to know the proper order to analyze those bytes. Basically, Macs use big endian, where you can just use the bytes the way they are. PCs use little endian, and you need to look at the bytes in reverse order. However, in X-Plane 8, X-Plane's developer decided to standardize how UDP packets were sent. So, whether you're running X-Plane on a PC or Mac, the bytes are sent in big endian order. Earlier versions of X-Plane were not standardized, so the endianness of the bytes depended on the OS that X-Plane was being run on. Okay, so we have our bytes, and we're running X-Plane 8, so we know they're in big endian order. So, now we need to convert the variables to binary. Always use an eight digit binary number. Use leading zeros if you have to. If you're unsure how to do this, here's the code I used: RunningTotal = byte1 For ctr = 1 To 8 If RunningTotal >= 2 ^ (8 - ctr) Then NumberString1 = NumberString1 + "1" RunningTotal = RunningTotal - 2 ^ (8 - ctr) NumberString1 = NumberString1 + "0" End If Next ctr So, after converting each of the four bytes to binary, we get: Now from this list of digits, we need to get three numbers. So first, combine all the digits into one long list. Now we need to redo where the breaks are in the same way that the computer will. Do it like so- 1 digit, 8 digits, 23 digits. The first bit is the sign bit, the next 8 bits are the biased exponent, and the remaining 23 bits are the mantissa. The first digit is the sign digit. It tells us whether the number is positive or negative (0 for positive, 1 for negative). Since it is a zero here, our number is positive. The next 8 bits are the biased exponent, biased because it's 127 more than the actual exponent. This is done because there's no way to represent a negative number with just ones and zeroes. So, convert the binary to decimal by multiplying by powers of two. Start with the right-most bit, and work your way to the left. Biased Exponent = (right-most bit * 2^0) + (next bit * 2^1) + ... + (left-most bit * 2^7) Once you've converted the binary into decimal, simply subtract 127 to get the actual exponent. Biased Exponent = 133 Exponent = Biased Exponent - 127 = 6 Now there're those last 23 digits. These are called the mantissa, or the fractional part of the significand. Basically, it's still a number in binary, only they're the digits that come after a decimal point (binary point?). Consider that in our normal decimal base that you had 3.1415926. The mantissa would be the 1415926. So, the conversion of the binary mantissa back to decimal is the same, only now you're using negative powers of two. Whenever this standard was created, it was decided that since the significand would always be greater than 1, there was no need to waste a bit by encoding that, so that's why only the mantissa is encoded. So, when you calculate the value of the significand, you assume that one's already there. So to convert the above series of digits into the number that we're going to use, do the following: Significand = Implied 1 + Mantissa Significand = Implied 1 + (first bit * 2^-1) + (2nd bit * 2^-2) + (3rd bit * 2^-3) + ... + (23rd bit * 2^-23) Here it is for our specific example, where the numbers in bold are the digits in the binary sequence: Significand = 1 + (1 * 2^-1) + (1 * 2^-2) + (1 * 2^-3) + (0 * 2^-4) + ... + (0 * 2^-23) Significand = 1.923828125 Now that we have the sign, the exponent, and the significand, we're ready to calculate the value. Remember that it is of the following form, and that we have to make it positive or negative depending on the sign: Value = Significand * 2^Exponent Value = 1.923828125 * 2^6 Value = 123.125 There, we've just calculated a value from four bytes. As a final note on endianness, if you were running an older version of X-Plane on a PC, remember that the bytes would be in little endian order (don't ask me why - I actually had to figure that out on my own by dumb luck). So, our number of 123.125 would be represented on a Mac, or on any computer running a newer version of X-Plane, as: while on a PC running an older version of X-Plane it would be represented as: Going from Single Precision Floating Point Numbers to Bytes To calculate the four bytes that represent a given single precision floating point value, you basically just have to go through the above calculations in reverse. However, it's a little more involved. Let's go through an example again to explain it. To make things more interesting, let's use 0.1 as the number we're going to convert. The first step is to find the exponent. This is done by finding the log base 2 of the absolute value of the number, and then rounding down to the nearest integer (9.9 rounds down to 9, -3.2 rounds down to -4). Remember that Log in visual basic is the natural log (base e), so think back to your high school algebra days to remember how to find the log of a number in any base. To save a little computational time, I defined Log2 as a constant equal to 0.69314718056 (the natural log of 2). Here's the code I use to determine the exponent: Exponent = Int(Log(Abs(float)) / Log2) BiasedExponent = Exponent + 127 For our example with the float equal to 0.1, the exponent is -4, so the biased exponent is 123, or 01111011 in binary. Next, you need to find the Mantissa. This is pretty similar to the way that you convert an integer from decimal to binary. You just go through checking each digit to see if it makes the value less than or greater than the value you're trying to approximate. Here's the code that I use. Remember that the 1 before the decimal point is implied for the significand. I should also add that what I refer to as Mantissa and MantissaTemp below are actually the significand, and only MantissaString refers to the actual mantissa (I would modify that here, but I'm leaving it as is to keep it consistent with the included sample code - once I get around to modifying the sample code, I will correct this). Mantissa = 1 MantissaString = "" absfloat = Abs(float) For ctr = 1 To 23 MantissaTemp = Mantissa + 2 ^ -ctr If MantissaTemp * 2 ^ Exponent <= absfloat Then MantissaString = MantissaString + "1" Mantissa = MantissaTemp MantissaString = MantissaString + "0" End If Next ctr In our example with 0.1, the significand ends up being 1.60000002384186. More interestingly, in binary it's represented as 1.10011001100110011001101. Taking the last 23 digits behind the decimal point (the 23 that the code above determines), our mantissa is 10011001100110011001101. Finally, our number is positive, so the leading bit will be 0. So we can represent this number in a single precision floating point as Breaking it up into 4 bytes and finally, determining their decimal values As an aside, and for a bit of theory - if you notice, the mantissa is just a repeating pattern of 1100, where the last digit got rounded up. The number 0.1 cannot be represented discretely in binary, and thus cannot be exactly represented with a single precision floating point number - similar to the way 1/3 can't be represented exactly in decimal without an infinite number of digits. In fact, many numbers that we're used to representing in decimal can't be exactly represented by a single precision floating point number, but for our purposes with X-Plane, they should be close enough that you don't need to worry about it. It's only special applications where this discrepancy becomes important. Double Precision Floating Point Numbers Double precision floating point numbers work the same way as single precision, only they use 8 bytes instead of 4. The breakdown of the 64 bits then is 1 digit, 11 digits, 52 digits. The first bit is the sign bit, the next 11 are the biased exponent, which is biased by 1023, not 127 like a single precision, and the last 52 are the mantissa. Everything is calculated the same way as described above for single precision numbers, only you'll need to update your code appropriately for the new lengths and bias. The only application I know of for double precision with UDP and X-Plane is when sending a VEH1 packet - lattitude, longitude and altitude must be represented as double precision floating point numbers. Integers and Bytes Calculating an integer from four bytes is pretty similar to calculating a floating point variable. You take all four bytes, convert them into four 8-digit binary numbers, and combine them into one long 32 digit binary number. (For general programming, remember to use the proper Mac/PC convention for the order that the bytes are in - although in X-Plane 8 they're always in Mac order). Once you have the 32 digit number, the first digit controls the sign of the number. If it is a zero, the number is positive. If it is a one, the number is negative. Calculating a positive integer is easy. Just convert the 31 digit binary number into a decimal number. However, a negative number is a bit different, because early computer engineers wanted to solve the problem of finding an easy way to do subtraction. What they came up with is called "2's complement." It's really pretty simple- just invert all the bits and add 1. So as an example, 3 would be represented in binary as: Note that this is just 11 (bin), with a whole bunch of leading zeros, and a zero in the sign bit. Negative 3 would be: 1 1111111 11111111 11111111 11111100 + 1 An X-Plane specific note: if the only integers you're going to deal with are the index numbers, then you only need to look at the last byte. And in this case, there's no need to go through the steps of converting to binary and back to decimal, since you know the first three bytes are going to be zero, and the other byte's going to be the number. If you're interested as to the reason why computer engineers decided to use 2's complement, here's the explanation. It's to make subtraction easier. I'll show this with an example. Say you wanted to perform the function 5-3. Well, this would be the same as 5 + -3. So, we can use 2's complement of 3, and then do normal addition. (Remember that in binary, 1 + 1 = 10.) +1 1111111 11111111 11111111 11111101 Since the computer can only handle eight digit bytes, that leading 1 goes into overflow and gets dropped, so we're left with: This converts to 2 in decimal, so we can see that the math did come out correctly. You don't really need to know this theory for X-Plane, but it is nice to know why it's done the way it is. Letters and Symbols and Bytes This is a lot simpler than the conversion between a floating point number and the four bytes. Letters and symbols each correspond to an integer number. The number is between 0 and 255, so it can be represented as a single byte. Here is a list of all the symbols and their corresponding code, in MS Word format. So, to convert the letter "A" to a byte, we just look up what its code is, and find that it is 65. Note that a lower case "a" is 97, which is different from an uppercase "A". If you look at the above list, you'll also notice that each of the digits 0 through 9 has a code which is different from the digit itself. For example, a "9" is 57. This is because in this format, the digits are being represented as strings, not as numbers. Handling UDP Packets in Visual Basic Since all the information sent via UDP is in bytes, it makes sense to use a byte variable to handle it. And since there are a lot of bytes being sent, we should use an array. Here is the way I declared the array to handle incoming data: Dim PacketData() As Byte Now, when we read the packet data in, using the code mentioned earlier in this tutorial: WinsockUDP.GetData PacketData each of the bytes is stored into an element in the array PacketData. If we know something about the format of the data being sent, we can decode it into the variables that we need. Similarly, when we're making up a data packet to send out, it's useful to define it as a byte array. Then, we convert all of our values into the proper bytes, and send out the packet: WinsockUDP.SendData PacketData Breaking Down an X-Plane UDP Packet Now, let's take a look at what a UDP packet being sent from X-Plane looks like. This is another place where I got a little lost looking at the information on x-plane.info. But once I figured out everything was in bytes, it made a lot more sense. A UDP packet contains a header with some network information, but Visual Basic does not import that into the program when we use the .GetData command. So, we only see the body part of the packet. When I talk about packets in Visual Basic, that is what I'm referring to. A typical DATA packet being sent out from X-Plane may look something So what does all this mean? The first five bytes are what X-Plane uses for its header. Each of these bytes are actually ASCII codes, so we convert each of them into a symbol. The first 4 bytes of the header tell us what type of packet it is. In this example, they're 68,65,84,65, which correspond to D,A,T,A, respectively, so we know this is a DATA packet. The fifth byte in the header, 38 in this example, is an index used internally by X-Plane that we don't really need to worry about (I'm not exactly sure what it does, to tell the truth). When creating a data packet to send back to X-Plane, I just set this value to 48, the ASCII code for "0." Now comes a group of 36 bytes. This is the data segment. The first 4 bytes are the index, and the next 32 bytes are the data for that index. Like I said before, there is a very good explanation of what each index is, and what data are sent on that index, at http://www.x-plane.info, or simply by looking at the Data Input & Output screen in X-Plane. The first 4 bytes are the index, as an integer. Remember that in newer versions of X-Plane, bytes are in Mac order. So, just look at the fourth of the 4 bytes, and whatever that byte is is the index number. In our example above, the byte is 34, which means index 34, which is engine rpm. Now there are 32 bytes left in this data segment. This is 8 groups of 4, or 8 single precision floating point numbers. You convert them in the manner as described above. The first number in our example is the four bytes 68, 151, 111, 166, or 1211.489. The remaining 7 data points in this example are all zero. A DATA packet can end there, as in the above example, or it could be followed by any number of 32 byte data segments, which you treat the same was as described above. Unlike previous verions of X-Plane, there is no special symbol to designate the end of a packet. If you want to be able to handle an arbitrary number of data segments, you'll just have to count how long the UDP packet is, and calculate the number of data segments from that (NumberChannels = (bytesTotal - 5) / 36). There are a few more things that I discovered while writing my program, that you'll probably find useful. Remember that there is no error checking for UDP. That means that packets can get lost. Through experimentation sending DSEL and USEL packets (the packets used to request X-Plane to start or stop sending a specific data channel), I found that when sending four packets total, one right after the other in the code, on average only two of the packets made it through each time. However, sending the packets with the timer, with the timer set to an interval of 10 ms seemed to work just fine. My recommendation is that if you need to send a lot of DSEL or USEL information at once, or if you need to send several DATA channels at once, you combine them into one packet before being sent. This way, there is much less of a chance that the packets will be lost. Another note: X-Plane uses the value -999 to represent no data. So, if you want to update only one value in a data channel, specify the other values as -999, and X-Plane will leave them alone. And finally, if you try to update the joystick or a few other related channels, X-Plane will think you want complete control of the joystick, and will stop looking at data from the actual joystick. If you want your program to control the airplane while still letting you use the joystick, have it control the trim settings. If you do send a packet that overrides the joystick, and want X-Plane to start looking at the joystick again, send a data packet with -999 in the appropriate channels. Well, I think that should be a pretty good starting point. I know it took me a while to figure all of the above out. But using this page, the source code of the program I wrote, and the information at www.x-plane.info, you should be able to figure out how to write your own programs to interact with X-Plane via UDP. If you have any questions, e-mail me. Good luck. A More Efficient Method of Converting Between Byte Arrays and Floating Point Values As promised, here is the more efficient algorithm concerning bytes and floating point values. This was sent in to me by someone going by Passel. I've simply copied and pasted his e-mail. I haven't had the time to try this, yet, but I hope it works. I thought I should write to let you know that your byte array to Floats (and vice versa) routines can be much more efficient. The bytes in the array are already in floating point format so we don't have to decode all the bits of the floating point format in order to combine the bytes or split the float into bytes. We just need to copy the bytes into the destination memory in the proper order. In the case of the later X-plane versions always exporting in Big-Endian order, and since VB is strictly Intel based and thus Little-Endian, we really only need to always swap the bytes. It is Ironic that Big-Endian was settled on, and the Mac's are now going to Intel processors. Perhaps in the future, X-plane will reverse the interface again to favor the newer platforms. In any case, I've rewritten your routines to give you an example of just swapping the bytes into the proper order and copying them to the destination, which is tremendously faster than all the string manipulations. [This is where the algorithm was originally included in the e-mail. I've copied it below, to get it out of the blockquote section.] Option Explicit Public Declare Sub CopyMemory Lib "kernel32" Alias "RtlMoveMemory" ( _ Destination As Any, _ Source As Any, _ ByVal Length As Long) Sub ConvertBytesToSingle( _ byte1 As Byte, _ byte2 As Byte, _ byte3 As Byte, _ byte4 As Byte, _ float As Single _ Dim b(1 To 4) As Byte If FormMain.CheckReverseBytes.Value = Checked Then b(1) = byte1 b(2) = byte2 b(3) = byte3 b(4) = byte4 b(1) = byte4 b(2) = byte3 b(3) = byte2 b(4) = byte1 End If CopyMemory float, b(1), 4 End Sub Sub ConvertSingleToBytes( _ float As Single, _ byte1 As Byte, _ byte2 As Byte, _ byte3 As Byte, _ byte4 As Byte _ ) 'This sub converts a value to four bytes for storage as a single precision floating point value Dim b(1 To 4) As Byte CopyMemory b(1), float, 4 If FormMain.CheckReverseBytes.Value = Checked Then byte1 = b(1) byte2 = b(2) byte3 = b(3) byte4 = b(4) byte1 = b(4) byte2 = b(3) byte3 = b(2) byte4 = b(1) End If End Sub Sub ConvertBytesToDouble( _ byte1 As Byte, _ byte2 As Byte, _ byte3 As Byte, _ byte4 As Byte, _ byte5 As Byte, _ byte6 As Byte, _ byte7 As Byte, _ byte8 As Byte, _ float As Double _ ) 'This sub converts eight bytes to a Double precision floating point value Dim b(1 To 8) As Byte If FormMain.CheckReverseBytes.Value = Checked Then b(1) = byte1 b(2) = byte2 b(3) = byte3 b(4) = byte4 b(5) = byte5 b(6) = byte6 b(7) = byte7 b(8) = byte8 b(1) = byte8 b(2) = byte7 b(3) = byte6 b(4) = byte5 b(5) = byte4 b(6) = byte3 b(7) = byte2 b(8) = byte1 End If CopyMemory float, b(1), 8 End Sub Sub ConvertDoubleToBytes( _ float As Double, _ byte1 As Byte, _ byte2 As Byte, _ byte3 As Byte, _ byte4 As Byte, _ byte5 As Byte, _ byte6 As Byte, _ byte7 As Byte, _ byte8 As Byte _ ) 'This sub converts a number to eight bytes for storage as a double precision floating point value Dim b(1 To 8) As Byte CopyMemory b(1), float, 8 If FormMain.CheckReverseBytes.Value = Checked Then byte1 = b(1) byte2 = b(2) byte3 = b(3) byte4 = b(4) byte5 = b(5) byte6 = b(6) byte7 = b(7) byte8 = b(8) byte1 = b(8) byte2 = b(7) byte3 = b(6) byte4 = b(5) byte5 = b(4) byte6 = b(3) byte7 = b(2) byte8 = b(1) End If End Sub
{"url":"http://jefflewis.net/XPlaneUDP_8.html","timestamp":"2024-11-08T02:43:11Z","content_type":"text/html","content_length":"44373","record_id":"<urn:uuid:2150bb37-b927-45fd-af3e-607ff6d4b2b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00347.warc.gz"}
Need help with RYG status formula =IF([% Complete]@ROW<.7, "RED"), IF ([% Complete]@ROW< 1>.7, "YELLOW"), IF([% Complete]@ROW)=1, "GREEN"),""))) If below/above certain percentages, attempting to change the ball to red/yellow green Best Answer • Hi Dawn, The first thing to note is that the @row function needs to be lower-case or you'll receive an error. Secondly, you'll want to make sure that the end of your IF statements are left open, without a closing parenthesis ) until the very end of the entire statement. Then since Logic formulas read left-to-right and stop as soon as the criteria is met, you want to make sure your statements are in the right order. I would start with the Green because it only has one possible value... then if it's not 100, the formula will move on to the next statement. Try this: =IF([% Complete]@row =1, "Green", IF([% Complete]@row < 0.7, "Red", IF([% Complete]@row >= 0.7, "Yellow", ""))) Keep in mind that this will return a Red status ball if the cell is blank, since it reads a blank cell as "less than 0.7". You could eliminate that by adding another criteria at the very =IF([% Complete]@row = "", "", IF([% Complete]@row =1, "Green", IF([% Complete]@row < 0.7, "Red", IF([% Complete]@row >= 0.7, "Yellow", "")))) Let me know if you have any questions! Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • Hi Dawn, The first thing to note is that the @row function needs to be lower-case or you'll receive an error. Secondly, you'll want to make sure that the end of your IF statements are left open, without a closing parenthesis ) until the very end of the entire statement. Then since Logic formulas read left-to-right and stop as soon as the criteria is met, you want to make sure your statements are in the right order. I would start with the Green because it only has one possible value... then if it's not 100, the formula will move on to the next statement. Try this: =IF([% Complete]@row =1, "Green", IF([% Complete]@row < 0.7, "Red", IF([% Complete]@row >= 0.7, "Yellow", ""))) Keep in mind that this will return a Red status ball if the cell is blank, since it reads a blank cell as "less than 0.7". You could eliminate that by adding another criteria at the very =IF([% Complete]@row = "", "", IF([% Complete]@row =1, "Green", IF([% Complete]@row < 0.7, "Red", IF([% Complete]@row >= 0.7, "Yellow", "")))) Let me know if you have any questions! Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions • No problem! Glad it worked for you 🙂 Need more help? 👀 | Help and Learning Center こんにちは (Konnichiwa), Hallo, Hola, Bonjour, Olá, Ciao! 👋 | Global Discussions Help Article Resources
{"url":"https://community.smartsheet.com/discussion/69484/need-help-with-ryg-status-formula","timestamp":"2024-11-03T18:24:26Z","content_type":"text/html","content_length":"439339","record_id":"<urn:uuid:3175fc18-19ff-4ca3-bf9a-0fc84dd54875>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00526.warc.gz"}
How can I estimate a multiple group latent class model (knownclass)? | Mplus FAQ This page was created using Mplus version 5.2, the output and/or syntax may be different for other versions of Mplus. Frequently, we wish to compare the structure of measurement models across groups (e.g. men and women). When the latent variable is categorical the model is often referred to as a latent class analysis (LCA), more generally, these models are sometimes referred to as mixture models. Below we show how to estimate an LCA with either continuous or categorical class indicators (it is also possible to estimate a model with both categorical and continuous class indicators). We will start with a latent class model with continuous indicators, because these models have a slightly simpler syntax. In Mplus, the knownclass option is used to estimate a latent class model with multiple groups. This option takes its name from the fact that the grouping variable (e.g. gender) is known (i.e. In the examples below, group is the known or observed class, while c is the latent variable estimated using the observed items. There are three continuous observed items, named a1, a2, and a3. You can download the example dataset here: mult_grp_lca_con . A single group latent class model As a starting place, below we show the syntax for a single group latent class model. In this model, the continuous variables a1, a2, and a3, are used to form a latent variable c with two classes. The file option of the data: command gives the name of the file in which the dataset is stored. In the variable: command the names option gives the names of the variables in the dataset. The usevariables option gives the names of the variables used to estimate the model. The classes option defines the names of the categorical latent variable c, followed by the number of classes in parentheses, that is (2) for a two class latent variable. In the analysis: command, the type = mixture command indicates that we wish to estimate a mixture model. file = mult_grp_lca_con.dat ; names = group a1 a2 a3; usevariables = a1 a2 a3; classes = c(2); type = mixture; Model allowing differences in item means across groups, fixing class probabilities and item variances across groups and classes In this model, we add the observed grouping variable, group to our model in order to estimate a multiple group mixture model. In this model, the classes option of the variable: command lists two classes (c and g), each with the number of groups listed in parentheses after the class name. The knownclass option specifies that the classes in the variable g are defined by the observed variable group, the observed values of group associated with each class (e.g. group = 0) are listed in parentheses after the class name (i.e. g). file = mult_grp_lca_con.dat; names = group a1 a2 a3; usevariables = a1 a2 a3; classes = c(2) g(2); knownclass = g (group=0 group=1); type = mixture; Model allowing differences in item means and class probabilities across groups, with item variances fixed across groups and classes In this model we use g (i.e. the grouping variable) to predict the probability of class membership in c, meaning that the probability of being in a given class is allowed to vary by the observed variable group. First, we have changed the classes option so that the known class (i.e. g) is listed first, this is necessary if we want to regress c on g to allow the class probabilities to vary by level of group. We have also added the model: command, in the overall section of the model (under %overall%), we have added c on g which adds a regression in which the known class variable (g) predicts the probability that a given case will be in one of the classes of the latent variable c, this allows the class probabilities for c to vary by g. file = mult_grp_lca_con.dat ; names = group a1 a2 a3; usevariables = a1 a2 a3; classes = g(2) c(2); knownclass = g (group=0 group=1); type = mixture; c on g; Another model allowing differences in item means and class probabilities across groups, fixing item variances across groups and classes The model estimated in this example is identical to the previous model, using different syntax. In the model below we have explicitly listed the item means in the input file, so that we can fix or free individual parameters across groups, which allows us to test for differences between item means in the two groups. We can confirm that the two models are the same by comparing their log likelihoods, if the two are the same, we have indeed run the same model. One thing to note is that when we change to this syntax, the order of the classes may change. This change is substantively and mathematically unimportant, we are, after all, still running the same model, but it can be hard to keep track of which classes correspond across the groups if they are not in the same order. To avoid confusion, we can use the coefficient estimates from the above model as starting values for the current model. We could use all of the item means, but it turns out this is unnecessary, a few starting values is usually sufficient to put the classes in the desired order. Below is the Mplus output from the model immediately above this one (i.e. the model that is identical, but has different syntax). Here we show the item means for all three variables, for g = 0 and c = 1 (Latent Class Pattern 1 1), followed by the item means for g=1 and c=1 (Latent Class Pattern 2 1). Estimate S.E. Est./S.E. P-Value Latent Class Pattern 1 1 A1 0.247 0.058 4.227 0.000 A2 2.142 0.091 23.617 0.000 A3 -0.960 0.067 -14.223 0.000 <output omitted> Latent Class Pattern 2 1 A1 0.978 0.064 15.260 0.000 A2 0.173 0.042 4.120 0.000 A3 -1.541 0.065 -23.570 0.000 The item means for class 1 (of the latent variable c) in each group (g) are shown above. In the syntax below, we use these parameter estimates as starting values. Most of the syntax shown below is the same as that from the previous models, the new syntax all appears at the end of the input. Below the overall portion of the model (%overall%) we see specific commands for each combination of g and c, in this case, both g and c have two categories, so there are four sections of the model. The first section is for g=1 (group=0) and c=1 is indicated by %g#1.c#1%, the observed (knownclass) variable must come first, if we used %c#1.g#1% Mplus would issue an error message and the model would not run. Below this designation we see the syntax describing the structure of the model [a1*0.247 a2*2.142 a3*-0.960], this specifically lists each of the item means for the variables used to form the latent variable c (i.e. a1, a2, and a3). In addition to listing the parameters, we assign each parameter a starting value based on the above output, for example, a1*0.247 sets the starting value of the mean of a1 to 0.247 in the class c=1, for the group g=1. For c=2 in g=1 (under the label %g #1.c#2%), we specify the means of the items a1–a3, but do not assign starting values, the set of starting values above is sufficient to set the class ordering for g=1. For c=1 in g=2 (labeled %g#2.c# 1%), we again include starting values, so that the classes for the second group (g=2) will be in the desired order. file = mult_grp_lca_con.dat; names = group a1 a2 a3; usevariables = a1 a2 a3; classes = g(2) c(2); knownclass = g (group=0 group=1); type = mixture; c on g; [a1*0.247 a2*2.142 a3*-0.960]; [a1 a2 a3]; [a1*0.978 a2*0.173 a3*-1.541]; [a1 a2 a3]; Testing for differences in parameter estimates Once we have estimated a model in which the item means are allowed to vary across groups, we may want to test to see whether the differences in item means between the two groups are significant, one method of doing this is to use the model test: command to perform a Wald test. Below we test whether the mean for a1 is different in class=1 across the two groups. To do this we have given each of the parameters in question a name. In Mplus, parameter names must appear at the end of a line and in parentheses, for example [a1*0.247] (p1) used below gives the mean of a1 for class 1 (c=1) in group 1 (g=1) the name p1. We have also given the parameter for the mean of a1 in class 1 (c=1) in group 2 (g=2) a name, p2. Note that these names are arbitrary, except that they must begin with a letter and be enclosed in parentheses. Finally, at the bottom of the input, we use the model test: command, below this is the test we want to perform, specifically, we want to test whether p1 = p2. file = mult_grp_lca_con.dat; names = group a1 a2 a3; usevariables = a1 a2 a3; classes = g(2) c(2); knownclass = g (group=0 group=1); type = mixture; c on g; [a1*0.247] (p1); [a2*2.142 a3*-0.960]; [a1 a2 a3]; [a1*0.978] (p2); [a2*0.173 a3*-1.541]; [a1 a2 a3]; model test: p1 = p2; The output from this model will have an additional section, shown below. In this case we can reject the null hypothesis that p1 = p2. If we wanted to constrain these two parameters to equality (either because the difference was non-significant, or for other reasons) we could do so by either giving the two parameters the same name, or replacing the parameter name with a number that is the same for all parameters that are to be constrained to equality, for example, we could replace both (p1) and (p2) above with (1). Wald Test of Parameter Constraints Value 71.900 Degrees of Freedom 1 P-Value 0.0000 Model allowing differences in item variances across groups, item means and class probabilities fixed across groups In this model, we have modified the model: command so that the item means and class probabilities are fixed across groups (g), but the item variances are allowed to vary by group. Under model c: we describe the structure of the latent variable c. Under %c#1% the model for class 1 of the latent variable c is defined by the mean of the variables a1, a2, and a3, this is indicated by the name of the variables listed within square brackets (i.e. [a1 a2 a3]). Under %c#2% we describe the structure of c=2 in the same manner as c=1. Under model g: we explicitly show the variance structure of the model. Under %g#1% we see the names of the variables that make up the latent variable c (i.e. a1, a2, and a3), the name of a variable listed without other commands (e.g. on, with, or square brackets) indicates the variance of the variable. So listing the variances of the variables separately by level of g indicates that the variances of a1, a2, and a3 should be allowed to vary by group. This syntax is repeated for the second group (g=2), allowing the variances for this group to differ from those in the first. file is mult_grp_lca_con.dat ; names are group a1 a2 a3; classes = g(2) c(2); knownclass = g (group=0 group=1); type = mixture; model c: [a1 a2 a3]; [a1 a2 a3]; model g: a1 a2 a3; a1 a2 a3; Working with categorical observed variables In the examples below, instead of the observed variables being continuous, as above, the observed items are categorical. As above, the variable group is the known or observed class variable. The latent variable c is estimated using the categorical observed items named i1, i2, and i3. The variables i1 and i2 are dichotomous, while the variable i3 is ordinal with three categories. You can download the example dataset here: mult_grp_lca_cat. A single group latent class model As a starting place, below we show the syntax for a single group latent class model. In this model, the categorical variables i1, i2, and i3, are used to form a latent variable c with two classes. Most of this input file is the same as the single group latent class model with continuous indicators. The file option of the data: command gives the name of the file in which the dataset is stored. In the variable: command the names option gives the names of the variables in the dataset. The usevariables option gives the names of the variables used to estimate the model because not all variables in the dataset are used in the model. The classes option defines the names of the categorical latent variable c, followed by the number of classes in parentheses, that is (2) for a two class latent variable. In the analysis: command, the type = mixture command indicates that we wish to estimate a mixture model. The difference between this model, and a single group model with continuous indicators is that in the variable: command, the categorical option lists the names of the categorical variables in the dataset, in this case (i.e. i1, i2, and i3). file = mult_grp_lca_cat.dat ; names = group i1 i2 i3; usevariables = i1 i2 i3; classes = c(2); categorical = i1 i2 i3; type = mixture; Model allowing differences in item thresholds across groups, fixing class probabilities across groups and classes In this model, we add the observed grouping variable, group to our model in order to estimate a multiple group mixture model. In this model, the classes option of the variable: command lists two classes (c and g), each with the number of groups listed in parentheses after the class name. The knownclass option specifies that the classes in the variable g are defined by the observed variable group, the observed values of group associated with each class (e.g. group = 0) are listed in parentheses after the class name (i.e. g). file is mult_grp_lca_cat.dat; names = group i1 i2 i3; usevariables = i1 i2 i3; classes = c(2) g(2); knownclass = g (group=0 group=1); categorical = i1 i2 i3; type = mixture; Model allowing differences in item thresholds and class probabilities across groups In this model we use g (i.e. the grouping variable) to predict the probability of class membership in c, meaning that the probability of being in a given class is allowed to vary by the observed variable group. First, we have changed the classes option so that the known class (i.e. g) is listed first, this is necessary if we want to regress c on g to allow the class probabilities to vary by level of group. We have also added the model: command, in the overall section of the model (under %overall%), we have included c on g which adds a regression in which the known class variable (g) predicts the probability that a given class will be in one of the classes of the latent variable c, this allows the class probabilities for c to vary by g. file is mult_grp_lca_cat.dat ; names are group i1 i2 i3; usevariables are i1 i2 i3; classes = g(2) c(2); knownclass = g (group=0 group=1); categorical = i1 i2 i3; type = mixture; c on g; Another model allowing differences in item thresholds and class probabilities across groups The model estimated in this example is identical to the previous model, but uses different input. In the input below we have explicitly listed the item thresholds in the input file, so that we can fix or free individual parameters across groups, which allows us to test for differences between item means in the two groups. We can confirm that the two models are the same by comparing their log likelihoods, which will match if we have indeed run the same model. One thing to note is that when we change to this syntax, the order of the classes may change. This change is substantively unimportant, we are, after all, still running the same model, but it can be hard to keep track of which classes correspond across the groups if they are not in the same order. To avoid confusion, we can use the coefficient estimates from the above model as starting values for the current model. We could use all of the item thresholds, but a few starting values is usually sufficient to put the classes in the desired order. Below is the Mplus output from the model immediately above this one (i.e. the model that is identical, but has different syntax). Here we show the item thresholds for all three variables, for g = 0 and c = 1 (Latent Class Pattern 1 1), followed by the item thresholds for g=1 and c=1 (Latent Class Pattern 2 1), note that there are four thresholds, one each for i1 and i2 (which have two categories), and two thresholds for i3, which has three ordinal categories. Each threshold is denoted by the variable name, followed by a dollar sign and a number indicating the order of the threshold, for example, i3$1 is the first threshold for i3, and i3$2 is the second. Estimate S.E. Est./S.E. P-Value Latent Class Pattern 1 1 I1$1 -0.307 0.689 -0.446 0.656 I2$1 1.976 3.089 0.640 0.522 I3$1 -1.467 1.150 -1.276 0.202 I3$2 1.097 0.304 3.604 0.000 <output omitted> Latent Class Pattern 2 1 I1$1 1.501 1.453 1.033 0.301 I2$1 -1.870 3.996 -0.468 0.640 I3$1 -0.139 0.294 -0.474 0.635 I3$2 4.358 9.937 0.439 0.661 The item thresholds for class 1 (of the latent variable c) in each group (g) are shown above. In the syntax below, we use these parameter estimates as starting values. Most of the syntax shown below is the same as that from the previous models, the new syntax all appears at the end of the input. Below the overall portion of the model (%overall%) we see the portions of the model: command for each combination of g and c, in this case, both g and c have two categories, so there are four sections of the model. The first section is for g=1 (group=0) and c=1 is indicated by %g#1.c#1%, the observed (knownclass) variable must come first, if we used %c#1.g#1% Mplus would issue an error message and the model would not run. Below this designation we see the syntax describing the structure of the model [i1$1*-0.307 i2$1*1.976 i3$1*-1.467 i3$2*1.097], this specifically lists each of the item means for the variables used to form the latent variable c (i.e. i1, i2, and i3). In addition to listing the parameters, we assign each parameter a starting value based on the above output, for example, i1$1*-0.307 sets the starting value of the threshold of i1 to -0.307 in the class c=1, for the group g=1. For c=2 in g=1 (under the label %g#1.c#2%), we specify the thresholds of the items i1–i3, but do not assign starting values, the set of starting values above is sufficient to set the class ordering for g=1. For c=1 in g=2 (labeled %g#2.c#1%), we again include starting values, so that the classes for the second group (g=2) will be in the desired order. file = mult_grp_lca_cat.dat ; names = group i1 i2 i3; usevariables = i1 i2 i3; classes = g(2) c(2); knownclass = g (group=0 group=1); categorical = i1 i2 i3; type = mixture; c on g; [i1$1*-0.307 i2$1*1.976 i3$1*-1.467 i3$2*1.097]; [i1$1 i2$1 i3$1 i3$2]; [i1$1*1.501 i2$1*-1.870 i3$1*-0.139 i3$2*4.358]; [i1$1 i2$1 i3$1 i3$2]; Testing for differences in parameter estimates Once we have estimated a model in which the item thresholds are allowed to vary across groups, we may want to test to see whether the differences in item thresholds between the two groups are significant. One method of doing so is to use the model test: command to perform a Wald test. Below we test whether the threshold 1 for i1 is different in class=1 across the two groups. To do this we have given each of the parameters in question a name. In Mplus, parameter names must appear at the end of a line and in parentheses, for example [i1$1*-0.307] (p1) used below gives the threshold for i1 in class 1 (c=1) in group 1 (g=1) the name p1. We have also given the parameter for the threshold of i1 in class 1 (c=1) and group 2 (g=2) a name, p2. Note that these names are arbitrary, except that they must begin with a letter and be enclosed in parentheses. Finally, at the bottom of the input, we see the model test: command, below this is the test we want to perform, specifically, we want to test whether p1 = p2. file = mult_grp_lca_cat.dat ; names = group i1 i2 i3; usevariables = i1 i2 i3; classes = g(2) c(2); knownclass = g (group=0 group=1); categorical = i1 i2 i3; type = mixture; c on g; [i1$1*-0.307] (p1); [i2$1*1.976 i3$1*-1.467 i3$2*1.097]; [i1$1 i2$1 i3$1 i3$2]; [i1$1*1.501] (p2); [i2$1*-1.870 i3$1*-0.139 i3$2*4.358]; [i1$1 i2$1 i3$1 i3$2]; model test: p1 = p2; The output from this model will have an additional section, shown below. In this case we fail to reject the null hypothesis that p1 = p2. If we wanted to constrain these two parameters to equality (either because the difference was non-significant, or for other reasons) we could do so by either giving the two parameters the same name, or replacing the parameter name with a number that is the same for all parameters that are to be constrained to equality, for example, we could replace both (p1) and (p2) above with (1). Wald Test of Parameter Constraints Value 1.266 Degrees of Freedom 1 P-Value 0.2605
{"url":"https://stats.oarc.ucla.edu/mplus/faq/how-can-i-estimate-a-multiple-group-latent-class-model-knownclass/","timestamp":"2024-11-03T02:59:33Z","content_type":"text/html","content_length":"61106","record_id":"<urn:uuid:a420c6b8-c9a4-46f4-912a-a15795982b21>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00353.warc.gz"}
1. Introduction2. Methodology of Optimizing Water Injection Well Pressure3. Optimization Plan for Water Injection Well Pressure4. Optimization of the Algorithm for Increasing Injection Rate Research Results4.1. Algorithm Optimization4.2. Implementation and Benefits of Algorithm Optimization Projects5. Conclusion and UnderstandingConflicts of InterestCite this paperReferences WJETWorld Journal of Engineering and Technology2331-4222Scientific Research Publishing10.4236/wjet.2023.112017WJET-124775ArticlesChemistry&Materials Science Engineering Optimization of the Algorithm for Increasing Injection Rate in Water Injection Wells for Pressure Optimization in P Oilfield LingyuLi[1]^*Tianjin Branch of CNOOC Ltd., Tianjin, China1503202311022462518, April 20235, May 2023 8, May 2023© Copyright 2014 by authors and Scientific Research Publishing Inc. 2014This work is licensed under the Creative Commons Attribution International License (CC BY). http://creativecommons.org/ In today’s society, with the continuous growth of energy demand, Bohai Oilfield, as an important offshore oil resource base in China, is facing increasingly severe challenges while contributing to national energy security. In order to improve the quality of water injection in the oilfield and gradually achieve efficient and stable production, Bohai Oilfield has launched a water injection well pressure optimization project, focusing on improving the efficiency and quality of water injection in the water injection wells, in order to achieve the optimal water injection plan. In practical work, P Oilfield continues to promote the development of water injection well pressure optimization projects, emphasizing practical exploration and continuous optimization of work plans. However, during the project implementation process, there were some problems, one of which was that the statistics of cumulative injection volume were not scientific enough, resulting in a more comprehensive and accurate presentation of the actual results of pressure optimization work. In the context of continuous improvement work, after careful analysis and research, P Oilfield has decided to optimize the cumulative injection rate algorithm to guide the oilfield’s water injection work in a more refined way, ensuring sufficient and good water injection, and enhancing the oilfield’s production efficiency and comprehensive competitiveness. Offshore Oil Fields Water Injection Wells Pressure Optimization Water Injection Volume Calculation Method P Oilfield has been implementing the optimization plan for water injection well pressure since September 2018. From 2018 to 2020, the increased injection volume for three consecutive years was 600,000 cubic meters, 220,000 cubic meters, and 1.35 million cubic meters, respectively. The reason for the low cumulative injection volume in 2019 is that the wells after acidification failed to timely calculate the pressure optimization injection volume after the acidification effect decreased or even disappeared. Due to the significant decrease in injection pressure after acidification of the water injection well, the effect of optimizing the pressure of the water injection well enters a certain cooling period for a certain period of time, and it is also impossible to separate the acidification injection increase part and the water injection well pressure optimization injection increase part for the increased injection volume afterwards. The effective improvement of this project will provide more accurate information and basis for P Oilfield to continue promoting the optimization of water injection well pressure in the future. By using more precise methods to measure the pressure optimization of water injection wells, we can guide P Oilfield to strengthen water injection research in a more refined way, and achieve high-quality development of oilfield production. With the continuous advancement of water injection development in P oilfield, the water injection wells have been blocked in the near wellbore area, resulting in an increase in water injection pressure and a decrease in water injection volume. After multiple rounds of water injection well acidification, the effectiveness of acidification and blockage removal gradually decreases, making it difficult to completely remove the blockage in the near wellbore area. When the maximum pressure at the wellhead of the water injection well remains unchanged, the actual injection pressure at the bottom of the well continues to decrease due to the presence of additional pressure drop. The continuous acidification of water injection wells has brought high measure costs and gradually shortened effectiveness of plug removal, which can no longer meet the needs of oilfield water injection development. After more than ten years of water injection development in P oilfield, this problem has become prominent. According to the requirements of the “Offshore Oil Production Engineering Manual” for water injection pressure, the maximum allowable bottom hole injection pressure is 80% - 90% of the formation fracture pressure in the water injection well section. The water injection wells in P oilfield are managed at 85% of the fracture pressure. Optimization of water injection well pressure refers to further increasing the injection pressure of the water injection well after the injection pressure at the wellhead has reached the maximum allowable injection pressure, ensuring safe water injection, and achieving the goal of increasing water injection volume. The key is to determine the additional pressure drop and obtain the extent to which injection pressure can be increased [1] . P Oilfield adopts pressure testing on water injection wells to obtain the water absorption index curve. Through practical verification, it ensures that the inflection point of the water absorption index is determined as the injection pressure of the water injection wells increases, and further obtains the true formation fracture pressure. At the same time, the Hall curve is used to directly calculate the skin factor, thereby obtaining the additional pressure drop situation [2] [3] . This method was first proposed by Hall in 1963, and improved by Boolean et al. in 1989, resulting in an approximate analytical method. The principle is that the Hall curve graph can reflect the different slopes of the straight line segments of water injection wells under different conditions. This slope reflects the changes in seepage resistance of each water injection well, that is, the skin coefficient of each water injection well can be solved from this. The calculation method of Hall curve slope in the water injection stage is based on the radial flow equation of monomial steady Newtonian fluid, which is plotted on Cartesian coordinate system with Hall integral term and cumulative injection volume R. Before and after water breakthrough in the oil well, they are linear segments, and their mathematical expressions are: R = 0.535626 K h μ w ⋅ B w [ ln ( R e / R w ) + S ] ∫ ( Δ P ) d t (1) ∫ ( Δ P ) d t = 1.867 μ w ⋅ B w [ ln ( R e / R w ) + S ] K h (2) k = 1.867 μ w ⋅ B w [ ln ( R e / R w ) + S ] K h (3) S = k ⋅ ( K h ) 1.867 μ w ⋅ B w − ln ( R e / R w ) (4) In the formula, S is the skin factor; ∆P is the injection pressure difference of injection well, MPa; R is the cumulative injection amount corresponding to a certain time, m^3; K is the effective permeability, 10^−^3 μm^2; h is the effective thickness, m; R[e] and R[w] are the driving radius and wellbore diameter, m; t is the time, d; k is the slope of the straight section of the Hall curve; B[w] is the volume coefficient of water; μ[w] is the viscosity of water, mPa·s. The skin factor represents the resistance at the bottom of the well. In 1949, Van Everdingen and Hurst introduced the skin effect to characterize the steady-state pressure difference s in the near wellbore region, which is proportional to the skin factor. The mathematical expression is: Δ P s = R μ w 2 π K h S (5) In the formula, S is the skin factor; ∆P[s] is the additional pressure drop of Injection well, MPa [4] . Through the above calculations and analysis, the final conclusion is that the optimized pressure of the water injection well is equal to the sum of the maximum pressure at the wellhead of the water injection well and the additional pressure drop: P = P max + Δ P s (6) In the formula, P is the optimized pressure of the water injection well, MPa; P[max] is the maximum pressure at the wellhead of Injection well, MPa; ∆P[s] is the additional pressure drop of Injection well, MPa. Using the daily injection volume before pressure optimization in the water injection well as the reference point R[0], gradually increase the injection pressure, with each step of 50 psi and one step every three days. Afterwards, the daily injection volume of a single well R is increased by ΔR. Daily injection volume: Δ R = R − R 0 (7) In the equation: ΔR is the actual injection increase after pressure optimization of the water injection well, m^3; R[0] is the daily injection volume before implementing pressure optimization in the water injection well, m^3; R is the actual daily injection volume after pressure optimization of the water injection well, m^3. As the injection pressure of the water injection well is optimized, the allowable maximum injection pressure after overcoming the additional pressure drop is gradually reached. At this point, the increase in injection volume of the water injection well reaches its maximum. The annual cumulative injection volume R[i] is the accumulation of daily injection volume: R i = ∑ Δ R n ( n = 1 , 2 , ⋯ ) (8) In the formula: R[i] is the cumulative annual injection volume after pressure optimization of the water injection well, m^3; N is the number of days after stress optimization [5] . 1) Normal situation (ΔR > 0). After optimizing the pressure of the water injection well, the injection volume continues to increase, and normal daily calculations are sufficient. 2) The water injection volume continues to decrease, and the calculated daily increment is negative (ΔR < 0). After optimizing the pressure of the water injection well, although the pressure has been increased, the daily injection rate continues to decline, so the reference value for comparison is becoming less and less. This type of well calculates the pressure gradient based on the pressure increase amplitude and includes it in the increased injection rate. 3) Significant decrease in injection well pressure after acidification (Injection pressure < Reference pressure). After acidification of the water injection well, the injection pressure significantly decreases, which is lower than the injection pressure used as the reference point. The pressure optimization effect temporarily disappears, and will be recalculated when the pressure returns to the reference point. 4) Small decrease in injection well pressure after acidification (Injection pressure > Reference pressure). The effect of acidification in water injection wells is poor, and the injection pressure is greater than the previous benchmark injection pressure. Verify the pressure and flow rate at the reference point, and calculate the injection rate according to the pressure and flow gradient. In 2019, there were 40 pressure optimized water injection wells with a cumulative increase of 220,000 cubic meters. In 2020, there were 74 pressure water injection wells with a cumulative increase of 1.3536 million cubic meters. In 2020, the cumulative injection rate was significantly increased through the implementation of the pressure optimization algorithm for water injection wells. The cumulative injection rate was calculated separately for different water injection wells, more accurately determining the benefits brought by the water injection well pressure optimization project, effectively guiding the next step in arranging the frequency of water injection well acidification and reducing oilfield operating costs. The optimization of the pressure optimization algorithm for water injection wells in P oilfield has fundamentally improved the reliability of pressure optimization project data, providing better guidance for the continuous promotion of water injection well pressure optimization projects in the future, and greatly improving the reference value of the data. Clarify the direction for the follow-up water injection work in P Oilfield, effectively reduce the frequency of acidification and other measures, thereby reducing operating costs, and promoting high-quality water injection development in P Oilfield. The author declares no conflicts of interest regarding the publication of this paper. Li, L.Y. (2023) Optimization of the Algorithm for Increasing Injection Rate in Water Injection Wells for Pressure Optimization in P Oilfield. World Journal of Engineering and Technology, 11, 246-251. Zou, D.H., Chai, S.C., Ruan, X.F., et al. (2019) Energy Consumption Analysis of Electric Submersible Pump Wells in Offshore Oilfields. Petrochemical Technology, 26, 287. https://kns.cnki.net/kcms2/ src=copyZhu, D.W., Liu, W.C., Tan, S.H., et al. (2004) Research on the Application of Four Step High Pressure Injection Technology. Petroleum Drilling Technology, 2004, 50-52. https://kns.cnki.net/ kcms2/article/abstract?v=Lty0U-YuiCedohDgVx7NmEgSwDUOSOBnu6mTAc5zm3DimvU8VyQbC1NzSlhcc-DbLsX020d4lkFlvxV2t8s1ztNxXCycxQ2aPZMKCTfmkkg=&uniplatform=NZKPT&language=CHS&src=copyHe, F., Ding, L., Tang, W.J., et al. (2013) Exploration of Pressure Raising and Injection Increasing Technology for Mobei Tight Reservoir. Xinjiang Petroleum and Natural Gas, 9, 42-46+55+3. https://kns.cnki.net/kcms2/ article/abstract?v=Lty0U-YuiCdm6V3J-rQCCBu50Y__oNecS57-Mq_V4Wfefo3HOdPoGGBvgImRuG8No0EEYI2CgGgA5RLY1TE1jiDDaaEYFrOOLOYa9YmlSHo=&uniplatform=NZKPT&language=CHS&src=copyWang, L., Liu, C., Yang, G.H., et al. (2022) Study on Optimization of Injection Pressure by Increasing Injection Pressure in Water Injection Wells. Complex Oil and Gas Reservoirs, 15, 91-93+99. https://kns.cnki.net/kcms2/article/ copyChen, Q., Xun, C.W., Xing, X.K., et al. (2022) Introduction to a Simple Method for Calculating the Increase in Water Injection in Water Injection Wells. Energy Conservation in Petroleum and Petrochemical Industry, 12, 38-42+61. https://kns.cnki.net/kcms2/article/abstract?v=3uoqIhG8C44YLTlOAiTRKibYlV5Vjs7iJTKGjg9uTdeTsOI_ra5_XYetLxv5-7J6mPmTMNyJR65BmvlgIIdOzLKzOayHbyHE&uniplatform=NZKPT&
{"url":"https://www.scirp.org/xml/124775.xml","timestamp":"2024-11-08T15:52:51Z","content_type":"application/xml","content_length":"19098","record_id":"<urn:uuid:63ad58c9-c25c-410b-9b8e-54dc859f1cee>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00524.warc.gz"}
28.3 Length Contraction 225 28.3 Length Contraction • Describe proper length. • Calculate length contraction. • Explain why we don’t notice these effects at everyday scales. Figure 1. People might describe distances differently, but at relativistic speeds, the distances really are different. (credit: Corey Leopold, Flickr) Have you ever driven on a road that seems like it goes on forever? If you look ahead, you might say you have about 10 km left to go. Another traveler might say the road ahead looks like it’s about 15 km long. If you both measured the road, however, you would agree. Traveling at everyday speeds, the distance you both measure would be the same. You will read in this section, however, that this is not true at relativistic speeds. Close to the speed of light, distances measured are not the same when measured by different observers. Proper Length One thing all observers agree upon is relative speed. Even though clocks measure different elapsed times for the same process, they still agree that relative speed, which is distance divided by elapsed time, is the same. This implies that distance, too, depends on the observer’s relative motion. If two observers see different times, then they must also see different distances for relative speed to be the same to each of them. The muon discussed in Chapter 28.2 Example 1 illustrates this concept. To an observer on the Earth, the muon travels at $latex \boldsymbol{0.950 \textbf{c}} $ for $latex \boldsymbol{7.05 \;\mu \ textbf{s}} $ from the time it is produced until it decays. Thus it travels a distance $latex \boldsymbol{L_0 = v \Delta t = (0.950)(3.00 \times 10^8 \;\textbf{m/s})(7.05 \times 10^{-6} \;\textbf{s}) = 2.01 \;\textbf{km}} $ relative to the Earth. In the muon’s frame of reference, its lifetime is only 2.20μs2.20μs. It has enough time to travel only $latex \boldsymbol{L = v \Delta t_0 = (0.950)(3.00 \times 10^8 \;\textbf{m/s})(2.20 \times 10^{-6} \;\textbf{s}) = 0.627 \;\textbf{km}}. $ The distance between the same two events (production and decay of a muon) depends on who measures it and how they are moving relative to it. Proper Length Proper length $latex \boldsymbol{L_0} $ is the distance between two points measured by an observer who is at rest relative to both of the points. The Earth-bound observer measures the proper length $latex \boldsymbol{L_0} $, because the points at which the muon is produced and decays are stationary relative to the Earth. To the muon, the Earth, air, and clouds are moving, and so the distance $latex \boldsymbol{L} $ it sees is not the proper length. Figure 2. (a) The Earth-bound observer sees the muon travel 2.01 km between clouds. (b) The muon sees itself travel the same path, but only a distance of 0.627 km. The Earth, air, and clouds are moving relative to the muon in its frame, and all appear to have smaller lengths along the direction of travel. Length Contraction To develop an equation relating distances measured by different observers, we note that the velocity relative to the Earth-bound observer in our muon example is given by The time relative to the Earth-bound observer is $latex \boldsymbol{\Delta t} $, since the object being timed is moving relative to this observer. The velocity relative to the moving observer is given by The moving observer travels with the muon and therefore observes the proper time $latex \boldsymbol{\Delta t_0} $. The two velocities are identical; thus, We know that $latex \boldsymbol{\Delta t = \gamma \Delta t_0} $. Substituting this equation into the relationship above gives Substituting for $latex \boldsymbol{\gamma} $ gives an equation relating the distances measured by different observers. Length Contraction Length contraction $latex\boldsymbol{L} $ is the shortening of the measured length of an object moving relative to the observer’s frame. If we measure the length of anything moving relative to our frame, we find its length $latex \boldsymbol{L} $ to be smaller than the proper length $latex \boldsymbol{L_0} $ that would be measured if the object were stationary. For example, in the muon’s reference frame, the distance between the points where it was produced and where it decayed is shorter. Those points are fixed relative to the Earth but moving relative to the muon. Clouds and other objects are also contracted along the direction of motion in the muon’s reference frame. Example 1: Calculating Length Contraction: The Distance between Stars Contracts when You Travel at High Velocity Suppose an astronaut, such as the twin discussed in Chapter 28.2 Simultaneity and Time Dilation, travels so fast that $latex \boldsymbol{\gamma = 30.00} $. (a) She travels from the Earth to the nearest star system, Alpha Centauri, 4.300 light years (ly) away as measured by an Earth-bound observer. How far apart are the Earth and Alpha Centauri as measured by the astronaut? (b) In terms of $latex \boldsymbol{c} $, what is her velocity relative to the Earth? You may neglect the motion of the Earth relative to the Sun. (See Figure 3.) Figure 3. (a) The Earth-bound observer measures the proper distance between the Earth and the Alpha Centauri. (b) The astronaut observes a length contraction, since the Earth and the Alpha Centauri move relative to her ship. She can travel this shorter distance in a smaller time (her proper time) without exceeding the speed of light. First note that a light year (ly) is a convenient unit of distance on an astronomical scale—it is the distance light travels in a year. For part (a), note that the 4.300 ly distance between the Alpha Centauri and the Earth is the proper distance $latex \boldsymbol{L_0} $, because it is measured by an Earth-bound observer to whom both stars are (approximately) stationary. To the astronaut, the Earth and the Alpha Centauri are moving by at the same velocity, and so the distance between them is the contracted length $latex \boldsymbol{L} $. In part (b), we are given $latex \boldsymbol{\ gamma} $, and so we can find $latex \boldsymbol{v} $ by rearranging the definition of $latex \boldsymbol{\gamma} $ to express $latex \boldsymbol{v} $ in terms of $latex \boldsymbol{c} $. Solution for (a) 1. Identify the knowns. $latex \boldsymbol{L_0 – 4.300 \;\textbf{ly}} $; $latex \boldsymbol{\gamma = 30.00} $ 2. Identify the unknown. $latex \boldsymbol{L} $ 3. Choose the appropriate equation. $latex \boldsymbol{L = \frac{L_0}{\gamma}} $ 4. Rearrange the equation to solve for the unknown. $latex \begin{array}{r @{{}={}}l} \boldsymbol{L} & \boldsymbol{\frac{L_0}{\gamma}} \\[1em] & \boldsymbol{\frac{4.300 \;\textbf{ly}}{30.00}} \\[1em] & \boldsymbol{0.1433 \;\textbf{ly}} \end{array} Solution for (b) 1. Identify the known. $latex \boldsymbol{\gamma = 30.00} $ 2. Identify the unknown. $latex \boldsymbol{v} $ in terms of $latex \boldsymbol{c} $ 3. Choose the appropriate equation. 4. Rearrange the equation to solve for the unknown. $latex \begin{array}{r @{{}={}}l}\boldsymbol{\gamma} & \boldsymbol{\frac{1}{\sqrt{1 – \frac{v^2}{c^2}}}} \\[1em] \boldsymbol{30.00} & \boldsymbol{\frac{1}{\sqrt{1 – \frac{v^2}{c^2}}}} \end{array} Squaring both sides of the equation and rearranging terms gives so that Taking the square root, we find which is rearranged to produce a value for the velocity First, remember that you should not round off calculations until the final result is obtained, or you could get erroneous results. This is especially true for special relativity calculations, where the differences might only be revealed after several decimal places. The relativistic effect is large here ($latex \boldsymbol{\gamma = 30.00} $), and we see that $latex \boldsymbol{v} $ is approaching (not equaling) the speed of light. Since the distance as measured by the astronaut is so much smaller, the astronaut can travel it in much less time in her frame. People could be sent very large distances (thousands or even millions of light years) and age only a few years on the way if they traveled at extremely high velocities. But, like emigrants of centuries past, they would leave the Earth they know forever. Even if they returned, thousands to millions of years would have passed on the Earth, obliterating most of what now exists. There is also a more serious practical obstacle to traveling at such velocities; immensely greater energies than classical physics predicts would be needed to achieve such high velocities. This will be discussed in Chapter 28.6 Relativistic Energy. Why don’t we notice length contraction in everyday life? The distance to the grocery shop does not seem to depend on whether we are moving or not. Examining the equation $latex \boldsymbol{L = L_0 \ sqrt{1 – \frac{v^2}{c^2}}} $, we see that at low velocities ($latex \boldsymbol{ v<<c} $) the lengths are nearly equal, the classical expectation. But length contraction is real, if not commonly experienced. For example, a charged particle, like an electron, traveling at relativistic velocity has electric field lines that are compressed along the direction of motion as seen by a stationary observer. (See Figure 4.) As the electron passes a detector, such as a coil of wire, its field interacts much more briefly, an effect observed at particle accelerators such as the 3 km long Stanford Linear Accelerator (SLAC). In fact, to an electron traveling down the beam pipe at SLAC, the accelerator and the Earth are all moving by and are length contracted. The relativistic effect is so great than the accelerator is only 0.5 m long to the electron. It is actually easier to get the electron beam down the pipe, since the beam does not have to be as precisely aimed to get down a short pipe as it would down one 3 km long. This, again, is an experimental verification of the Special Theory of Relativity. Figure 4. The electric field lines of a high-velocity charged particle are compressed along the direction of motion by length contraction. This produces a different signal when the particle goes through a coil, an experimentally verified effect of length contraction. Check Your Understanding 1: A particle is traveling through the Earth’s atmosphere at a speed of $latex \boldsymbol{0.750c} $. To an Earth-bound observer, the distance it travels is 2.50 km. How far does the particle travel in the particle’s frame of reference? • All observers agree upon relative speed. • Distance depends on an observer’s motion. Proper length $latex \boldsymbol{L_0} $ is the distance between two points measured by an observer who is at rest relative to both of the points. Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth. • Length contraction $latex \boldsymbol{L} $ is the shortening of the measured length of an object moving relative to the observer’s frame: Conceptual Questions 1: To whom does an object seem greater in length, an observer moving with the object or an observer moving relative to the object? Which observer measures the object’s proper length? 2: Relativistic effects such as time dilation and length contraction are present for cars and airplanes. Why do these effects seem strange to us? 3: Suppose an astronaut is moving relative to the Earth at a significant fraction of the speed of light. (a) Does he observe the rate of his clocks to have slowed? (b) What change in the rate of Earth-bound clocks does he see? (c) Does his ship seem to him to shorten? (d) What about the distance between stars that lie on lines parallel to his motion? (e) Do he and an Earth-bound observer agree on his velocity relative to the Earth? Problems & Exercises 1: A spaceship, 200 m long as seen on board, moves by the Earth at $latex \boldsymbol{0.970c} $. What is its length as measured by an Earth-bound observer? 2: How fast would a 6.0 m-long sports car have to be going past you in order for it to appear only 5.5 m long? 3: (a) How far does the muon in Chapter 28.2 Example 1 travel according to the Earth-bound observer? (b) How far does it travel as viewed by an observer moving with it? Base your calculation on its velocity relative to the Earth and the time it lives (proper time). (c) Verify that these two distances are related through length contraction $latex \boldsymbol{\gamma = 3.20} $. 4: (a) How long would the muon in Chapter 28.2 Example 1 have lived as observed on the Earth if its velocity was $latex \boldsymbol{0.0500c} $? (b) How far would it have traveled as observed on the Earth? (c) What distance is this in the muon’s frame? 5: (a) How long does it take the astronaut in Example 1 to travel 4.30 ly at $latex \boldsymbol{0.99944c} $ (as measured by the Earth-bound observer)? (b) How long does it take according to the astronaut? (c) Verify that these two times are related through time dilation with $latex \boldsymbol{\gamma = 30.00} $ as given. 6: (a) How fast would an athlete need to be running for a 100-m race to look 100 yd long? (b) Is the answer consistent with the fact that relativistic effects are difficult to observe in ordinary circumstances? Explain. 7: Unreasonable Results (a) Find the value of $latex \boldsymbol{\gamma} $ for the following situation. An astronaut measures the length of her spaceship to be 25.0 m, while an Earth-bound observer measures it to be 100 m. (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or inconsistent? 8: Unreasonable Results A spaceship is heading directly toward the Earth at a velocity of $latex \boldsymbol{0.800c} $. The astronaut on board claims that he can send a canister toward the Earth at $latex \boldsymbol{1.20c} $ relative to the Earth. (a) Calculate the velocity the canister must have relative to the spaceship. (b) What is unreasonable about this result? (c) Which assumptions are unreasonable or proper length $latex \boldsymbol{L_0} $; the distance between two points measured by an observer who is at rest relative to both of the points; Earth-bound observers measure proper length when measuring the distance between two points that are stationary relative to the Earth length contraction $latex \boldsymbol{L} $, the shortening of the measured length of an object moving relative to the observer’s frame: $latex \boldsymbol{L = L_0 \sqrt{1 – \frac{v^2}{c^2}} = \frac{L_0}{\gamma}} $ Check Your Understanding 1: $latex \begin{array}{r @{{}={}}l} \boldsymbol{L} & \boldsymbol{L_0 \sqrt{1 – \frac{v^2}{c^2}}} \\[1em] & \boldsymbol{(2.50 \;\textbf{km}) \sqrt{1 – \frac{(0.750c)^2}{c^2}}} \\[1em] & \boldsymbol {1.65 \;\textbf{km}} \end{array} $ Problems & Exercises 1: 48.6 m 3: (a) 1.387 km = 1.39 km (b) 0.433 km (c) $latex \begin{array}{r @{{}={}} l @{{}={}} l} \boldsymbol{L} & \boldsymbol{\frac{L_0}{\gamma}} & \boldsymbol{\frac{1.387 \times 10^3 \;\textbf{m}}{3.20}} \\[1em] & \boldsymbol{433.4 \;\textbf{m}} & \boldsymbol{0.433 \;\textbf{km}} \end{array} $ Thus, the distances in parts (a) and (b) are related when $latex \boldsymbol{\gamma = 3.20} $. 5: (a) 4.303 y (to four digits to show any effect) (b) 0.1434 y Thus, the two times are related when $latex \boldsymbol{\gamma = 30.00} $. 7: (a) 0.250 (b) $latex \boldsymbol{\gamma} $ must be ≥1 (c) The Earth-bound observer must measure a shorter length, so it is unreasonable to assume a longer length.
{"url":"https://pressbooks.uiowa.edu/clonedbook/chapter/length-contraction/","timestamp":"2024-11-08T02:15:50Z","content_type":"text/html","content_length":"183542","record_id":"<urn:uuid:5fe7bcb7-b9d2-43f8-8e29-da8045b9420d>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00192.warc.gz"}
Chapter 4 This section is about setting basic rotations: how to declare a rotation variable, and ways to make and think about them. Later, in another chapter, we’ll be able to add them, take fractions and the Our cubes are no good for examining rotations since we can’t tell the front from the top and sides. A cow, pointing along +z, is fine (but it can’t be tilted to aim that way – its 000-rotation must be +z). Otherwise we can make something. Take one of our cubes and child something unique on top and in front. Maybe a small sphere on top, and forward-aimed cylinder in front: // 3-part testing object, if we don’t have a cow model: | o <- small sphere for a hat +y H === <-- cylinder as a nose +z -> Side view Rotations can be imagined in lots of ways. They can be a direction arrow and a roll – like aiming a telescope, then spinning to adjust the viewfinder. Or the old xyz 0-360 method. Or a rotation can be a single diagonal line going through your origin, rotating 0-360 degrees around that one line. Like vectors, rotations can be offsets. The simplest rotations tell us which way to face. But an offset-style rotation is meant to add to another rotation, or find how far apart two rotations are. But the various ways to create simple direction-style rotations will be enough to fill this chapter. 4.1 Quaternions It seems natural to think of rotations as x,y,z, 0-360. That’s what the Inspector shows us, and how the 3 circles work using the rotation tool. But those are just tools giving us that view. No one has really stored or used rotations that way for decades. We use a better system called quaternions. The same way transform.position is a Vector3, transform.rotation is a quaternion. Quaternions have only one drawback: the numbers in them don’t make sense to humans. But that’s not a problem since we don’t need to look at the numbers. Think of quaternions as a struct with built-in functions doing everything we need. The simplest way to use a quaternion in a program is saving and restoring a spin. We can declare a quaternion – which is a rotation-holding-variable – and copy our starting rotation into it. Later we can copy it back: Quaternion savedStartFacing; // this is a rotation variable // copy our starting rotation into a variable: void Start() { savedStartFacing = transform.rotation; } void Update() { // spin yourself by hand, then press "a" to restore: transform.rotation = savedStartFacing; // copy it back Quaternions looks strange, and it feels weird to copy our rotation into one, but this program is really nothing more than Vector3 savedPos = transform.position; and then the reverse. A similar example, to show using quaternion variables, this switches my rotation with the red cube’s. If you remember, a standard swap looks like temp=a; a=b; b=temp; In this, temp is a quaternion: void Start() { Quaternion temp = transform.rotation; // save a copy of my rotation transform.rotation = redCube.rotation; // red into me redCube.rotation = temp; // saved old me into red Still not very exciting, since we don’t know how to create or change a rotation yet. For more using quaternions as normal variables, let’s save the rotations of red, green and blue in an array of quaternions. Then pressing 1,2 or 3 snaps us to copy that cube’s rotation, using a public Transform redCube, blueCube, greenCube; // dragged in links Quaternion[] Spins; // will be length 3 list of colored cube’s rotations void Start() { // basic array creation, with quaternions!: Spins = new Quaternion[3]; // array of quaternion variables Spins[0] = redCube.rotation; // copy into each array slot Spins[1] = blueCube.rotation; Spins[2] = greenCube.rotation; void copyRotation(Quaternion r) { // quaternion as an input transform.rotation = r; // not very exciting. Keys call the function: void Update() { if(Input.getKeyDown("1") copyRotation(Spins[0]); if(Input.getKeyDown("2") copyRotation(Spins[1]); if(Input.getKeyDown("3") copyRotation(Spins[2]); The point of this is that we can use Quaternion like any other variable type. An array of them is fine. Spins[0] is the first quaternion in the array. We can pass quaternions to functions. They’re regular variables. 4.2 Ways to make a rotation The most common way to make a rotation is to say what you want to look at, and have the computer do the math. That seems like cheating, but we have a computer – why wouldn’t we create that command? After that, we’ll take a look at the old xyz, 0-360 method. We don’t have to set rotations that way, but it’s a perfectly good option. Then there are a few oddball functions for rotation setting. 4.2.1 No rotation Unity has one built-in for a preset rotation: Quaternion.identity is the 000 rotation, or no rotation. For examples: transform.rotation = Quaternion.identity; // face 000 = forward redCube.rotation = Quaternion.identity; // same // save a spin, currently it’s "no spin": Quaternion spin1; spin1 = Quaternion.identity; It’s the same idea as Vector3.zero. It’s a rotation of x=0, y=0, z=0. This is the only preset. For example, there’s no Quaternion.up; It’s named identity instead of zero since identity is the formal math term everyone uses. We can think of it two ways. For a tree or rubble or a dirt pile, it means to place it with no extra spin. But for a cow or a flashlight – anything which logically has a facing – it’s better to say it aims you North with your head up. But only if the model was made correctly for Unity’s coordinates, facing on +z. That’s why I made a big deal about facing +z and possibly fixing it with the parent trick. If your cow faces +x, then transform.rotation=Quaternion.identity will face it +x. It will look like the command messed up, but you just have a wrong cow. 4.2.2 Look in a direction The simplest way of making a rotation is using a direction arrow. Recall that a direction is just any arrow where the length isn’t important. Suppose you have (0,10,1), which is an arrow aimed up and a little bit forward. We could turn that into a rotation, assign it to us, and we’ll be looking up and a little forward. The command is Quaternion.LookRotation. It converts a direction into a rotation. This code aims us in direction (0,10,1): Vector3 dir = new Vector3(0,10,1); // up, a little fwd transform.rotation = Quaternion.LookRotation(dir); It almost doesn’t seem like we did anything. (0,10,1) is an arrow, and we copied it into our direction, sort of. Except arrows and rotations are different. LookRotation(dir) did the math to convert to a quaternion. This next one is basically the same thing but looking 4 ahead and 1 right: transform.rotation = Quaternion.LookRotation(new Vector3(1,0,4)); The cool thing is that there are no angles involved at all. Pretend we’re on a board and actually want to look 1 space over for every 4 forward. We don’t know the angle and don’t need to. This one is very sneaky. It gives us a rotation of 000: transform.rotation = Quaternion.LookRotation(Vector3.forward); It make us face forward, as advertised. But in Unity, forward, with no up or down, is the starting rotation, which is all 0’s. For fun, here’s a hacky way to look in a random forward angle. We’ll aim 10 forward and -10 to 10 sideways, which gives a random -45 to 45 degrees: float xAmt = Random.Range(-10.0f, 10.0f); Vector3 v = new Vector3(xAmt, 0, 10); // a random forward arrow transform.rotation = Quaternion.LookRotation(v); There’s no special reason for using 10. It seemed like a round number, and directions don’t care about the length. Now on to the really useful part: aiming at something. It’s easy, since we already know how to get the direction from us to anything else by subtracting points. This aims us at the green cube: Vector3 toGreen = greenCube.position - transform.position; transform.rotation = Quaternion.LookRotation(toGreen); It’s so slick. LookRotation takes any arrow, and we have an arrow aimed at green. So, LookRotation faces us to green.. Our aiming arrow happened to be the exact length from us to green. It didn’t need to be, but it doesn’t hurt. If we had a unit vector, it would work as well. Aiming the red cube at the green one is the same idea: Vector3 redToGreen = greenCube.position - redCube.position; redCube.rotation = Quaternion.LookRotation(redToGreen); Fun fact, if we flip the subtraction order and get a backwards arrow – green to red, then red will be facing exactly away from green. Our old vector math tricks work here. Suppose we want to look at a spot a little above the green cube. That’s just one more line getting that spot: Vector3 aimPoint = greenCube.position + Vector3.up*1.5f; Vector3 toGreen = aimPoint - transform.position; transform.rotation = Quaternion.LookRotation(toGreen); Suppose the green cow can see 6 units, and we want to look at what it sees: // 6 ahead of the green cube: Vector3 aimPoint = greenCube.position + greenCube.forward*6; Vector3 toGreen = aimPoint - transform.position; transform.rotation = Quaternion.LookRotation(toGreen); Or maybe we want to look at a spot between the red and green cubes. We can use the averaging points trick: // between 2 cubes: Vector3 aimPoint = (greenCube.position + redCube.position)/2; Vector3 toMiddle = aimPoint - transform.position; transform.rotation = Quaternion.LookRotation(toMiddle); If the 2 cubes are on opposite sides of us, the middle will be in a not very helpful place, but we can’t fix that here. Another common look-at trick is getting a “flat” spin, only on y. For example, we want to y-spin without leaning, to face someone who might be standing on a hill or in a valley. After we get the arrow, set y to 0: Vector3 toBunny = bunny.position - transform.position; toBunny.y=0; // now it’s a flat arrow transform.rotation = Quaternion.LookRotation(toBunny); I like this one since it feels like “how to rotate, but only on y”, which is hard. But it’s also “how to aim in a direction, but the direction can’t have a change in y”, which is easy. 4.2.3 Y, local X, local Z rotations Euler angles logic In our code we can create rotations by directly filling in the xyz 0-360 values, just as they would appear in the Inspector. To do that we’ll need more details on the exact way they work, and a plan for aiming in a certain direction. Rotations are made to go in order y,x,z. That seems funny, but it works great. To test, use the slide trick in the Inspector: move the cursor to the left of a number until it turns into a little slide icon, then click and drag left/right. The number will scroll up and down. How the axes act and combine: • The y-spin is your compass direction. The y-axis runs up/down, so spinning around y is like standing on the ground, turning to face North/South/East/West. Even if you spin x and z first, y is still a perfectly flat compass spin. • x-spins are elevation, like setting the angle to fire a cannon. They never change your compass direction – only angle you up/down. Normally we want to set x between -90 and 90. Past those – over the top elevation – aims you in the opposite compass direction that y says, which is legal but confusing. • z-spins never change the direction you point. They always just “roll” you. I like to imagine a flunky villain aiming a gun, playing around with holding it sideways or upside-down. • Because of this, it never matters what order you change xyz’s. If you side-roll on z, that has no effect on how x and y work. Changing them also keeps that z-roll, on whatever new direction we face. x is the same way – compass spinning y drags any previous x-spin with it. To face in any direction, set y to the compass heading, and angle x to the elevation. Or go in the other order. And then z is always just for fun. Once the cow looks at what it should, you can roll in onto either side or on its back without changing the aim. A review of the numbers: Because Unity thinks +z is forwards, y-spins have 0=north, 90=east, and so on. It goes clockwise: 0 y rotation -90 x rotation ^ top view ^ side view | | 270 <-- --> 90 --> 0 | | v v Because of the left-hand rule, x-rotations feel backwards. +x tilts downward and -x tilts up. Also because of the left-hand rule, z-spins are counter-clockwise. +z spins to the left. Some theory: xyz coordinate systems aren’t done yet until you say the order. Unity chose to use yxz, which is the best one, but xyz, or zxy, …are legal. There’s no way to make a system where x,y, and z all just spin you around the real axis. It always has to act like a main axis and a chain of 2 connected gears. Put another way, it’s impossible to say which direction (10,80,30) aims until you know the order. If it’s in an xzy coordinate system, it will point in a different direction than Unity would. That whole system – xyz 0-360 and an order to do them in – is now called Euler Angles. As in “the Inspector shows the Euler angle representation of the quaternion”. Euler angles in code Because we don’t store angles directly, we can’t just copy xyz degrees into quaternions. But we almost can. We have a function named Quaternion.Euler which translates our Euler angles into the proper quaternion values. This faces us to the right: transform.rotation = Quaternion.Euler(0, 90, 0); It’s the same as entering those values into the Inspector. All of these are. One thing to note is that it’s an equals – it snaps our rotation to that exact angle, no matter how we were facing before. We can’t add angles yet (it’s not +=, it’s more complicated than that). This aims us straight up, with out feet facing forwards. Notice how we needed to use -90 to go up. +90 would be facing down: transform.rotation = Quaternion.Euler(-90, 0, 0); This aims us left, up 45 degrees, and on our back: transform.rotation = Quaternion.Euler(-45, -90, 180); You need to sort of translate each value: y is clockwise with 0=north, so -90 y is facing West or Left. Negative x still goes up, so -45 x is 1/2-way tilted up. Then z has no affect on direction; 180 z merely rolls us on our back. Also notice a cool thing about x: it isn’t affected by compass facing. That’s nice. If we see (-45, yy, zz), we know the cow’s height angle is 45-up, no matter what yy is. You’re also allowed to use a Vector3, but it still needs to be an input to Quaternion.Euler. For example this aims South and randomly on our left or right side: Vector3 spinv = Vector3.zero; // stands for a (000) rotation spinv.y=180; // facing South spinv.z=90; if(Random.value<0.5f) spinv.z*=-1; // left or right side // spinv is either (0,180,90) or (0,180,-90) transform.rotation = Quaternion.Euler(spinv); It’s just a shortcut. Quaternion.Euler needs an xyz, which can be 3 numbers, or one Vector3. It can also be a little confusing: a “spin” can now be a real quaternion, a Vector3 holding Eulers, or a Vector3 holding a direction (to be used in a LookRotation). Using EulerAngles, we can have Update automatically change the angle. This makes a cow do forward summersaults (not backflips, since +x goes down, not up): public float xSpin=0; void Update() { // cow roller transform.rotation = Quaternion.Euler(xSpin, 0, 0); Here’s a trickier one. We want the cow to face right, rolling on its side (we’re making a game about roasting a cow). Facing right is 90 degrees on y, and rolling forwards is +x. But this is wrong: Quaternion.Euler(xSpin, 90, 0); The problem is that x spins in our personal forward. The cow is still doing summersaults. The rules about how x and z travel with y can fool us. We need to think about what rolls in our personal sideways, which is z: // correct right-facing barbecue spinning: public float zSpin=0; void Update() { // sideways barbecue-rolling cow transform.rotation = Quaternion.Euler(0, 90, zSpin); We can play with how x&y together aim us by having pairs of keys control them. A&D spins, W&S aims up/down: Vector3 aimEulers=Vector3.zero; // aiming forward void Update() { if(Input.GetKey("a")) aimEulers.y-=1; if(Input.GetKey("d")) aimEulers.y+=1; // notice how these go backwards, +1 is down if(Input.GetKey("s")) aimEulers.x+=1; if(Input.GetKey("w")) aimEulers.x-=1; transform.rotation = Quaternion.Euler(aimEulers); This is a common trick. We can’t just reach into the quaternion and adjust the xyz angles, since they don’t exist. So we keep our own copy and send it to our rotation each time. 4.2.4 Rotate around an arbitrary axis Sometimes we want to draw just any line through our origin, and spin ourself around that line. An easy-to-see example, this spins us around a diagonal /-line: public float degrees=0; void Update() { Vector3 spinLine = new Vector3(1,0,1); // flat north-East / transform.rotation = Quaternion.AngleAxis(degrees, spinLine); If you put this on a cube, it will rotate perfectly corner-over-corner diagonally. On a cow, it will do the same thing but it will look a lot stranger (the head will tuck left, then it will be upside-down facing right, then back to normal). After watching for a while, you should be able to “see” the diagonal line it spins around. A semi-real example is a y-spin with a small wobble. We’ll make a line almost straight up, leaning just a tad left, and spin around it: public float degrees=0; void Update() { Vector3 almostUp = new Vector3(-1,10,0); // almost up transform.rotation = Quaternion.AngleAxis(degrees, almostUp); Again, watching it should eventually help you see the almost-up line we’re spinning around. It might look nicer if the line we spin around is something we can visualize. This code snaps us to in-between the red and green cubes, then spins around that line: public Transform redCube, greenCube; float degrees=0; void Update() { transform.position = (redCube.position + greenCube.position)/2; Vector3 redToGreen = redCube.position - greenCube.position; transform.rotation = Quaternion.AngleAxis(degrees, redToGreen); While running, moving the cubes around will change the line. In a sense, AngleAxis is very simple. It’s a single spin around just one axis. If the axis is easy to visualize, AngleAxis looks like a simple spin. 4.2.5 FromToRotation This command is an improved version of LookRotation. Instead of aiming the front, we can aim any part of us along the arrow. For example, if we’re a cow, this aims our feet at the green cube: Vector3 toGreen = greenCube.position - transform.position; Quaternion feetToGreen = Quaternion.FromToRotation(Vector3.down, toGreen); transform.rotation = feetToGreen; The first input is the part of you to aim, as if you were standing and facing forward. Vector3.down always points your down-arrow at the target. Suppose we want to snub the green cube by looking almost at it. We can do that by aiming an almost front arrow to it: Vector3 almostFront=new Vector3(-1,0,20); Quaternion aimToGreen = Quaternion.FromToRotation(almostFront, toGreen); Since the aiming arrow was slanted a little bit left, and goes straight to green, our head will be facing a little bit to the right. We rarely need this. Suppose we always want to aim our up arrow somewhere. We’d use the parent trick to spin Up to Forward, then aim our Forward like a normal person. 4.2.6 LookAt Making us look at a point is so common that Unity has a shortcut command. You give it the point, and it automatically computes the offset, then uses that to aim us. transform.LookAt(greenCube.position); makes us look at the green cube. This command is so useful and common that it’s easy to forget that’s it’s only a shortcut for LookRotation: // this is what LookAt does: void LookAt(Vector3 pos) { // pos is a point, not an offset Vector3 dir = pos - transform.position; // get the offset Quaternion r = Quaternion.LookRotation(r); transform.rotation = r; The drawback is that it automatically aims us. For fun, here’s code to fake LookRotation using LookAt. We do a LookAt, read our current rotation as the answer, then reset our rotation to how it was. It’s hilariously Rube-Goldburgy, but it works: Quaternion savedOriginalSpin = transform.rotation; Quaternion greenQ = transform.rotation; // <- the answer transform.rotation = savedOriginalSpin; // restore my rotation LookAt works with any object or any point: // the red cube rotates to face me: // green cube faces red cube: // look at spot in front of red cube: transform.LookAt(redCube + redCube.forward*6); // look at the point (10,9,32): transform.LookAt(new Vector3(10,9,32)); The last one is another example of how to tell a point from an offset. (10,9,32) could be an arrow – it would be forward, tilted right and a tad up. But LookAt expects a position, so (10,9,32) counts as a position here. 4.2.7 Directions vs. rotations, LookAt z-spin A direction arrow is almost a rotation, but not quite. A rotation is made of the direction plus the “free” z-spin. This is the thing where z always goes last and merely rolls you around the direction arrow. A cow aimed at a farmer could be head up, feet up, lying in its side, or so on. Or, again, the goon playing with the cool sideways gun-aiming technique. A way to see this is that rotations are on 3D models. They have textures, and feet sticking out. We can see them roll. But directions are on arrows. It makes no sense to roll an arrow around itself. LookRotation turns a direction arrow into a rotation, but since directions don’t have the extra z-spin, it cheats. It makes-up a z-spin of zero. In other words, LookRotation(v) doesn’t give you the rotation to face that way. It gives you one possible rotation, out of many. Later on we’ll be able to turn a rotation into a direction. Going from a direction to a rotation then back is safe. It adds a z-rotation of 0, then takes it off. You get back the same arrow you started with. Going from a rotation to a direction then back destroys your z-roll. You’ll still be pointed the same way, but your y will always be aimed up. LookAt and LookDirection have an option to set your z-spin, but in an odd way. You tell it which way you’d like your +y to point. It’s usually thought of as which way your head points. The z-spin will be set as best it can. These commands will aim you at green, lying on your side with your head pointed right: transform.rotation = LookRotation(toGreen, Vector3.right); transform.LookAt(greenCube.position, Vector3.right); This shows why we like the “head direction” method. Usually we don’t know the z-degrees we want, but know how the top should point. In fact, we couldn’t even do this by giving a z-roll. To keep its head facing right, the cow needs to switch sides when aiming forwards vs. backwards. The thing to remember is this is only a z-roll. The first input is still the real way we aim. The Vector3.right means to spin on z to the most right-aiming spin we can get – pick the best out of the 0-360 z-spin. 4.3 eulerAngles and round trips This is a “don’t do this” section. It’s explaining why, if you want to change your aim with moving xyz degrees, you need to keep your own copy. The main issue is that when you give the system xyz Euler angles, you won’t get the same ones back. You’re allowed to ask for the Euler angles with eulerAngles. It gives you a Vector3 with the 3 spins: transform.rotation = Quaterion.Euler(0,30,120); print( transform.rotation.eulerAngles ); // 0,30,120 transform.rotation = Quaterion.Euler(-90, 0, -20); print( transform.rotation.eulerAngles ); // 270, 0, 340 The first one gave us the same numbers back, but the second one “fixed” them (for real it recomputed them differently, which feels like it fixed them). Unity prefers 0-360 for y and z. For x it uses only 0-90 and 270-360. The second range is for negative spins; -20 is stored as 340. It cuts out 90-270 since those are over-the-top x-spins. If you need to change and move xyz Eulers, the best way is keeping your own copies, like the aiming with ASWD keys example. This spins y from 90 down to -90 over and over again: Vector3 vSpin=Vector3.zero; // no rotation -- facing forward void Update() { vSpin.y-=1; // from right to left, then resetting if(vSpin.y<-90) vSpin.y=90; The system converts negative y’s into 270-360, but we don’t care since we never look at transform.eulerAngles. vSpin is like our master copy, and its y is fully controlled by us. 4.4 Details and math These sections are all comments, background and other things that are nice to see, but aren’t really important. 4.4.1 Real Quaternion values If you look inside a quaternion, there’s an x, y, z and w. I already wrote that they aren’t degrees. If you know trig you might think they’re radians – nope. They’re totally different things called x, y and z (and w). If you have an irresistible urge to look at the actual numbers in a quaternion, it’s a simple 4-float struct. This code would show them: public float x,y,z,w; // will be copied from the quaternion void Update() { Quaternion q = transform.rotation; x=q.x; y=q.y; z=q.z; w=q.w; For 50 on z, (0,0,50) this gives (0.4, 0, 0, 0.9). That means turning on z uses the x and w slots, with numbers that make no sense. Some more: rotation quaternion 0,0,0 0, 0, 0, 1 0,90,0 0, 0.71, 0, 0.71 90,0,0 0.71, 0, 0, 0.71 180,0,0 1, 0, 0, 0 90,90,0 0.5, 0.5, -0.5, 0.5 There doesn’t seem to be much of a pattern. Obviously there is. You can find the formulas. But there’s nothing useful you can do with those numbers that isn’t already in LookRotation and so on. If they’re so unhelpful, why aren’t they private? That’s because quaternions are real math, and actual mathematicians using Unity might want to directly play with x,y,z&w to do something obscure. 4.4.2 Multiple ways to write an angle A crazy thing about xyz degrees is there are 2 legitimate ways to write every angle. I’m not talking about +/-360 tricks. There’s another way involving an up-and-over on x. Whenever x goes past 90, which is aiming backwards what your y says, you can make the same angle by flipping y by 180 and recomputing x. Here we can rewrite a 135 degree x-rotation as a 45-degree going the other way: A A \ x=135 x=45 \ \ \ o -> y=0 y=180 <- o side view That isn’t quite right, since the first way put us on our back, whereas the second keep us feet-down. Flipping z by 180 is the rest of the trick. The two identical rotations in the picture are (-135,0,0) and (-45,180,180). The first is shorter to read, but the second makes it more obvious that we’re actually facing South and are upside down. Every rotation can be rewritten like that (add 180 to x and z, make x=180-x). Every rotation has a version with x from -90 to 90, and another with it past 90. Fun fact: (0,0,0) is the same as In case you were wondering, that’s very strange, and we don’t like it. But that’s the way it is. The net effect is that moving and hand-checking Euler angles is even more of a giant mess. If all you do is aim mostly forward and keep x and y both 90 or less, things are fine. But with free-movement, anything like if(q.eulerAngles.y>180) is doomed to failure. 4.4.3 How the Inspector shows rotations The Inspector shows a different version of the rotation values. It’s not the same as the ones eulerAngles computes in the code. Unity saves the starting Inspector values, and keeps what it displays close, using +/-360. It thinks you will like that. Suppose you start a rotation at (0,0,0) – those are the values entered into the Inspector. While the program runs, the Inspector adjusts everything to between -180 and 180. If a value goes to 181, which is fine inside the program, the Inspector shows it as -179. If you started the x rotation at 90, the range for x would be -90 and 270. When your program sees 271, the Inspector shows -89. It seems like a huge problem, but hand-checking Euler angles is already such a mess that this doesn’t make it much worse. And if you keep your own xyz rotation variable, it will be fine. 4.4.4 Moving with euler xyz problems We use quaternions because Euler angles have problems. But what are these problems? As we’ve seen, Euler angles are pretty good for setting a facing. So far that’s all we’ve done, so they seem fine. It turns out that Euler’s are terrible at moving between angles. Basically y&x based rotation movement always works like an army tank aiming its gun. When targets are near the ground, we can track them pretty well. We rotate the turret with y, x is and up/down for hills and valleys. Anything higher up will give us more trouble. It takes less motion for the same degrees on y, making it easer to outrun our turning. Suppose a flying saucer zooms up directly overhead and we angle x up to 90, tracking it. It can go a little sideways and be safe. Even though the tip of our cannon only has to move a tiny bit sideways, we’ll need to spin our turret a full 90 degrees to be able to tilt left. If you’re making that game, you should use Euler Angles for rotation. Use the trick where you keep them in your own Vector3. It will give the weird gear-based gun rotation you want. The problem happens because most things aren’t army tank guns, and look terrible when the work that way. Usually we’re looking at the red cow and want to smoothly turn to face the blue one. We don’t want the camera to have a little extra curve, or sometimes go really slow with an extra tight funny little spin. Unlike a 2-gear Euler system, quaternions don’t have any good or bad spots. They don’t have any problem areas like when a cannon aims mostly up or down. Going from any spin to any other, quaternions always gives a nice straight angle, at the same speed If you’ve seen the term Gimbal Lock (the Flying Saucer example has it), quaternions don’t have that. 4.4.5 Quaternion setting commands Our commands all compute a quaternion, like: q1 = Quaternion.LookRotation(v);. They have alternate versions where a quaternion sets itself. I’ve never used them, and they don’t do anything new, but seeing the alternate form is a fun review. Here’s the old and new version of LookRotation: Quaternion q; q=Quaternion.LookRotation(toGreen); // old way q.SetLookRotation(toGreen); // new way The second one computes the rotation and puts it directly into q. It seems more efficient, but you can’t use it in a formula. Not having to write q= isn’t that much of an advantage. You could write transform.rotation.SetLookRotation(toGreen); since your rotation is a quaternion. But no one ever does. transform.LookAt(greenCube); is a better shortcut. Recall AngleAxis spins you around any line. Spinning around Vector3.forward rolls you sideways. Both of these tip you 5 degrees left: transform.rotation = Quaternion.AngleAxis(5, Vector3.foward); // old transform.rotation.ToAngleAxis(5, Vector3.forward); // new If our cow has a unicorn horn sticking up at 45 degrees, these will aim it at the green cube: Vector3 hornAngle=new Vector3(0,1,1); // 45 up and forward Quaternion q; q = Quaternion.FromToRotation(hornAngle, toGreen); // old q.SetFromToRotation(hornAngle, toGreen); transform.rotation = q; The strangest one is the shortcut for setting xyz Euler Angles. It doesn’t fit the pattern. It looks like an assignment statement, but it’s a disguised function call: q=Quaternion.Euler(0, 90, 10); // normal q.eulerAngles = new Vector3(0,90,10); // alternate Again, quaternions don’t actually save the Euler angles. The second command is just running the first one. There are also 3 shortcuts for FromToRotation – aiming a different part of you somewhere: transform.forward = toRedCube; // same as LookRotation transform.up = toRedCube; // aim your back that way transform.right = toRedCube; // aim our right side // same as: transform.rotation = Quaternion.FromToRotation(Vector3.right, toRedCube); These are more tricks with disguised functions. They actually call FromToRotation to do the work. We can use the backwards arrow trick to assign to the other 3: transform.up = -toRedCube; points your feet at the red cube, by pointing your head in the exact opposite direction.
{"url":"http://taxesforcatses.com/vectorW/vecMathAllch5.html","timestamp":"2024-11-15T03:49:57Z","content_type":"text/html","content_length":"65487","record_id":"<urn:uuid:c3b04545-7c8f-4f7e-975a-b504df6276ad>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00601.warc.gz"}
Computational Graph A computational graph is a form of representing mathematical expressions by using directed graphs. As shown in the figure, the neural network structure can be regarded as a computational graph consisting of Tensor data and Tensor operations as nodes, so constructing a neural network and training it by using a deep learning framework is the process of constructing a computational graph and executing it. The current support for computational graphs in the industry framework is divided into two modes: dynamic graphs are executed by interpretation, with dynamic syntax affinity and flexible expression, and static graphs are executed by using JIT (just in time) compilation optimization. There are more restrictions on the syntax for using static syntax. MindSpore supports both computational graph modes with a unified API expression, using the same API in both modes and a unified automatic differentiation mechanism to achieve the unification of dynamic and static graphs. In the following, we introduce each of the two computational graph modes of MindSpore. Dynamic Graphs Dynamic graphs are characterized by the fact that the construction and computation of a computational graph occur simultaneously (Define by run), which is consistent with Python interpreted execution. When a Tensor is defined in the computational graph, its value is calculated and determined, so it is easier to debug the model and get the value of intermediate results in real time. However, the need for all nodes to be saved makes it difficult to optimize the entire computational graph. In MindSpore, the dynamic graph pattern is also known as the PyNative pattern. Due to interpreted execution of dynamic graphs, it is recommended to use dynamic graph mode for debugging during script development and network process debugging. Default computational graph mode in MindSpore is PyNative mode. If you need to manually control the framework to adopt PyNative mode, you can configure it with the following code: import mindspore as ms In PyNative mode, the underlying operaors corresponding to all computation nodes is executed in a single Kernel, so that printing and debugging of computation results can be done arbitrarily, e.g. import numpy as np from mindspore import nn from mindspore import ops from mindspore import Tensor, Parameter class Network(nn.Cell): def __init__(self): self.w = Parameter(Tensor(np.random.randn(5, 3), ms.float32), name='w') # weight self.b = Parameter(Tensor(np.random.randn(3,), ms.float32), name='b') # bias def construct(self, x): out = ops.matmul(x, self.w) print('matmul: ', out) out = out + self.b print('add bias: ', out) return out model = Network() We simply define a Tensor with a shape of (5,) as input and observe the output. You can see that the print statement inserted in the construct method prints out the intermediate results in real time. x = ops.ones(5, ms.float32) # input tensor out = model(x) print('out: ', out) matmul: [-1.8809001 2.0400267 0.32370526] add bias: [-1.6770952 1.5087128 0.15726662] out: [-1.6770952 1.5087128 0.15726662] Static Graphs Compared to dynamic graphs, static graphs separate the construction of the computational graph from the actual computation (Define and run). In the build phase, the original computational graph is optimized and tuned according to the complete computational flow, and compiled to obtain a more memory-efficient and less computationally intensive computational graph. Since the structure of the graph does not change after compilation, it is called a “static graph”. In the calculation phase, the results are obtained by executing the compiled calculation graph based on the input data. Compared with dynamic graphs, static graphs have a richer grasp of global information and can be optimized more, but their intermediate processes are black boxes for users, and they cannot get the intermediate calculation results in real time like dynamic graphs. In MindSpore, the static graph mode is also known as Graph mode. In Graph mode, based on graph optimization, computational graph whole graph sink and other techniques, the compiler can perform global optimization for the graph and obtain better performance, so it is more suitable for scenarios where the network is fixed and high performance is required. If you need to manually control the framework to adopt Graph mode, you can configure it with the following code: Graph Compilation Based on Source Code Conversion In static graph mode, MindSpore converts Python source code into Intermediate Representation IR (IR) by means of source code conversion, and based on this, optimizes the IR graph, and finally executes the optimized graph on hardware devices. MindSpore uses a functional IR based on graph representation, called MindIR. For details, see Intermediate Representation MindIR. MindSpore static graph execution process actually consists of two steps, corresponding to the Define and Run phases of the static graph. However, in practice, it is not sensed when the instantiated Cell object is called. MindSpore encapsulates both phases in the __call__ method of the Cell, so the actual calling process is as follows: model(inputs) = model.compile(inputs) + model.construct(inputs), where model is the instantiated Cell object. We call the compile method explicitly for the following example: model = Network() out = model(x) print('out: ', out) out: [-0.26551223 3.0243678 0.706525 ] Static Graph Syntax In Graph mode, Python code is not executed by the Python interpreter, but the code is compiled into a static computational graph, and then the static computational graph is executed. Therefore, the compiler cannot support the full amount of Python syntax. MindSpore static graph compiler maintains a subset of Python common syntax to support the construction and training of neural networks. For details, refer to Static graph syntax support. Static Graph Control Flow In PyNative mode, MindSpore fully supports flow control statements in Python native syntax. In Graph mode, MindSpore is compiled with performance optimizations, so there are some special constraints on the use of flow control statements when defining networks, but the rest remains consistent with the native Python syntax. For details, refer to flow control statements. Just-in-time Compilation Usually, due to the flexibility of dynamic graphs, we choose to use the PyNative model for free neural network construction to achieve model innovation and optimization. But when performance acceleration is needed, we need to accelerate the neural network in part or as a whole. At this point, switching directly to Graph mode is an easy option, but the limitations of static graphs on syntax and control flow make it impossible to convert from dynamic to static graphs senselessly. For this purpose, MindSpore provides the jit decorator, which can make Python functions or Python-class member functions compiled into computational graphs, and improves the running speed by graph optimization and other techniques. At this point we can simply accelerate the graph compilation for the modules we want to optimize for performance, while the rest of the model, we still use the interpreted execution method, without losing the flexibility of dynamic graphs. Cell Module Compilation When we need to speed up a part of the neural network, we can use the jit modifier directly on the construct method. The module is automatically compiled to a static graph when the instantiated object is called. The example is as follows: import mindspore as ms from mindspore import nn class Network(nn.Cell): def __init__(self): self.fc = nn.Dense(10, 1) def construct(self, x): return self.fc(x) Function Compilation Similar to the Cell module compilation, when you need to compile acceleration for certain operations of Tensor, you can use jit modifier on its defined function. The module is automatically compiled as a static graph when the function is called. An example is as follows: Based on the functional auto-differentiation feature of MindSpore, it is recommended to use the function compilation method for JIT compilation acceleration of Tensor operations. def mul(x, y): return x * y Whole-graph Compilation MindSpore supports compiling and optimizing the forward computation, back propagation, and gradient optimization update of neural network training into one computational graph, which is called whole graph compilation. At this point, it only needs to construct the neural network training logic as a function and use the jit modifier on the function to achieve whole-graph compilation. The following is an example by using a simple fully connected network: network = nn.Dense(10, 1) loss_fn = nn.BCELoss() optimizer = nn.Adam(network.trainable_params(), 0.01) def forward_fn(data, label): logits = network(data) loss = loss_fn(logits, label) return loss grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters) def train_step(data, label): loss, grads = grad_fn(data, label) return loss As shown in the above code, after encapsulating the neural network forward execution and loss function as forward_fn, the function transformation is executed to obtain the gradient calculation function. Then the gradient calculation function and optimizer calls are encapsulated as train_step function and modified with jit. When the train_step function is called, the static graph is compiled, the whole graph is obtained and executed. In addition to using modifiers, jit methods can also be called by using function transformations, as in the following example: train_step = ms.jit(train_step)
{"url":"https://www.mindspore.cn/tutorials/en/r2.0.0-alpha/advanced/compute_graph.html","timestamp":"2024-11-11T06:26:43Z","content_type":"text/html","content_length":"31128","record_id":"<urn:uuid:c87f7b19-c74b-41b8-920a-644f7bf30c6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00684.warc.gz"}
Asia-Pacific Symposium for Lattice Field Theory (APLAT 2020) Hadron Spectroscopy and Interactions: Session 4-1 C • Chuan Liu (Peking University) Hadron Spectroscopy and Interactions: Session 4-2 B • Liuming Liu (Chinese Academy of Sciences) Hadron Spectroscopy and Interactions: Session 6-2 B • Derek Leinweber (University of Adelaide) Hadron Spectroscopy and Interactions: Session 7-1 C • Marc Wagner (Goethe University Frankfurt )
{"url":"https://conference-indico.kek.jp/event/113/sessions/770/","timestamp":"2024-11-04T02:46:17Z","content_type":"text/html","content_length":"135834","record_id":"<urn:uuid:72a6524e-be83-449e-bea3-090fab5439e5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00795.warc.gz"}
Create Regression Models with ARMA Errors Default Regression Model with ARMA Errors This example shows how to apply the shorthand regARIMA(p,D,q) syntax to specify the regression model with ARMA errors. Specify the default regression model with ARMA(3,2) errors: $\begin{array}{l}{y}_{t}=c+{X}_{t}\beta +{u}_{t}\\ {u}_{t}={a}_{1}{u}_{t-1}+{a}_{2}{u}_{t-2}+{a}_{3}{u}_{t-3}+{\epsilon }_{t}+{b}_{1}{\epsilon }_{t-1}+{b}_{2}{\epsilon }_{t-2}.\end{array}$ Mdl = regARIMA with properties: Description: "ARMA(3,2) Error Model (Gaussian Distribution)" SeriesName: "Y" Distribution: Name = "Gaussian" Intercept: NaN Beta: [1×0] P: 3 Q: 2 AR: {NaN NaN NaN} at lags [1 2 3] SAR: {} MA: {NaN NaN} at lags [1 2] SMA: {} Variance: NaN The software sets each parameter to NaN, and the innovation distribution to Gaussian. The AR coefficients are at lags 1 through 3, and the MA coefficients are at lags 1 and 2. Pass Mdl into estimate with data to estimate the parameters set to NaN. The regARIMA model sets Beta to [] and does not display it. If you pass a matrix of predictors (${X}_{t}$) into estimate, then estimate estimates Beta. The estimate function infers the number of regression coefficients in Beta from the number of columns in ${X}_{t}$. Tasks such as simulation and forecasting using simulate and forecast do not accept models with at least one NaN for a parameter value. Use dot notation to modify parameter values. ARMA Error Model Without an Intercept This example shows how to specify a regression model with ARMA errors without a regression intercept. Specify the default regression model with ARMA(3,2) errors: $\begin{array}{l}{y}_{t}={X}_{t}\beta +{u}_{t}\\ {u}_{t}={a}_{1}{u}_{t-1}+{a}_{2}{u}_{t-2}+{a}_{3}{u}_{t-3}+{\epsilon }_{t}+{b}_{1}{\epsilon }_{t-1}+{b}_{2}{\epsilon }_{t-2}.\end{array}$ Mdl = regARIMA('ARLags',1:3,'MALags',1:2,'Intercept',0) Mdl = regARIMA with properties: Description: "ARMA(3,2) Error Model (Gaussian Distribution)" SeriesName: "Y" Distribution: Name = "Gaussian" Intercept: 0 Beta: [1×0] P: 3 Q: 2 AR: {NaN NaN NaN} at lags [1 2 3] SAR: {} MA: {NaN NaN} at lags [1 2] SMA: {} Variance: NaN The software sets Intercept to 0, but all other parameters in Mdl are NaN values by default. Since Intercept is not a NaN, it is an equality constraint during estimation. In other words, if you pass Mdl and data into estimate, then estimate sets Intercept to 0 during estimation. You can modify the properties of Mdl using dot notation. ARMA Error Model with Nonconsecutive Lags This example shows how to specify a regression model with ARMA errors, where the nonzero ARMA terms are at nonconsecutive lags. Specify the regression model with ARMA(8,4) errors: $\begin{array}{l}{y}_{t}=c+{X}_{t}\beta +{u}_{t}\\ {u}_{t}={a}_{1}{u}_{1}+{a}_{4}{u}_{4}+{a}_{8}{u}_{8}+{\epsilon }_{t}+{b}_{1}{\epsilon }_{t-1}+{b}_{4}{\epsilon }_{t-4}.\end{array}$ Mdl = regARIMA('ARLags',[1,4,8],'MALags',[1,4]) Mdl = regARIMA with properties: Description: "ARMA(8,4) Error Model (Gaussian Distribution)" SeriesName: "Y" Distribution: Name = "Gaussian" Intercept: NaN Beta: [1×0] P: 8 Q: 4 AR: {NaN NaN NaN} at lags [1 4 8] SAR: {} MA: {NaN NaN} at lags [1 4] SMA: {} Variance: NaN The AR coefficients are at lags 1, 4, and 8, and the MA coefficients are at lags 1 and 4. The software sets the interim lags to 0. Pass Mdl and data into estimate. The software estimates all parameters that have the value NaN. Then estimate holds all interim lag coefficients to 0 during estimation. Known Parameter Values for a Regression Model with ARMA Errors This example shows how to specify values for all parameters of a regression model with ARMA errors. Specify the regression model with ARMA(3,2) errors: $\begin{array}{l}{y}_{t}={X}_{t}\left[\begin{array}{l}2.5\\ -0.6\end{array}\right]+{u}_{t}\\ {u}_{t}=0.7{u}_{t-1}-0.3{u}_{t-2}+0.1{u}_{t-3}+{\epsilon }_{t}+0.5{\epsilon }_{t-1}+0.2{\epsilon }_{t-2},\ where ${\epsilon }_{t}$ is Gaussian with unit variance. Mdl = regARIMA('Intercept',0,'Beta',[2.5; -0.6],... 'AR',{0.7, -0.3, 0.1},'MA',{0.5, 0.2},'Variance',1) Mdl = regARIMA with properties: Description: "Regression with ARMA(3,2) Error Model (Gaussian Distribution)" SeriesName: "Y" Distribution: Name = "Gaussian" Intercept: 0 Beta: [2.5 -0.6] P: 3 Q: 2 AR: {0.7 -0.3 0.1} at lags [1 2 3] SAR: {} MA: {0.5 0.2} at lags [1 2] SMA: {} Variance: 1 The parameters in Mdl do not contain NaN values, and therefore there is no need to estimate Mdl using estimate. However, you can simulate or forecast responses from Mdl using simulate or forecast. Regression Model with ARMA Errors and t Innovations This example shows how to set the innovation distribution of a regression model with ARMA errors to a t distribution. Specify the regression model with ARMA(3,2) errors: $\begin{array}{l}{y}_{t}={X}_{t}\left[\begin{array}{l}2.5\\ -0.6\end{array}\right]+{u}_{t}\\ {u}_{t}=0.7{u}_{t-1}-0.3{u}_{t-2}+0.1{u}_{t-3}+{\epsilon }_{t}+0.5{\epsilon }_{t-1}+0.2{\epsilon }_{t-2},\ where ${\epsilon }_{t}$ has a t distribution with the default degrees of freedom and unit variance. Mdl = regARIMA('Intercept',0,'Beta',[2.5; -0.6],... 'AR',{0.7, -0.3, 0.1},'MA',{0.5, 0.2},'Variance',1,... Mdl = regARIMA with properties: Description: "Regression with ARMA(3,2) Error Model (t Distribution)" SeriesName: "Y" Distribution: Name = "t", DoF = NaN Intercept: 0 Beta: [2.5 -0.6] P: 3 Q: 2 AR: {0.7 -0.3 0.1} at lags [1 2 3] SAR: {} MA: {0.5 0.2} at lags [1 2] SMA: {} Variance: 1 The default degrees of freedom is NaN. If you don't know the degrees of freedom, then you can estimate it by passing Mdl and the data to estimate. Specify a ${t}_{5}$ distribution. Mdl.Distribution = struct('Name','t','DoF',5) Mdl = regARIMA with properties: Description: "Regression with ARMA(3,2) Error Model (t Distribution)" SeriesName: "Y" Distribution: Name = "t", DoF = 5 Intercept: 0 Beta: [2.5 -0.6] P: 3 Q: 2 AR: {0.7 -0.3 0.1} at lags [1 2 3] SAR: {} MA: {0.5 0.2} at lags [1 2] SMA: {} Variance: 1 You can simulate or forecast responses from Mdl using simulate or forecast because Mdl is completely specified. In applications, such as simulation, the software normalizes the random t innovations. In other words, Variance overrides the theoretical variance of the t random variable (which is DoF/(DoF - 2)), but preserves the kurtosis of the distribution. Specify Regression Model with ARMA Errors Using Econometric Modeler App In the Econometric Modeler app, you can specify the predictor variables in the regression component, and the error model lag structure and innovation distribution of a regression model with ARMA(p,q) errors, by following these steps. All specified coefficients are unknown but estimable parameters. 1. At the command line, open the Econometric Modeler app. Alternatively, open the app from the apps gallery (see Econometric Modeler). 2. In the Time Series pane, select the response time series to which the model will be fit. 3. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 4. In the models gallery, in the Regression Models section, click RegARMA. The RegARMA Model Parameters dialog box appears. 5. Choose the error model lag structure. To specify a regression model with ARMA(p,q) errors that includes all AR lags from 1 through p and all MA lags from 1 through q, use the Lag Order tab. For the flexibility to specify the inclusion of particular lags, use the Lag Vector tab. For more details, see Specifying Univariate Lag Operator Polynomials Interactively. Regardless of the tab you use, you can verify the model form by inspecting the equation in the Model Equation section. 6. In the Predictors section, choose at least one predictor variable by selecting the Include? check box for the time series. For example, suppose you are working with the Data_USEconModel.mat data set and its variables are listed in the Time Series pane. • To specify a regression model with AR(3) errors for the unemployment rate containing all consecutive AR lags from 1 through its order, Gaussian-distributed innovations, and the predictor variables COE, CPIAUCSL, FEDFUNDS, and GDP: 1. In the Time Series pane, select the UNRATE time series. 2. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 3. In the models gallery, in the Regression Models section, click RegARMA. 4. In the regARMA Model Parameters dialog box, on the Lag Order tab, set Autoregressive Order to 3. 5. In the Predictors section, select the Include? check box for the COE, CPIAUCSL, FEDFUNDS, and GDP time series. • To specify a regression model with MA(2) errors for the unemployment rate containing all MA lags from 1 through its order, Gaussian-distributed innovations, and the predictor variables COE and 1. In the Time Series pane, select the UNRATE time series. 2. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 3. In the models gallery, in the Regression Models section, click RegARMA. 4. In the regARMA Model Parameters dialog box, on the Lag Order tab, set Moving Average Order to 2. 5. In the Predictors section, select the Include? check box for the COE and CPIAUCSL time series. • To specify the regression model with ARMA(8,4) errors for the unemployment rate containing nonconsecutive lags $\begin{array}{c}{y}_{t}=c+{\beta }_{1}CO{E}_{t}+{\beta }_{2}CPIAUCS{L}_{t}+{u}_{t}\\ \left(1-{\alpha }_{1}L-{\alpha }_{4}{L}^{4}-{\alpha }_{8}{L}^{8}\right){u}_{t}=\left(1+{b}_{1}L+{b}_{4}{L}^ {4}\right){\epsilon }_{t}\end{array},$ where ε[t] is a series of IID Gaussian innovations: 1. In the Time Series pane, select the UNRATE time series. 2. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 3. In the models gallery, in the Regression Models section, click RegARMA. 4. In the regARMA Model Parameters dialog box, click the Lag Vector tab: 1. In the Autoregressive Lags box, type 1 4 8. 2. In the Moving Average Lags box, type 1 4. 5. In the Predictors section, select the Include? check box for the COE and CPIAUCSL time series. • To specify a regression model with ARMA(3,2) errors for the unemployment rate containing all consecutive AR and MA lags through their respective orders, the predictor variables COE and CPIAUCSL, and t-distributed innovations: 1. In the Time Series pane, select the UNRATE time series. 2. On the Econometric Modeler tab, in the Models section, click the arrow to display the models gallery. 3. In the models gallery, in the Regression Models section, click RegARMA. 4. In the regARMA Model Parameters dialog box, click the Lag Order tab: 1. Set Autoregressive Order to 3. 2. Set Moving Average Order to 2. 5. Click the Innovation Distribution button, then select t. 6. In the Predictors section, select the Include? check box for the COE and CPIAUCSL time series. The degrees of freedom parameter of the t distribution is an unknown but estimable parameter. After you specify a model, click Estimate to estimate all unknown parameters in the model. See Also Related Examples More About
{"url":"https://it.mathworks.com/help/econ/specifications-for-regression-models-with-arma-errors.html","timestamp":"2024-11-11T08:53:58Z","content_type":"text/html","content_length":"115531","record_id":"<urn:uuid:a833bdf1-3c36-4b20-a7d6-9a405a9b9636>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00010.warc.gz"}
A math problem solving strategy that's proven to work I said before , there's a tough set of requirements that any problem solving strategy that's worth teaching to students as a singular go-to overarching framework must satisfy. I have yet to see a simple problem solving strategy satisfy all of those requirements, but here below is something that's been proven to work [1]. The Formulaic Action Oriented Problem Solving Strategy It's "formulaic" because the of the strategy is centered around using formulas to derive numeric solutions. But also, this strategy is "formulaic" because it's step-by-step, bringing students from start to finish of solving a It's "action oriented" because it critically answers for students the question of "what should I do?" It's a "strategy" and not a bag of techniques, and so perhaps it's better called a "framework". But only because so many other people call as "strategies" the techniques that one might use during problem solving. Certainly problem solving require techniques, but just as important is learning terminology. Terminology and terms are definitions. Terms have meaning that is denotational. Technique is what to do, how and when. Steps of the action oriented, formula centered problem solving strategy (1) diagram the problem or situation, (2) label the diagram, (3) choose or write a formula, (4) fill the formula in with numbers, (5) solve for the unknown. Notice the first two steps, the of this strategy, depend on knowing terminology. The last three steps, the , depend on knowing how to work with formulas. Working with formulas is the most advanced form of arithmetic techniques. It is also the most foundational form of algebraic techniques. Therefore knowing what to do with formulas is an essential skill, and set of techniques, to learn: especially to bridge students from arithmetic to algebra. Thus this strategy's back-end could be equivalently used with arithmetic, formula, or algebra based math, depending on the student's mathematical maturity. Also equivalently, though it ought to be rarely (especially in grade school), the back-end here could be swapped out to use algorithmic based math as well --- and I really mean algorithmic as defined technically in computing science, not just any step-by-step non-mathematical short-cut that someone without higher mathematical maturity concocts. The formula centre in that problem solving strategy's back-end The formula centre consists of the last three steps. Recall they are: (3) choose or write an appropriate formula (4) Then filling into the formula the appropriate numbers to replace the variables in that formula. (5) And finally, appropriately solving for the unknown variable. The simpler the formula, the more it involves just arithmetic skills: including the arithmetic technique of number substitution which should've been previously learned in junior high school, and similarly also the arithmetic technique of balancing equations which is often confused or misunderstood as algebra. The more complex the formula, the more it needs algebraic techniques: especially as you need better knowledge of the algebraic technique of cancellation for canceling terms, factors, and fractions until getting the formula into the desired form to solve for the unknown. This problem solving technique works broadly, e.g. from topics like (unit conversion, areas and volume), to It even works for topics like roots and powers --- but then that involves some more advanced skills like the algebraic technique of formula substitution : replacing a whole, or part, of an equation with another expression, kind of like instead of filling an equation in with numbers, you fill it in with expressions. It also works for Factors and products --- but students would need to learn either the abstract area model for things like distribution, factoring out GCD, binomial decomposition, etc. But all these are just applications of the basic algebraic techniques used in the Formulaic Action Oriented Problem Solving Strategy! It also works for problem solving with Relations and functions. Of course, functions are just special formulas, and Graph Sketching and Interpretation techniques would be needed. Linear functions . And transforming between the 3 forms of linear equations with algebra mainly requires the algebraic technique of cancellation: cancelling terms, factors, and fractions until getting desired form. Linear systems. Linear Systems can be solved using Graph Sketching and Interpretation techniques , and also with algebra ( algebraic techniques of: formula substitution , and formula elimination) Let's see: that's just 3 foundational algebraic techniques and two visual (diagrammatic, graphical) techniques! All working within one problem solving strategy. And the many variations of those. And a lot of terminology! Turns out Math isn't that hard: the above covers a large number of basic fields of math that students have to learn, and we counted up just 5 foundational techniques and one problem solving strategy. [1] I would know because I've seen it used by some very strong and some very weak math students, all to great success! In fact, the above was written in around 2014 August after such observations.
{"url":"https://blog.carsoncheng.ca/2017/07/a-math-problem-solving-strategy-thats.html","timestamp":"2024-11-06T18:00:15Z","content_type":"application/xhtml+xml","content_length":"61916","record_id":"<urn:uuid:3d9550f1-865f-43b4-b798-0ab07c950ed9>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00570.warc.gz"}
teaching greater than and less than 2nd grade Greater Than Less Than Worksheets - Math-Aids.Com Less Than, Greater Than, Equal to- 2 Digit Numbers Math Worksheet Greater than Less than Worksheet - Comparing Numbers to 100 Greater than less than or equal worksheets for grade 2-exercise 18 ... Greater than Less than Worksheet - Comparing Numbers to 100 Greater Than, Less Than, Equal To? #2 | Worksheet | Education.com 2nd Grade Math Worksheets - Place Value - Comparing Numbers ... Second Grade Greater Than and Less Than Activity Sheet Greater Than Less Than Free - 10 Free PDF Printables | Printablee Greater than Less than Worksheets - Math Monks Greater Than Less Than Worksheets - Math-Aids.Com Equal to Greater than Less than Worksheets | Free Printables Comparing Numbers Worksheets | K5 Learning Less Than, Greater Than Worksheet for 1st - 2nd Grade | Lesson Planet Greater Than, Less Than Greater Than, Less Than, Equal To FREEBIE Greater Than and Less Than Activity Sheets (teacher made) Greater than Less than or Equal Worksheets for Grade 2 – Exercise ... 9 Greater Than or Less Than Resources - Teach Junkie Greater than Less than Worksheets - Math Monks Greater Than Less Than Worksheets| Download Free Printables For Kids 2nd Grade Math Worksheets: Greater Than, Less Than, Equal To The Best of Teacher Entrepreneurs: FREE MATH LESSON - “Greater ... Comparing Numbers - Greater Than Less Than Fun Greater Than Less Than Activities - Primary Theme Park How To Compare Numbers - Greater Than, Less Than. First Grade Math Lesson KS2 Greater Than and Less Than Worksheets Symbols Teaching Greater Than, Less Than, Equal To Math Center 1st and 2nd Grade ... 9 Greater Than or Less Than Resources - Teach Junkie Using Greater Than - Less Than Symbols - Halloween Worksheet ... Comparing and Ordering Numbers Activities - Saddle Up for 2nd Grade Fun Greater Than Less Than Activities - Primary Theme Park Greater Than, Less Than, Equal To Lesson | Primary Junction
{"url":"https://worksheets.clipart-library.com/teaching-greater-than-and-less-than-2nd-grade.html","timestamp":"2024-11-05T21:52:13Z","content_type":"text/html","content_length":"27422","record_id":"<urn:uuid:b940b871-dec6-40f4-83e0-f002c4e30fa4>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00726.warc.gz"}
How many Seconds in a Year (Regular/ Ordinary and Leap Years)? Wondering there are how many seconds in a year? There are 31,622,400 seconds in a leap year like 2024. However, a regular year which has 365 days has 31,536,000 seconds. Since we know that there are 525,600 minutes in a regular/ ordinary year and 527,040 minutes in a leap year, we can then multiply the total minutes by 60 in order to get the total seconds in a year. How many Seconds in a Year: Number of Seconds in Regular and Leap Years How many seconds in a year? The total seconds in an ordinary year is 31,536,000 seconds. However, there are 366 days in a leap year. As a result, the total seconds in a leap year like 2024 is 31,622,400 seconds. How many seconds in 2024? There are 31,622,400 seconds in 2024. Total seconds for each month in 2024 The following are the total seconds for each month in 2024, and the total seconds for the year 2024. Total seconds in each month of 2024 Month Number of Days Total seconds January 31 2,678,400 seconds February 29 2,505,600 seconds March 31 2,678,400 seconds April 30 2,592,000 seconds May 31 2,678,400 seconds June 30 2,592,000 seconds July 31 2,678,400 seconds August 31 2,678,400 seconds September 30 2,592,000 seconds October 31 2,678,400 seconds November 30 2,592,000 seconds December 31 2,678,400 seconds Total seconds in 2024: 31,622,400 seconds Total weeks, days, hours, minutes and seconds in 2024 The total weeks, days, hours, minutes and seconds in 2024 are: Share On Your Favorite Social Media! Use the following links to spread the word...
{"url":"https://www.mpesacharges.com/how-many-seconds-in-a-year/","timestamp":"2024-11-08T03:06:49Z","content_type":"text/html","content_length":"50473","record_id":"<urn:uuid:9720284e-2a4a-4927-965f-5412ee389d7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00219.warc.gz"}
Stability interchanges in a curved Sitnikov problem PDF STABILITY INTERCHANGES IN A CURVED SITNIKOV PROBLEM 7 1 LUIS FRANCO-PE´REZ,MARIANGIDEA,MARK LEVI,AND ERNESTO PE´REZ-CHAVELA 0 2 Abstract. WeconsideracurvedSitnikovproblem,inwhichaninfinitesimalparticle n moveson acircle underthegravitational influenceof twoequalmasses in Keplerian a J motionwithinaplaneperpendiculartothatcircle. Therearetwoequilibriumpoints, whose stability we are studying. Weshow that one of the equilibrium points under- 5 2 goesstabilityinterchangesasthesemi-majoraxisoftheKeplerianellipsesapproaches thediameterofthatcircle. Toderivethisresult,wefirstformulate andproveagen- ] eral theorem on stability interchanges, and then we apply it to our model. The S motivation for our model resides with the n-body problem in spaces of constant D curvature. . h t a m 1. Introduction [ 1.1. A curved Sitnikov problem. We consider the following curved Sitnikov prob- 1 v lem: Twobodiesof equal masses (primaries)move, undermutualgravity, on Keplerian 1 ellipses about their center of mass. A third, massless particle is confined to a circle 5 passing through the center of mass of the primaries, denoted by P , and perpendicular 4 0 7 to the plane of motion of the primaries; the second intersection point of the circle with 0 that plane is denoted by P . We assume that the massless particle moves under the 1 . 1 gravitational influence of the primaries without affecting them. The dynamics of the 0 massless particle has two equilibrium points, at P and P . We focus on the local dy- 0 1 7 namics near these two points, more precisely, on the dependence of the linear stability 1 : of these points on the parameters of the problem. v When the Keplerian ellipses are not too large or too small, P is a local center and i 0 X P is a hyperbolic fixed point. When we increase the size of the Keplerian ellipses, 1 r as the distance between P and the closest ellipse approaches zero, then P undergoes a 1 1 stability interchanges. That is, there exists a sequence of open, mutually disjoint intervals of values of thesemi-major axis oftheKeplerian ellipses, suchthat, on each of these intervals the linearized stability of P is strongly stable, and each complementary 1 interval contains values where the linearized stability is not strongly stable, i.e., it is either hyperbolic or parabolic. The length of these intervals approaches zero when the semi-majoraxisoftheKeplerianellipses approachesthediameter ofthecircleonwhich the massless particle moves. This phenomenon is the main focus in the paper. It is stated in [16] and suggested by numerical evidence [11, 12] that the linearized stability of the point P also undergoes stability interchanges when the size of the 0 binary is kept fixed and the eccentricity of the Keplerian ellipses approaches 1. Key words and phrases. Stability interchanges, qualitative theory,Sitnikovproblem. 1 2 L.FRANCO-PE´REZ,M.GIDEA,M.LEVI,ANDE.PE´REZ-CHAVELA Stability interchanges of the type described above are ubiquitous in systems of vary- ing parameters; they appear, for example, in the classical Hill’s equation and in the Mathieu equation [24]. To prove the occurrence of this phenomenon in our curved Sitnikov problem, we first formulate a general result on stability interchanges for a general class of simple mechanical systems. More precisely, we consider the motion of two bodies — one massive and one massless — which are confined to a pair of curves and move under Newtonian gravity. We let the distance between the two curves be controlled by some parameter λ. We assume that the position of the infinitesimal par- ticle that achieves the minimum distance between the curves is an equilibrium point. We show that, in the case when the minimum distance between the two curves ap- proaches zero, corresponding to λ 0, there existence a sequence of mutually disjoint Ñ open intervals λ ,λ , whose lengths approach zero as λ 0, such that whenever 2n´1 2n p q Ñ λ λ ,λ the linearized stability of the equilibrium point is strongly stable, and 2n´1 2n P p q each complementary interval contains values of λ where the linearized stability is not strongly stable. From this result we derive the above mentioned stability interchange result for the curved Sitnikov problem. The curved Sitnikov problem considered in this paper is an extension of the classical Sitnikov problem described in Section 1.2 (also, see e.g., [18]). When the radius of the circle approaches infinity, in the limit we obtain the classical Sitnikov problem — the infinitesimal mass moves along the line perpendicular to the plane of the primaries and passing through the center of mass. The equilibrium point P becomes the point 1 at infinity and is of a degenerate hyperbolic type. Thus, stability interchanges of P 1 representanewphenomenonthatweencounter inthecurvedSitnikov problembutnot in the classical one. Also in the last case, it is well known that for ε 0 the classical “ Sitnikov problem is integrable. In the case of the curved one, numerical evidence suggests that it is not (see Figure 2). The motivation for considering the curved Sitnikov problem resides in the n-body problem in spaces with constant curvature, and with models of planetary motions in binary star systems, as discussed in Section 1.3. 1.2. Classical Sitnikov problem. WerecallheretheclassicalSitnikovproblem. Two bodies (primaries) of equal masses m m 1 move in a plane on Keplerian ellipses 1 2 “ “ of eccentricity ε about their center of mass, and a third, massless particle moves on a line perpendicular to the plane of the primaries and passing through their center of mass. By choosing the plane of the primaries the xy-plane and the line on which the massless particle moves the z-axis, the equations of motion of the massless particle can be written, in appropriate units, as 2z (1) z: , “ ´ z2 r2 t 3{2 p ` p qq where r t is the distance from the primaries to their center of mass given by p q (2) r t 1 εcosu t , p q“ ´ p q where u t is the eccentric anomaly in the Kepler problem. By normalizing the time p q we can assume that the period of the primaries is 2π, and (3) r t 1 εcost O ε2 , p q“ p ´ q` p q STABILITY INTERCHANGES IN A CURVED SITNIKOV PROBLEM 3 for small ε. When ε 0, i.e., the primaries move on a circular orbit and the dynamics of the “ masslessparticleisdescribedbya1-degreeoffreedomHamiltonianandsoisintegrable. Depending on the energy level, one has the following types of solutions: an equilib- rium solution, when the particle rests at the center of mass of the primaries; periodic solutions around the center of mass; escape orbits, either parabolic, that reach infinity with zero velocity, or hyperbolic, that reach infinity with positive velocity. When ε 0,1 , the differential equation (1) is non-autonomous and the system is P p q non-integrable. Consider the case ε 1. The system also has boundedand unbounded ! orbits, as well as unboundedoscillatory orbits and capture orbits (oscillatory orbits are those for which limsup z t and liminf z t , and capture tÑ˘8| p q| “ `8 tÑ˘8| p q| ă `8 orbits are those for which limsup z t and limsup z t ). tÑ´8| p q| “ `8 tÑ`8| p q| ă `8 In his famous paper about the final evolutions in the three body problem, Chazy introducedthetermoscillatory motions [5], although hedidnotfindexamples of these, leaving the question of their existence open. Sitnikov’s model yielded the first example of oscillatory motions [23]. There are many relevant works on this problem, including [1, 2, 3, 17, 18, 22, 6, 9, 10]. The curved Sitnikov problem introduced in Section 1.1 is a modification of the classical problem when the massless particle moves on a circle rather than a line. Here weregardthecircleasaverysimplerestrictedmodelofaspacewithconstantcurvature. In Subsection 1.3 we introduce and summarize some aspects of this problem. 1.3. The n-body problem in spaces with constant curvature. Then-bodyprob- lem on spaces with constant curvature is a natural extension of the n-body problem in the Euclidean space; in either case the gravitational law considered is Newtonian. The extension was first proposed independently by the founders of hyperbolic geometry, Nikolai Lobachevsky and Ja´nos Bolyai. It was subsequently studied in the late 19th, early 20th century, by Serret, Killing, Lipschitz, Liebmann, Schering, etc. Schro¨dinger developed a quantum mechanical analogue of the Kepler problem on the two-sphere in 1940. The interest in the problem was revived by Kozlov, Harin, Borisov, Mamaev, Kilin, Shchepetilov, Vozmischeva, and others, in the 1990’s. A more recent surge of interest was stimulated by the works on relative equilibria in spaces with constant cur- vature (both positive and negative) by Diacu, P´erez-Chavela, Santoprete, and others, starting in the 2010’s. See [7] for a history of the problem and a comprehensive list of references. A distinctive aspect of the n-body problem on curved spaces is that the lack of (Galilean) translational invariance results in the lack of center-of-mass and linear- momentum integrals. Hence, the study of the motion cannot be reduced to a barycen- tric coordinate system. As a consequence, the two-body problem on a sphere can no longer be reduced to the corresponding problem of motion in a central potential field, as is the case for the Kepler problem in the Euclidean space. As it turns out, the two-body problem on the sphere is not integrable [25]. Studying the three-body problem on spaces with curvature is also challenging. Per- haps the simplest model is the restricted three-body problem on a circle. This was 4 L.FRANCO-PE´REZ,M.GIDEA,M.LEVI,ANDE.PE´REZ-CHAVELA studied in [8]. First, they consider the motion of the two primaries on the circle, which is integrable, collisions can be regularized, and all orbits can be classified into three different classes (elliptic, hyperbolic, parabolic). Then they consider the motion of the massless particle under the gravity of the primaries, when one or both primaries are at a fixed position. They obtain once again a complete classification of all orbits of the massless particle. Inthispaperwetaketheideasfromaboveonestepfurther,byconsideringthecurved Sitnikov problem, with the massless particle moving on a circle under the gravitational influence of two primaries that move on Keplerian ellipses in a plane perpendicular to that circle. In the limit case, when the primaries are identified with one point, that is when the primaries coalesce into a single body, the Keplerian ellipses degenerate to a point, andthe limit problemcoincides with thetwo-body problem on acircle described above. While the motivation of this work is theoretical, there are possible connections with the dynamics of planets in binary star systems. About 20 planets outside of the Solar System have been confirmed to orbit about binary stars systems; since more than half of the main sequence stars have at least one stellar companion, it is expected that a substantial fraction of planets form in binary stars systems. The orbital dynamics of such planets can vary widely, with some planets orbiting one star and some oth- ers orbiting both stars. Some chaotic-like planetary orbits have also been observed, e.g. planet Kepler-413b orbiting Kepler-413 A and Kepler-413 B in the constellation Cygnus, which displays erratic precession. This planet’s orbit is tilted away from the planeof binaries anddeviating fromKepler’s laws. Itis hypothesized thatthis tilt may be due to the gravitational influence of a third star nearby [13]. Of a related interest is the relativistic version of the Sitnikov problem [14]. Thus, mathematical models like the one considered in this paper could be helpful to understand possible types of planetary orbits in binary stars systems. To complete this introduction, the paper is organized as follows: In Section 2 we go deeperinthedescriptionofthecurvedSitnikovproblem,studyingthelimitcasesandits generalproperties. InSection3wepresentageneralresultonstability interchanges. In Section 4 we show that the equilibrium points in the curved Sitnikov problem present stability interchanges. Finally, in order to have a self contained paper, we add an Appendix with general results (without proofs) from Floquet theory. 2. The curved Sitnikov problem 2.1. Description of the model. We consider two bodies with equal masses (pri- maries) moving under mutual Newtonian gravity on identical elliptical orbits of eccen- tricity ε, abouttheir center of mass. For small values of ε, the distance r t from either p q primary to the center of mass of the binary is given by r t;r rρ t;ε , r 0, ε (4) p q “ p q ą ρ t;ε 1 εcos u t 1 εcos t O ε2 , p q “p ´ p p qqq “ p ´ p qq` p q STABILITY INTERCHANGES IN A CURVED SITNIKOV PROBLEM 5 xHtL rΕHt;rL xHtL 1 R H0,R,0L xHtL 2 z y x Figure 1. The curved Sitnikov problem. where u t is the eccentric anomaly, which satisfies Kepler’s equation u εsinu 2π τ t,pwqhere τ 2π is the mean motion1 of the primaries. The expansio´n in (4) “is p { q { convergent for ε ε 0.6627...; see [21]. c ă “ A massless particle is confined on a circle of radius R passing through the center of mass of the binary and perpendicular to the plane of its motion. We assume that the only force acting on the infinitesimal particle is the component along the circle of the resultant of the gravitational forces exerted by the primaries. The motion of the primaries take place in the xy-coordinate plane and the circle with radius R is in the yz-coordinate plane. See Figure 1. We place the center of mass at the point 0,R,0 in the xyz-coordinate system. The p q position of the primaries are determined by the functions x t r t;r sint,R r t;r cost,0 , 1 ε ε p q “ p p q ` p q q x t r t;r sint,R r t;r cost,0 . 2 ε ε p q “ p´ p q ´ p q q Note that t 0 corresponds to the passage of the primaries through the pericenter at “ y R r 1 ε and t π to the passage of the primaries through the apocenter at “ ˘ p ´ q “ y R r 1 ε ; both peri- and apo-centers lie on the plane of the circle of radius R “ ˘ p ` q in the y-axis. The position of the infinitesimal particle is x t 0,y t ,z t (taking into account the restriction of motion for the infinitesimal pparqti“clep topthqe cpirqcqle y2 z2 R2). We ` “ will derive the equations of motion by computing the gravitational forces exerted by 1 Themean motion is thetime-average angular velocity overan orbit. 6 L.FRANCO-PE´REZ,M.GIDEA,M.LEVI,ANDE.PE´REZ-CHAVELA the primaries: x x x x 1 2 (5) F y,t;R,r ´ ´ εp q “ ´ x x 3 ´ x x 3 1 2 || ´ || || ´ || r t;r sint,y R r t;r cost,z ε ε p´ p q ´ ´ p q q “ ´ x x 3 1 || ´ || r t;r sint,y R r t;r cost,z ε ε p p q ´ ` p q q , ´ x x 3 2 || ´ || where the distance from the particle to each primary is x x r2 t;r 2R2 2Rr t;r cost 2y R r t;r cost 1{2, 1 ε ε ε || ´ || “ p q` ` p q ´ p ` p q q x x “r2 t;r 2R2 2Rr t;r cost 2y R r t;r cost‰1{2. 2 ε ε ε || ´ ||“ p q` ´ p q ´ p ´ p q q “ ‰ We note that when r 1 ε 2R the elliptical orbit of the primary with the apo- p ` q “ center at y R crosses the circle of radius R, hence collisions between the primary and the infinăitesimal mass are possible. Therefore we will restrict to r 2R ; when ă 1`ε ε 0, this means r 2R. “ ă We write (5) in polar coordinates, that is y Rcosq, z Rsinq, and we obtain “ “ r t;r sint,Rcosq R r t;r cost,Rsinq ε ε (6) F q,t;R,r p´ p q ´ ´ p q q εp q “ ´ x x 3 1 || ´ || r t;r sint,Rcosq R r t;r cost,Rsinq ε ε p p q ´ ` p q q. ´ x x 3 2 || ´ || The origin q 0 corresponds to the point 0,R,0 in the xyz-coordinate system. “ p q Thus, the primaries move on elliptical orbits around this point. Next wewillretain thecomponentalongthecircle of theresultingforce(6). Thatis, we will ignore the constraint force that confines the motion of the particle to the circle, as this force acts perpendicularly to the tangential component of the gravitational at- tractionforce. TheunittangentvectortothecircleofradiusRat 0,Rcos q ,Rsin q p p q p qq pointing in the positive direction is given by u q 0, sin q ,cos q . The compo- p q “ p ´ p q p qq nent of the force F q,t;R,r along the circle is computed as ε p q R r t;r cos t sin q R r t;r cos t sin q ε ε (7) F q,t;R,r u q p ` p q p qq p q p ´ p q p qq p q. εp q¨ p q “ ´ x x 3 ´ x x 3 1 2 || ´ || || ´ || The motion of the particle, as a Hamiltonian system of one-and-a-half degrees of freedom, corresponds to (8) q9 p, “ p9 f q,t;R,r , ε “ p q where f q,t;R,r : F q,t;R,r u q , ε ε p q “ p q¨ p q x x r2 t;r 2R 1 cosq R r t;r cost 1{2 , 1 ε ε || ´ || “ p q` p ´ qp ` p q q x x “r2 t;r 2R 1 cosq R r t;r cost ‰1{2 , 2 ε ε || ´ || “ p q` p ´ qp ´ p q q “ ‰ STABILITY INTERCHANGES IN A CURVED SITNIKOV PROBLEM 7 Hence p2 (9) H q,p,t;R,r V q,t;R,r , ε ε p q “ 2 ` p q where the potential is given by 1 1 1 (10) V q,t;R,r . ε p q “ ´R ˆ x x ` x x ˙ 1 2 || ´ || || ´ || 2.2. Limit cases. The curved Sitnikov problem can be viewed as a link between the classicalSitnikovproblemandtheKeplerproblemonthecircle, mentionedinSection1. 2.2.1. The limit R . We express (7) in terms of the arc length w Rq, obtaining Ñ 8 “ R r t;r cost sin w R ε (11) f w,t;R,r p ` p q q p { q εp q “ ´ r2 t;r 2R 1 cos w R R r t;r cost 32 ε ε r p q` p ´ p { qqp ` p q qs R r t;r cost sin w R ε p ´ p q q p { q ´ r2 t;r 2R 1 cos w R R r t;r cost 23 ε ε r p q` p ´ p { qqp ´ p q qs which we can write in a suitable form as wsinpw{Rq 1 rεpt;rq cost pw{Rq ` R f w,t;R,r ´ ¯ εp q “ ´ 3 r2 t;r 2w2p1´cospw{Rqq 1 rεpt;rq cost 2 εp q` pw{Rq2 ` R ” ´ ¯ı wsinpw{Rq 1 rεpt;rq cost pw{Rq ´ R ´ ¯ . ´ 3 r2 t;r 2w2p1´cospw{Rqq 1 rεpt;rqcost 2 εp q` pw{Rq2 ´ R ” ´ ¯ı Letting R tend to infinity we obtain 2w lim f w,t;R,r , RÑ8 εp q “ ´ r2 t;r w2 3{2 ε p p q` q which is the classical Sitnikov Problem. 2.2.2. The limit r 0. Whenwetakethelimitr 0in(7)wearefusingtheprimaries Ñ Ñ intoalarge mass atthecenter of massandweobtain atwo-bodyproblem onthecircle. The component force along the circle corresponds to sin q (12) limf q,t;R,r p q . ε rÑ0 p q “ ´?2R2 1 cos q 3{2 p ´ p qq This problem was studied in [8] with a different force given by 1 1 (13) , ´ Rq2 ` R 2π q 2 p ´ q the distance between the large mass and the particle is measured by the arc length (in that paper the authors assume that R 1). The potential of the force (12) is “ 1 V q , 1 1 p q“ ´R2?2 1 cos q 1{2 1 p ´ p qq 8 L.FRANCO-PE´REZ,M.GIDEA,M.LEVI,ANDE.PE´REZ-CHAVELA where q denotes the angular coordinate, and the potential for (13) is 1 1 1 V q , 2 2 p q “´Rq ´ R 2π q 2 2 p ´ q whereq denotes the angular coordinate. Each problem defines an autonomous system 2 with Hamiltonian 1 (14) H p ,q p2 V q , ip i iq “ 2 i ` ip iq taking p dq dt, p dq dt and i 1,2. Let φi be the flow of the Hamiltonian 1 1 2 2 t “ { “ { “ H , and let A denote the phase space, i 1, 2 i i “ Using that all orbits are determined by the energy relations given by (14), it is not difficultto defineahomeomorphismg :A A which mapsorbits ofsystem (12)into 1 2 Ñ orbits of system (13). In the same way we can define a homeomorphism h : A A 2 1 which in fact is g´1. This shows the C0 equivalence of the respective flows. OnÑe can show that the two corresponding flows are C0–equivalent. We recall from [8] that the solutions of the two-body problem on the circle (apart from the equilibrium antipodal to the fixed body) are classified in three families (el- liptic, parabolic and hyperbolic solutions) according to their energy level. The elliptic solutions come out of a collision, stop instantaneously, and reverse their path back to the collision with the fixed body. The parabolic solutions come out of a collision and approach the equilibrium as t . Hyperbolic motions comes out of a collision with Ñ 8 the fixed body, traverse the whole circle and return to a collision. We remark that the two limit cases R and r 0 are not equivalent. Indeed, in Ñ 8 Ñ the case r 0 the resulting system is autonomous, the point q 0 is a singularity for Ñ “ the system, and the point q π is a hyperbolic fixed point, while in the case R “ Ñ 8 the resulting system is non-autonomous (for ε 0), the point q 0 is a fixed point of ‰ “ elliptic type, and the point q is a degenerate hyperbolic periodic orbit. “ 8 2.3. General properties. 2.3.1. Extended phase space, symmetries, and equilibrium points. It is clear that, be- sides the limit cases R and r 0, the dynamics of the system depends only on Ñ 8 Ñ the ratio r R, so we can fix R 1 and study the dependence of the global dynamics { “ on r where 0 r 2. In this case using (8) and (4) we get ă ă 1 rρ t;ε cos t sin q (15) f q,t;r p ` p q p qq p q εp q“ ´ r2ρ2 t;ε 2 1 cosq 1 rρ t;ε cost 3{2 r p q` p ´ qp ` p q qs 1 rρ t;ε cos t sin q p ´ p q p qq p q . ´ r2ρ2 t;ε 2 1 cosq 1 rρ t;ε cost 3{2 r p q` p ´ qp ´ p q qs To study the non-autonomous system (8) we will make the system autonomous by introducing the time as an extra dependent variable q9 p (16) Xε q,p,s;r $ p9 “ fε q,s;r . p q “ & s9 “ 1 p q “ % STABILITY INTERCHANGES IN A CURVED SITNIKOV PROBLEM 9 This vector field is defined on 0,2π R 0,2π , where we identify the boundary r sˆ ˆr s points of the closed intervals. The flow of X possesses symmetries defined by the ε functions S q,p,s q, p,s , 1 p q “ p´ ´ q S q,p,s q,p,s 2π , 2 p q “ p ` q S q,p,s q 2π,p,s , 3 p q “ p ` q S q,p,s q, p, s 4 p q “ p ´ ´ q in the sense that S X q,p,s X S q,p,s , 1 ε ε 1 ‚ X pq,pp,s qXq “S pq,pp,s , qq ε ε 2 ‚ X pq,p,sq “ X pS pq,p,sqq ε ε 3 ‚ S pX q,pq,“s p pX S qqq,p,s , 4 ε ε 4 ‚ p p qq “´ p p qq as can be verified by a direct computation. The function S describes the symmetry respect to the trajectory 0,0,s , S and S 1 2 3 describe the bi-periodicity of f q,s;r and S describes the reversibiplity ofqthe system. ε 4 p q System (8) has two equilibria 0,0 and π,0 , which correspond to periodic orbits p q p q for X . ε While the classical Sitnikov equation is autonomous for ε 0, our equation (16) is “ not,andthusweexpectittobenon–integrable,asisborneoutbynumericalsimulation. Figure 2 shows a Poincar´e section corresponding to s 0 (mod 2π), with ε 0 and “ “ r 1. This simulation suggests that the invariant KAM circles coexist with chaotic “ regions. In the sequel, we will analyze the dynamics around the equilibrium points π,0 p q and 0,0 . One important phenomenon that we will observe is that both equilibrium p q points undergo stability interchanges as parameters are varied. More precisely, when ε sufficiently small is kept fixed and r 2R , the point π,0 undergoes infinitely many Ñ 1`ε p q changes in stability, and when r is kept fixed and ε 1, the point 0,0 undergoes Ñ p q infinitely many changes in stability. In the next section we will first prove a general result. 3. A general result on stability interchanges Inthissection weswitch toamoregeneralmechanical modelwhichexhibitsstability interchanges. We consider the motion under mutual gravity of an infinitesimal particle and a heavy mass each constrained to its own curve and moving under gravitational attraction, and study the linear stability of the equilibrium point corresponding to the closest position between the particles along the curves they are moving on. In Section 4, we will apply this general result to the equilibrium point P of the curved 1 SitnikovproblemdescribedinSection1.1. ThefactthatinthecurvedSitnikovproblem there are two heavy masses, rather than a single one as considered in this section, does not change the validity of the stability interchanges result, since, as we shall see, what it ultimately matters is the time-periodic gravitational potential acting on the infinitesimal particle. 10 L.FRANCO-PE´REZ,M.GIDEA,M.LEVI,ANDE.PE´REZ-CHAVELA Figure 2. Poincar´e section for the curved Sitnikov problem, for ε 0 “ and r 1. “ To describe the setting of this section, consider a particle constrained to a curve x x s,λ in R3, where s is the arc length along the curve and λ is a parameter “ p q with values in some interval 0,λ , λ 0. Another (much larger) gravitational mass 0 0 r s ą undergoes a prescribed periodic motion according to y y t,λ y t 1,λ ; see “ p q “ p ` q Figure 3. We assume that the mass of the particle at x s is negligible compared to p q the mass at y t , treating the particle at x s as massless. p q p q Towritetheequationofmotionfortheunknowncoordinatesofthemasslessparticle, let: (17) z s,t,λ x s,λ y t,λ . p q “ p q´ p q Assume that s 0, t 0 minimize the distance between the two curves: “ “ def (18) z 0,0,λ min z s,t,λ δ λ , | p q| “ s,t | p q| “ p q for all λ 0,λ , and that this minimum point is non-degenerate with respect to t, in 0 P r s the sense that 2 (19) B z 0,t,λ 0. t2| p q||t“0 ‰ B Moreover, we make the following orthogonality assumption: (20) x9 0,λ y9 0,λ 0, for all λ 0,λ . 0 p q¨ p q “ P r s In the sequel we will study the case when the minimum distance min z s,t,λ s,t | p q| Ñ 0, that is, the shortest distance from the orbit y t,λ of the massive body to the curve p q See more
{"url":"https://www.zlibrary.to/dl/stability-interchanges-in-a-curved-sitnikov-problem","timestamp":"2024-11-02T23:36:22Z","content_type":"text/html","content_length":"159362","record_id":"<urn:uuid:00c56b4c-cb57-42cc-9156-d0b38f13c102>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00572.warc.gz"}
Graphs for Kids (songs, videos, games, worksheets, activities) Videos, stories and songs to help Grade 1 kids learn about graphs with fun. Graphs for Kids Learn about four basic graphs that can be used to share data. • Pictographs use pictures to display data. • Bar graphs use bars, usually of different colors. • Line graphs are helpful to show information over a period of time. • A pie chart looks like a pie and the size of each ‘slice’ depends on the given data. The following diagrams and video show how to use the different types of graphs to display some given information. Scroll down the page for more examples and solutions. What’s a graph? Types of graphs: line plots, bar graphs, and picture graphs. Examples and how to read each type of graph. Math Lesson for Kids Drawing Graphs: Representing information on graphs Graphs of Life Bar graphs, pie charts, and line graphs are briefly described in elementary terms. Plotting Graphs - 1st to 3rd Grade Math Lesson How to represent information on Graphs for Kids? Lisa went on a field trip which lasted from Monday to Friday. During this period, she caught some insects for her collection. The table below shows how many insects were collected each day. This information can be represented as such on the graph: Try the free Mathway calculator and problem solver below to practice various math topics. Try the given examples, or type in your own problem and check your answer with the step-by-step explanations. We welcome your feedback, comments and questions about this site or page. Please submit your feedback or enquiries via our Feedback page.
{"url":"https://www.onlinemathlearning.com/graphs-kids.html","timestamp":"2024-11-06T01:01:45Z","content_type":"text/html","content_length":"38078","record_id":"<urn:uuid:6c2c5bb3-c9b8-4c5f-90f5-ceaccb067acc>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00412.warc.gz"}
Background to the schools Wikipedia This Schools selection was originally chosen by SOS Children for schools in the developing world without internet access. It is available as a intranet download. SOS Children is the world's largest charity giving orphaned and abandoned children the chance of family life. In probability theory and statistics, a median is described as the number separating the higher half of a sample, a population, or a probability distribution, from the lower half. The median of a finite list of numbers can be found by arranging all the observations from lowest value to highest value and picking the middle one. If there is an even number of observations, the median is not unique, so one often takes the mean of the two middle values. Example: X,Y,Z median= Y Example: W,X,Y,Z median = mean(X, Y) = (X+Y)/2 At most half the population have values less than the median and at most half have values greater than the median. If both groups contain less than half the population, then some of the population is exactly equal to the median. Popular explanation The difference between the median and the mean is illustrated in this simple example: Suppose 19 paupers and 1 billionaire are in a room. Everyone removes all the money from their pockets and puts it on a table. Each pauper puts $5 on the table; the billionaire puts $1 billion (i.e. $10^9) there. The total is then $1,000,000,095. If that money is divided equally among the 20 people, each gets $50,000,004.75. That amount is the mean amount of money that the 20 people brought into the room. But the median amount is $5, since one may divide the group into two groups of 10 people each, and say that everyone in the first group brought in no more than $5, and each person in the second group brought in no less than $5. In a sense, the median is the amount that the typical person brought in. By contrast, the mean is not at all typical, since nobody in the room brought in an amount approximating $50,000,004.75. Measures of statistical dispersion When the median is used as a location parameter in descriptive statistics, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation. Since the median is the same as the second quartile, its calculation is illustrated in the article on quartiles. Working with computers, a population of integers should have an integer median. Thus, for an integer population with an even number of elements, there are two medians known as lower median and upper median. For floating point population, the median lies somewhere between the two middle elements, depending on the distribution. Median is the middle most value after arranging data by any order Theoretical properties An optimality property The median is also the central point which minimizes the average of the absolute deviations; in the example above this would be (1 + 0 + 0 + 0 + 1 + 7) / 6 = 1.5 using the median, while it would be 1.944 using the mean. In the language of probability theory, the value of c that minimizes is the median of the probability distribution of the random variable X. Note, however, that c is not always unique, and therefore not well defined in general. Efficient computation Even though sorting n items takes in general O(n log n) operations, by using a "divide and conquer" algorithm the median of n items can be computed with only O(n) operations (in fact, you can always find the k-th element of a list of values with this method; this is called the selection problem). Easy explanation (Statistics) As an example, we will calculate the median of the following population of numbers: 1, 5, 2, 8, 7. Start by sorting the numbers: 1, 2, 5, 7, 8. In this case, 5 is the median, because when the numbers are sorted, it is the middle number. If there is an even amount of numbers, the median is the arithmetic mean of the two middle numbers.
{"url":"https://www.valeriodistefano.com/en/wp/m/Median.htm","timestamp":"2024-11-14T23:23:53Z","content_type":"text/html","content_length":"80798","record_id":"<urn:uuid:58f8b1f6-c180-4f1f-997c-c73229dde8a0>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00177.warc.gz"}
B Value Calculator - Online Calculators To calculate the B value, multiply the two temperatures (T1 and T2), then divide by the product of the natural logarithm of the ratio of resistances (R1/R2) and the difference in the inverse temperatures (1/T1 – 1/T2). B Value Calculator Enter all values to calculate the B Value Welcome to the B Value Calculator! This tool helps you determine the B value, which is a measure of temperature coefficient of resistance. Let’s explore how it works and why it’s useful. The B Value Calculator is used to calculate the thermistor’s B value, which describes the relationship between the thermistor’s resistance and temperature. It is commonly used in temperature measurement and control systems. The B value can also appear in other contexts, such as statistics and regression, but in this case, it is focused on thermistors, where the B value is crucial in determining how the thermistor’s resistance changes with temperature. $B = \frac{T_1 \times T_2}{\ln\left( \frac{R_1}{R_2} \right) \times \left( \frac{1}{T_1} - \frac{1}{T_2} \right)}$ Variable Description B B value (thermistor constant) T1 Temperature 1 (in Kelvin) T2 Temperature 2 (in Kelvin) R1 Resistance at T1 (in ohms) R2 Resistance at T2 (in ohms) Solved Calculation: Example 1: Step Calculation Temperature 1 (T1) 298 K Temperature 2 (T2) 323 K Resistance 1 (R1) 10,000 ohms Resistance 2 (R2) 6,000 ohms B Value Calculation $B = \frac{298 \times 323}{\ln\left( \frac{10000}{6000} \right) \times \left( \frac{1}{298} – \frac{1}{323} \right)}$ Result B = 3939.17 K Answer: The B value is 3939.17 K. Example 2: Step Calculation Temperature 1 (T1) 300 K Temperature 2 (T2) 350 K Resistance 1 (R1) 12,000 ohms Resistance 2 (R2) 8,000 ohms B Value Calculation $B = \frac{300 \times 350}{\ln\left( \frac{12000}{8000} \right) \times \left( \frac{1}{300} – \frac{1}{350} \right)}$ Result B = 4010.52 K Answer: The B value is 4010.52 K. What is B Value Calculator? A B Value Calculator is a tool used to calculate the b coefficient in regression analysis. In statistics, the b value (also known as the regression coefficient) represents the slope of the regression line in linear regression, showing the relationship between the independent variable and the dependent variable. To calculate the b value in simple linear regression, the formula is: b = Σ(x – mean of x) * (y – mean of y) / Σ(x – mean of x)² This formula calculates the slope of the regression line, which indicates how much the dependent variable (y) changes when the independent variable (x) changes. The b₀ is the intercept, and b₁ is the slope, and tools like the b₀ and b₁ calculator or regression coefficient calculator can help simplify this process. For multiple regression, the b values represent the coefficients for each independent variable. You can use tools like the online multiple regression calculator to find these values. Final Words: Last but not least, the B Value Calculator can also be used for various applications like thermistor B value calculations, which measure temperature sensitivity, or for solving quadratic equations. Additionally, specialized calculators like the l a b value calculator or blox fruits value calculator exist for other specific uses.
{"url":"https://areacalculators.com/b-value-calculator/","timestamp":"2024-11-03T03:50:44Z","content_type":"text/html","content_length":"111640","record_id":"<urn:uuid:cf55f851-7971-4af6-a2b1-639ff349c9ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00278.warc.gz"}
How to Find Cosine in Python To compute the cosine of an angle using Python, you can utilize the cos() function from the `math` library. import math math.cos (x) Here, "x" represents the angle in radians. The cos(x) function returns the cosine value of "x". Understanding the Cosine Function. In trigonometry, the cosine represents the ratio between the side adjacent to the angle (OA) and the hypotenuse (OB). The cosine value is 1 when the angle is 0 degrees and 0 when the angle is 90 degrees. Example 1 This script calculates the cosine of a 45° angle. The math.radians() function is used to convert degrees into radians. import math x = math.radians (45) math.cos (x) The output for the above code is: Thus, the cosine of a 45° angle (π/4 radians) is approximately 0.7071067811865476. Example 2 This script calculates the cosine of a 45° angle using π (π ≈ 3.14) to represent radians. import math x = math.pi / 4 math.cos (x) In the "math" library, π is represented as math.pi. The output for this code is also: This is the same result as in the previous example, but derived in a different manner. Note. The cosine function in Python doesn't return an exact zero value for angles of 90° and 270°. Instead, it returns a very small value close to zero. This can be problematic in certain applications. To mitigate this, it's recommended to round the result of `cos(x)` to at least 5 decimal places. In Python, the cosine values for 90° and 270° are not precisely zero. Report an error or share a suggestion to enhance this page
{"url":"https://how.okpedia.org/en/python/cosine-python-trigonometry","timestamp":"2024-11-07T13:13:44Z","content_type":"text/html","content_length":"13276","record_id":"<urn:uuid:127de9a4-5ca2-48b9-9395-cee557d9046a>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00521.warc.gz"}
directed homotopy type theory Hm, maybe the references given there so far factor through 2-type theory. Anyway, eventually somebody should say something that fits into this entry! :-) created directed homotopy type theory Next semester our “group seminar”, starting in a few weeks, has the topic “$(\infty,n)$-categories”. I guess I’ll be sort of running the seminar. Last semester we ended the analogous seminar on $(\ infty,1)$-toposes with me giving the survey talk, the notes for which now constitute the $n$Lab entry (infinity,n)-category. Now we plan to dig deeper into the details. So I need to acquaint myself with plenty of those details, to the extent that I haven’t yet (which is just a small extent). As usual, I have a “personal hook” into the topic: there is a certain question that I would like to eventually understand the answer to, and I’ll probably be looking at the entire seminar through this lens. I had recently posted here a link to a little text that describes this idea, at next (schreiber). In brief, the issue is this: Complete the step indicated in the very last paragraph of Quantum gauge field theory in Cohesive homotopy type theory (schreiber) . That is: inside cohesive homotopy type theory postulate a map $\mathbf{c}_{conn} : \mathbf{B}G_{conn} \to \mathbf{B}^n A_{conn}$ together with a suitable representation $\rho$ of $\mathbf{B}^{n-1}A$. At least when interpreted in a model such as Smooth∞Grpd, associated to this data should be an extended topological quantum field theory which to the point assigns the “space of sections” of the $\ infty$-bundle $\rho$-associated to the $\mathbf{B}^{n-1}A$-principal bundle modulated by $\mathbf{c}_{conn}$, and this should be internal. The task is: describe / construct this eTQFT in the internal logic, as far as possible. The result should involve $n$-directed homotopy type theory in one flavor or other. Moreover, it needs to be with all duals. I really like the approach to this rough kind of problem that Mike was recently suggesting on the blog here. Maybe that’s the lens through which I want to be looking at the whole question. Mike, did you make further progress with the ideas sketched there, meanwhile? I’d love to know about whatever you have, to the degree that you can share it. Another thought that crossed my mind was: for the purpose of $(\infty,n)$-cats with all duals, it might be best to go via blob n-categories. Does anyone what the latest status is of relating that theory to models of $(\infty,n)$-categories is? No, I haven’t gotten any further with that idea. I added the reference. • Paige North, Towards a directed homotopy type theory, (arXiv:1807.10566) Can’t see where they’re from. diff, v8, current Can’t see where they’re from. Added reference to Riehl-Shulman. diff, v9, current @David C You wean where Paige is from? @David R, yes. I couldn’t find anything to form a page on Paige, but then I am limited to Baidu as a search engine since in China. @David C note that her surname used to be just ’North’. Here is her current home page: https://paigenorth.github.io/ Oh, was “they” in #5 supposed to be a singular genderless pronoun? (-:O Coincidentally, Paige is here with me in San Diego this week, with Benedikt Ahrens and Dimitris Tsementzis, working on a different project. Seems like a good solution to me. Do you always write “he or she”? I do sometimes use “they” as a singular genderless pronoun, but it usually sounds slightly awkward to me because its primary meaning in my idiolect is still a plural. It also didn’t occur to me that Paige’s gender might not be obvious from her name, so I didn’t think of the possibility that “they” in #5 might be referring to her, and was therefore unable to make sense of the sentence. In this situation, if I were unsure of her gender, I would probably just have re-used her name: “Can’t see where Paige is from”. It’s not a name I’ve ever encountered. I dare say with Google I could have worked out the gender, but try Baidu. Had I known Wikipedia is available here, it wouldn’t have helped so much: Paige is a given name for males and females. It is of Latin origin (Byzantine “Págius” young boy helper/mate of young nobles, from “padius” young boy, derived from Greek “Paidion” child) and its meaning is “young helper” or “young child”. A page in medieval households was usually a young boy whose service was the first step in his training as a knight. Use may possibly indicate an ancestor who was a page. In modern times Paige has become a given name, generally given to girls living in North America since the middle of the 20th century, but also occasionally to boys. I’m not saying her gender should have been clear from her name, just that at the time, it failed to occur to me that it wouldn’t be. (-: I’ve been wondering how well the type theoretic usage of contexts captures our interpretative practices. We each have our own context, and then when we speak with someone else, we can theorise as to theirs. This includes what we take to be common knowledge, but we will also envisage that they will have parts different from ours, through ignorance, mistakes, greater knowledge, different experience, etc. Adjustments to our own context and to our model of the other’s context must continually be made as we converse. On reading ’Paige’, I adjust my context to add, ’Paige: Name of person’. You already have ’Paige: Name of female person’, and a supply of people so named, and take it to be within common knowledge. It seems we do a lot of work in reconstructing sensibly the contexts in which statements make sense. We often use the language of inference, but it’s better to distinguish presupposition from inference. We hear something, postulate a reasonable context in which it makes sense, then infer some component of that postulated context, and take it to be inference from the original statement. On hearing that X has stopped smoking without knowing anything of his habits, I see that it is presupposed that X once was in the habit of smoking. It’s better to distinguish this inference to presupposition from a proper inference, such as to the claim that X now has a lower chance of a heart attack. added the following pointer to this upcoming talk series: (Unfortunately there seems to be no way to directly/anchoredly link to the announcement item – does anyone know who at HoTT-CMU has control over their seminar webpage? I have had this problem before and will have it again.) • Denis-Charles Cisinski, Hoang Kim Nguyen. Tashi Walde: Univalent Directed Type Theory, lecture series in the CMU Homotopy Type Theory Seminar (13, 20, 27 Mar 2023) Abstract We will introduce a version of dependent type theory that is suitable to develop a synthetic theory of 1‑categories. The axioms are both a fragment and an extension of ordinary dependent type theory. The axioms are chosen so that (∞,1)‑category theory (under the form of quasi-categories or complete Segal spaces) gives a semantic interpretation, in a way which extends Voevodsky’s interpretation of univalent dependent type theory in the homotopy theory of Kan complexes. More generally, using a slight generalization of Shulman’s methods, we should be able to see that the theory of (∞,1)‑categories internally in any ∞‑topos (as developed by Martini and Wolf) is a semantic interpretation as well (hence so is parametrized higher category theory introduced by Barwick, Dotto, Glasman, Nardin and Shah). There are of course strong links with ∞‑cosmoi of Riehl and Verity as well as with cubical Hott (as strongly suggested by the work of Licata and Weaver), or simplicial Hott (as in the work of Buchholtz and Weinberger). We will explain the axioms in detail and have a glimpse at basic theorems and constructions in this context (Yoneda Lemma, Kan extensions, Localizations). We will also discuss the perspective of reflexivity: since the theory speaks of itself (through directed univalence), we can use it to justify new deduction rules that express the idea of working up to equivalence natively (e.g. we can produce a logic by rectifying the idea of having a locally cartesian type). In particular, this logic can be used to produce and study semantic interpretations of Hott. diff, v12, current Mathieu Anel was so kind to send the direct anchored link: …/hott/seminars/index.html#230313. (I checked and this is the general pattern for the HoTT-CMU seminar page: The anchor for a specific event is the date in the format #yymmdd.) diff, v13, current I have expanded the idea-section (here), to explain/recall how to count the directionality degree. diff, v13, current Re #17, so the novel aspect, over other attempts listed in the references, is that the type theory is univalent? I am not aware of publically available information beyond that abstract. But the abstract does make it sound like they claim the internal language of $(\infty,2)$-toposes. added pointer to: • Alex Kavvos, A quantum of direction (2019) &lbrack;pdf&rbrack; diff, v14, current added pointers to the available resources for Cisinski et al.’s presentations (here): &lbrack;web, video 1:YT, 2:YT, 3:YT; slides 0:pdf, 1:pdf, 2:pdf, 3:pdf&rbrack; diff, v17, current I’d be interested in people’s views on this work (#23). Others have been working on directed HoTT. Is there a breakthrough here? And more broadly what would it mean to have a good directed HoTT? E.g., if ordinary HoTT is a general-purpose programming language as well as a proof language for constructions in parameterized homotopy theory (Topological Quantum Gates in HoTT) what could be said for directed HoTT? The answer to your last question is: “…for constructions in parameterized directed homotopy theory”, namely in cocartesian slices of the $(\infty,2)$-category of $(\infty ,1)$-categories (they make this fully explicit on the top of slide 11 in the 2nd notes pdf) and presumeably (I guess) more generally of $(\infty,2)$-sheaves of such $(\infty,1)$-categories. added this remark: This is based on the discussion of straightening and unstraightening entirely within the context of quasi-categories from • Denis-Charles Cisinski, Hoang Kim Nguyen, The universal coCartesian fibration &lbrack;arXiv:2210.08945&rbrack; which (along the lines of the discussion of the universal left fibration from Cisinski 2019) allows to understand the universal coCartesian fibration as categorical semantics for the univalent type universe in directed homotopy type theory (see video 3 at 1:16:58 and slide 3.33). diff, v19, current added this quote: &lbrack;video 3 at 1:27:43&rbrack;: I won’t provide the full syntax yet and actually I would be very happy to discuss that, because we don’t know yet and I have questions myself, actually. diff, v19, current expanded this out to the following dialogue, highlighting that actual type-theoretic syntax (inference rules) for this intended semantics remains to be given: &lbrack;Cisinski in video 3 at 1:27:43&rbrack;: I won’t provide the full syntax yet and actually I would be very happy to discuss that, because we don’t know yet and I have questions myself, &lbrack;Awodey in video 3 at 1:46:23&rbrack;: Maybe I’ll suggest something, you tell me if you agree: What we have is a kind of axiomatization of the semantics of a system for type theory, so that we know what exactly we want formalize in the type theory, and what depends on what, and it articulates and structures the intended interpretation of the type theory in a very useful way. Maybe in the way that the axiomatic description of a cartesian closed category was very good to have for formulating the lambda-calculus. But I think that what we have is more on the side of the axiomatic description of the semantics, like the cartesian closed category, that it is on the side of the lambda-calculus itself. So, maybe I would suggest the term “abstract type theory” to describe this system as an intermediate in between an actual formally implemented system of type theory and the big unclear world of possible semantics and all the different structures that one could try to capture with a type theory, in between is this abstract type theory which specifies a particular structure that we want to capture in our type theory, which is a very very useful methodological step. &lbrack;…&rbrack; I am trying to maybe reconcile: Some people would prefer to call a type theory only something which can immediately be implemented in a computer. So that’s different than an abstract description of a structure that we would want to describe in such a type theory. &lbrack;Cisinski in video 3 at 1:49:28&rbrack;: I agree with what you say but I still have the hope to be able to produce an actual syntax &lbrack;…&rbrack; that’s really the goal. diff, v19, current This partly answers the first question in #24: They don’t have an actual type theory yet. It’s the same situation as with my suggestion of linear homotopy type theory in the past: A semantics neatly organized by simple rules which seem to lend themselves to type-theoretic formalization, but no actual formal syntax yet. …but no actual formal syntax yet Right, that’s the part which I know is important but gain very little insight from. Perhaps the part of my question I’m most interested in is whether there’s an analogue for ordinary HoTT is a general-purpose programming language Directed Algebraic Topology promises us “Models of non-reversible worlds”, and how we’ll be able to consider “concurrent processes, rewrite systems, traffic networks, space-time models, biological systems, etc.” Directed Algebraic Topology promises us “Models of non-reversible worlds”, and how we’ll be able to consider “concurrent processes, rewrite systems, traffic networks, space-time models, biological systems, etc.” All that advertisement is predicated on assuming that these things form given categories or higher categories. Directed homotopy theory itself is just another word for $(\infty,n)$-category theory, in extension of how “homotopy theory” is another word for $(\infty,0)$-category theory ($\pm 1$, depending on how you want to count). If we want to retain the topological aspect in this, then we are talking about some kind of stratified spaces presenting $(\infty,n)$-categories. Naturality TT citations. diff, v21, current Link to the generalizations section of the Grothendieck construction. diff, v21, current Remove citation of controversial note; remove section header as only two citations remain. diff, v23, current Add year to “Towards a Directed Homotopy Type Theory based on 4 Kinds of Variance” diff, v23, current Remove dangling link to generalizations of the Grothendieck construction. diff, v23, current added pointer to • Hoang Kim Nguyen, Directed univalence in simplicial sets, talk in Homotopy Type Theory Electronic Seminar Talks (March 2023) &lbrack;video, slides&rbrack; and cross-link with directed univalence axiom diff, v25, current • recording some thoughts on directed higher type theories from a conversation with Mike I had a while ago. • happy to discuss further • feel free to dramatically rewrite diff, v27, current These are interesting points. I’ll offer some critique: • To a fair extent these points apply really to higher category theory as such, not specifically to its (potential) formalization by type theories (though the lens of type theory puts some higher category theory concepts, such as universes, more into focus). A maybe more intrinsically type-theoretic motivation for directed HoTT is mentioned at the end of Cisinski et al.’s abstract (here): Formalization of categorical semantics of plain HoTT. • While it is clear that one eventually wants to speak about higher categorical concepts with type theory, it is not a priori clear that this motivates the dedicated formulation of new rules for directed HoTT: It might still be better to instead work internal to plain HoTT, I think this isn’t clear yet. Compare the history of non-type theoretic higher category theory: The attempts pre $\sim$2009 to get this up and running by direct definition of higher $n$-categories led little beyond nowhere. The breakthrough came from understanding that higher category theory works well inside or on a backdrop of homotopy theory/$(\infty,1)$-category theory. Of course, when people tried to carry this lesson to HoTT they ran into the technical problem of defining (semi-)simplicial types inside plain HoTT and got stuck with this on technical grounds. But maybe this just serves to remind us that simplicial objects are convenient a model but hardly an intrinsic definition of higher structures, and maybe the good definition of $\infty$-category internal to plain HoTT is still to be discovered. To a fair extent these points apply really to higher category theory as such True! Especially point 1. Of course point 2 and 3 were meant to allude to the rules you may (not) need in your directed higher type theory. A maybe more intrinsically type-theoretic motivation for directed HoTT is mentioned at the end of Cisinski et al.’s abstract (here): Formalization of categorical semantics of plain HoTT. I added this point to the list of pros. While it is clear that one eventually wants to speak about higher categorical concepts with type theory, it is not a priori clear that this motivates the dedicated formulation of new rules for directed HoTT: It might still be better to instead work internal to plain HoTT, I think this isn’t clear yet. I added this point to the list of cons. The attempts pre 2009 to get this up and running by direct definition of higher $n$-categories led little beyond nowhere. The breakthrough came from understanding that higher category theory works well inside or on a backdrop of homotopy theory/($\infty$,1) -category theory. Definitely agree with that example. While the $\infty$-PoV shifted paradigms in the field, one could argue that for some purposes it also came with drawbacks since spaces are (in some ways) incomputably hard. Incidentally, I always hoped manifold diagrams, as a resolution to the “directed tangle hypothesis”, would find a middle ground. (A highly hypothetical statement as you know.) Regarding the directed HIT example: if $\mathsf{succ}$ is to be interpreted as a(n ∞-)functor, then wouldn’t the correct definition have $\mathsf{less} : 0 \to \mathsf{succ}\ 0$ instead of $\mathrm {id}_\mathbb{N} \to \mathsf{succ}$? Otherwise it seems like you have $\mathsf{less}\ 1 eq \mathsf{succ}(\mathsf{less}\ 0) : 1 \to 2$, so that $\mathbb{N}$ isn’t a poset. Citations to Naturality Pretype Theory diff, v30, current Added reference • {#GWB24} Daniel Gratzer, Jonathan Weinberger, Ulrik Buchholtz, Directed univalence in simplicial homotopy type theory (arXiv:2407.09146) diff, v31, current
{"url":"https://nforum.ncatlab.org/discussion/3323/directed-homotopy-type-theory/?Focus=110228","timestamp":"2024-11-08T14:12:07Z","content_type":"application/xhtml+xml","content_length":"138654","record_id":"<urn:uuid:87c469cc-bf99-468a-bc51-db8a779f023c>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00604.warc.gz"}
Circles & Squares - Equipment & Facilities - Small Farmer's JournalCircles & Squares - Small Farmer's Journal Circles & Squares from The Carpenters’ Steel Square and Its Uses by Fred T. Hodgson In the hand of the intelligent mechanic the square becomes a simple calculating machine of the most wonderful capacity, and by it they solve problems of the kinds continually arising in mechanical The blade of the square should be 24 inches long and two inches wide, and the tongue from 14 to 18 inches long and 1-½ inches wide. The tongue should be at right angles with the blade, or in other words the “square” should be perfectly square. Fig. 28 In Fig. 28 we show how the centre of a circle may be determined without the use of compasses; this is based on the principle that a circle can be drawn through any three points that are not actually in a straight line. Suppose we take A B C D for four given points, then draw a line from A to D, and from B to C; get the centre of these lines, and square from these centres as shown, and when the square crosses the line, or where the lines intersect, as at X, there will be the centre of the circle. This is a very useful rule, and by keeping it in mind the mechanic may very frequently save themself much trouble, as it often happens that it is necessary to find the centre of the circle, when the compasses are not at hand. Fig. 33 On Fig. 33 we show a quick method of finding the centre of a circle: Let N N, the corner of the square, touch the circumference, and where the blade and tongue cross it will be divided equally; then move the square to any other place and mark in the same way and straight edge across, and where the line crosses A, B, as at O, there will be the centre of the circle.
{"url":"https://smallfarmersjournal.com/circles-squares/","timestamp":"2024-11-03T07:20:05Z","content_type":"text/html","content_length":"98866","record_id":"<urn:uuid:0e2fe394-a134-4c76-ba81-0571c90b1db1>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00597.warc.gz"}
seminars - Analytic approach to Saito's vanishing theorem Saito's vanishing theorem serves as a vast generalization of Kodaira vanishing in the sense that almost all of the classical and robust vanishing theorems regarding ample line bundles can be deduced from Saito's vanishing theorem. As an application of an L^2-theoretic study of degeneration (and variation) of Hodge structures, we present a new proof of Saito's vanishing theorem going back to the original idea of Kodaira. This method gives Saito's vanishing theorem also for complex Hodge modules in the sense of Sabbah-Schnell, which does not require the mathbb{Q}-structure which is necessary for the theory of Saito's Hodge modules.
{"url":"https://www.math.snu.ac.kr/board/index.php?mid=seminars&sort_index=date&order_type=desc&l=en&page=14&document_srl=1099723","timestamp":"2024-11-06T08:22:24Z","content_type":"text/html","content_length":"47836","record_id":"<urn:uuid:76f34e23-856b-4eb9-bd11-b5229aac977a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00519.warc.gz"}
Tips for Building Students’ Math Fluency and Number Sense - Professional Learning & Support | Literacy, Math, Curriculum Tips for Building Students’ Math Fluency and Number Sense Mastering math fluency and number sense is critical to setting students up for success in future mathematics courses. When students have the ability to perform mathematical problems quickly and accurately without relying on memorization, and truly understand the relationships between numbers and how those numbers are affected by mathematical operations, they have a strong knowledge foundation that can be built upon in higher grade levels. This webinar from the CORE, Inc., describes the characteristics of effective math fluency practice and offers ideas for many engaging math fluency activities and number sense activities that you can start using in your classroom right away. Improve Your Math Fluency Practice: Characteristics of Effective Math Fluency Interventions and Math Fluency Activities Teaching math fluency and number sense requires more than helping students memorize their math facts. This webinar explains that the following characteristics can ensure that your math fluency interventions and math fluency activities are effective at teaching students the foundational mathematics skills they need for future success. Look for these characteristics in your current math fluency interventions and math fluency activities in addition to any you plan to implement in the future. • Activities and interventions for building math fluency and number sense should be offered in short, regular doses. Research shows that students who need to improve math fluency and number sense should be offered daily opportunities to do so. • Activities and interventions for building math fluency and number sense should vary by intent. While memorization practice drills are one important tool for building math fluency, try to offer activities that go beyond simply memorization. • Activities and interventions for building math fluency and number sense should be varied. Rather than repeating the same activities every day, have a few to rotate through to prevent engagement from dropping off and learning from stalling. • Activities and interventions for building math fluency and number sense should not take over your lessons. Be intentional about the amount of time you spend on these activities. • Activities and interventions for building math fluency and number sense should be highly engaging. Keep activities interesting and interactive to encourage students to be active participants in learning and encourage maximum effort. • Activities and interventions for building math fluency and number sense should include questions and prompts. Plan these ahead of time and ask them strategically to keep students moving through the activity successfully. • At least some activities and interventions for building math fluency and number sense should involve a number line. This visual mathematical aid is a valuable learning tool when teaching math fluency and number sense and should be included in every classroom. Activities to Build Math Fluency and Activities to Build Number Sense In addition to describing the characteristics to include in your math fluency and math number sense activities to maximize their effectiveness, this webinar also offers ideas for different activities you can begin using in your classroom right away to build math fluency and number sense. These activities include: • Oral counting • Card games • Number talks • Mystery math grids • Individual white boards • Spend Some Time with 1 to 9 • Sprints • Make 24 math game Learn More About Building Math Fluency and Number Sense This webinar will teach you how to plan effective activities for building students’ math fluency and number sense, as well as offer ideas for specific activities you can start using right away in your classroom. Watch the full webinar to learn how to create effective math fluency interventions and activities and math number sense activities that will help students master these two critical foundational mathematical skills. Video Transcript Dean Ballard: Some look-fors in the classroom, some common things to look for. Students who need to work on fluency and building number sense get short, regular doses of fluency activities. Research shows that these short daily bursts are more effective than trying to do something like once a week review and drill. Fluency activities vary by intent. Some activities are simply memorizations practice drill, which is okay. Repetition is one of the ways we move things to longterm memory. Dean Ballard: However, some activities should be fluency type activities that build number sense beyond just memorization. I call these those fluency plus activities. We must vary the types of fluency plus activities we give students. Everyday should not be the same exact type of activity. That’s where the engagement will drop off, and thinking required for the activities will be absent. You don’t need eight different types of activities, but at least a few to rotate and use at different times in order to continue the excitement level for kids. Dean Ballard: And fluency activities should not be unintentionally taking over your lessons. It can easily happen. We see lots of interesting math connections that can be made into a sprint activity, so we keep asking and discussing with kids different connections they see, and 20 minutes later, oh my, kids are asleep. Or, the KenKen Puzzles are so engaging and fun, students are happy to keep solving them for half an hour or more. Yikes! My lesson’s gone. Dean Ballard: It’s okay if this is what we intend. Perhaps it’s the first day with the activity so I know we’re gonna take more time with it, with the students, or maybe I’m setting up a menu of activities for the day, or stations so that some students can be at those activities, and some students can be in small group time with the teacher. Dean Ballard: My point is that fluency plus the number sense type activities are good enough to take up more time, but should not take over important time from the day’s main lesson. The amount of time on the activities should be intentionally planned. Look for students to be highly engaged. The activities are not drill and kill, but rather strive and thrive. The activity engages students’ minds and interests. Students strive to succeed, and through effort, and the design of the activity, students begin to thrive where they once failed. Dean Ballard: Often the key to students thinking about the math in the activity comes through questions or prompts from the teacher. Expect a learning curve where a couple of key questions actually need to be planned ahead, not off the cuff. Over time, teachers build their question capacity, and students build their capacity to tackle those questions. And just a quick shout out to number lines in the classroom. Dean Ballard: This is one of, if not, the most useful visual aids in the class for mathematics, and in my mind, in my opinion, should be standard in all K-8 classrooms. Alright. Well, here’s a list of the things we did. I have to say, the oral counting, remember, that was early on. I talked about counting up and counting down by five, say, starting at 11. Dean Ballard: Number talks we didn’t get to, but that’s another great number sense and fluency building activity from Math Solutions. You can look in that there. The rest, we see on the list. Alright. So, that kinda wraps up my list of activities I wanna go over with you.
{"url":"https://www.corelearn.com/math-fluency/","timestamp":"2024-11-13T22:53:47Z","content_type":"text/html","content_length":"139906","record_id":"<urn:uuid:800ef355-75f8-40d5-9418-26b5abcc50ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00888.warc.gz"}
How to get update website To get update website, you can clear your browser chache data (if you browse this website before many days) or browse this website with another website browser. Steps for clear chache: 1. on your computer, open browser. 2. At the top right, click More. 3. Click More tools. Clear browsing data. 4. choose a time range. 4. "Cached images and files," check the boxe. 5. Click Clear data. Different browser may have different ways.
{"url":"https://graph2d.com/index.php","timestamp":"2024-11-13T19:25:11Z","content_type":"text/html","content_length":"61922","record_id":"<urn:uuid:fa92f1d1-911d-440b-8ef7-32ab18430b42>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00060.warc.gz"}
Annual growth rings in trees could be used as an indicator for the complex impact of climate change on tree growth and forest communities. This impact could be easily measured from the width of the annual rings, the characteristics of the summer and the late wood, the density of the wood and other factors. How SP-PAM works? After their measurement ring width values are used as input for regression analysis. The aim is to find a polynomial that best approximates the data from this dendrochronological row. The criterion of proximity to the approximate polynomial initial data is set by the determination coefficient R2 ( 0 ≤ R2 ≤ 1 ) . Larger values of R2 mean better approximation. After calculating the approximated values the growth indexes for each tree are calculated as a relation between the measured at the approximate value of ring width for each year. The calculation of the growth index helps eliminate the age of the tree as a factor influencing the tree wing width. When the index value is outside the range of the allowed interval for normal values the year it occurs for is said to be unfavorable for growth or stress year. The sequence of one or more adjacent stress years is perceived as stress period. Each stress period is defined by two years (margins) – beginning year (LB) and ending year (RB) for the sequence. When the stress period is consists of only one year the left and right margins are the same. The basic analysis implemented in SPPAM are based on the information related to locations and tree species sampled in these locations. 1.Polynomial approximation of series and selection of best approximation: • Automatic generation of approximate polynomial of the greatest extent possible. • Calculation of growth index (It) as the ratio between the measured and the estimated value for each year and row. • Finding an average model row from It for individual localities per species (standardization of dendrohronological rows). • Rows with R2 > 0.45 and correlation index r ≥ 0.1are present in the analysis 2. Calculation of EPS and rejection of localities with EPS < 80% 3. Multiple regression analysis of model index rows of widths, temperature and precipitation with intention to reveal the limiting factors for growth, type and localities 4. Identification of stress periods (SP) for individual species and localities – periods with It < 1 and of stress sections – In 5. Characteristic of identified In to find the most reliable stress-sections (intervals of years for various localities and types)– calculation of indicators of In: duration (D), amplitude (A) and frequency (F) – average and extreme values, coverage (Cov.) and cardinality (Card.) of stress sections. Stress sections with Cov. ≥ 50% take part in the analysis; 6. Polynomial approximation of climate data for temperatures and rainfall, and selection of the best approximation. Calculation of Itm, Ip 7. Finding the average temperatures and rainfall for 30 – year periods, and the confidence interval of the average climatic and biological climate years by localities. Calculating of av.T, dT, av. P and dP 8. Determination of adverse climatic and biological climatic years – AHD, AHW, ACD, ACW in both regimes and different combinations where one of the regimes is normal according to dT and dP 9. Parallel analysis of stress periods and climate data. Comparative analysis of the periods of stress sections with adverse years Measured indicators Growth index, It = MW/AW, where MW- measured and AW – calculated width for rows with confidence approximations (R² > 0.45). Eustress duration (D) - numebr of consequent adverse years in a series. Eustress frequency(F) - the number of stress years for a period of 100 years. Eustress depth (A) - \(A = \frac{1}{2}\sum_{i=1}^{s}(1-It_{i})\) where Iti are the growth indexes, where eustress is established. "К" - coefficients - ratio between the number of analyzed years and the number of stress years (SY). Cardinality (Card.) - number of series with the same eustress years. "Ct" - coefficient - ratio between Card. and the combined number of examined rows from one location (n). Coverage (Cov.) – the ration between cardinality (Card.) and the number of examined rows that have the same periods. Climatic year type (CY) – the deviation of the average annual temperature (Tavg.) of the climatic norms for temperature (dT) and the deviation of the average annual rainfall (Pavg.) of the climatic norms for precipitation (dP). Climatic norms for temperature - Tavg ± μti for periods of 30 years. Climatic norms for precipitation - Pavg. ± μpi for periods of 30 years.
{"url":"http://sppam.e-ecology.org/about","timestamp":"2024-11-10T08:47:22Z","content_type":"text/html","content_length":"11722","record_id":"<urn:uuid:cd198c96-c416-4ec0-8517-08d462a09876>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00858.warc.gz"}
Safe Haskell Safe-Inferred Language Haskell2010 Free algebra class class FreeAlgebra (m :: Type -> Type) where Source # A lawful instance has to guarantee that unFoldFree is an inverse of foldMapFree (in the category of algebras of type AlgebraType m). This in turn guaranties that m is a left adjoint functor from full subcategory of Hask (of types constrained by AlgebraType0 m) to algebras of type AlgebraType m. The right adjoint is the forgetful functor. The composition of left adjoin and the right one is always a monad, this is why we will be able to build monad instance for m@. returnFree :: a -> m a Source # Injective map that embeds generators a into m. foldMapFree Source # :: forall d a. (AlgebraType m d, AlgebraType0 m a) => (a -> d) a mapping of generators of m into d -> m a -> d a homomorphism from m a to d codom :: forall a. AlgebraType0 m a => Proof (AlgebraType m (m a)) (m a) Source # Proof that AlgebraType0 m a => m a is an algebra of type AlgebraType m. This proves that m is a mapping from the full subcategory of Hask of types satisfying AlgebraType0 m a constraint to the full subcategory satisfying AlgebraType m a, fmapFree below proves that it's a functor. (codom from codomain) forget :: forall a. AlgebraType m a => Proof (AlgebraType0 m a) (m a) Source # Proof that the forgetful functor from types a satisfying AgelbraType m a to AlgebraType0 m a is well defined. FreeAlgebra Identity Source # Defined in Data.Algebra.Free FreeAlgebra DList Source # DList is isomorphic to Free Monoid; it is free in the class of all monoids. Defined in Data.Algebra.Free FreeAlgebra DNonEmpty Source # Defined in Data.Algebra.Free FreeAlgebra FreeGroup Source # Defined in Data.Group.Free FreeAlgebra FreeGroupL Source # Defined in Data.Group.Free FreeAlgebra FreeAbelianMonoid Source # Defined in Data.Monoid.Abelian FreeAlgebra FreeAbelianSemigroup Source # Defined in Data.Semigroup.Abelian FreeAlgebra FreeSemilattice Source # Defined in Data.Semigroup.Semilattice FreeAlgebra NonEmpty Source # NonEmpty is the free semigroup in the class of semigroup which are strict in the left argument. Defined in Data.Algebra.Free FreeAlgebra Maybe Source # Defined in Data.Algebra.Free Note that '[]' is a free monoid only for monoids which multiplication is strict in the left argument ref. Note that being strict adds additional equation to the monoid laws: undefined <> a = undefined FreeAlgebra [] Source # Thus, expectedly we get an equational theory for left right two-sided strict monoids. Snoc lists are free monoids in the class of monoids which are strict in the right argument, Free Monoid and @DList are free in the class of all Haskell monoids. Defined in Data.Algebra.Free FreeAlgebra (Free Monoid) Source # Defined in Data.Algebra.Free FreeAlgebra (Free Semigroup) Source # Defined in Data.Algebra.Free FreeAlgebra (Free Group) Source # Defined in Data.Algebra.Free Type level witnesses Algebra types / constraints type family AlgebraType (f :: k) (a :: l) :: Constraint Source # Type family which for each free algebra m returns a type level lambda from types to constraints. It is describe the class of algebras for which this free algebra is free. A lawful instance for this type family must guarantee that the constraint AlgebraType0 m f is implied by the AlgebraType m f constraint. This guarantees that there exists a forgetful functor from the category of types of kind * -> * which satisfy AlgebraType m constrain to the category of types of kind * -> * which satisfy the 'AlgebraType0 m constraint. type AlgebraType FreeGroup (g :: Type) Source # Defined in Data.Group.Free type AlgebraType FreeGroupL (g :: Type) Source # Defined in Data.Group.Free type AlgebraType FreeAbelianMonoid (m :: Type) Source # Defined in Data.Monoid.Abelian type AlgebraType FreeAbelianSemigroup (a :: Type) Source # Defined in Data.Semigroup.Abelian type AlgebraType FreeSemilattice (a :: Type) Source # Defined in Data.Semigroup.Semilattice type AlgebraType Maybe (m :: Type) Source # Defined in Data.Algebra.Free type AlgebraType Identity (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType DList (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType DNonEmpty (m :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType NonEmpty (m :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType [] (m :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType (Free Monoid) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType (Free Semigroup) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType (Free Group) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType Alt (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType Ap (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType Ap (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType Ap (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType F (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType DayF (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType Free (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType Coyoneda (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType ListT (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType MaybeT (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (Free1 c :: (Type -> Type) -> Type -> TYPE LiftedRep) (f :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (FreeMAction m :: (Type -> Type) -> Type -> TYPE LiftedRep) (f :: Type -> Type) Source # Defined in Control.Monad.Action type AlgebraType (ReaderT r :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (StateT s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (StateT s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (ExceptT e :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (WriterT w :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (WriterT w :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (RWST r w s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType (RWST r w s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type family AlgebraType0 (f :: k) (a :: l) :: Constraint Source # Type family which limits Hask to its full subcategory which satisfies a given constraints. Some free algebras, like free groups, or free abelian semigroups have additional constraints on on generators, like Eq or Ord. type AlgebraType0 Coyoneda (g :: l) Source # Algebras of the same type as Coyoneda are all functors. Defined in Control.Algebra.Free type AlgebraType0 FreeGroup (a :: Type) Source # Defined in Data.Group.Free type AlgebraType0 FreeGroupL (a :: Type) Source # Defined in Data.Group.Free type AlgebraType0 FreeAbelianMonoid (a :: Type) Source # Defined in Data.Monoid.Abelian type AlgebraType0 FreeAbelianSemigroup (a :: Type) Source # Defined in Data.Semigroup.Abelian type AlgebraType0 FreeSemilattice (a :: Type) Source # Defined in Data.Semigroup.Semilattice type AlgebraType0 DList (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 DNonEmpty (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 Identity (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 NonEmpty (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 Maybe (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 [] (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free1 c :: (Type -> Type) -> Type -> TYPE LiftedRep) (f :: l) Source # Defined in Control.Algebra.Free type AlgebraType0 (Free Monoid) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free Semigroup) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free Group) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 Alt (f :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 Ap (g :: Type -> Type) Source # Algebras of the same type as Ap are the applicative functors. Defined in Control.Algebra.Free type AlgebraType0 Ap (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 Ap (g :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 F (f :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 DayF (g :: Type -> Type) Source # Algebras of the same type as DayF are all the applicative functors. Defined in Control.Algebra.Free type AlgebraType0 Free (f :: Type -> Type) Source # Algebras of the same type as Free monad is the class of all monads. Defined in Control.Algebra.Free type AlgebraType0 ListT (f :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 MaybeT (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 (FreeMAction m :: (Type -> Type) -> Type -> TYPE LiftedRep) (f :: Type -> Type) Source # Defined in Control.Monad.Action Algebras of the same type as ReaderT monad is the class of all reader monads. type AlgebraType0 (ReaderT r :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # TODO: take advantage of poly-kinded ReaderT Defined in Control.Algebra.Free type AlgebraType0 (StateT s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Algebras of the same type as StateT monad is the class of all state monads. Defined in Control.Algebra.Free type AlgebraType0 (StateT s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Algebras of the same type as StateT monad is the class of all state monads. Defined in Control.Algebra.Free type AlgebraType0 (ExceptT e :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Algebras of the same type as ReaderT monad is the class of all reader monads. Defined in Control.Algebra.Free type AlgebraType0 (WriterT w :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Algebras of the same type as WriterT monad is the class of all writer monads. Defined in Control.Algebra.Free type AlgebraType0 (WriterT w :: (Type -> Type) -> Type -> Type) (m :: Type -> Type) Source # Algebras of the same type as WriterT monad is the class of all writer monads. Defined in Control.Algebra.Free type AlgebraType0 (RWST r w s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free type AlgebraType0 (RWST r w s :: (Type -> Type) -> Type -> TYPE LiftedRep) (m :: Type -> Type) Source # Defined in Control.Algebra.Free unFoldMapFree :: FreeAlgebra m => (m a -> d) -> a -> d Source # Inverse of foldMapFree It is uniquely determined by its universal property (by Yoneda lemma): unFoldMapFree id = returnFree Note that unFoldMapFree id is the unit of the unit of the adjunction imposed by the FreeAlgebra constraint. foldFree :: forall m a. (FreeAlgebra m, AlgebraType m a) => m a -> a Source # All types which satisfy FreeAlgebra constraint are foldable. foldFree . returnFree == id foldFree is the unit of the adjunction imposed by FreeAlgebra constraint. foldFree @[] = foldMap id = foldr (<>) mempty foldFree @NonEmpty = foldr1 (<>) Note that foldFree replaces the abstract / free algebraic operation in m a to concrete one in a. natFree :: forall m n a. (FreeAlgebra m, FreeAlgebra n, AlgebraType0 m a, AlgebraType m (n a)) => m a -> n a Source # The canonical quotient map from a free algebra of a wider class to a free algebra of a narrower class, e.g. from a free semigroup to free monoid, or from a free monoid to free commutative monoid, natFree . natFree == natFree fmapFree f . natFree == hoistFree . fmapFree f the constraints: * the algebra n a is of the same type as algebra m (this is always true, just GHC cannot prove it here) * m is a free algebra generated by a * n is a free algebra generated by a cataFree :: (FreeAlgebra m, AlgebraType m a, Functor m) => Fix m -> a Source # Fix m is the initial algebra in the category of algebras of type AlgebraType m (the initial algebra is a free algebra generated by empty set of generators, e.g. the Void type). Another way of putting this is observing that Fix m is isomorphic to m Void where m is the free algebra. This isomorphisms is given by fixToFree :: (FreeAlgebra m, AlgebraType m (m Void), Functor m) => Fix m -> m Void fixToFree = cataFree For monoids the inverse is given by ana (_ -> []). foldrFree :: forall m a b. (FreeAlgebra m, AlgebraType m (Endo b), AlgebraType0 m a) => (a -> b -> b) -> b -> m a -> b Source # A version of foldr, e.g. it can specialize to • foldrFree @[] :: (a -> b -> b) -> [a] -> b -> b • foldrFree @NonEmpty :: (a -> b -> b) -> NonEmpty a -> b -> b foldlFree :: forall m a b. (FreeAlgebra m, AlgebraType m (Dual (Endo b)), AlgebraType0 m a) => (b -> a -> b) -> b -> m a -> b Source # Generalizes foldl, e.g. it can specialize to • foldlFree @[] :: (b -> a -> b) -> b -> [a] -> b • foldlFree @NonEmpty :: (b -> a -> b) -> b -> NonEmpty a -> b General free type newtype Free (c :: Type -> Constraint) a Source # Free c a represents free algebra for a constraint c generated by type a. • runFree :: forall r. c r => (a -> r) -> r FreeAlgebra (Free Monoid) Source # Defined in Data.Algebra.Free FreeAlgebra (Free Semigroup) Source # Defined in Data.Algebra.Free FreeAlgebra (Free Group) Source # Defined in Data.Algebra.Free Monoid (Free Monoid a) Source # Defined in Data.Algebra.Free Monoid (Free Group a) Source # Defined in Data.Algebra.Free Semigroup (Free Monoid a) Source # Defined in Data.Algebra.Free Semigroup (Free Semigroup a) Source # Defined in Data.Algebra.Free Semigroup (Free Group a) Source # Defined in Data.Algebra.Free Group (Free Group a) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free Monoid) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free Semigroup) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType0 (Free Group) (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType (Free Monoid) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType (Free Semigroup) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free type AlgebraType (Free Group) (a :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free newtype DNonEmpty a Source # DNonEmpty is the free semigroup in the class of all semigroups. DNonEmpty ([a] -> NonEmpty a) FreeAlgebra DNonEmpty Source # Defined in Data.Algebra.Free Semigroup (DNonEmpty a) Source # Defined in Data.Algebra.Free type AlgebraType0 DNonEmpty (a :: l) Source # Defined in Data.Algebra.Free type AlgebraType DNonEmpty (m :: TYPE LiftedRep) Source # Defined in Data.Algebra.Free
{"url":"https://hackage.haskell.org/package/free-algebras-0.1.1.0/docs/Data-Algebra-Free.html","timestamp":"2024-11-08T17:21:09Z","content_type":"application/xhtml+xml","content_length":"245602","record_id":"<urn:uuid:10f93615-ccd1-4d90-b670-0d7ecf96d287>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00763.warc.gz"}
Is the Law of Conservation of Energy Cancelled? Physics is often baffling, but one principle seems rock-solid: the law of conservation of energy. The world contains this thing called “energy” whose amount never changes. It can change its form or go from one body to another, but its total amount remains constant. Everything from the arc of a well-kicked football to the purring of a car engine depends on this law. It makes energy a precious commodity, counted, hoarded, and fought over. The quantum world is uncertain; attributes such as energy are ill-defined or fuzzy. We physicists have learned that our bodies do not merely use energy, but consist of it. Einstein’s formula E=mc^2 identifies mass as a form of energy, one that can be converted to other forms (by a nuclear bomb, say) or created from those forms (in a particle collider). The formula strengthens our intuition that energy is the basic stuff of which things are made. When one gets deeper into physics, one also learns that conservation laws are intimately tied to symmetries, as first appreciated by the German mathematician Emmy Noether nearly a century ago. Energy is conserved because the laws of nature are symmetric in time—they do not change from moment to moment. But physics wouldn’t be physics if it did not continually question itself. Not long after Einstein derived his famous formula, he began to create a theory of gravitation, his general theory of relativity. Energy conservation became a bit dicey. Although individual observers can measure the energy density immediately around them and confirm that the total energy of localized systems remains constant, it is impossible to define an overall energy that is strictly conserved. It might sound strange to be able to define a local quantity of energy and not a global one. And it is. Our own expanding universe is a good example of that strangeness. The energy density of matter decreases in inverse proportion to the volume of space. For instance, galaxies move apart, so that there are fewer of them in a given volume, in accordance with energy conservation. But the energy density of starlight and other forms of radiation decreases at a steeper rate. Their energy is lost. It does not go into some other form. This is allowed because an expanding universe is not symmetric in time; its growth differentiates past from future. So, general relativity makes it hard to sustain the view that energy is fundamental stuff from which everything else is made. That is just the start. Consider the other theory that revolutionized the physics of the 20th century, quantum mechanics. The quantum world is uncertain; attributes such as energy are ill-defined or fuzzy. Worse, the theory has a very serious conceptual flaw, which must be taken into consideration when reviewing the ultimate fate of the conservation of energy. Physics wouldn’t be physics if it did not continually question itself. Namely, quantum mechanics involves two distinct and incompatible recipes to determine how a particle or system of particles evolves in time. The first applies when the system is not being observed, the second when it is observed. The theory is vague about which recipe to use. What exactly constitutes a measurement or observation? Need a conscious being be involved? Can a flea make a measurement? A virus? This issue is known as the measurement problem, which, as various critics have noted, should be referred to as the reality problem: The theory is unclear what exists “out there” independently of our perceptions. As Tim Maudlin of New York University has discussed, approaches to dealing with the problem come in three types.^1 One adds so-called hidden variables—ingredients beyond what ordinary quantum theory provides—to provide a fuller description of the state of a system. The best known example is the de Broglie–Bohm theory, which supposes that besides the wave function there are particles that have definite positions that the standard quantum formalism does not capture. The wave function simply guides them like a sheep dog. A second kind of approach postulates a random process that collapses the system’s uncertainty and eliminates its fuzziness. A third solution involves a multiplicity of universes. What we call a measurement somehow corresponds to a splitting of our universe into many branches, one corresponding to each possible result. All these ideas dispense with the problematic recipe for measurement. None is without problems, but that is what there is. This year Maudlin, Elias Okon of the National Autonomous University of Mexico, and I set out to study the fate of conservation laws in these three approaches.^2 Our analysis involved general considerations as well as various thought experiments. Consider a standard experiment in which a quantum system—made of, for instance, a few photons—evolves into that characteristic quantum type of combination known as the “superposition” of two paths. These lead to situations that, at the classical level, correspond to different values of the energy. One path takes photons to a distant galaxy and back, thus making them lose energy due to cosmic expansion. The other path involves no change in their original energy. According to a central tenet of quantum theory, each photon takes both paths. Dark energy is a sort of cumulative memory of all the violations of local energy conservation that have taken place in the universe’s history. The standard story in quantum physics is that any non-conservation can be explained away by taking into account the energy supplied or absorbed by the measuring apparatus. We remove that option by using another quantum effect, entanglement, to let us make the measurement remotely. The three interpretive approaches offer different accounts of what happens to the energy. In spontaneous-collapse theories, the system, after a sufficiently long time, undergoes a sudden collapse to one of the energy values, leading to energy non-conservation. In the de Broglie–Bohm approach, any notion of energy with any chance of being generally conserved must involve both the particles and the guiding wave function. The wave function is split and later reunited in the lab, and the interference that occurs at the reunion makes the photons behave in such ways that energy is not conserved. In the many-worlds setting, the average energy of all the branches into which the world splits might be conserved, but in each branch energy will not be conserved. From the point of view of each branch, what occurs is just the same as in the collapse theories. In short, we concluded that no scheme offers a reasonable definition of the global energy of a system that is strictly conserved. None offered a notion of local energy conservation, either—which is bad, because general relativity requires local energy conservation to be internally consistent. To reconcile quantum mechanics and general relativity will require a quantum theory of gravity. Physicists disagree vehemently on what such a theory will look like, but most agree on one thing: The notion of spacetime will disappear at the fundamental quantum-gravity level. In that case, conservation laws lose their relevance completely. How can you say a certain quantity does not change with time if there is no time at the fundamental level? At the practical level, the deviations from strict conservation are expected to be minuscule and will not help with the concrete problems we humans face with energy. But for many theorists, any violation is blasphemous. Still, there may be a compensation. Thibaut Josset and Alejandro Perez of the University of Marseille, James Bjorken of Stanford University, and I have shown that a modification to general relativity (originally considered by Einstein himself) permits small deviations from local energy conservation.^3-5 And such a theory may offer a path to resolve one of the biggest mysteries in modern science: dark energy. Dark energy is the mysterious component of the universe—some 70 percent of its total content—that is causing its expansion to accelerate. According to our analysis, dark energy is a sort of cumulative memory of all the violations of local energy conservation that have taken place in the universe’s history. In one of the specific models considered, the predicted value matches observations in a completely natural way. Of course, the situation is far from settled. The exploration of these and related issues is still in its infancy. But when we look anew at a principle we used to take for granted, we expect to continue being surprised by its implications. Daniel Sudarksy is a theoretical physicist at the National Autonomous University of México in Mexico City. He focuses on the interplay of Einstein’s general theory relativity and quantum physics, searching for clues about a deeper theory that my be unearthed by focusing on the friction points. 1. Maudlin, T. Three measurement problems. Topoi 14, 7-15 (1995). 2. Maudlin, T., Okon, E., & Sudarsky, D. On the status of conservation laws in physics: Implications for semiclassical gravity. arXiv:1910.06473 (2019). 3. Josset, T., Perez , A., & Sudarsky, D. Dark energy from violation of energy conservation. Physical Review Letters 118, 021102 (2017). 4. Perez, A., Sudarsky, D., & Bjorken, J.D. A microscopic model for an emergent cosmological constant. International Journal of Modern Physics D27, 1846002 (2018). 5. Perez, A. & Sudarsky, D. Dark energy from quantum gravity discreteness. Physical Review Letters 122, 221302 (2019). Lead image: patpitchaya / Shutterstock
{"url":"https://nautil.us/is-the-law-of-conservation-of-energy-cancelled-237640/","timestamp":"2024-11-07T18:40:16Z","content_type":"text/html","content_length":"316889","record_id":"<urn:uuid:30ce4347-dd71-443a-af29-202c2cc9e108>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00316.warc.gz"}
Crank arm length? who Knows? Just read something on the net at www.nettally.com/palmk/bikefit.html that a engineer raise about multipy 5.48 times your inseam and this give you your crank size. Pretty interesting read. Bases it on pretty interesting facts. The biggest point, and what I’m wondering is who came up with the formula for cranks. If you put a 170 on the left and a 172.5 on the right you probably will not know the differnece. Because the difference is so minute. How do we trust the formulas that have always been used and reserch a new and better one if one exists. I have used the Lemond formula’s and have discovered them to put me on a bike that is too small. A seat height that is to short. I have the computrainer to prove that. More power with a 1 1/4 higher that what Lemond would have put me at. How do say that a road bike fit and a tri bike fit is the same. Or even a mountain bike frame. Different crank sizes and especially a different seat tube degree. Will these frame and seat heights not be radically different, especially since the riding and the terrain is so different? Just curious–with a 32.5in inseam acccording to the formula I would be using 178 size cranks. Radically different than the LBS, Lemond, Peter White formulas of 170-172.5 cranks. Who’s right!? Thanks for reading, Matt (1) If you put a 170 on the left and a 172.5 on the right you probably will not know the differnece. Because the difference is so minute. How do we trust the formulas that have always been used and reserch a new and better one if one exists. (2) More power with a 1 1/4 higher that what Lemond would have put me at. 3. How do say that a road bike fit and a tri bike fit is the same. Or even a mountain bike frame. 4. Just curious–with a 32.5in inseam acccording to the formula I would be using 178 size cranks. Radically different than the LBS, Lemond, Peter White formulas of 170-172.5 cranks. Who’s right!? Thanks for reading, Matt 1. if you put a 170 on the left and a 172.5 on the right, you will feel it. Guaranteed. (I know from personal experience, putting the bike together late at night, grabbed the wrong crankarm.) 2. 1 1/4 what? inches, cms, mms. Remember all that any formula gives you is a rough guide, to be fine tuned by YOU. 3. They’re not. esp not the mtn bike, I need about 1cm lower on that. horses for courses. 4. Who’s right? All of them. No one. Who knows. Borrow a set of 177.5 mm cranks and find out if they work for you. I have almost the same leg length as you, if I put on 175’s my spin goes to pot, so I really doubt that I could use 177.5 's. Again, this one is a bugger. The best crank literature I’ve seen has been Lennard Zinn’s columns in Velo-News and Inside Triathlon fro some years ago and also in Bernard Hinault and Claude Genzling’s book “Bicycle Road Racing” which is no longer in print in English. I tried your formula just for amusement and got some pretty wacky results. There are enough factors involved that I haven’t yet seen a “formula” where you could plug one dimension in and then get a crank length out. We look at shoe size, leg length (overall), femur length as compared to leg length and pedalling style as well as body type. Remember though, the difference between 172.5 and 175 cranks is not 2.5mm, it’s 5mm or 2X2.5mm. BTW, I could never see a situation where I would put a customer on two different crank lengths- Holy back problems Batman! I did a search a view weeks ago on the net for information about crank length. There certainly are a bunch of theories. Most of it somewhat contradictory and nearly all of it ignored by the bike companies when they spec bikes. And frankly, I am not sure how much it matters. I have seen these theories put into practice in the case of leg length differences. I was pretty interested in the theory of longer cranks for mountain bikes. Given that most of the time on mtb the consistant spin is not as inportant as the ability to turn power on and off. I asked some former bmx rats and they confirmed that the added leverage of longer cranks was helpful. Do bike companies really investigate this problem or answer this question–you never see literature by bike companies on crank arm length. Do they care? They probably don’t because the supply is not great for their demand. If you only care about money and selling bikes, they will put anything on a bike and sell it. Is R&D put into every aspect of the bike. No but we now have so great XTR shifters that cost a arm and a leg but really don’t give a whole lot of improvement to the old but we can sell them as new and make money. The change is 1 1/4 inch higher than Lemond.
{"url":"https://forum.slowtwitch.com/t/crank-arm-length-who-knows/276090","timestamp":"2024-11-02T21:42:50Z","content_type":"text/html","content_length":"24363","record_id":"<urn:uuid:55a90596-43a3-457c-9d31-192decb2ba0f>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00518.warc.gz"}
251 research outputs found The Gutzwiller trace formula is extended to include diffraction effects. The new trace formula involves periodic rays which have non-geometrical segments as a result of diffraction on the surfaces and edges of the scatter.Comment: 4 pages, LaTeX, 1 ps figur The exact elastodynamic scattering theory is constructed to describe the spectral properties of two- and more-cylindrical cavity systems, and compared to an elastodynamic generalization of the semi-classical Gutzwiller unstable periodic orbits formulas. In contrast to quantum mechanics, complex periodic orbits associated with the surface Rayleigh waves dominate the low-frequency spectrum, and already the two-cavity system displays chaotic features.Comment: 7 pages, 5 eps figures, latex (with epl.cls We present a new, model-independent method to analyze radiative decays of mesons to a vector, isovector pair of pions of invariant mass square below the first significant pion-pion threshold in the vector channel. It is based on a combination of chiral perturbation theory and dispersion theory. This allows for a controlled inclusion of resonance physics without the necessity to involve vector meson dominance explicitly. As an example, the method is applied to an analysis of the reactions eta -> pi+ pi- gamma and eta'->pi+ pi- gamma.Comment: 16 pages, 4 figure We present a brief account of two phenomena taking place in a neutron star crust: the Fermionic Casimir effect and the major density depletion of the cores of the superfluid neutron vortices.Comment: 6 pages, invited talk presented by AB at Tours 2003 Symposium on Nuclear Physics, August 26-29,Tours, Franc The generating functional of heavy baryon chiral perturbation theory at order {\cal O}(Q^2) in the mean field approximation (with a pseudoscalar source coupling which is consistent with the PCAC-Ward identities on the current quark level) has been exploited to derive Migdal's in--medium pion propagator. It is shown that the prediction for the density dependence of the quark condensate obtained on the composite hadron level by embedding PCAC within the framework of Migdal's approach to finite Fermi systems is identical to that resulting from QCD Two--point functions related to the pion weak decay constant $f_\pi$ are calculated from the generating functional of chiral perturbation theory in the mean field approximation and the heavy--baryon limit. The aim is to demonstrate that Lorentz invariance is violated in the presence of background matter. This fact manifests itself in the splitting of both $f_\pi$ and the pion mass into uncorrelated time-- and space--like parts. We emphasize the different in--medium renormalizations of the correlation functions, show the inequivalence between the in--medium values of $f_\pi$ deduced from Walecka--type models, on the one hand, and QCD sum rules, on the other hand, and elaborate on the importance for some nuclear physics observables The neutron-proton mass difference in (isospin asymmetric) nuclear matter and finite nuclei is studied in the framework of a medium-modified Skyrme model. The proposed effective Lagrangian incorporates both the medium influence of the surrounding nuclear environment on the single nucleon properties and an explicit isospin-breaking effect in the mesonic sector. Energy-dependent charged and neutral pion optical potentials in the s- and p-wave channels are included as well. The present approach predicts that the neutron-proton mass difference is mainly dictated by its strong part and that it markedly decreases in neutron matter. Furthermore, the possible interplay between the effective nucleon mass in finite nuclei and the Nolen-Schiffer anomaly is discussed. In particular, we find that a correct description of the properties of mirror nuclei leads to a stringent restriction of possible modifications of the nucleon's effective mass in nuclei.Comment: 10 pages, 8 figures, presentation at the 19th Int. IUPAP Conf. on Few-Body Problems in Physics (Aug.31-Sep.5, 2009, Univ.of Bonn, Germany We present new results for Casimir forces between rigid bodies which impose Dirichlet boundary conditions on a fluctuating scalar field. As a universal computational tool, we employ worldline numerics which builds on a combination of the string-inspired worldline approach with Monte-Carlo techniques. Worldline numerics is not only particularly powerful for inhomogeneous background configurations such as involved Casimir geometries, it also provides for an intuitive picture of quantum-fluctuation-induced phenomena. Results for the Casimir geometries of a sphere above a plate and a new perpendicular-plates configuration are presented.Comment: 8 pages, 2 figures, Submitted to the Proceedings of the Seventh Workshop QFEXT'05 (Barcelona, September 5-9, 2005), Refs updated, version to appear in JPhys The instanton-induced determinantal 't Hooft interaction is built into a three-flavor linear sigma model which is considered in the OZI-rule-respecting basis. The mixing of the strange and non-strange quarkonia, which is due to the presence of instantons in combination with the spontaneous breaking of chiral symmetry, is shown to be ideal thus leading to the formation of an octet-flavor state. We study the impact of 't Hooft's interaction on the eta NN coupling finding the usual SU(3) results for this coupling, however, with possible generalizations to non-ideal mixing angles and different values of the meson decay constants in the strange and non-strange sectors, respectively
{"url":"https://core.ac.uk/search/?q=author%3A(A%20Wirzba)","timestamp":"2024-11-02T11:07:10Z","content_type":"text/html","content_length":"148158","record_id":"<urn:uuid:46336545-330f-497a-802c-7558149eec5d>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00002.warc.gz"}
Basic College Mathematics (10th Edition) Chapter 5 - Ratio and Proportion - Review Exercises - Page 373 19 Ans: 8 ounces for 4.98 dollars is the best buy. Work Step by Step 1. 8 ounces for 4.98 Dollars First we try to put the detail in ratio as $\frac{8 ounces}{4.98 Dollars}$ $\approx \frac{1.61 ounces}{1 Dollars}$ 2. 3 ounces for 2.49 Dollars First we try to put the detail in ratio as $\frac{3 ounces}{2.49 Dollars}$ $\approx \frac{1.20 ounces}{1 Dollars}$ 3. 2 ounces for 1.89 Dollars First we try to put the detail in ratio as $\frac{2 ounces}{1.89 Dollars}$ $\ approx \frac{1.06ounces}{1 Dollars}$ So compairing All three options option 1. 8 ounces for 4.98 dollars is the best buy.
{"url":"https://www.gradesaver.com/textbooks/math/other-math/CLONE-547b8018-14a8-4d02-afd6-6bc35a0864ed/chapter-5-ratio-and-proportion-review-exercises-page-373/19","timestamp":"2024-11-13T05:07:28Z","content_type":"text/html","content_length":"66722","record_id":"<urn:uuid:637ce447-75fc-4162-8365-901097904e37>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00686.warc.gz"}
The university policy department must write, on average, five tickets per day to keep department revenues at budgeted levels. Answered You can hire a professional tutor to get the answer. The university policy department must write, on average, five tickets per day to keep department revenues at budgeted levels. 1. The university policy department must write, on average, five tickets per day to keep department revenues at budgeted levels. Suppose the number of tickets written per day follows a Poisson distribution with mean of 6.0 tickets per day. Find the probability that exactly six tickets are written on a randomly selected day from this distribution.a) .160b) .446c) .606d) 0 1. The university policy department must write, on average, five tickets per day to keep department revenues at budgeted levels. Suppose the number of tickets written per day follows a Poisson... Show more Homework Categories Ask a Question
{"url":"https://studydaddy.com/question/the-university-policy-department-must-write-on-average-five-tickets-per-day-to-k","timestamp":"2024-11-03T09:51:08Z","content_type":"text/html","content_length":"26360","record_id":"<urn:uuid:20c1b318-ca95-424b-b7c1-372946c5b582>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00424.warc.gz"}
It's Not Just Analysis, It's A Transformer! It's Not Just Analysis, It's A Transformer! How to make PCA, MNF, and ICA work for you In geospatial work we’re trying to answer questions about where things are on the earth and how they work. Exact scales and applications can vary, and there are only so many measurements we can take or how much data we can get. As a result, a lot of our work becomes getting as much information as we can and then trying to get all that different data to work together, hopefully resulting in a clear picture answering our question. Data transforms are an excellent set of tools for making lots of data help us. Too often, information on tools and analyses are aimed at the wrong audiences, assuming the user wants to be an expert and derive the algorithm from first principles. It is important that the underpinnings and mathematical derivations analysis be open and available to anyone who needs to see them. However, often, what is needed is a clear description of how to use tools reasonably. You don’t have to know how to make wine to enjoy a glass with dinner. There is a lot of detailed information on transforms available; this post is a summary of the important parts and difference of data transforms. Principal Components Analysis (PCA) has been around since the early 20^th century. PCA assumes we have some measurements of points we’re interested in. In image analysis it means having some number of spectral band brightnesses for each of the pixels in our image. With no prior knowledge of an answer the smart bet is “average” and PCA assumes this. Their histograms should be classic bell/ Gaussian/normal-distribution-ish curves. Here are the histograms for Landsat 5 multispectral bands over a part of coastal Alabama: Taken a step further and plotting the brightness of each pixel in two bands, a scatterplot, is created: PCA looks at that scatter plot and says, “Why do we need two bands to describe each pixel, when we could use one number and get most of the information?” So, a new axis is drawn through the average and along the longest axis of the cloud of data points, then all the pixels are scored (shortest distance) on that new axis. That’s the First Principal Component. A second axis is drawn perpendicular to the first, also through the average, to capture remaining information. Roughly, it would look like this: You always end up with as many Principal Components as bands you started with. While we can’t draw in 4 (or more) dimensions, creating those axes works the same. In the case of Landsat TM data we get 6 Principal Components. There are several very good reasons why you would go to all this effort. First, because PCA packs as much independent information as possible in to the components, the first ones have the most information. This means you can make an RGB display of the first three PCA bands and have an image containing the maximum amount of information you can put on the screen at one time. In the case of our Alabama Landsat scene, we go from a scene that has a lot of information but can be hard to interpret: To a PCA composite that maximizes the amount of information and visual separation of what’s going on in the image. Here’s what we get when we put the first three PCA bands in to an RGB composite: The image content shows up much more distinctly because PCA is packing as much signal as possible in to those three bands. You can see this with the Eigenvalue plot that gets generated when you run The short story on the plot is that high eigenvalues (y-axis) mean lots of information in the PCA band (x-axis). Here we’re really not getting much after about the third component. This brings up a second benefit of PCA: “reducing data dimensionality”. We can get almost all of the information from a 6 band image in just 3 well-crafted PCA bands. This reduces data processing, especially with hyperspectral data, taking you from hundreds of bands to tens of bands. With most content in the first 3 bands, a third benefit of PCA appears, de-noising. Those later bands are mostly noise or faint signal indistinguishable from noise. Note that I did not say they are only noise. They are worth a look.Some interesting sensor artifacts reside in the 5^th and 6^thPCA bands from our Landsat scene. There is some signal, but a grid pattern appears in the otherwise noisy-looking image, artifacts of the sensor and processing: PCA helps us get as much information as possible from our data and make it as easy to view as possible. With more advanced work, we could use it for noise filtering or diagnosing sensor problems. But we can build onPCA, which brings us to our second data transform. MNF, which is Minimum Noise Fraction or Maximum Noise Fraction in various publications, is two PCA transforms in a row. One of them is based on the data statistics, just like PCA, but the other one is based on noise statistics. Using the same idea of drawing our new component axes to maximize when and how we catch signal, but doing it with an eye towards the noise information, MNF does a better job of pushing signal to the first components and noise to the later ones. It is more work, but worth it for the same reasons PCA is a good idea. Here are the first 3 MNF components in an RGB MNF improves on PCA by doing two transforms and including information about noise. Independent Components Analysis (ICA) improves on it by examining that assumption about our normal distribution of data, all the way back in our first graph. We can see those curves aren’t ideal normal distributions. Perfect bell curves don’t usually happen. ICA accounts for that messiness, or clumping in the data. It looks at more advanced statistics than just the variance when it draws new axes. The results are great for filtering signal and noise. Same scene, first three ICA components: Capturing and including some of that more subtle signal can make the image harder to interpret than the distinct colors of our MNF results,but it is often an improvement for further processing. The next time you’re trying to pull information out of an image, give transforms a try. You can get more information on screen, clean up noise, reduce data volumes, and maximize results in further processing. Best of all, you don’t have to be an expert in math and stats to use transforms!
{"url":"https://www.nv5geospatialsoftware.com/Learn/Blogs/Blog-Details/its-not-just-analysis-its-a-transformer","timestamp":"2024-11-06T08:27:54Z","content_type":"text/html","content_length":"89985","record_id":"<urn:uuid:da05020e-ea1c-4a1e-b4b9-aacd679f5665>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00138.warc.gz"}
Specific Heat Capacity Formula - Definition, Types and FAQs What is the Specific Heat Capacity Formula? Specific heat is the heat energy required to change the temperature of one unit mass of a substance of a constant volume by 1 °C. The specific heat capacity of a substance is the amount of energy needed to change the temperature by 1 unit of material of 1 kg mass. The SI unit of specific heat and specific heat capacity is J/Kg. The specific heat formula and the formula of specific heat capacity will be discussed here. Thermal Capacity Formula The thermal capacity or heat capacity is a physical property of matter or substance. We define this property as the amount of heat supplied to a given mass of a material to generate a change of unit The thermal capacity SI unit is Joule per Kelvin or J/Kg. Besides this, the heat capacity formula or the thermal capacity formula is: \[C = lim \frac{\Delta Q}{\Delta T}\] \[\Delta T \rightarrow 0\] Delta Q = It is the amount of heat that must be added to the object of mass ‘M” to raise its temperature by delta T. Also, the value of the above parameter varies considerably depending on the initial temperature {ΔT} T of the object and the pressure {ΔP}P applied to it. Molar Heat Capacity Formula Molar heat capacity is the amount of heat needed for the temperature rise of a given substance by 1 ⁰C. The molar heat capacity formula is given by: C[m] = C/n C[m] = molar heat capacity C = heat capacity n = number of moles We see that “n” is the number of moles of the sample. The number of moles can be determined by the following formula: \[= \frac{Quantity \, of \, a \, sample \, or \, mass \, of \, the \, substance}{molar \, mass} = \frac{m}{M}\] Now, let’s understand the specific heat capacity equation: Specific Heat Formula The specific heat capacity formula is: \[Q = mc \Delta t\] \[c = \frac{Q}{m \Delta t}\]......(1) Q is the heat energy m = mass in Kg c = specific heat capacity, and \[\Delta \] \[\Delta \] t is the temperature change in Kelvin Also, the change in temperature is given by: Δ T = (T[f] – T[i]) Where Tf is the final temperature and Ti is the initial temperature in K. Unit of Specific Heat Capacity The unit of specific heat capacity is J/Kg. K, or J/Kg °C. Dimensional Formula of Specific Heat The dimensional formula of specific heat is calculated as; The dimensional formula of heat is \[[M^{1} L^{2} T^{-2}]\] The dimensional formula of m is \[M^{1} L^{0} K^{0}\] The dimensional formula of \[\Delta T\] is \[M^{0} L^{0} K^{1}\] Step 1: Now, putting the dimensional formulas of Q, m, and \[\Delta T\]: \[= \frac{[M^{1} L^{2} T^{-2}]}{[M^{1} L^{0} K^{0}][M^{0} L^{0} K^{1}]}\] Step 2: Canceling out the common terms: \[= M^{0} L^{2} T^{-2} K^{-1}\] So, the dimensional formula of specific heat capacity is \[= M^{0} L^{2} T^{-2} K^{-1}\] Specific Latent Heat Formula Specific latent heat is the measure of the heat energy (Q) released/absorbed per mass (m) during a phase/state change. The specific latent heat formula is: \[Q = mL\] \[L = \frac{Q}{m}\] Q = the heat absorbed or released depending on the direction of the transition. It is measured in KJ L = specific latent heat of the material. It is measured in KJ/kg. m = mass of the substance. It is measured in Kg. Points to Remember • Transferred heat depends on three factors which are - The change in temperature, the mass of the system, and the phase of a substance. • The temperature change is directly proportional to the amount of heat transferred. • Heat can be added twice to double the temperature and mass. • Mass is directly proportional to the amount of heat transferred. • Heat has to be added in order to cause an equivalent temperature change in a doubled mass. • The amount of heat transferred depends on the materials and the phase. • The specific heat power of water is 4.2 joules per gram per Celsius degree • The specific heat capacity of water is higher than than the metal • The application of specific heat is used in the swimming pools. • The constant volume of the specific heat capacity of steam is 1.4108 kJ/kg K. • The constant pressure of the specific heat capacity of steam is 1.8723 kJ/kg K. • The heat needed to raise a substance's temperature by 1 degree Celcius is called the specific heat capacity. • The specific heat power of water is 4.2 joules per gram per Celsius degree. • Q = C m ∆t is the formula for the specific heat capacity Tips for Understanding the Topic • Specific Heat capacity is an important chapter for competitive exams such as JEE Mains and BITSAT so this chapter is important in Physics. • This chapter is also covered in the school so students are suggested to clear the concepts during the school which will help them for the further exams too. • As this topic is important from an exams point of view the students are recommended to cover this topic. • The students should understand the concept of specific heat capacity and can make notes regarding this topic. • This chapter consists of the problems in which mathematical solutions are to be solved so students need to practice the problems so that they won't face problems during the exams. • Make notes while the teacher teaches this will help them to make the running notes. This will make a brief understanding of this chapter. • They can make their own personal notes by studying the chapters and referring to the external notes which are available on the website. This will make the students remember the topic and can refer for quick study. • Practice the problems given in the textbooks, don't just stay with a single book for solving the questions. They should solve multiple questions from different books this will make the student more acquainted with a topic. • They can refer to the solution books for the solutions of the problems after they try to solve the question. By double-checking the steps used to answer the problem is correct or not and comparing the answer to that question will help the students to be sure if they can solve a problem or not. • Some students may face problems they should practice more that will make their concepts more clear. • Clear the basic concepts of the topic so that it will be easy to understand and answer the questions. • Students are recommended to have external guidance to prepare for the exam and they can guide them to study the right topic with better knowledge. • If they get stuck in a problem or have any doubts they can ask the teachers or tutors so that they can help the students regarding the topic. • Students should always revise the chapter after they finish studying. This will help the students to come up with doubts which they might have while revising the topic. Doubts help the student to know and gain more knowledge about the subject that might help them. • Strengthen the math skills because this will help to solve the problems in Physics. • Students also have to remember the formula and have the knowledge of the formula to use it in the solution so that they can get the answers right. • Students should learn the SI units for using the equations in the correct way. By this, it will be easy for them to simplify the equations and can get the answers they want. Vedantu has everything prepared for the students so that students can have the materials that they need. Free notes and study materials of this are available on the website of Vedantu and can be downloaded with the help of a mobile or laptop. It can be downloaded in PDF form. Students can refer to the study material for their exams. The specific heat is the measure of heat per unit mass needed for the temperature rise by one degree Celsius. The connection between heat and temperature change is typically communicated in the structure that appeared underneath where c is specific heat. FAQs on Specific Heat Capacity Formula 1. Why is the Specific Heat Capacity Formula Used? The specific heat formula is used to determine the specific heat of any given material, if any of the parameters, like mass, heat gained, or the difference in temperature is given. Its measuring unit is Joule/Kg Kelvin, abbreviated as J/Kg K. The specific heat capacity formula helps us measure the specific heat capacity of a given material in Joule/Kilogram-Kelvin (abbreviated as J/Kg. K). The significance of specific heat capacity is: It is the amount of heat energy needed for the temperature rise of 1 kg of a substance by 1 K. Thus, it gives an important indication of how much energy will be needed to heat/cool the material of a given mass by a given amount. 2. What is the Specific Heat of Liquid? At room temperature, the value of specific heat capacity (Cp) is roughly 4.2 J/g°C. This states that it takes 4.2 joules of energy to increase 1 gram of the temperature of water by 1 degree Celsius. 3. What is the Specific Heat of Milk? According to the handbook "Dairy-Based Ingredients" of the American Association of Cereal Chemists, by Ramesh Chandan, skimmed milk has a specific heat of 3.97 J/g °C, and whole fat milk has a specific heat of 3.89 J/g °C. The cream of milk has a specific heat of 3.35 J/g °C.
{"url":"https://www.vedantu.com/formula/specific-heat-capacity-formula","timestamp":"2024-11-03T12:30:20Z","content_type":"text/html","content_length":"300865","record_id":"<urn:uuid:63d4c592-d710-4225-9700-0bfb8a6050c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00434.warc.gz"}
Finding the Acceleration of a Particle Moving in a Straight Line given Its Distance-Time Relationship Question Video: Finding the Acceleration of a Particle Moving in a Straight Line given Its Distance-Time Relationship Mathematics • Third Year of Secondary School A particle moves along a straight line. Its displacement at time π ‘ is π ₯ = β cos (π ‘). Which of the following statements about the acceleration of the particle is true? [A] it is equal to π ₯ [B] it is equal to β π £, where π £ is the velocity of the particle [C] it is equal to the velocity of the particle [D] it is equal to β π ₯ Video Transcript A particle moves along a straight line. Its displacement at time π ‘ is π ₯ equals negative cos of π ‘. Which of the following statements about the acceleration of the particle is true? Is it (A) it is equal to π ₯? (B) It is equal to negative π £, where π £ is the velocity of the particle. (C) It is equal to the velocity of the particle. Or is it (D) it is equal to negative π ₯? In this question, weβ ve been given information about the displacement of a particle time π ‘. And weβ re looking to find information about the acceleration of that same particle. And so we recall the link between acceleration and displacement. Acceleration is the rate of change of velocity of the particle. And the velocity itself is the rate of change of displacement. So, we differentiate an expression for displacement with respect to time to get us an expression for velocity. And then we differentiate once more to get an expression for acceleration. So, to find an expression for the acceleration of the particle, weβ re going to differentiate negative cos of π ‘ with respect to π ‘. In fact, thereβ s a cycle that can help us remember how to differentiate trigonometric functions. The derivative of sin π ₯ is cos π ₯. Then, the derivative of cos π ₯ is negative sin π ₯. If we differentiate negative sin π ₯ with respect to π ₯, we get negative cos π ₯. Then, if we differentiate negative cos π ₯ with respect to π ₯, we get back to sin π ₯. So, letβ s begin by differentiating our expression for π ₯ with respect to time π ‘. This tells us that the velocity is the derivative of negative cos π ‘. We can see from our cycle that the derivative of negative cos π ₯ is sin π ₯. So, the derivative of negative cos π ‘ with respect to π ‘ is sin π ‘. To find an expression for acceleration, weβ re going to differentiate our expression for velocity with respect to time. Once again, we see from our cycle that the derivative of sin π ₯ with respect to π ₯ is cos π ₯. And so this means that the derivative of sin π ‘ with respect to π ‘ is cos π ‘. So, we have three expressions describing the motion of the particle. Velocity is sin π ‘, acceleration is cos π ‘, and π ₯, displacement, is negative cos π ‘. We can see that, in fact, our expressions for acceleration and displacement look quite similar. However, they are negatives of one another. So, we can say that π ₯ is the negative of the acceleration or vice versa. π is equal to negative π ₯. Going back to the options given to us in this question, we see that that is equivalent to (D). The answer is (D). It is equal to negative π
{"url":"https://www.nagwa.com/en/videos/763152737213/","timestamp":"2024-11-02T12:19:47Z","content_type":"text/html","content_length":"254481","record_id":"<urn:uuid:b2761ad3-6e26-44f8-871a-df6e705fa940>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00846.warc.gz"}
FEDERAL OPEN MARKET COMMITTEE: A part of the Federal Reserve System that's specifically responsible for directing open market operations, and is more generally charged with guiding the nation's monetary policy. The FOMC includes the 7 members of the Fed's Board of Governors and 5 of the 12 presidents of Federal Reserve District Banks. The chairman of the Federal Reserve System is also the chairman of the FOMC. The FOMC meets every 45 days to evaluate monetary policy. Visit the GLOSS*arama A basic technique used in economics that analyzes small, incremental changes in key variables. Marginal analysis is the primary analytical approached used in the study of markets, production, consumption, business cycles, and economic policies. It not only reflects how most economic decisions are made, it also lends itself to mathematical and graphical analysis. Marginal analysis is based on a simple question often posed in the study of economics: "What happens if something changes by one dollar, one unit, one person, or one whatever?" For example, what happens to the quantity demanded of hot fudge sundaes if the market price',500,400)">market price increases by one cent? Or what happens to gross domestic product if investment decreases by $1? Or what happens to the market price of computers if one more computer supplier enters the industry? Marginal Obsession The apparent economic obsession with marginal changes exists for at two notable reasons. • Incremental Decisions: One reason is that many economic decisions made in the real world are made "at the margin." Duncan Thurly decides whether or not to eat one more slice of pizza at the all-you-can-eat pizza lunch buffet after having eaten five slices. Winston Smythe Kennsington III decides whether or not to hire an additional worker to the current staff. The Shady Valley City Council debates over adding an extra penny to their existing sales tax. These are marginal decisions, one and all, and just the sort of phenomena investigated with marginal analysis. • Sophisticated Analysis: A second reason for using marginal analysis can best be termed analytical sophistication. Economists frequently make use of high-powered mathematical techniques, especially calculus, to create models of markets, consumer behavior, production decisions, or the aggregate economy. Such high-powered mathematical techniques not only lend themselves easily to analyzing incremental changes, but also to building extremely complex models that use these incremental changes to reveal interactions, implications, and conclusions about the economy that are often far from obvious. For example, such a complex model might reveal how a financial crises in Asia affects the construction of new homes in California. Marginal Slope The use of marginal analysis works nicely with both mathematical and graphical analysis. Marginal means incremental change. In simple mathematical terms the slope parameter of an equation captures the marginal change. In a graph, the slope of a line captures a marginal change. In effect, the term "marginal" is synonymous with the term "slope." Consider this simple equation that captures a linear relation between two variables X and Y: Y = a + bX The key point of focus is the slope parameter, b. This equation indicates that each 1 unit change in X results in a change in Y by the value of b. If b is 4, then an increase in X by 1 results in Y increasing by 4. The slope parameter b captures the marginal change in Y resulting from a change in X. Now consider a simple graph of a line such as Y = a + bX. It too captures marginal change as the slope. This exhibit displays a positively-sloped line. The numerical value of the slope of the line is 4. This value captures the marginal change in Y measured on the vertical axis resulting from a change in X measured on the horizontal axis. An increase in X by 1 results in Y increasing by 4. <= MANAGERIAL BEHAVIOR MARGINAL BENEFIT OF SEARCH => Recommended Citation: MARGINAL ANALYSIS, AmosWEB Encyclonomic WEB*pedia, http://www.AmosWEB.com, AmosWEB LLC, 2000-2024. [Accessed: November 9, 2024]. Check Out These Related Terms... | | | Or For A Little Background... | | | | | | | | | | And For Further Study... | | | | | | | | | | | Search Again? Back to the WEB*pedia YELLOW CHIPPEROON [What's This?] Today, you are likely to spend a great deal of time at a garage sale trying to buy either a replacement washer for your kitchen faucet or a stretchable, flexible watch band. Be on the lookout for jovial bank tellers. Your Complete Scope This isn't me! What am I? A thousand years before metal coins were developed, clay tablet "checks" were used as money by the Babylonians. "Laughter is the shortest distance between two people. " -- Victor Borge, musician, humorist Tell us what you think about AmosWEB. Like what you see? Have suggestions for improvements? Let us know. Click the User Feedback link. User Feedback
{"url":"http://www.amosweb.com/cgi-bin/awb_nav.pl?s=wpd&c=dsp&k=marginal%20analysis","timestamp":"2024-11-09T19:04:30Z","content_type":"text/html","content_length":"36210","record_id":"<urn:uuid:d48d295b-5294-4d10-a5f5-b2b5f70b0609>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00751.warc.gz"}
Tuesday temperature - math word problem (83812) Tuesday temperature The temperature on Monday was -23°F. The temperature on Tuesday was 18° higher. What was the temperature on Tuesday? Correct answer: Did you find an error or inaccuracy? Feel free to write us . Thank you! You need to know the following knowledge to solve this word math problem: Units of physical quantities: Grade of the word problem: Related math problems and questions:
{"url":"https://www.hackmath.net/en/math-problem/83812","timestamp":"2024-11-13T22:00:09Z","content_type":"text/html","content_length":"49589","record_id":"<urn:uuid:b1c135d9-6081-4576-85e9-13a40f9de7fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00255.warc.gz"}
Turn off box plot in variability chart using JSLTurn off box plot in variability chart using JSL Hi All, Does anyone know why the following script failed to turn off the box plot and cell mean? I copy this script after I manually created the variability plot. However, if I run the script, the box plot and cell mean are not turn off. I am currently using JMP17. The only workaround I found is by using either of the two options below : 1. << (Variability Analysis[1] << Show Box Plots( 0 )) 2. << Variability Analysis << Show Box Plots( 0 ) In option 1, what does the [1] in Variability Analysis mean? It seems like I can omit without any issue. I found the solution from JMP -> Scripting Index. dt = Open("$SAMPLE_DATA/Big Class.jmp"); Variability Chart( Y( :height ), Model( "Main Effect" ), X( :sex ), Sigma Multiplier( 6 ), Analysis Type( "Choose best analysis (EMS REML Bayesian)" ), Variability Analysis( Show Range Bars( 0 ), Show Cell Means( 0 ), Std Dev Chart( 0 ), Points Jittered( 1 ), Show Box Plots( 0 )
{"url":"https://community.jmp.com/t5/Discussions/Turn-off-box-plot-in-variability-chart-using-JSL/td-p/801205?trMode=source","timestamp":"2024-11-05T10:35:11Z","content_type":"text/html","content_length":"412339","record_id":"<urn:uuid:9abe2ee3-cf0f-488f-9963-91e471dff51a>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00519.warc.gz"}
Airbus crypto challenge write-up Airbus crypto challenge write-up posted December 2014 Airbus made a "private" challenge called « Trust the future » and accessible only by some selected schools (epitech, insa, and others). I wasn't invited to participate but there was a "crypto" challenge I thought was interesting. Since the challenge just finished I'm posting the write up. Crypto challenge #1 We have 4 certificates and a challenge1 file that seems to be a s/mime file of a pkcs#7 enveloped-data object. 2.4.3 EnvelopedData Content Type This content type is used to apply privacy protection to a message. A sender needs to have access to a public key for each intended message recipient to use this service. This content type does not provide authentication. 3.2 The application/pkcs7-mime Type The application/pkcs7-mime type is used to carry CMS objects of several types including envelopedData and signedData. The details of constructing these entities is described in subsequent sections. This section describes the general characteristics of the application/pkcs7-mime type. We dump the info of each certificates in human readable format, openssl has commands for that (I think certtool does as well, but I'm on windows using cmder and openssl is the one included). openssl x509 -in alice.crt -text -noout -out alice.crt.txt We see that alice, bob and charly use the same rsa exponent (3). Reminder: RSA If you're familiar with RSA (and it's highly probable you are if you read this blog) you can skip this section. RSA is an asymmetric encryption scheme (also used as a signature). It works by generating a set of private key/public key, the private key is of course kept private and the public key is publicly disclosed. If someone wants to send us a private message he can encrypt it with our public key and we will be able to decrypt it with the private key. The public key is the pair of number (n, e) where n is called the modulus and e is called the exponent. If we want to encrypt a message m with the public key we "basically" do c = m^e modulo n and send c. To decrypt it we use our private key d like this: m = c^d modulo n. The math behind this is that n is generated from two secret primes p and q (big enough) n = p x q and d = e^-1 modulo (p-1)(q-1), (p-1)(q-1) being phi(n) being the order of the multiplicative group Z /nZ. The security comes from the fact that it's Computationally Hard to find the inverse of e if we don't know p and q. By the way, Heartbleed (a recent attack on openssl) led to finding one of the prime, thus the entire decomposition of n. Textbook RSA vs real life RSA This is all theory. And in practice we have to go through several steps to encrypt an ascii message, make sure it is of length lesser than the modulus, make sure the modulus is big enough, etc... Textbook RSA is also deterministic and thus not semantically secure (see my previous post) + it is malleable: imagine you intercept c, and of course you know (n, e) (the public key). You could compute c' = 2^e * c = 2^e * m^e = (2m)^e modulo n, this would correctly decrypt as 2m. Thus, to counter those in practice, RSA Encrytion uses padding (usually OAEP) to make it probabilist and not malleable. Let's go back to our challenge We open our challenge1 file: MIME-Version: 1.0 Content-Disposition: attachment; filename="smime.p7m" Content-Type: application/x-pkcs7-mime; smime-type=enveloped-data; name="smime.p7m" Content-Transfer-Encoding: base64 To read that we need to extract the pkcs7 object and parse it. Openssl allows us to do this: openssl smime -in challenge1 -pk7out -out challenge1.p7m openssl asn1parse -text -in challenge1.p7m We get an annoying dump of info to read. With three of those things: 95:d=6 hl=2 l= 16 prim: INTEGER :6384E2B2184BCBF58ECCF10CA7A6563C 113:d=5 hl=2 l= 13 cons: SEQUENCE 115:d=6 hl=2 l= 9 prim: OBJECT :rsaEncryption 126:d=6 hl=2 l= 0 prim: NULL 128:d=5 hl=4 l= 256 prim: OCTET STRING [HEX DUMP]:C1E2357C6FC53F1CC5E0E76EB1224BE8F24E8839251CF954A98090C4549F1BAFB7BCB1006DD2A982D56C1D2D3E6B422122D01F78DA0099776B789162E8CE94EE3D1E7B88AA8175DBB86F2F4AC65361BC949B3B90460741CAEE6F1ABDC5BD6C5296FBC8F2DAFF77F7110EA32D330D38DD2CA2FE13E785C86FE2210B58074C2DA5F440794BA023FC98B3D1E7DC979DBAC6672B5C19ABF4A91E21D5E474475BC09B78910D1F8E0290B38AE8D756E04D7F5EFBA64BFB5A0E96CD3DE1D82F609544A423F666D08B63262229687E1982BC8E424C7B5266B11A59036625F8E92C06740A3C9D8F3CE87FEB1F4444BC2039C8C6FF0AB9457D8AA63851ECF3C4AF1A2328FD Which means the same message was sent to three recipients, identified by their serial number which we recognize as being our alice, bob and charly. We also get this at the end: 1110:d=4 hl=2 l= 9 prim: OBJECT :pkcs7-data 1121:d=4 hl=2 l= 20 cons: SEQUENCE 1123:d=5 hl=2 l= 8 prim: OBJECT :des-ede3-cbc 1133:d=5 hl=2 l= 8 prim: OCTET STRING [HEX DUMP]:01D4CE3AF4D17ABB Which means that the data sent (after this dump) is encrypted by 3DES version 3 (three different keys) in CBC mode with an IV 01D4CE3AF4D17ABB. Reminder: DES-EDE3-CBC I like to put reminders like this so you don't have to switch to Wikipedia if you don't remember what are those letters. DES (Data Encryption Standard) is the famous no-longer-used block cipher (because it was broken ages ago). EDE3 short for the third version of the Triple DES block cipher (that is still considered secure today, it was a response to DES no longer being secure) which uses 3 different keys. Encrypting is done like this: • we encrypt with key1 • then we decrypt with key2 • then we encrypt again with key3 E(k3, D(k2, E(k1, M))) Hence the triple DES. CBC is a mode of operation. A block cipher can only encrypt/decrypt blocks of a certain size (64bits with DES). If you want to do more (or less) you have to use a mode of operation (and a padding Chinese Remainder Theorem Here the interesting thing is that the same message was sent to three different recipients, encrypted with the same exponent (3). Let's write down the informations we have: c1 = m^3 modulo n1 c2 = m^3 modulo n2 c3 = m^3 modulo n3 c1 being the encrypted message sent to Alice, n1 being Alice's modulus, and so on... We have a system with one unknown: the message. The Chinese Remainder Theorem works in a similar fashion to Lagrange Interpolation (anecdote time: it is used in Shamir's Secret Sharing). So that we have: m^3 = c1 * n2 * n3 * ((n2 * n3)^-1 [n1]) + c2 * n1 * n3 * ((n1 * n3)^-1 [n2]) + c3 * n1 * n2 * ((n1 * n2)^-1 [n3]) modulo n1 * n2 * n3 A brief explanation: We have `c1 = m^3 modulo n1, to place it in a formula modulo n1 * n2 * n3 we have to cancel it when it's modulo n2 or modulo n3. How to make something congruent to zero when its modulo n2 or n3 ? Make it a multiple of n2 or n3. So we multiply c1 with n2 and n3. But then when it will be modulo n1 we will have the value c1 * n2 * n3 which is not correct (c1 = m^3 modulo n1 !). So let's cancel the n2 and n3 with their inverse modulo n1. We then have c1 * n2 * n3 * ((n2 * n3)^-1 [n1]). We do this with all the equations to find the bigger equation. This is the Chinese Remainder Theorem. Simple no? And this result is even more useful since we know that: m < n1 m < n2 m < n3 m^3 < n1*n2*n3 Of course if m was greater than one of the modulus then it would decrypt incorrectly. So what we have is: m^3 = something modulo n1*n2*n3 m^3 = something That's right, we can get rid of the modulo. We then do a normal cubic root and we find m. Here's the quick python code I hacked together for this: (by the way we can quickly get the modulus of each recipients with openssl: openssl x509 -in alice.crt -modulus) ## 6384E2B2184BCBF58ECCF10CA7A6563C (Alice) c1 = "C1E2357C6FC53F1CC5E0E76EB1224BE8F24E8839251CF954A98090C4549F1BAFB7BCB1006DD2A982D56C1D2D3E6B422122D01F78DA0099776B789162E8CE94EE3D1E7B88AA8175DBB86F2F4AC65361BC949B3B90460741CAEE6F1ABDC5BD6C5296FBC8F2DAFF77F7110EA32D330D38DD2CA2FE13E785C86FE2210B58074C2DA5F440794BA023FC98B3D1E7DC979DBAC6672B5C19ABF4A91E21D5E474475BC09B78910D1F8E0290B38AE8D756E04D7F5EFBA64BFB5A0E96CD3DE1D82F609544A423F666D08B63262229687E1982BC8E424C7B5266B11A59036625F8E92C06740A3C9D8F3CE87FEB1F4444BC2039C8C6FF0AB9457D8AA63851ECF3C4AF1A2328FD" n1 = "EFBA9C442084759DC9770021B03C2E2913053E770779316F92C5DBFCAE4D3682E64006E38FA6A3AC24CC13AD2E747A50E5735064549F590294E36F2A1B23DB29567B49C007F8F8C224D3CD19B81D3F198C540291C135965E549881B775EEE29684F0E6CD4C2A017BE38F2E78E070D503BE9EE3EA2C491E53DE9C705FEB973918A168F275D90D055778289598BD2377D79ACC1BA493F570C5C8301913CEF12CD513321F8F320D8EC8172182D03F33721F02DFCE24463AE7A6CAB7C3A0CBB7D2AB149D347A2C9ABDB81BE4B60CAECBF31CF79C4BA0081FC00BB0939A950CBACA5B7B79FF92AF273B0D01A7E183FF30C90F27705D18F70EBB32281C5A873ED0A90D" ## 9F9D51BC70EF21CA5C14F307980A29D8 (Bob) c2 = "27EB5A62A311DFAECF09318BEF7D60B98EA151AF09BDBF2A89A884617B8A8A14FF6F8045A8FD5D8956F5768C32A7E47AB17FA08D5F7D2EB590C4FC8296A1F70069C338CFA3C131A58FE05A75E36D7457ECC7B1BB403C1FF31FA66FB478B1F4548325B57961191AE1BFDDD7F5AF6FEC6FD94F66B1BC482337B579AD790466D1F33EDB09AA388085053D3C383F91A8EC40DB150365735B7E2D01F172E23717D31ABE0350FAC6730673C3C70EE593E8008A222DE40CF8A62615D119CD119FE8C30DA49E7A1D3596279B659E72C584D6D8262B1126B8C8BCDC09A31761EE746A14ADA7EC387AF2C52CBABDD8F443DF1F4C5E70C83B82A12F3BBCB21494689D13791F" n2 = "C0028301D5645483592E2117463A804454BD1AE33B1CBEF33D9CDFF86EFD46CB24BA1984874DAE3A4A3D3609D9607276F91DFAD38E9ED6B6BCE15F29C49E1C7E0B532AE2A1343AF3F8064B4A093F23F981624D4E08F901787B761847B4F0963512EAC13B5D47DA4295FF614501A3D7D7EDA6B4E6197B974A70BF11FDC5D619D50974415E209DE76C68F190DBCA5BB6F1963C1E0E987BE105FF9082EAE003C2051A4C95E3299D3C1BC64D5F8E95D40C7B27EEEA5EBCD9B54CD6B5655B0AB9AEAA15E976AE37EA228A151D2AFD5417D4BCBC3BFB396E6B10AF561CBE0FD0374081B7034585C54096849FF82B79A117E44AE8FDB5B304BC0D9CA297C2F4A57FF91F" ## A6D4EF4DD38B1BB016D250C16A680470 (Charly) c3 = "04991C5BA4882F329B03B18E2B317F4A54905ED4EB832B084A42AD700A0D3136A14BB57D61D4A1982E2CAB0FF773356759EE4AD77C1982E642CF574332AB32D109952FDE6221D77C35E4D0B69E559392DBE602E5336BD09239E85F21A70F4A824907AF75C9C372D4BE4C15E45431C35FE678E2646017D74186B3B084A41F217655A2ED262AA5C300BA737AB0DF270BD0B38A2FF215A3B5DB3CBB79350DDFEF1A08E40CB253B506D92002BBF4AD112AC1DDDB96CD4539A01035E76B1CC5C43427F46C83DBAA318387FE2C8C7FAA75FC0099050CF98671015A568CFFC56DFF6F8CB80A6A55B4CCB0D825AA9D99098DDA5D2EEC7D40D0BCCDA42D9E618A09AEC50A" n3 = "CFA7854352FF9DF5E84AB10AB8F034F8106811D973BAB528BFCBD3DCBCFDF9FB5C398E23B58BA883F7C78F47C6694B4F042CDB8E54E856040F8A8A9ADBCA4C6D0813894C43352EB3EE19C1F76DF46DFD1B6BB38349BCE811036B0ED7ACFE2E5045FE4232F11DA3F113189A176964C206155342FD9E2E8AD11CBBCB85DFDF30E62AEA068F2DD7CEC6CF818D1E312BBA5FA6385461CA5ADCA0F95B6299FA366EEF8856416D72A42A93FD979E269D8FEA143870985FD353C85850FB4A11B6E4BA483CDC97F7E1717C34DF7D9E34DF83F67A9DA97ACA69926167D44C2CB3BB858EC041A244A6197D7F3B9AFD02A0562F13EACE6494F289184DAD16D2D995ED1ADC13" ## base16 -> base10 c1 = int(c1, 16) c2 = int(c2, 16) c3 = int(c3, 16) n1 = int(n1, 16) n2 = int(n2, 16) n3 = int(n3, 16) ## extended euclide algorithm def xgcd(a,b): """Extended GCD: Returns (gcd, x, y) where gcd is the greatest common divisor of a and b with the sign of b if b is nonzero, and with the sign of a if b is 0. The numbers x,y are such that gcd = ax+by.""" prevx, x = 1, 0; prevy, y = 0, 1 while b: q, r = divmod(a,b) x, prevx = prevx - q*x, x y, prevy = prevy - q*y, y a, b = b, r return a, prevx, prevy ## chinese remainder formula n2n3 = n2 * n3 n1n3 = n1 * n3 n1n2 = n1 * n2 n2n3_ = xgcd(n2n3, n1)[1] n1n3_ = xgcd(n1n3, n2)[1] n1n2_ = xgcd(n1n2, n3)[1] m3 = c1 * n2n3 * n2n3_ + c2 * n1n3 * n1n3_ + c3 * n1n2 * n1n2_ m3 = m3 % (n1n2 * n3) from decimal import * getcontext().prec = len(str(m3)) x = Decimal(m3) power = Decimal(1)/Decimal(3) answer = x**power ranswer = answer.quantize(Decimal('1.'), rounding=ROUND_UP) diff = x - ranswer**Decimal(3) if diff == Decimal(0): print("x is the cubic number of", ranswer) print("x has a cubic root of ", answer) • The xgcd function is included in sage but here I use Python so I included it in the code. • We need to use the decimal package to calculate the cubic root because our number is too big. We then get this big ass number that we convert to hexadecimal (hex(number) in python). This yields: We refer once more to the RFCs 8.1 Encryption-block formatting A block type BT, a padding string PS, and the data D shall be formatted into an octet string EB, the encryption block. EB = 00 || BT || PS || 00 || D . (1) The block type BT shall be a single octet indicating the structure of the encryption block. For this version of the document it shall have value 00, 01, or 02. For a private- key operation, the block type shall be 00 or 01. For a public-key operation, it shall be 02. The padding string PS shall consist of k-3-||D|| octets. For block type 00, the octets shall have value 00; for block type 01, they shall have value FF; and for block type 02, they shall be pseudorandomly generated and nonzero. This makes the length of the encryption block EB equal to k. We have our 3DES key: 4f8957408f0ea202c785b95e206b3ba8da3dba7aea08dca1 to use. Let's get the hexdump the end of the file (you can use commandline utilities like base64, hexdump, dd and xdd): openssl smime -in challenge1 -pk7out > b64file` base64 -d b64file > hexfile hexdump -s 1135 hexfile And finally decrypt our encrypted file with openssl since it provides a command for that: openssl des-ede3-cbc -d -iv 01D4CE3AF4D17ABB -K 4f8957408f0ea202c785b95e206b3ba8da3dba7aea08dca1 -in encrypted Voila ! That was really fun :) This is a Bleichenbacher-style attack that shows up in all sorts of competitions (like PicoCTF, Matasano). It also broke Firefox's certs a few years ago. Aren't all the attacks from Bleinchenbacher anyway : )) ? How can we call one type of attack a Bleichenbacher-style attack. I am aware of two attacks that are attributed to Bleichenbacher: one is an adaptive chosen-ciphertext attack on a padding oracle, the other is related to small e / small plaintext, in which case you can just extract the cubic root from the ciphertext. The case discussed in this post, however, is Håstad's Broadcast Attack, demonstrated in 1988 (see http://www.csc.kth.se/~johanh/rsalowexponent.pdf). Nowadays it is prevented by random paddings. To follow up on that: yes, there are more attacks from Bleichenbacher, and no, I don't why someone mentioned the cubic root thing being his idea. Still, his attacks do not apply here, and this attack is quite clearly Håstad. My question if off-topic, but do you have the GPG key used to decrypt the "challenges.gpg" file? Thank you. I'm sorry but what challenges.gpg file? leave a comment...
{"url":"https://cryptologie.net/article/182/airbus-crypto-challenge-write-up/","timestamp":"2024-11-06T12:04:09Z","content_type":"text/html","content_length":"36090","record_id":"<urn:uuid:f5df4328-04dd-4060-8cd3-7bbdb9e59b2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00229.warc.gz"}
Hide & Seek - Page: 1.3 » Linux Magazine Indoor navigation with machine learning Evaluating the Properties There is a reason why many ensemble learning methods rely on decision trees. Decision trees are robust against outliers, and they process categorical data that does not need to be metrically related. You do not need to scale the data. And, incidentally, they prioritize attributes. Listing 6 finds a relative importance of 9.0 percent for the second attribute, WLAN 1, and 7.7 percent for WLAN 6. The query feature_importances_<0.1 assigns these two indices to the variable csel, which then reduces the output data by just these two columns. Repeating the calculations above with the adjusted data yields a similar result: Tom is in the living room with a probability of 89 percent. Prioritizing the Properties fi = classifier.feature_importances_ print('Feature importance: ', fi) csel = np.where(fi<0.09) df.drop(df.columns[csel], axis=1, inplace=True ) # output: # Feature importance: # array([0.25384511, 0.00911624, 0.09055321, # 0.21282642, 0.24906961, 0.1073945, 0.07719491]) Converting the Euclidean Distance def dbm2DistanceConverter(rssi, db0 = -20, N = 4): RSSI to distance converter Input: mesured power RSSI in dBm; db0 power in 1m distance; N attenuation exponent Output: distance in meters formula: Distance = 10 ((db0 - RSSI)/(10 * N)) # free space path loss: N=2 # reduced path loss: N>2 return 10 ((db0 - rssi )/(10 * N)) def eucV(p,b): """Euclidean distance between two points squared""" return (p[0]-b[0]) 2 + (p[1]-b[1]) 2 Redundant data does not affect the accuracy of machine learning training because detecting redundancy is part of the training. This is different if redundant data slows down the learning process or feeds in attributes with contradictory data. Later, I will cover other methods that do not simply delete attributes but try to combine them. Unsupervised Learning Until now, I have assumed that I know the location for each measurement. But what if I was careless when noting down the rooms? Unsupervised learning looks for statements that can be derived from the data without contradiction. In this case, unsupervised learning would group similar measurements together and assume a common origin. However, whether Tom's location is the kitchen or the living room remains undetermined. Like in Listings 1 and 3 using supervised learning, the data ends up in an array (Listing 8). To distinguish the data, I use Xu instead of X. The square brackets [:,:-1] delete the target size in the last column. To compare the data, I later resort to the pandas DataFrame df from supervised learning. Output Data Without a Location import numpy as np import pandas as pd import matplotlib.pyplot as plt fn = "images/wifi_localization.txt" #fn = "https://archive.ics.uci.edu/ml/machine-learning-databases/00422/wifi_localization.txt" Xu = np.loadtxt(fn)[:,:-1] In my experiments here, I am restricting myself to the K-Means classifier [4]. The letter K expresses the similarity to the k-nearest neighbor algorithm, which searches for the k nearest neighbors, where k stands for the number. K-Means divides the data into k classes and optimizes the number of k centroids such that the sum of the squared distances of the points to their respective centroids remains minimal. Although it sounds a bit abstract, this can be programmed with just a few lines of code thanks to scikit-learn [5] (Listing 9). 01 from sklearn.cluster import KMeans 02 clusters = 4 03 kmeans = KMeans(n_clusters=clusters, init='k-means++', max_iter=300, n_init=10, random_state=0) 04 kmeans.fit(Xu) 05 y_pred = kmeans.predict(Xu) 06 clusterCenters = kmeans.cluster_centers_ Line 1 in Listing 9 imports the classifier and line 3 sets up the hyperparameters. The classifier needs to know the number of clusters; I will choose 4 for now. The other parameters are default values. k-means++ helps the software find good initial values, which it optimizes in max_iter steps. It makes n_init attempts and selects the best solution. random_state starts the pseudorandom generator at a defined point, which means that each iteration of the computations returns an identical result. Line 5 shows the fruits of my labor: The trained method uses kmeans.predict(Xu) to assign the measurements to the clusters (i.e., in the case of four clusters, one of the numbers 0, 1, 2, or 3). The seven coordinates of the four clusters' focal points are stored in the method variable cluster_centers_. Listing 10 visualizes the result (Figure 10). Strictly speaking, Figure 10 is just a projection of the seven-dimensional property space onto a two-dimensional drawing plane. The choice of the and 4 columns in line 1 is not entirely accidental: They contain the high-priority features found during supervised learning. When I look at principal component analysis (PCA) [6] later, I will discover another – also unsupervised – selection method. Visualizing the K-Means Result 01 x1, x2 = 4, 0 02 colormap = np.array(['purple', 'green', 'blue', 'orange']) 03 plt.figure(figsize=(6,4), dpi=120) 04 plt.scatter(Xu[:,x1], Xu[:,x2], s= 10, c=colormap[y_pred]) 05 plt.scatter(clusterCenters[:, x1], clusterCenters[:, x2], s=180, c='red', marker = 'X') 06 for i, p in enumerate(clusterCenters): 07 plt.annotate(f'$\\bf{i}$', (p[x1]+1, p[x2]+3)) 08 plt.show() The plt.scatter instruction prints all the measured values, selecting the colors from the colormap (line 4). The index for the color is the y_pred set in Listing 9. The focal points are marked as red crosses by the second scatter command in line 5. Unsupervised learning divides the data into groups and chooses the assignments randomly. In Listing 9, if the initial value of the random random_state were not fixed, the groups would get different numbers each time they ran – I'll come back to that later. Hidden Spaces Listing 11 provides statistical information about the assignment's quality; Listing 12 shows a typical output. The output's inertia says something about a cluster's compactness. The points should be grouped as tightly as possible around the cluster's focal point: The smaller the value for the same number of points, the better. The output's silhouette takes into account the distance to the neighboring clusters. The farther away the neighboring clusters are, the clearer the delineation of a cluster. The silhouette value lies between 1 (optimal) and -1 (possibly wrongly set cluster focal points). Both values describe the tendency in comparison with different cluster sizes with identical initial data. from sklearn.metrics import silhouette_score print(f'Input data shape: {Xu.shape}') print(f'Inertia: {kmeans.inertia_:3.1f}') print(f'Silhouette: {silhouette_score(X, kmeans.labels_)}') print(f'New labels: {np.unique(kmeans.labels_)}') clusterCenters = kmeans.cluster_centers_ print(f'Center of gravity: {clusterCenters}' ) Input data shape: (2000, 7) Inertia: 246771.6 Silhouette: 0.41023382751730914 New labels: [0 1 2 3] Center of gravity: [[-35.43058824 ...] Listing 13 calculates inertias for different clusters and Figure 11 plots the values. The optimum result is a small inertia for the smallest possible cluster size. The elbow method looks for the point at which the inertia's steep slope changes to a shallow slope. With a little good luck, this point will be at a cluster number of 4 or 5. Listing 14 does a similar job for calculating the silhouette; Figure 12 shows the results. Again, the best values are at 4 and 5. Finding the Optimum Cluster Size wcss = [] for i in range(2, 11): kmeans = KMeans(n_clusters=i, init='k-means++', max_iter=300, n_init=10, random_state=0) plt.plot(range(2, 11), wcss) plt.title('Elbow Method') plt.xlabel('Number of clusters') Plotting the Mean Silhouette Value from sklearn.metrics import silhouette_score kmeansk = [KMeans(n_clusters=k, random_state=2).fit(Xu) for k in range(1, 10)] inertias = [model.inertia_ for model in kmeansk] silhouette_scores = [silhouette_score(Xu, model.labels_) for model in kmeansk[1:]] plt.figure(figsize=(4,3), dpi=120) plt.plot(range(2, 10), silhouette_scores, "bo-") plt.xlabel("Number of clusters") plt.ylabel("Silhouette score", fontsize=12) In Figure 13, each cluster group from k=3 to k=8 is given its own subplot. The mean values are indicated by a vertical red line. In addition, the silhouette value of each dataset appears as a horizontal bar, sorted by size. The more pointed the right end of the bar looks, the greater the variation of the values and the more nonuniform the cluster. With five clusters, the maximum values are at a uniform level of almost 0.6. The second narrow bar suggests that the data contains one small cluster in addition to the four large ones. After expanding the number of clusters to five in Listing 9 (i.e., from clusters = 4), K-Means identifies the set of points at the top of Figure 10 as a separate group (i.e., a fifth room). Unsupervised learning finds connections that would have been hidden in supervised learning. Using this method puts forth the suggestion that the data was recorded in five different rooms, not four. Buy this article as PDF Express-Checkout as PDF Price $2.95 (incl. VAT) comments powered by Disqus Support Our Work Linux Magazine content is made possible with support from readers like you. Please consider contributing when you’ve found an article to be beneficial.
{"url":"https://www.linux-magazine.com/Issues/2022/255/Machine-Learning/(offset)/3/(tagID)/405","timestamp":"2024-11-10T21:27:02Z","content_type":"application/xhtml+xml","content_length":"62530","record_id":"<urn:uuid:69316f46-5d40-4aea-a5ca-1df0cfb20eda>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00151.warc.gz"}
Search result: Catalogue data in Spring Semester 2017 Physics Bachelor Bachelor Studies (Programme Regulations 2010) Compulsory Courses Second Year Compulsory Courses Examination Block II Number Title Type ECTS Hours Lecturers 402-0204-00L Electrodynamics O 7 credits 4V + M. Gaberdiel Derivation and discussion of Maxwell's equations, from the static limit to the full dynamical case. Wave equation, waveguides, cavities. Generation of electromagnetic radiation, Abstract scattering and diffraction of light. Structure of Maxwell's equations, relativity theory and covariance, Lagrangian formulation. Dynamics of relativistic particles in the presence of fields and radiation properties. Develop a physical understanding for static and dynamic phenomena related to (moving) charged objects and understand the structure of the classical field theory of electrodynamics Learning (transverse versus longitudinal physics, invariances (Lorentz-, gauge-)). Appreciate the interrelation between electric, magnetic, and optical phenomena and the influence of media. objective Understand a set of classic electrodynamical phenomena and develop the ability to solve simple problems independently. Apply previously learned mathematical concepts (vector analysis, complete systems of functions, Green's functions, co- and contravariant coordinates, etc.). Prepare for quantum mechanics (eigenvalue problems, wave guides and cavities). Classical field theory of electrodynamics: Derivation and discussion of Maxwell equations, starting from the static limit (electrostatics, magnetostatics, boundary value problems) in the vacuum and in media and subsequent generalization to the full dynamical case (Faraday's law, Ampere/Maxwell law; potentials and gauge invariance). Wave equation and solutions in Content full space, half-space (Snell's law), waveguides, cavities, generation of electromagnetic radiation, scattering and diffraction of light (optics). Application to various specific examples. Discussion of the structure of Maxwell's equations, Lorentz invariance, relativity theory and covariance, Lagrangian formulation. Dynamics of relativistic particles in the presence of fields and their radiation properties (synchrotron). J.D. Jackson, Classical Electrodynamics W.K.H Panovsky and M. Phillis, Classical electricity and magnetism L.D. Landau, E.M. Lifshitz, and L.P. Pitaevskii, Electrodynamics of continuus media Literature A. Sommerfeld, Electrodynamics / Optics (Lectures on Theoretical Physics) M. Born and E. Wolf, Principles of optics R. Feynman, R. Leighton, and M. Sands, The Feynman Lectures of Physics, Vol II W. Nolting, Elektrodynamik (Grundkurs Theoretische Physik 3) 401-2334-00L Methods of Mathematical Physics II O 6 credits 3V + H. Knörrer Abstract Group theory: groups, representation of groups, unitary and orthogonal groups, Lorentz group. Lie theory: Lie algebras and Lie groups. Representation theory: representation theory of finite groups, representations of Lie algebras and Lie groups, physical applications (eigenvalue problems with symmetry). Core Courses Core Courses in Experimental Physics Number Title Type ECTS Hours Lecturers 402-0266-00L Introduction to Nuclear and Particle Physics W 10 credits 3V + K. S. Kirch Abstract Introduction to the concepts of nuclear and particle physics. Introduction to the concepts of nuclear and particle physics. Learning Discussion of new theoretical concepts and important experiments, which objective brought about major breakthroughs in our understanding of the underlying physics. Applications of nuclear and particle physics. Links between particle physics and cosmology. - Building blocks of matter (quarks and leptons) and their interactions (QED, QCD, weak interaction) - The Standard Model of particle physics und open fundamental questions Content - Bound systems (nuclear forces, structure of nuclei, stability) - Applications of nuclear and particle physics (nuclear fusion and fission) - Nuclear physics, particle physics and cosmology Lecture notes More information and additional material concerning lecture and excersises are collected at Moodle, link to be published. - Povh et al.: Teilchen und Kerne, Springer Verlag 2009 - Henley, Garcia: Subatomic Physics, World Scientific 2007 Literature - Griffith: Introduction to Elementary Particles, Wiley VCH 2008 - Demtroeder: Experimentalphysik IV: Kern- Teilchen- und Astrophysik, Springer Verlag, 2009 See the web site for more suggestions 402-0275-00L Quantum Electronics W 10 credits 3V + S. Johnson Abstract Classical and semi-classical introduction to Quantum Electronics. Mandatory for further elective courses in Quantum Electronics. The field of Quantum Electronics describes propagation of light and its interaction with matter. The emphasis is set on linear pulse and beam propagation in dispersive media, optical anisotropic materials, and waveguides and lasers. Learning Teach the fundamental building blocks of Quantum Electronics. After taking this course students will be able to describe light propagation in dispersive and nonlinear media, as well as objective the operation of polarization optics and lasers. Propagation of light in dispersive media Light propagation through interfaces Interference and coherence Fourier Optics Content Beam propagation Optical resonators Laser fundamentals Polarization optics Nonlinear optics Lecture notes Scripts will be distributed in class (online) via moodle Literature Reference: Saleh, B.E.A., Teich, M.C.; Fundamentals of Photonics, John Wiley & Sons, Inc., newest edition Prerequisites Mandatory lecture for physics students / Notice Prerequisites (minimal): vector analysis, differential equations, Fourier transformation Core Courses: Theoretical Physics Recommended for the second year (4th semester): Theory of Heat Number Title Type ECTS Hours Lecturers 402-2214-00L Theory of Heat W 10 credits 3V + R. Renner Thermodynamics and its applications, and basics of the kinetic theory of gases and of statistical mechanics: equilibrium, work and heat, laws of thermodynamics, Carnot process, absolute Abstract temperature, entropy, ideal gas, thermodynamic potentials, phase transitions, multicomponent systems; Boltzmann equation, H-Theorem, Maxwell-Boltzmann distribution; statistical Develop a physical understanding for thermodynamic phenomena and first contact with statistical descriptions, e.g., transport described through Boltzmann equation or classical Learning statistical physics. Equilibrium thermodynamics as described via state variables as opposed to non-equilibrium transport phenomena. Phase transformations, such as liquid-gas or objective ferromagnetic-paramagnetic transition. Application of mathematical concepts such as theory of functions of many variables, Legendre transformation, statistical sums. Preparation for (quantum-)statistical mechanics. Thermodynamics and its applications, and basics of the kinetic theory of gases and of statistical mechanics: equilibrium, work and heat, laws of thermodynamics, Carnot process, absolute Content temperature, entropy, ideal gas, thermodynamic potentials, phase transitions, multicomponent systems; Boltzmann equation, H-Theorem, Maxwell-Boltzmann distribution; statistical 402-0234-00L Mechanics of Continua W 10 credits 3V + G. M. Graf Abstract Mechanics of Elastic Media and Hydrodynamics: Strain and stress tensor, field equations, equilibrium, waves and oscillations. Dynamics of fluids, Euler and Navier-Stokes equations, Bernoulli equation, vortices, waves, potential flows; viscous fluids, Reynolds number, Stokes drag, boundary layers, instabilities, turbulence. Learning Knowledge of the essential concepts and methods of theoretical mechanics of elastic media and hydrodynamics. Consolidation through examples and solution of exercise problems. Introduction to the concepts and methods of theoretical mechanics of elastic media and hyrdodynamics: relation between strain and stress tensor, balance equations, field equations of elastic media, elastostatics, waves and oscillations, lattice dislocations and plastic deformation. Dynamics of fluida, Euler equations of ideal fluida, Navier-Stokes equations of real Content fluids, Bernoulli equations, vortex theorems of Thomson and Helmholtz, dynamics of vortices, oscillation and waves in fluida, surface waves, two-dimensional potential flow, circulation, Magnus force, theorems of Kutta and Zhukovski, flow around profiles (cylinder, platte, aerofoil), Kutta condition. Incompressible viscos fluida, Reynolds number, Hagen-Poisseuille flow, Stokes law, Prandtl's boundary layer, Couette flow and Taylor instability. Turbulence, instability of laminary flows, Reynolds equations, development of turbulence, Kolmogorov scaling. Lecture notes Lecture notes (German) will be distributed. Prerequisites general / classical mechanics / Notice 402-0206-00L Quantum Mechanics II W 10 credits 3V + T. K. Gehrmann Abstract Introduction to many-particle quantum mechanics and quantum statistics. Basic concepts: symmetrised many-body wave functions for fermions and bosons, the Pauli principle, Bose- and Fermi-statistic and second quantisation. Applications include the description of atoms, and the interaction between radiation and matter. Learning Introduction to many-particle quantum mechanics and quantum statistics. In particular basic concepts such as symmetrised many-body wave functions for fermions and bosons, the Pauli objective principle, Bose- and Fermi-statistics and second quantisation will be discussed. Applications include the description of atoms, and the interaction between radiation and matter. The description of identical particles leads us to the introduction of symmetrised wave functions for fermions and bosons. We discuss simple few-body problems and Content proceed with a systematic description of fermionic many body problems in terms of second quantisation. We also discuss basic concepts of quantum statistics. Applications include the description of atoms, and the interaction between radiation and matter. F. Schwabl, Quantenmechanik (Springer) Literature F. Schwabl, Quantenmechanik fuer Fortgeschrittene (Springer) J.J. Sakurai, Advanced Quantum mechanics (Addison Wesley) Practical Courses Number Title Type ECTS Hours Lecturers 402-0000-04L Physics Lab II O 4 credits 1V + A. Biland, M. Doebeli, M. Kroner, S. P. Quanz Abstract Introductory lab course in experimental physics with accompanying lecture Übergeordnetes Thema des Praktikums und der Vorlesung ist die Auseinandersetzung mit den grundlegenden Herausforderungen eines physikalischen Experimentes. Am Beispiel einfacher experimenteller Aufbauten und Aufgaben stehen vor allem folgende Gesichtspunkte im Vordergrund: Learning - Motivation und Herangehensweise in der Experimentalphysik objective - Praktischer Aufbau von Experimenten und grundlegende Kenntnisse von Messmethoden und Instrumenten - Einführung in relevante statistische Methoden der Datenauswertung und Fehleranalyse - Kritische Beurteilung und Interpretation der Beobachtungen und Ergebnisse - Darstellen und Kommunizieren der Ergebnisse mit Graphiken und Text - Ethische Aspekte der experimentellen Forschung und wissenschaftlicher Kommunikation Content Versuche zu Themen aus den Bereichen der Mechanik, Optik, Wärme, Elektrizität und Kernphysik mit begleitender Vorlesung zur Vertiefung des Verständnisses der Datenanalyse und Lecture notes Anleitung zum Physikalischen Praktikum (siehe https://ap.phys.ethz.ch); Vorlesungsskript Aus einer Liste von 33 Experimenten müssen 8 Experiment ausgewählt und in Zweiergruppen durchgeführt werden. / Notice Voraussetzungen: - Physik I Advanced Physics Laboratory II Prerequiste: "Advanced Physics Laboratory I" completed. Before enroling in "Advanced Physics Laboratory II", please 402-0240-00L enrol in "Advanced Physics Laboratory I". W 9 credits 18P C. Grab, T. M. Ihn Enrol at most once in the course of the Bachelor programme! This laboratory course provides basic training of experimental skills. These are experimental design, implementation, measurement, data analysis and interpretation, as well as error Abstract analysis. The experimental work has to be complemented by a concise written report, which trains the scientific writing skills. Manuals for the individual experiments are available in English. Students learn to independently perform advanced experiments and document them scientifically correct. The following aspects are emphasized: - understanding complicated physical phenomena Learning - structured approach to experiments with complex instruments objective - various practical aspects of experimenting and determining uncertainties - learning the relevant statistical methods for data analysis - interpretation of measurements and uncertainties - describing the experiments and the results in a scientifically proper manner, in direct analogy to publishing - ethical aspects of experimental research and scientific communication We offer experiments covering the following topics: Content Basic topics from mechanics, optics, thermodynamics, electromagnetism and electronics; as well as central topics from nuclear and particle physics, quantum electronics, quantum mechanics, solid state physics and astrophysics. Lecture notes Instructions for experiments are available in English. Prerequisites From a variety of over 50 experiments, students have to perform 4 experiments covering different topics. The experimental work is complemented by writing a scientific report. / Notice Advanced Physics Laboratory I 402-0241-00L O 9 credits 18P C. Grab, T. M. Ihn IMPORTANT: You may not enrol repeatedly in the course of the Bachelor programme. This laboratory course provides basic training of experimental skills. These are experimental design, implementation, measurement, data analysis and interpretation, as well as error Abstract analysis. The experimental work has to be complemented by a concise written report, which trains the scientific writing skills. Manuals for the individual experiments are available in English. Students learn to independently perform advanced experiments and document them scientifically correct. The following aspects are emphasized: - understanding complicated physical phenomena Learning - structured approach to experiments with complex instruments objective - various practical aspects of experimenting and determining uncertainties - learning the relevant statistical methods for data analysis - interpretation of measurements and uncertainties - describing the experiments and the results in a scientifically proper manner, in direct analogy to publishing - ethical aspects of experimental research and scientific communication We offer experiments covering the following topics: Content Basic topics from mechanics, optics, thermodynamics, electromagnetism and electronics; as well as central topics from nuclear and particle physics, quantum electronics, quantum mechanics, solid state physics and astrophysics. Lecture notes Instructions for experiments are available in English. Prerequisites From a variety of over 50 experiments, students have to perform 4 experiments covering different topics. The experimental work is complemented by writing a scientific report. / Notice Proseminars, Experimental and Theoretical Semester Papers To organise a semester project take contact with one of the instructors. Not all lecturers are directly eligible in myStudies if "Professors" is the required type of lecturers. In such cases please take contact with the Study Administration ( Number Title Type ECTS Hours Lecturers Proseminar Theoretical Physics for Bachelor Students: Advanced Topics in Quantum Mechanics 402-0210-97L W 9 credits 4S G. Blatter Number of participants limited to 16. Abstract A guided self-study of original papers and of advanced textbooks in theoretical physics. Within the general topic, determined each semester, participants give a presentation on a particular subject and deliver a written report. Proseminar Theoretical Physics: The Theory of the Large Hadron Collider 402-0210-17L W 9 credits 4S C. Anastasiou Number of participants limited to 24. Abstract A guided self-study of original papers and of advanced textbooks in theoretical physics. Within the general topic, determined each semester, participants give a presentation on a particular subject and deliver a written report. Proseminar Theoretical Physics: Strong Correlations in One Dimension 402-0210-47L W 9 credits 4S O. Zilberberg Number of participants limited to 24. Abstract A guided self-study of original papers and of advanced textbooks in theoretical physics. Within the general topic, determined each semester, participants give a presentation on a particular theme. Proseminar Theoretical Physics: An Introduction to String Theory 402-0210-77L W 9 credits 4S C. A. Keller Number of participants limited to 24. Abstract A guided self-study of original papers and of advanced textbooks in theoretical physics. Within the general topic, determined each semester, participants give a presentation on a particular theme. 402-0217-BSL Semester Project in Theoretical Physics W 9 credits 18A Supervisors Abstract This course unit is an alternative if no suitable "Proseminar Theoretical Physics" is available of if the proseminar is already overbooked. 402-0215-BSL Experimental Semester Project in a Group of the Physics Department W 9 credits 18A Professors Abstract The aim of the project is to give the student experience in working in a research environment, carrying out physics experiments, analysing and interpreting the resulting data. Advanced Solid State Physics Experiments Supervisors for this experimental semester paper: Prof. Christian Degen Prof. Leonardo Degiorgi Prof. Klaus Ensslin 402-0510-BSL Prof. Thomas Ihn W 9 credits 18P Supervisors Prof. Joël Mesot Prof. Danilo Pescia Prof. Andreas Vaterlaus Prof. Andreas Wallraff Prof. Werner Wegscheider Prof. Andrey Zheludev Abstract Experiments in condensed matter physics. The work includes the planning, build-up, data taking and analysis, and interpretation of the experimental results. Learning Ziel ist das Entwickeln von Fähigkeiten, moderne Experimente in der Festkörperphysik durchzuführen. Dazu dienen experimentelle Arbeiten auf dem Gebiet der Festkörperphysik, meist in objective enger Zusammenarbeit mit laufenden Forschungsaktivitäten in den Forschungsgruppen. Content Durchführung von Experimenten aus dem Gebiet der Festkörperphysik. Planung, Aufbau, Durchführung, Auswertung und Interpretation der Experimente. Lecture notes n/a Prerequisites Arbeiten in einer Forschungsgruppe sind besonders gut geeignet, die Studierenden mit aktuellen Forschungsthemen und mit moderner Instrumentierung bekannt zu machen. / Notice Advanced Quantum Electronics Experiments Advisors for this experimental semester paper: Prof. Tilman Esslinger Prof. Jérôme Faist 402-0400-BSL Prof. Rachel Grange W 9 credits 18P Supervisors Prof. Jonathan Home Prof. Atac Imamoglu Prof. Steven Johnson Prof. Ursula Keller Abstract Implementation of experiments in quantum electronics. Planning, design, realisation, evaluation, and interpretation of the experiments. Content Durchführung von Versuchen im Gebiet der Optik, z.B. Holographie und Laserphysik. Planung, Aufbau, Durchführung, Auswertung und Interpretation der Experimente. 402-0719-BSL Particle Physics at PSI (Paul Scherrer Institute) W 9 credits 18P C. Grab Abstract During semester break in Summer 6-12 students stay for 3 weeks at PSI and participate in a hands-on course on experimental particle physics. A small real experiment is performed in common, including apparatus design, construction, running and data analysis. The course includes some lectures, but the focus lies on the practical aspects of experimenting. Learning Students learn all the different steps it takes to perform a complete particle physics experiment in a small team. They acquire skills to do this themselves in the team, including objective design, construction, data taking and data analysis. 402-0717-BSL Particle Physics at CERN W 9 credits 18P F. Nessi-Tedaldi, W. Lustermann Abstract During the semester break participating students stay for 4 weeks at CERN and perform experimental work relevant to our particle physics projects. Dates to be agreed upon. Learning Students learn the needed skills to, and perform a small particle physics experiment: setup, problem solving, data taking, analysis, interpretation and presentation in a written report objective of publication quality. Content Detailed information in: http://www@cmsdoc.cern.ch/~nessif/ETHTeilchenpraktikumCERN.html Prerequisites Language of instruction: English or German / Notice
{"url":"https://www.vorlesungen.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?abschnittId=69871&semkez=2017S&ansicht=2&lang=en&seite=1","timestamp":"2024-11-08T03:22:48Z","content_type":"text/html","content_length":"56809","record_id":"<urn:uuid:2e730f1d-6b3e-4e2e-8cc0-dd873042907b>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00757.warc.gz"}
This module evaluates a pre-defined function for f0. This is an alternative to the use of an f0 table or a bi-Maxwellian approximation. This function can be used to define f0 for the integration or for the analytic continuation (or both). This function returns the pre-defined function as f0. Type Intent Optional Attributes Name integer, intent(in) :: is Index of species. double precision :: pperp Perpendicular momentum. double complex :: ppar Parallel momentum. Return Value doublecomplex
{"url":"https://danielver02.github.io/ALPS/module/alps_distribution_analyt.html","timestamp":"2024-11-11T19:16:44Z","content_type":"text/html","content_length":"16427","record_id":"<urn:uuid:02c32400-400c-41c9-b690-5b34881c371e>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00776.warc.gz"}
Math pizzazz worksheets Author Message vulgi82 Posted: Saturday 19th of Sep 11:16 Hi , Yesterday I began working on my mathematics assignment on the topic Algebra 1. I am currently not able to finish the same because I am not familiar with the fundamentals of dividing fractions, function composition and rational equations. Would it be possible for anyone to aid me with this? From: Germany kfir Posted: Monday 21st of Sep 10:30 I think I know what you are looking for. Check out Algebra Master. This is an awsome tool that helps you get your homework done faster and right. It can assist you with assignments in math pizzazz worksheets, algebraic signs and more. From: egypt cmithy_dnl Posted: Tuesday 22nd of Sep 10:32 Hello Dude, Algebra Master assisted me with my homework last week. I got the Algebra Master from https://algebra-test.com/company.html. Go ahead, try that and keep us updated about your opinion. I have even suggested Algebra Master to a couple of my friends at school . From: Australia Mr. Shalkej Posted: Wednesday 23rd of Sep 10:27 You mean it’s that easy ? Fantastic . Looks like just the one to end my search for a solution to my troubles. Where can I get this program? Please do let me know. Matdhejs Posted: Friday 25th of Sep 07:22 Sure, here it is: https://algebra-test.com/privacy.html. Good Luck with your exams. Oh, and before I forget , this company is also offering an unconditional money back guarantee, that just goes to show how confident they are about their product. I’m sure that you’ll love it . Cheers. From: The
{"url":"http://algebra-test.com/algebra-help/relations/math-pizzazz-worksheets.html","timestamp":"2024-11-08T02:24:58Z","content_type":"application/xhtml+xml","content_length":"19002","record_id":"<urn:uuid:bb180f1a-3e07-4d16-8bd0-e4aafecf0b5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00611.warc.gz"}
Program for lcm in C++ | Program for hcf in C++ | StudyMite Program for lcm in C++ | Program for hcf in C++ LCM Program in C++ | GCD Program in C++ The highest common factor is also known as GCD (Greatest common divisor). GCD is the largest possible integer which can be divided by the given numbers without a remainder. Note: GCD is also known as HCF(Highest Common Factor). LCM, lowest common multiple is the least possible integer which can be divided by the given numbers without a remainder. In the example given below, we will take two numbers and find their GCD and LCM. For GCD/HCF: We will take a number, check if it is perfectly divisible by both numbers. We store the value in a variable, and then, print the variable. For LCM: We use a formula here, LCM = Num1*Num2/GCD 1. Take two number’s as input. 2. Check if the given numbers are divisible by any number less than the number itself using for loop. 3. If yes, then store it (in gcd) and continue ahead. 4. After termination of the loop, the last updated value in gcd will be GCD. 5. To find LCM of the numbers apply the formula for lcm. 6. Now, Print the GCD and LCM using namespace std; int main() int fnum,snum,gcd,lcm; cout<<"Enter first number"; cout<<"\nEnter second number"; //find factors of both numbers for(int i=1;i<=fnum && i<=snum;i++) if(fnum%i==0 && snum%i==0) //find lcm of both numbers lcm = fnum*snum/gcd; cout<<"\n GCD of given numbers is:"<<gcd; cout<<"\n LCM of given numbers is:"<<lcm; return 0; Enter first number 10 Enter second number 5 GCD of given numbers is:5 LCM of given numbers is:10
{"url":"https://www.studymite.com/cpp/examples/gcd-hcf-lcm-program-in-cpp","timestamp":"2024-11-04T08:17:12Z","content_type":"text/html","content_length":"45639","record_id":"<urn:uuid:a7603a3c-9943-4054-b1e6-5988055075a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00471.warc.gz"}
Weekend Update 10/27/24 I’ve been running ops sessions on White Pine, making sure the JMRI switchlists are functioning. Three sessions in two weeks and everything is looking good. The L044 (EB White Pine Turn) was running a single GP35 this week, and the caboose isn’t a shove platform, it held the radio gear for the remote control. The White Pine Sub was used for testing the early equipment. Switching out Global Wood Sticks (yes it was a real business, that made popsicle sticks) [ Guests cannot view attachments ] Switching out the White Pine copper refinery [ Guests cannot view attachments ] Pulling the outbound loads back into WC’s White Pine Yard after weighing all the cars. [ Guests cannot view attachments ]
{"url":"https://www.therailwire.net/forum/index.php?topic=58679.45","timestamp":"2024-11-12T12:11:29Z","content_type":"application/xhtml+xml","content_length":"45892","record_id":"<urn:uuid:b7c64ea9-acba-41ef-a4d9-98a03f66b73c>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00436.warc.gz"}
How Long to Charge 1S-4S(3.7V-14.8V) RC LiPo Batteries Your RC is dying down quicker than you’d like. Well, you might not be charging your batteries properly which might be causing this. Hence, it’s crucial to charge it to the optimum level. So, how long to charge 1S-2S (3.7V-14.8V) LiPo battery? Normally, it takes 30-60 minutes to charge a LiPo battery. However, the charging time is reliant on the size of the battery or cell count. Your charger’s power output and capacity is also a determinant in this case. The battery type and the voltage play an important role as well. This is merely the essence of the whole topic. To learn more about the factors that determine the charging time follow along! Does the Charging Time of LiPo Depend on Voltage? Yes, the charging time of LiPo depends on voltage. Batteries charge quickly when there’s an adequate amount of current and voltages. But an unnecessary amount can cause harm to the batteries. Specifically, the current decides the charging period of a battery. You can charge an 8 Amp battery in 2 hours with a 4 Amp charger. It will only take an hour if you charge the same battery with an 8 Amp charger. It’s basically twice the previous charger Amp. To alter this input voltage, a DC-to-DC converter is often used. The in-built circuits basically regulate the pressure and oscillate the current flow. The device’s programming controls the amount of current that flows through. Moreover, you should take a glance at VBat to understand this more. What are Cell Count and Volt in LiPo batteries? The nominal voltage in a LiPo cell is 3.7V. The battery can be damaged if the voltage is below 3.7V. Or if it’s greater than 4.2V. You see, the quantity of voltage is determined by the number of cells. Hence, 2 cell LiPo batteries voltage is 7.4V. It’ll basically increase by 3.7V with an addition of each cell. 2 Kinds of Charging Rates for LiPo: Explained The charging rate and time vary with different types of chargers. LiPo batteries are complex and it’s important to know how to charge a LiPo battery. Here we’ve explained 2 kinds of charging methods for your LiPo batteries. Using other methods for recharging LiPo batteries might impact the rc battery health. So, this is a real concern for the rc users. Balanced Charging A balanced charger controls the voltages of particular cells in the series at a consequent rate. Thus your LiPo batteries in the series charge equally. So cells that require nominal charge need to be overcharged. This will also ensure the other cells’ are fully charged. This method is basically referred to as balanced charging. Your balanced charger can be set from 0.1-5 amps. Using the required amp you can balance charge a 2S 3000mAh battery in 2.5 hours. Note: No matter how much charge is left, charge it to its estimated capacity. This will ensure all the cells are charged. Fast Charging You can charge a rechargeable LiPo battery in 15-30 minutes using a superfast charger. This can deliver up to 3 amps and will charge your batteries quicker compared to 1A/700mA chargers. Moreover, we’ve seen this method reducing charging time for different capacity NiMH batteries. Nonetheless, this rechargeable battery might disintegrate quickly if charged with over 4.2V. In this case, your LiPo battery won’t last more than 50-75 recharges. This is compared to its average 500 recharge cycles. Charging Time for 1S/3.7V RC LiPo Batteries The pace of charging a 1S LiPo battery relies on the current delivered. So, let’s learn about the nominal and fully charged voltage of a 1S LiPo battery. Nominal and Fully Charged Voltage of 1S/3.7V LiPo Nominal voltage basically defines the particular voltage in a battery series. The nominal voltage of a 1s LiPo cell is 3.7V. When each LiPo cell attains 4.2V, it gets fully charged. How Long to Charge 1S/3.7V RC LiPo Batteries? Here we have a LiPo charge rate chart for a 1S/3.7V LiPo battery. Let’s have a look at it: Capacity Fast Charging Time Balanced Charging Time 800mAh 16 minutes 19.2 minutes 1200mAh 24 minutes 28.8 minutes 1500mAh 30 minutes 36 minutes 1800mAh 36 minutes 43.2 minutes 2100mAh 42 minutes 50.4 minutes 2500mAh 49 minutes 1 hour 3000mAh 1 hour 1 hour 12 minutes 3500mAh 1 hour 6 minutes 1 hour 24 minutes There you go! You now know how long does it take to charge a 7 4V LiPo battery. What Charger to Charge 1S/3.7V LiPo? We’ve got some charger recommendations for your 1S/3.7V LiPo. Browse the recommended ones down below: Image Title Price UP-S6AC 6-CH Intelligent Fast Balance Charger Check Price SUPUlSE Lipo Battery Charger 6-in-1 DC 3.7V 1S 1 Cell Micro 6 Ports Compact Charger Check Price These are some of the best 1S/3.7V chargers you can get! Charging Time for 2S/7.4V RC LiPo Batteries Compared to the 1S/3.7V batteries, the 2S/7.4V LiPo batteries require double the time. Let’s find out the reason below. Nominal and Fully Charged Voltage of 2S/7.4V LiPo A 2S LiPo battery series has 2 cells in the sequence. So the nominal voltage of 2S LiPo is (3.7×2) 7.4V. The fully charged voltage is (4.2×2) 8.4V. How Long to Charge 2S/7.4V RC LiPo Batteries? Here’s the time limit you’ll need to charge your 2S LiPo batteries: Capacity Fast Charging Time Balanced Charging Time 800mAh 32 minutes 38.4 minutes 1200mAh 48 minutes 57.6 minutes 1500mAh 1 hour 1 hour 12 minutes 1800mAh 1 hour 12 minutes 1 hour 26 minutes 2100mAh 1 hour 24 minutes 1 hour 41 minutes 2500mAh 1 hour 32 minutes 2 hours 3000mAh 2 hours 2 hours 24 minutes 3500mAh 2 hours 12 minutes 2 hours 48 minutes As you can see the 2S batteries takes twice as long to charge as 1S batteries. What Charger to Charge 2S/7.4V LiPo? Here are some charger recommendations for your 2S/7.4V LiPo batteries. Check the list below. From this list, you can purchase a decent quality charger for your 2S/7.4V. Image Title Price SUPULSE LiPo Battery Charger 2S-3S RC Balance Charger 7.4-11.1V B3AC Pro Compact Charger Check Price Ultra Power - UP-S4AC 2S LiPo / LiHV Four Channel AC/DC Charger Check Price Charging Time for 3S/11.1V RC LiPo Batteries The 3S RC LiPo batteries will need 3 times more charging time than 1S LiPo batteries. This is due to the 3 cell batteries compared to one cell. We’ve exemplified the matter below. Nominal and Fully Charged Voltage of 3S/11.1V LiPo A 3S LiPo battery bundle indicates there are 3 cells in sequence. So the nominal voltage of 3S LiPo is (3.7×3) 11.1V. 3s LiPo fully charged voltage is (4.2×3) 12.6V. How Long to Charge 3S/11.1V RC LiPo Batteries? Here the following list shows the different charging duration for your 3S LiPo batteries. Capacity Fast Charging Time Balanced Charging Time 800mAH 48 minutes 57.6 minutes 1200mAh 1 hour 12 minutes 1 hour 26.4 minutes 1500mAh 1 hour 30 minutes 1 hour 48 minutes 1800mAh 1 hour 48 minutes 2 hours 9.6 minutes 2100mAh 2 hours 6 minutes 2 hours 31.2 minutes 2500mAh 2 hours 27 minutes 3 hours 3000mAh 3 hours 3 hours 36 minutes 3500mAh 3 hours 18 minutes 4 hours 12 minutes This list will give you an idea about how long to charge 11.1V LiPo batteries. What Charger to Charge 3S/11.1V LiPo? Here’s an amazing list of charger recommendations to charge your 3S/11.1V LiPo batteries: Image Title Price ISDT D2 Mark 2 LiPo Battery Balance Charger Check Price HTRC LiPo Charger 2S-3S Balance Battery Charger Check Price Hope this helps you to find a great charger for your 3S LiPo batteries. Charging Time for 4S/14.8V RC LiPo Batteries Here we’ve demonstrated the nominal and fully charged voltage of 4S/14.8V LiPo batteries. Let’s check all the details of this right here! Nominal and Fully Charged Voltage of 4S/14.8V LiPo A 4S LiPo battery packet demonstrates there are 4 cells in line. Therefore, 4s LiPo minimum voltage is (3.7×4) 14.8V. The fully charged voltage is (4.2×4) 16.8V. How Long to Charge 4S/14.8V RC LiPo Batteries? The following table shows the required charging period for your 4S LiPo batteries. Here check it out: Capacity Fast Charging Time Balanced Charging Time 800mAh 1 hour 4 minutes 1 hour 16.8 minutes 1200mAh 1 hour 36 minutes 1 hour 55.2 minutes 1500mAh 2 hour 2 hour 24 minutes 1800mAh 2 hour 24 minutes 2 hour 52.8 minutes 2100mAh 2 hour 48 minutes 3 hour 21.6 minutes 2500mAh 3 hour 16 minutes 4 hour 3000mAh 4 hour 4 hour 48 minutes 3500mAh 4 hour 24 minutes 5 hour 36 minutes You can use the LiPo charge rate calculator online to calculate the charging time. What Charger to Charge 4s/14.8V LiPo? We’ve prepared a charger recommendation list for your 4S/14.8V LiPo. Check it out: Image Title Price SKYRC B6 AC V2 50W LiPo LiFe LiIon NiMH NiCd Battery Charger Check Price Blomiky 14.8V 4S 4 Cell Lipo Battery Charger and Balance Charger Check Price You can use either of these chargers for your LiPo batteries! However, if you’re charging your drone batteries then you can use a usb cable for this purpose too. Can you Overcharge LiPo Batteries? Yes, you can overcharge your LiPo batteries if you charge it for too long. The NiMh cell will start to deteriorate. When the charge rate goes over C/10 the cell starts to build up hydrogen inside it. This can even cause the battery to burst! That’s all! We’ve attempted to demonstrate everything about the charge time of LiPo batteries here. Can I charge NiMH battery with NiCd charger? No, you can’t charge a NiMh battery with a NiCd charger. These two batteries have unique and varied requirements. Thus, the NiMH batteries might get heated up with a NiCd charger. Can I charge NiMH batteries in series? Yes, you can charge your NiMH batteries in series. It’s relatively okay to charge the batteries for about 20 hours at C/20. Nonetheless, you should sometimes balance the batteries. How fast can I charge NiMH batteries? You can charge your NiMh batteries in an hour by adjusting the charge rate at 1C. You can adjust the rate from 1-10C and the charging time will be reduced accordingly. 10C being the fastest rate. Final Words We hope you got the whole picture of how long to charge 1S-4S(3.7V-14.8V) LiPo battery. One added tip, see if your charger gets hot too quickly. In this case, replace it with a new one. You’ll be able to charge your batteries quickly! Have a good day!
{"url":"https://majesticrc.com/how-long-does-it-take-to-charge-a-different-v-lipo-battery/","timestamp":"2024-11-08T17:15:48Z","content_type":"text/html","content_length":"99870","record_id":"<urn:uuid:f21f288a-6b25-47d3-8727-1efedf387f5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00588.warc.gz"}
What Does It Mean to Be “Good” at Math? by Pam Meader Last week I had to visit a dental lab to match a crown to my other teeth. During this visit, the technician asked what I did for work, and when I told her I was a math consultant she immediately said, “Oh, I am not good at math but I love science.” I wondered to myself how someone who loves science couldn’t be good at math? Science and all its data collection and analysis is clearly related to math. Mark Schwartz, a retired math professor from Southern Maine Community College, wrote an op ed piece in a Portland paper around students’ “can’t do math” attitude. He wrote that before he would start his developmental math class, he would have his students close their eyes and think about when they decided they were not good at math. Stories would pour forth from students of terrible experiences in middle or high school; stories of being confused when no one could explain a process to them or of being pushed through classes regardless of whether or not they gained any knowledge. These students, like the dental lab technician, believed that they weren’t good at math. So what does it mean to be good at math? Am I good at math because I can use the quadratic formula, factor a quadratic, or rationalize a denominator? In looking at rigor (as defined by the Career and College Readiness standards), I am probably good at procedural fluency but there’s more to rigor math and to math overall than that. As I shared in a previous blog, I learned math through memorizing procedures and I became pretty adept at that. But if a math problem was thrown at me where I would have to apply this knowledge, I would freeze. I remember memorizing the solutions to the famous train problems. You know the ones: Train A leaves some place at such-and-such a time, and Train B leaves another place at a different time, etc. and you have to figure out when they will meet. I could solve the problems, but didn’t have the conceptual understanding to back up my correct answers. It wasn’t until I (fortunately) fell into adult education and started taking courses on how to teach conceptually that the second part of rigor, conceptual understanding, became part of my teaching arsenal. I had thought I was doing a good job before then, allowing my students to discover math through manipulatives but soon realized there was more to conceptual understanding than that. Facilitating the Adult Numeracy Instruction (ANI) training and later piloting the SABES-sponsored Building a Solid Foundation course has since deepened my perception of developing conceptual understanding for myself and for my students. During my college years, I worked summers in an office of a local paper mill. My boss, knowing I was a math major and probably wanting to put my knowledge to the test, asked me to devise a scheduling system of workers rotating through 4 days on, 2 days off so there was constant coverage. After much trial and error, I used a tree diagram and proudly presented the schedule to my boss who was pleased but surprised probably that I had passed his “test”. For me it was an empowering experience to see this math that I had studied did have some use in the real world. The schedule problem exemplified the third part of rigor, which is application. When students can apply their math knowledge to real world problems, they enrich and deepen their understanding and feel inspired. Unfortunately, many students don’t feel inspired by math. Just this morning I read a blog by Karim Ani who shared some astounding statistics about middle school students. Forty four percent of the students polled preferred taking out the garbage to doing math homework. Ani went on to explain that math needs to be taught using world wide applications. “A math class without authentic applications is like an astronomy class where students spend the year calibrating a telescope but never actually look at the stars. Math allows us to better understand the world and to live more meaningfully in it”. So when are we “good at math?” When we see the beauty of the subject, how it works and connects, and most importantly its prominence in helping us make sense of the world and all of its complexities. Pam is a Senior Professional Development Specialist for the SABES Center for Mathematics and Adult Numeracy professional development initiative for Massachusetts. Pam is a former high school math teacher and has taught math in adult education for over 25 years. She helped co-develop Adults Reaching Algebra Readiness (AR)^2 with Donna Curry. She is a national trainer for LINCS and ANI (Adult Numeracy Instruction). Pam enjoys sharing techniques for teaching math conceptually from Basic Math through Algebra and has co-authored the Hands On Math series for Walch Publishing in Portland,
{"url":"https://www.terc.edu/adultnumeracycenter/what-does-it-mean-to-be-good-at-math/","timestamp":"2024-11-09T16:48:54Z","content_type":"text/html","content_length":"88701","record_id":"<urn:uuid:8fc097cb-584b-496c-87f1-cc05b48b46b1>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00554.warc.gz"}
[solidMechanics] Support thread for "Solid Mechanics Solvers added to OpenFOAM Extend" Hi Philip, Originally Posted by It makes sense that the lagged scheme is more stable; as regards coupling K changes within the time-step, I suppose you are already using Picard/Fixed-point iterations i.e. successive substitution. Under-relaxation of the K field will probably help, or more under-relaxation of the p and U fields (equations and fields). I found a quite stable Picard approach in suGWFoam, a saturated unsaturated solver by which could serve as template for a future implementation of coupled deformation saturated/unsaturated flow. I tested Liu's solver under quite mean conditions (flow with large gradients into extremely dry soil and it yielded excellent results (even commercial software would not resolve this problem). To be honest I presently do not feel experienced enough to effectively implement the Picard scheme (which is a time adaptive approach) into elastoPlastoBiotFoam. In view of the high non-linearity of the conductivity and storage properties in unsaturated flow considered by the Picard approach a segregated scheme to consider deformation should not represent a big issue. I played a little bit around trying to implement the pressure dependence of the Kfluid-field (stiffness of the water gas mixture within the pores) and I came to stable results with the fixed point approach (relaxing only p and U): fvm::ddt(p) == fvm::laplacian(coeff_Dp, p) - fvc::div(fvc::ddt(coeff_Dp2,U)) solverPerfP = pEqn.solve(); dp = p - p.oldTime(); Kfluid = 1.0/(Sr/K_ + (1.0-Sr)/(p0+psi*gamma+dp)); coeff_Dp = (kSat/gamma*Kfluid/n); coeff_Dp2 = Kfluid/n; This way the pressure changes dp are considered inside the iteration loop. The absolute pressure consists of the atmospheric (p0) and the hydrostatic (psi=pressure head below phreatic surface) Probably relaxing Kfluid (how ?), as you suggest, might enhance convergence. Now I tried to implement heterogeneity of the saturated hydraulic conductivity, which in natural settings is always an issue, by introducing the field kSat. in open water environments it is not uncommon to find conductivity contrasts spanning 4 orders of magnitude: sand 1^-04 m/s, silt 1^-8 m/s. The coeff_Dp field varies correspondingly over orders of magnitude. First I decided to adapt the laplacian scheme to harmonic, which makes more sense than a linear one from an geohydraulic point of view: laplacian(Dp,p) Gauss harmonic corrected; Unfortunately the solver now diverges during the first iterations. From the (very illustrative) convergence analysis in "On finite volume method implementation of poro-elastic-plasticity soil model" by Tang, Heledal & Cardiff it is clear, that a very large Kfluid (due to almost compressed gas in 20 m below phreatic surface) becomes stiffer than the bulk modulus of the soil and therefore mesh and time step sizes are determined by the bulk modulus, the water unit weight and the hydraulic conductivity! It turned out that a mesh size of approx. 0.2 m is necessary. I reduced the model to 2D (app. 120 m x 20 m) but could not reach convergence yet. I could imagine that my problem is not the first one to exhibit strongly varying laplacian terms. So I would appreciate any hints on strategies to overcome these kind of difficulties.
{"url":"https://www.cfd-online.com/Forums/openfoam-cc-toolkits-fluid-structure-interaction/126706-support-thread-solid-mechanics-solvers-added-openfoam-extend-19.html","timestamp":"2024-11-02T06:09:59Z","content_type":"application/xhtml+xml","content_length":"196916","record_id":"<urn:uuid:21493f22-274c-4b30-9803-4cebbd3a75a9>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00370.warc.gz"}
Compounding Interest: How 5% Interest Helps You Build WealthCompounding Interest: How 5% Interest Helps You Build Wealth Compounding interest is earning interest on previously earned interest. It’s also known as earning “interest on interest.” Over time you can build up your wealth by reinvesting the interest you’ve accumulated. You should understand the difference between simple and compounded interest to take full advantage of it. The interest rate earned on compound interest depends on how many times it is compounded. It also takes time to grow your wealth using compounded interest, usually many months or years. Interest is also applied when you’re borrowing money. This is how the bank or creditor will earn their money as you pay back the loan. When applying for the loan, you should avoid getting a loan with compounding interest since it means you’ll have to pay more on your loan with it. Just remember compounding interest can be lucrative for you if you’re investing, but it means you’ll pay more if you borrow with it. Simple Interest Investment To show you the difference between simple and compounded interest, you first need to know what both of them are. When you open an account that accrues simple interest, you can easily determine the interest you’ll earn with this simple interest formula: Interest = P x R x N. The principal amount is the initial amount you deposit into the account, and the letter P represents it. The letter R signifies the interest rate percentage in a decimal. Finally, the letter N is the time period. The time periods are usually expressed in months or years. Consider this scenario, simple interest earned on a principal amount of $10,000 will be $100 with a 1% interest rate over a year. The formula with these values plugged in is Interest = 10,000 x 0.01 x 1. The amount of return is going to be $100. The interest rate is only applied to the initial principal amount deposited with simple interest. That interest rate stays with the loan, and you cannot earn more interest on the new total amount unless you compound the interest rate by reinvesting the already accrued interest. Frequency of Compounding Interest Interest can be compounded daily, monthly, quarterly, or yearly. As noted, it will take time for you to grow your wealth a substantial amount. The total amount of interest you’ll accumulate depends on how often the interest is compounded and the amount of time your money is in the account. It is not a system where you can get rich quickly, but a system where you can get rich easily as you don’t have to do anything. The interest accumulates without additional work from you as long as it stays in the account for a substantial time. Compounding Interest Formula Compound interest can also be calculated with a formula. That formula is A = P(1 + r/n) (nt). The formula only looks complicated, but it’s actually not that difficult since all you need to do is input your values into it. In this formula, the letter A signifies the total amount. We already know the representation of the letters P, R, and N. The letter t represents the amount of time the money is put into the account, usually expressed in years. Compound Daily also has calculators available for you to use, including a compound interest calculator. You can simply input the values you’re looking to invest in and the interest rate offered. Then you’ll be able to determine how much money you can make with that investment offer. It’s also easier to compare multiple offers with this calculator. For example, you can input the values for one investment opportunity and the values of another opportunity to determine which is the better offer. The formula for compound interest is A = P(1 + r/n)(nt). So, let’s say you want to invest $10,000 at a rate of 5% interest that is compounded daily for five years. You plug the numbers into the formula and get this 10,000(1 + 0.05/365)(365 * 5). Your answer is $12,840.03432147 or $12,840.03. That’s $2,840.03 in accumulated interest. If you had the same scenario but with simple interest, you would’ve only earned $2,500 and a total amount of $12,500.00. As you can see, by compounding the interest you earned, you can make over $300 more than with the simple interest and the same monetary With compounding interest, you have the chance of earning more money in interest than with only investing with simple interest. However, you also should remember to only seek compounding interest with investments. For example, when you take out a loan and have compounding interest, you’ll be paying more than you would be with a simple interest loan. Compound Daily is here for you when you’re deciding which loan or investment opportunity is right for you.
{"url":"https://compounddaily.org/compounding-interest-5-interest-build-wealth/","timestamp":"2024-11-11T22:32:18Z","content_type":"text/html","content_length":"379503","record_id":"<urn:uuid:a8db26db-0b24-4921-af2e-9ed6cad067a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00486.warc.gz"}
How do I Rotate 90 degrees Counterclockwise about the origin Rotate 90 degrees Counterclockwise or 270 degrees clockwise about the origin Earlier we learned about 90 degrees clockwise rotation, and now we are going to talk about 90 degrees counterclockwise rotation. Before we go ahead and explain, please note that 90 degrees counterclockwise and 270 degrees clockwise are the same things and you need to use the same formula for both. What is the Formula to find out 90 degrees Counterclockwise rotations We swap the value of x and y and negate the value of y. So the value of x becomes the value of y and value of y becomes the value of x. so the formula is: (x, y) –> (-y, x) Before Rotation After Rotation (x, y) (-y, x) Question: Rotate 90 degrees counterclockwise about the origin C(2,1), B(3,7), and A(-5,6) Calculation Explanation: The Rule we used to get value Applying (x, y) –> (-y, x) formula to all values to get the new value. Before Rotation After Rotation C(2,1) C'(-1,2) B(3,7) B'(-7,3) A(-5,6) A'(-6,-5) A Quick Video That will Teach you the 90 degrees Clockwise Rotation Rule
{"url":"https://maths.forkids.education/90-degrees-counterclockwise-rotation-rule/","timestamp":"2024-11-13T04:19:30Z","content_type":"text/html","content_length":"85502","record_id":"<urn:uuid:5f3311e5-15f8-446c-a692-4c68595e54ff>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00113.warc.gz"}
Blog | Open & Closed Loop Buck Converter | MATLAB Helper A Buck converter is a powerful tool in the world of power electronics, allowing for efficient conversion of high input voltages to low output voltages through the use of controlled pulses. This type of converter is made up of a switch and energy storage elements, such as an inductor and capacitor, which work together to regulate the voltage supplied to the load. One of the key features of the Buck converter is its ability to act as a voltage regulator, using a transistor switch to control the flow of current to the load. It is widely used in applications that require a steady, reliable output voltage. There are two main types of Buck converters: open loop and closed loop. Open loop converters rely on the inductor to store energy and supply it to the load when the switch is opened, while closed loop converters use feedback to continuously adjust the output voltage. Both have their own advantages and disadvantages and are used in different applications. In this blog post, we will be discussing the differences between open loop and closed loop Buck converters, and how they are used in different applications. Circuit Configuration The circuit comprises a battery that provides input voltage, a switch for controlling output voltage, two energy storing elements inductor and capacitor for its main circuitry process and a load across which output voltage is measured. It has two states, on and off states. In the on-state, the inductor stores the energy and in the off-state, the capacitor stores and supplies energy where then only a calculated portion of the supply voltage and current is allowed for the load, instead of the relatively bigger peak voltage input source. This is a brief of the process of the buck Controlling Measures of Buck Converter In the power electronics domain or even in real life, controlling aspects are one of the most important aspects always. Things need to be controlled for the safe working of our life or even the circuits before, which might cause some other side effects that might not be good. Here, the value of the output is controlled according to our needs or the controlled voltage value we need across the load for its proper working. There are two types of control measures, i.e., open loop-controlled system and closed loop-controlled system. These two controlling measures will be used in the modelling of the DC-DC buck converter, and the best controlling measure will be found for this circuit. Open loop Controlled DC-DC Buck Converter modelling What is an Open loop controlling system? An open loop-controlled system is a control loop system where a group of elements are connected in a sequence to perform a specified function (or task) where the output is controlled without using any feedback and has no influence or effect on the control action of the input signal. There is no chance to correct the transition errors in open loop systems, or there is more chance to occur errors. But still, these controlling systems are widely used in many domains because of the input being free of the output values. Modelling of Open-loop Buck converter in MATLAB Simulink Step into the exciting world of Simulink with MATLAB Helper! Discover the basics of this powerful tool through our complimentary course of Simulink Fundamentals. Ready to show off your Simulink skills? Take our Simulink Fundamentals course quiz and earn a certificate of proficiency from MATLAB Helper. Score 60% or higher and book your certificate now! The open loop model of buck converter should be made under some considerations where it will be a discrete mode of sample time Setups before modelling- Select a new model in Simulink and follow the mentioned below, • A powergui block is taken from the library browser and select the simulation type discrete and sample time 1e-5. Click the configuration parameters tool or press Ctrl+e and select the solver type Fixed-step, Solver to discrete (no continuous states) and the Fixed-step size 1e-5 The theory behind selecting the parameters of components: - There are numerous buck converters made in power electronics of different ratings. Every Converter has a different combination of values of inductor and capacitor according to the ratings required in a particular circuit. Here in this model, a certain combination of inductor and capacitor is taken for which the voltage output is uniform and has fewer ripples. You can also choose a combination of inductor and capacitor for which the voltage output has fewer ripples and is uniform. The formula of the Buck converter The output voltage of the buck converter is the input voltage times the duty cycle. Duty Cycle (D) The duty cycle is the ratio of the signal on time to its total time period. In this model, D is taken 0.6 and the RLC branch where, Modelling: - • Search the following components in the library and select them for the blank model. Connect them same as the circuit configuration of buck converter (given above in circuit configuration). The pulse generator output is connected with the input port of the IGBT input port to control the switch through pulses, and the voltage measurement block is attached parallel to the resistive load to measure the voltage and give the output to the scope to see the results in a graph, and the rest connection is made same as a simple buck converter circuit (please refer to the circuit configuration above). Run the model after putting the value of time one second. The result after the simulation in the scope is – After zooming the scope, the graph shows the clear output after setting the voltage to 50; it isn't 50 but near 51 Volts • This is the output of an open loop controlled buck converter which, after setting reference voltage, doesn't give voltage perfectly 50, and in the power electronics domain, even a minor fluctuation of voltage is very sensitive to the circuits. • Let's check out if the closed-loop Converter gives any different results. Get Access to Models & Report! In an open loop buck converter, there is no feedback from output to input, contrary to the closed loop, which has a feedback circuit. Study the comparison of two models with simulation; Developed in MATLAB R2021a with Simulink, Simscape and Simscape Electrical Libraries. Closed loop Controlled DC-DC Buck Converter modelling In the closed loop converter, the IGBT switch is controlled by the pulses generated from the feedback of the output, which is compared with the input and the error or difference is compensated by controllers called PID (Proportional Integral Derivative). Usually, PID is used for very complex circuits; in this model, we will use PI (Proportional Integral) to get a good output. What is PID Controller? PID controller is a controller used in a closed loop feedback system where it takes feedback from the output and compares it with the input, and then compensates or aims to reduce the difference to zero through its proportional, integral and derivative algorithms. Let's see the three terms' characteristics. Proportional: - • This is the parameter that determines how fast the system responds, for controller term 'Gain' used for proportional. • The more the P value, the fast the system will respond and the more sensitive and less stable it will become. Integral: - • The parameter determines how fast, steady-state error is removed. • Smaller minutes per repeat will create larger integral actions, or larger values in repeats per minute measurement will create larger action. Derivative: - • The derivative constant is for predicting change, or the rate of change measured in the process variable or how far in the future you want to predict the rate of change. • It is the rate of change in the process variable, and the process variable must be a very clean signal; hence no noise within the signal; that's why the derivative is not often used. Modelling of Closed loop Buck converter in MATLAB Simulink The theory behind selecting the parameters of components: - To tune a model through PI, we need to get the transfer function of the model to be tuned. The transfer function of the buck converter is, After putting the values of L, C and R same as taken in the open loop model, the transfer function looks something like this, Modelling: - Take a transfer function with initial states and put the values the same as calculated. Take the PID controller, select the controller to PI, Time domain to discrete model, sample time to two, then click tune. Before tuning, connect the circuit using a scope, sum and constant as shown, After clicking tune in PI controller, a step plot reference track will open. You can tune the model by adjusting the response time and transient behaviour. On increasing the response, the signal responds fast and improving the transient behaviour to robust the transient in signal reduces. Then click the update block. Copy the PI tuned block to the open loop model made and take a constant, sum, add and repeating sequence and connect something like this, Give feedback of output to negative of sum and set the constant to 50 and make a repeating sequence with such values and generate a pulse that is given to IGBT switch. The final design of the model looks something like this. Run the model for one second and check the result in scope. The zoomed output scope shows how precise a closed-loop converter can produce after tuning it perfectly. Still, this model can be tuned more, but for now, this blog is to show the comparison of open loop buck converter output and closed loop buck converter output. Check out the zoomed scope producing voltage nearest to 50 Volts. In conclusion, the Buck converter is a widely used DC-DC step-down converter in power electronics, that efficiently converts high input voltages to low output voltages through the use of controlled pulses. The open-loop and closed-loop Buck converters are the two main types of Buck converters, where open-loop converters rely on the inductor to store energy and supply it to the load when the switch is opened and closed-loop converters use feedback to continuously adjust the output voltage. When comparing the output of the open loop buck converter and closed-loop converter, it is clear that the closed-loop converter gives a better output with precise results due to its feedback system. The closed-loop converter is more accurate and reliable, which makes it an ideal choice for applications that require a steady output voltage. I am very impressed with your post because this post is very beneficial for me and provide a new knowledge… Thanks. This Helped 🙂 Very educative article Where did you get the transfer function of the buck converter from? Yleft ( s right )= frac{frac{V_{i}}{LC}}{s^{2}+frac{1}{RC}s+frac{1}{LC}} {"email":"Email address invalid","url":"Website address invalid","required":"Required field missing"}
{"url":"https://matlabhelper.com/blog/simulink/open-loop-closed-loop-buck-converter/","timestamp":"2024-11-04T01:22:33Z","content_type":"text/html","content_length":"755623","record_id":"<urn:uuid:5c69ae6b-1ee3-41aa-b576-eacd576b6a44>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00040.warc.gz"}
aqa-synergy-paper-3-and-4-solution Archives - Fridge Physics When a force acts on a spring it is stretched or compressed, its length will changes by an amount e from its original length. In this tutorial you will learn how to calculate the force on a spring given its extension and spring constant. Note-Some mobile devices may require you to tap full screen during playback to view video content. The equation for this calculation is written like this: $F = { \text k \; \text x \; \text e}$ Chilled practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed. A spring is stretched 0.25 m, it has a spring constant of 20 N/m, calculate the force on the spring. Frozen practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed.. A force of 10 N is applied to a spring and it extends by 20 cm, calculate its spring constant. Science in context When a spring is stretched or compressed, its length will changes by an amount x from its original length, force F = ke. The force the spring exerts is its restoring force, it acts to restore the spring to its original length. Millie’s Master Methods The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Feedback to students in seconds – Voice to label thermal bluetooth technology… Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Force, Mass and Acceleration An object of constant mass accelerates in proportion to the force applied. In this tutorial you will learn how to calculate the force applied to an object if you are given its mass and acceleration. The equation for this calculation is written like this: $F = { \text m\; \text x \; \text a}$ Chilled practice question Calculate the force applied to cannon ball which accelerates at 7m/s^2 which has a mass of 5 Kg. Frozen practice question A rocket has a driving force of 15 N and a mass of 250 g. Calculate the acceleration of the rocket. Science in context Force (N) = mass (kg) × acceleration (m/s²). Therefore, an object of constant mass accelerates in proportion to the force applied. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Kinetic Energy In physics, the kinetic energy (KE) of an object is the energy that it has due to its motion. It is defined as the work needed for an object of a known mass to accelerate to a given velocity. What is Kinetic energy? In physics, the kinetic energy (KE) of an object is the energy that it has due to its motion. It is defined as the work needed for an object of a known mass to accelerate to a given velocity. Kinetic energy equation To calculate Kinetic energy we write the equation like this. $E_k = { 1 \over2} mv {^2}$ Kinetic energy demo In this tutorial you will learn how to calculate the energy stored in a moving object. Chilled practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed. Calculate the Kinetic energy store in a car with a mass of 850 Kg moving at a velocity of 3.5 m/s. Frozen practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed. A meteorite has 8000 J of Kinetic energy, calculate its mass if it has a velocity of 20 m/s Science in context Anything that is moving has energy in its kinetic energy store. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… The quantity power is the rate at which work is done. The quicker work is done the greater the power. What is Power? The quantity power is the rate at which work is done. The quicker work is done the greater the power. The standard metric unit of power is the Watt. The calculation of power is the work done in Joules divided by a unit of time seconds. 1 Watt is equivalent to 1 joule of work done per second. Many machines have their power rating displayed upon them in Watts, the higher the value the greater the power of the machine and the faster the rate of energy transfer. Power equation To calculate Power we write the equation like this. $P = { W \over \text{t}} = { E \over \text{t}}$ Power demo In this tutorial you will learn how to calculate power, the rate of doing work during an energy transfer. Chilled practice question Calculate the power of an electric fire if it transfers 950 KJ of energy in 5 minutes 30 seconds. Frozen practice question An electric heater has a power rating of 2200 W. How long to the nearest second does it take to transfer 500 KJ of energy. Science in context Power is the rate of energy transfer per second. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Elastic Potential Energy Stretching or squashing an object can transfer energy into its elastic potential energy store. What is Elastic potential energy? Stretching or squashing an object can transfer energy into its elastic potential energy store. Elastic energy is the mechanical potential energy stored in the material. Elastic energy occurs when objects are compressed, stretched or deformed. Elastic potential energy equation To calculate Elastic potential energy we use this equation. $E_e = { 1 \over2} ke {^2}$ Elastic potential energy demo In this tutorial you will learn how to calculate the energy stored in a stretched spring. Chilled practice question A spring is stretched 200 cm. The spring has a spring constant of 30 N/M. Calculate the stored elastic potential energy of the stretched spring. Frozen practice question Calculate the extension of a spring if it has a stored elastic potential energy of 540 J and a spring constant of 30 N/M. Science in context Stretching or squashing an object can transfer energy into its elastic potential energy store. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity. What is Charge? The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity. Charge is measured in coulombs (C). Electrons are really small and the effect of one electron would be really difficult to measure, It is easier to measure the effect of a large number of electrons. One Coulomb of charge contains 6 × 10^18 Charge equation To calculate Charge we use this equation. $Q = { \mathit I \, \mathit t} $ Charge demo In this tutorial you will learn how to calculate the the charge flowing in an electrical circuit. Chilled practice question Calculate the charge when a current of 16 A flows for 2 minutes. Frozen practice question How long must a current of 26 A flow to transfer 936 KC. Science in context The size of the current is the rate of flow of charge. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Resistance is an electrical quantity that measures how a device or material reduces the electrical current flow through it. What is Resistance? Resistance is an electrical quantity that measures how the device or material reduces the electrical current flow through it. Resistance is measured in ohms (Ω).If we make a comparison of resistance to water flow in pipes, the resistance is greater when the pipe is thinner, so the water flow is decreased It slows down which also happens to the flow of electricity. Resistance equation To calculate Resistance we write the equation like this. $V = { \mathit I \, \mathit R} $ Resistance demo In this tutorial you will learn how to calculate the resistance in an electrical circuit. Chilled practice question Calculate the resistance of a bulb supplied with 8 V and a current flow of 2 A. Frozen practice question Calculate the current in a circuit which has a resistance of 16 Ω and a potential difference of 8 V. Science in context Resistance reduces the flow of electricity. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Resistance in Series and Parallel Resistors in series and in parallel can change the total resistance in a circuit. What is Resistance in series and parallel? Resistors in series and in parallel change the total resistance in a circuit. Special components called resistors are made especially for creating a precise quantity of resistance in a circuit. They are generally made of metal wire or carbon and constructed to maintain a stable steady amount of resistance over a wide range of environmental conditions. Resistance in series and parallel equation To calculate Resistance in series we use this equation. $R_t = {\mathit R_1 \text + \mathit R_2}$ Resistance in series and parallel demo In this tutorial you will learn how to calculate the total resistance of resistors in series and parallel. Chilled practice question Calculate the current in a circuit if it is supplied with a voltage of 36 V and has two resistors in series one 6 Ω and the other 3 Ω. Frozen practice question Three resistors are connected in parallel, resistor A is 2 Ω , resistor B is 4 Ω, and resistor C is 5 Ω . What is the total resistance in the circuit ? Science in context Resistance in series and parallel change the total resistance in a circuit Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Electrical Power The power of an appliance is the energy that is transferred per second. Electric power is the rate, per unit time at which electrical energy is transferred by an electric circuit. What is Electrical power? The power of an appliance is the energy that is transferred per second. Electric power is the rate, per unit time, at which electrical energy is transferred by an electric circuit. The unit for power is the watt, which is the transfer of energy at the rate of one joule per second. Electric power can be produced by electric generators and batteries. Electrical power equation To calculate Electrical power we use this equation. $ {\mathit P \, \text = \mathit V \mathit I}$ Electrical power demo In this tutorial you will learn how to calculate the electrical power of an electrical appliance. Chilled practice question Calculate the power in a circuit when a p.d of 18 V and a current of 4 A is measured. Frozen practice question Find the current flowing through an appliance which has a power rating of 14 KW and a p.d of 230 V. Science in context The power of an appliance is the energy that is transferred per second Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!… Gravitational Potential Energy Lifting an object in a gravitational field transfers energy into the objects gravitational energy store. Gravitational potential energy is the energy an object has due to its height above Earth. What is Gravitational potential energy? Lifting an object in a gravitational field transfers energy into the objects gravitational energy store. Gravitational potential energy is the energy an object has due to its height above Earth. The equation for gravitational potential energy is GPE = mgh, where m is the mass in kilograms, g is the gravitational field strength (9.8 N/Kg on Earth), and h is the height above the ground in meters. Gravitational potential energy equation To calculate Gravitational potential energy we use this equation. $E_p = mgh$ Gravitational potential energy demo In this tutorial you will learn how to calculate the energy stored in an elevated object. Chilled practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed. A barrel is lifted onto a shelf 3.5 m from the ground. The barrel has a mass of 22 Kg. Calculate the energy in its G.P.E store. Frozen practice question Copy out the question and attempt to calculate the answer before watching the solution. Write down the equation and show all of your working, remember to add the units to your answer, this routine will guarantee you maximum marks in an exam. Mark your solution and correct if needed. A ski lift transfers 11 KJ of energy into a mans G.P.E energy store. The man has a mass of 55 Kg, calculate the height he was elevated. Science in context Lifting an object in a gravitational field transfers energy into the objects gravitational energy store. Millie’s Master Methods Millie’s Magic Triangle The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s… Calculation Master Method Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation… The Fridge Physics Store Teacher Fast Feedback Feedback to students in seconds – Voice to label thermal bluetooth technology… Get Fridge Physics Merch Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
{"url":"https://fridgephysics.com/tag/aqa-synergy-paper-3-and-4-solution/","timestamp":"2024-11-08T11:07:32Z","content_type":"text/html","content_length":"531482","record_id":"<urn:uuid:b904ecb0-42ee-47af-98df-9742f9af426d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00323.warc.gz"}
[Solved] Define a two-headed finite automaton (2DF | SolutionInn Define a two-headed finite automaton (2DFA) to be a deterministic finite automaton that has two read-only, bidirectional Define a two-headed finite automaton (2DFA) to be a deterministic finite automaton that has two read-only, bidirectional heads that start at the left-hand end of the input tape and can be independently controlled to move in either direction. The tape of a 2DFA is finite and is just large enough to contain the input plus two additional blank tape cells, one on the left-hand end and one on the right-hand end, that serve as delimiters. A 2DFA accepts its input by entering a special accept state. For example, a 2DFA can recognize the language {a^nb^nc^n| n ≥ 0}. a. Let A[2DFA] = {〈M, x〉| M is a 2DFA and M accepts x}. Show that A[2DFA] is decidable. b. Let E[2DFA] = {〈M〉| M is a 2DFA and L(M) = ⌀}. Show that E2DFA is not decidable. Fantastic news! We've Found the answer you've been seeking!
{"url":"https://www.solutioninn.com/study-help/introduction-theory-computation/define-a-twoheaded-finite-automaton-2dfa-to-be-a-deterministic-840337","timestamp":"2024-11-12T04:04:14Z","content_type":"text/html","content_length":"81683","record_id":"<urn:uuid:fe2c03e7-1ee8-4c3a-be23-4b5bc90a9fee>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00723.warc.gz"}
Meta-analysis: Principles and procedures Education And Debate Meta-analysis: Principles and procedures BMJ 1997 315 doi: https://doi.org/10.1136/bmj.315.7121.1533 (Published 06 December 1997) Cite this as: BMJ 1997;315:1533 1. Matthias Egger, reader in social medicine and epidemiology (egger{at}bristol.ac.uk)a, 2. George Davey Smith, professor of clinical epidemiologya, 3. Andrew N Phillips, professor of epidemiology and biostatisticsb This is the second in a series of seven articles examining the procedures in conducting reliable meta-analysis in medical research 1. Correspondence to: Dr Eggerm Meta-analysis is a statistical procedure that integrates the results of several independent studies considered to be “combinable.”1 Well conducted meta-analyses allow a more objective appraisal of the evidence than traditional narrative reviews, provide a more precise estimate of a treatment effect, and may explain heterogeneity between the results of individual studies.2 Ill conducted meta-analyses, on the other hand, may be biased owing to exclusion of relevant studies or inclusion of inadequate studies.3 Misleading analyses can generally be avoided if a few basic principles are observed. In this article we discuss these principles, along with the practical steps in performing meta-analysis. Observational study of evidence Meta-analysis should be viewed as an observational study of the evidence. The steps involved are similar to any other research undertaking: formulation of the problem to be addressed, collection and analysis of the data, and reporting of the results. Researchers should write in advance a detailed research protocol that clearly states the objectives, the hypotheses to be tested, the subgroups of interest, and the proposed methods and criteria for identifying and selecting relevant studies and extracting and analysing information. As with criteria for including and excluding patients in clinical studies, eligibility criteria have to be defined for the data to be included. Criteria relate to the quality of trials and to the combinability of treatments, patients, outcomes, and lengths of follow up. Quality and design features of a study can influence the results.4 5 Ideally, researchers should consider including only controlled trials with proper randomisation of patients that report on all initially included patients according to the intention to treat principle and with an objective, preferably blinded, outcome assessment.6 Assessing the quality of a study can be a subjective process, however, especially since the information reported is often inadequate for this purpose.7 It is therefore preferable to define only basic inclusion criteria and to perform a thorough sensitivity analysis (see below). The strategy for identifying the relevant studies should be clearly delineated. In particular, it has to be decided whether the search will be extended to include unpublished studies, as their results may systematically differ from published trials. As will be discussed in later articles, a meta-analysis that is restricted to published evidence may produce distorted results owing to such publication bias. For locating published studies, electronic databases are useful,8 but, used alone, they may miss a substantial proportion of relevant studies.9 10 In an attempt to identify all published controlled trials, the Cochrane Collaboration has embarked on an extensive manual search of medical journals published in English and many other languages.11 The Cochrane Controlled Trials Register12 is probably the best single electronic source of trials; however, citation indices and the bibliographies of review articles, monographs, and the located studies should also be Summary points Meta-analysis should be as carefully planned as any other research project, with a detailed written protocol being prepared in advance The a priori definition of eligibility criteria for studies to be included and a comprehensive search for such studies are central to high quality meta-analysis The graphical display of results from individual studies on a common scale is an important intermediate step, which allows a visual examination of the degree of heterogeneity between studies Different statistical methods exist for combining the data, but there is no single “correct” method A thorough sensitivity analysis is essential to assess the robustness of combined estimates to different assumptions and inclusion criteria A standardised record form is needed for data collection. It is useful if two independent observers extract the data, to avoid errors. At this stage the quality of the studies may be rated, with one of several specially designed scales.13 14 Blinding observers to the names of the authors and their institutions, the names of the journals, sources of funding, and acknowledgments leads to more consistent scores.14 This entails either photocopying papers, removing the title page, and concealing journal identifications and other characteristics with a black marker, or scanning the text of papers into a computer and preparing standardised formats.15 16 Standardised outcome measure Individual results have to be expressed in a standardised format to allow for comparison between studies. If the end point is continuous—for example, blood pressure—the mean difference between the treatment and control groups is used. The size of a difference, however, is influenced by the underlying population value. An antihypertensive drug, for example, is likely to have a greater absolute effect on blood pressure in overtly hypertensive patients than in borderline hypertensive patients. Differences are therefore often presented in units of standard deviation. If the end point is binary—for example, disease versus no disease, or dead versus alive) then odds ratios or relative risks are often calculated (box). The odds ratio has convenient mathematical properties, which allow for ease in combining data and testing the overall effect for significance. Absolute measures, such as the absolute risk reduction or the number of patients needed to be treated to prevent one event, 17 are more helpful when applying results in clinical practice (see below). Odds ratio or relative risk? Odds and odds ratio The odds is the number of patients who fulfil the criteria for a given endpoint divided by the number of patients who do not. For example, the odds of diarrhoea during treatment with an antibiotic in a group of 10 patients may be 4 to 6 (4 with diarrhoea divided by 6 without, 0.66); in a control group the odds may be 1 to 9 (0.11) (a bookmaker would refer to this as 9 to 1). The odds ratio of treatment to control group would be 6 (0.66÷0.11). Risk and relative risk The risk is the number of patients who fulfil the criteria for a given end point divided by the total number of patients. In the example above the risks would be 4 in 10 in the treatment group and 1 in 10 in the control group, giving a risk ratio, or relative risk, of 4 (0.4÷0.1). Statistical methods for calculating overall effect The last step consists in calculating the overall effect by combining the data. A simple arithmetic average of the results from all the trials would give misleading results. The results from small studies are more subject to the play of chance and should therefore be given less weight. Methods used for meta-analysis use a weighted average of the results, in which the larger trials have more influence than the smaller ones. The statistical techniques to do this can be broadly classified into two models,18 the difference consisting in the way the variability of the results between the studies is treated. The “fixed effects” model considers, often unreasonably, that this variability is exclusively due to random variation.19 Therefore, if all the studies were infinitely large they would give identical results. The “random effects” model20 assumes a different underlying effect for each study and takes this into consideration as an additional source of variation, which leads to somewhat wider confidence intervals than the fixed effects model. Effects are assumed to be randomly distributed, and the central point of this distribution is the focus of the combined effect estimate. Although neither of two models can be said to be “correct,” a substantial difference in the combined effect calculated by the fixed and random effects models will be seen only if studies are markedly heterogeneous.18 Bayesian meta-analysis Some statisticians feel that other statistical approaches are more appropriate than either of the above. One approach uses Bayes's theorem, named after an 18th century English clergyman.21 Bayesian statisticians express their belief about the size of an effect by specifying some prior probability distribution before seeing the data, and then they update that belief by deriving a posterior probability distribution, taking the data into account.22 Bayesian models are available under both the fixed and random effects assumption.23 The confidence interval (or more correctly in bayesian terminology, the 95% credible interval, which covers 95% of the posterior probability distribution) will often be wider than that derived from using the conventional models because another component of variability, the prior distribution, is introduced. Bayesian approaches are controversial because the definition of prior probability will often be based on subjective assessments and opinion. Heterogeneity between study results If the results of the studies differ greatly then it may not be appropriate to combine the results. How to ascertain whether it is appropriate, however, is unclear. One approach is to examine statistically the degree of similarity in the studies' outcomes—in other words, to test for heterogeneity across studies. In such procedures, whether the results of a study reflect a single underlying effect, rather than a distribution of effects, is assessed. If this test shows homogeneous results then the differences between studies are assumed to be a consequence of sampling variation, and a fixed effects model is appropriate. If, however, the test shows that significant heterogeneity exists between study results then a random effects model is advocated. A major limitation with this approach is that the statistical tests lack power—they often fail to reject the null hypothesis of homogeneous results even if substantial differences between studies exist. Although there is no statistical solution to this issue, heterogeneity between study results should not be seen as purely a problem for meta-analysis—it also provides an opportunity for examining why treatment effects differ in different circumstances. Heterogeneity should not simply be ignored after a statistical test is applied; rather, it should be scrutinised, with an attempt to explain it.24 Graphic display Results from each trial are usefully graphically displayed, together with their confidence intervals. Figure 3 represents a meta-analysis of 17 trials of ß blockers in secondary prevention after myocardial infarction. Each study is represented by a black square and a horizontal line, which correspond to the point estimate and the 95% confidence intervals of the odds ratio. The 95% confidence intervals would contain the true underlying effect in 95% of the occasions if the study was repeated again and again. The solid vertical line corresponds to no effect of treatment (odds ratio 1.0). If the confidence interval includes 1, then the difference in the effect of experimental and control treatment is not significant at conventional levels (P>0.05). The area of the black squares reflects the weight of the study in the meta-analysis. The confidence interval of all but two studies cross this line, indicating that the effect estimates were non-significant (P>0.05). The diamond represents the combined odds ratio, calculated using a fixed effects model, with its 95% confidence interval. The combined odds ratio shows that oral ß blockade starting a few days to a few weeks after the acute phase reduces subsequent mortality by an estimated 22% (odds ratio 0.78; 95% confidence interval 0.71 to 0.87). A dashed line is plotted vertically through the combined odds ratio. This line crosses the horizontal lines of all individual studies except one (N). This indicates a fairly homogenous set of studies. Indeed, the test for heterogeneity gives a non-significant P value of 0.2. A logarithmic scale was used for plotting the odds ratios in figure 3. There are several reasons that ratio measures are best plotted on logarithmic scales.25 Most importantly, the value of an odds ratio and its reciprocal—for example, 0.5 and 2—which represent odds ratios of the same magnitude but opposite directions, will be equidistant from 1.0. Studies with odds ratios below and above 1.0 will take up equal space on the graph and thus look equally important. Also, confidence intervals will be symmetrical around the point estimate. Relative and absolute measures of effect Repeating the analysis by using relative risk instead of the odds ratio gives an overall relative risk of 0.80 (95% confidence interval 0.73 to 0.88). The odds ratio is thus close to the relative risk, as expected when the outcome is relatively uncommon (see box). The relative risk reduction, obtained by subtracting the relative risk from 1 and expressing the result as a percentage, is 20% (12% to 27%). The relative measures used in this analysis ignore the absolute underlying risk. The risk of death among patients who have survived the acute phase of myocardial infarction, however, varies widely.26 For example, among patients with three or more cardiac risk factors the probability of death at two years after discharge ranged from 24% to 60%.26 Conversely, two year mortality among patients with no risk factors was less than 3%. The absolute risk reduction or risk difference reflects both the underlying risk without treatment and the risk reduction associated with treatment. Taking the reciprocal of the risk difference gives the “number needed to treat” (the number of patients needed to be treated to prevent one event).17 For a baseline risk of 1% a year, the absolute risk difference shows that two deaths are prevented per 1000 patients treated (1). This corresponds to 500 patients (1÷0.002) treated for one year to prevent one death. Conversely, if the risk is above 10%, less than 50 patients have to be treated to prevent one death. Many clinicians would probably decide not to treat patients at very low risk, given the large number of patients that have to be exposed to the adverse effects of ß blockade to prevent one death. Appraising the number needed to treat from a patient's estimated risk without treatment and the relative risk reduction with treatment is a helpful aid when making a decision in an individual patient. A nomogram that facilitates calculation of the number needed to treat at the bedside has recently been published.27 Meta-analysis using absolute effect measures such as the risk difference may be useful to illustrate the range of absolute effects across studies. The combined risk difference (and the number needed to treat calculated from it) will, however, be essentially determined by the number and size of trials in patients at low, intermediate, or high risk. Combined results will thus be applicable only to patients at levels of risk corresponding to the average risk of the trials included. It is therefore generally more meaningful to use relative effect measures for summarising the evidence and absolute measures for applying it to a concrete clinical or public health situation. Sensitivity analysis Opinions will often diverge on the correct method for performing a particular meta-analysis. The robustness of the findings to different assumptions should therefore always be examined in a thorough sensitivity analysis. This is illustrated in figure 4 for the meta-analysis of ß blockade after myocardial infarction. Firstly, the overall effect was calculated by different statistical methods, by using both a fixed and a random effects model. The 4 shows that the overall estimates are virtually identical and that confidence intervals are only slightly wider with the random effects model. This is explained by the relatively small amount of variation between trials in this meta-analysis. Secondly, methodological quality was assessed in terms of how patients were allocated to active treatment or control groups, how outcome was assessed, and how the data were analysed.6 The maximum credit of nine points was given if patient allocation was truly random, if assessment of vital status was independent of treatment group, and if data from all patients initially included were analysed according to the intention to treat principle. Figure 4 shows that the three low quality studies (≤7 points) showed more benefit than the high quality trials. Exclusion of these three studies, however, leaves the overall effect and the confidence intervals practically unchanged. Thirdly, significant results are more likely to get published than non-significant findings,28 and this can distort the findings of meta-analyses. The presence of such publication bias can be identified by stratifying the analysis by study size—smaller effects can be significant in larger studies. If publication bias is present, it is expected that, of published studies, the largest ones will report the smallest effects. Figure 4 shows that this is indeed the case, with the smallest trials (50 or fewer deaths) showing the largest effect. However, exclusion of the smallest studies has little effect on the overall estimate. Finally, two studies (J and N; see 3) were stopped earlier than anticipated on the grounds of the results from interim analyses. Estimates of treatment effects from trials that were stopped early are liable to be biased away from the null value. Bias may thus be introduced in a meta-analysis that includes such trials.29 Exclusion of these trials, however, affects the overall estimate only The sensitivity analysis thus shows that the results from this meta-analysis are robust to the choice of the statistical method and to the exclusion of trials of poorer quality or of studies stopped early. It also suggests that publication bias is unlikely to have distorted its findings. Meta-analysis should be seen as structuring the processes through which a thorough review of previous research is carried out. The issues of completeness and combinability of evidence, which need to be considered in any review,30 are made explicit. Was it sensible to have combined the individual trials that comprise the meta-analysis? How robust is the result to changes in assumptions? Does the conclusion reached make clinical and pathophysiological sense? Finally, has the analysis contributed to the process of making rational decisions about the management of patients? It is these issues that we explore further in later articles in this series. Funding: ME was supported by the Swiss National Science Foundation. The department of social medicine at the University of Bristol and the department of primary care and population sciences at the Royal Free Hospital School of Medicine, London, are part of the Medical Research Council's health services research collaboration.
{"url":"https://www.bmj.com/content/315/7121/1533?ijkey=89cee01be887c4e603fae7c7da28cc26cbab30f7&keytype2=tf_ipsecsha","timestamp":"2024-11-13T22:32:24Z","content_type":"text/html","content_length":"166827","record_id":"<urn:uuid:a7cca55b-f29c-473d-b1f0-4fa6abeec2e8>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00787.warc.gz"}
What Does Delta Mean In Finance? - commons-credit-portal.orgWhat Does Delta Mean In Finance? What Does Delta Mean In Finance? What Exactly Is Delta? Delta () is a risk statistic that calculates how much a derivative, such as an options contract, will vary in price if the underlying securities changes by $1. underpinning safety A derivative product, such as futures, ETFs, and options, is based on an underlying asset, which is a stock or bond. In most circumstances, the underlying security is the object that one party in a derivative contract must supply and the other party must accept. underlying-security (https://www.investopedia.com) Investopedia’s definition of underlying security. Option traders may also use the delta to calculate the hedging ratio needed to become delta neutral. Similarly, What does delta mean in a budget? The ratio of an option’s price change to the price change of the underlying asset. The hedging ratio is also known as the risk-to-reward ratio. Also, it is asked, What is a delta value? The delta is a theoretical estimate of how much an option’s value would vary if the underlying securities moves $1 up or down. The Delta values vary from -1 to +1, with 0 denoting an option where the premium scarcely varies in relation to the underlying stock’s price fluctuations. Secondly, How do you calculate delta? Simply subtract the smaller number from the bigger one to get the delta – or difference – between them. The delta between 3 and 6 is, for example, (6 – 3) = 3. Add the two integers together if one of them is negative. Also, What does delta mean on a balance sheet? The price sensitivity of a derivative to the value of the underlying asset. People also ask, What is delta mean in business? Delta () is a risk statistic that calculates how much a derivative, such as an options contract, will vary in price if the underlying securities changes by $1. Option traders may also use the delta to calculate the hedging ratio needed to become delta neutral. Related Questions and Answers What does delta mean in payroll? In math and physics, an incremental change in a measure is referred to as “a delta.” A question concerning your delta is, “How much will your salary grow next year?” What is a good delta in stocks? A delta of about 0.5 or -0.5 is typical for an at-the-money option. The effect of a change in volatility is measured. Is delta a percentage? Delta is a metric that is expressed as a percentage. The delta of calls is always positive between 0 and 1.00, whereas the delta of puts is always negative between 0 and -1.00. A futures contract’s delta is 1.00. The delta is often referred to sans the decimal point by traders. Does delta mean average? the average y per unit x change (i.e. the change of y over the change of x). Delta is the first letter in the Greek term o diaphorá, which means “different.” (Derivatives and differentials, which similarly express change by tiny quantities, are notated with the little Latin letter d in a similar fashion.) What does delta mean in forecasting? Delta is a ratio that illustrates how much the price of a derivative will shift if the price of an underlying asset changes. Investors use delta to forecast derivatives movement and evaluate the risks associated with their investments. What does a high delta mean in options? Call options have a positive delta, whereas put options have a negative delta. Because an increase in the stock price is beneficial for call options but negative for put options, this is the case. A positive delta indicates that you are long on the market, whereas a negative delta indicates that you are short. What is the delta in mortgage terms? The delta ratio compares the price change of an underlying asset to the price change of a derivative. What is a delta raise? Delta Air Lines said on Thursday that the majority of its 75,000 workers would get a 4% wage hike, their first since the autumn of 2019, before the Covid epidemic. As travel demand dried up during Covid, airlines were among the most impacted, with all of the main carriers reporting record losses. What does delta mean in manufacturing? Numbers have three dimensions or aspects: 1) their quantitative value (probably the least relevant to the business owner), 2) the delta (change) they reflect in the amount being measured, and 3) the vector or direction of the difference between that change and the previous value or objective (benchmark) Does delta increase with price? The impact of stock price change on delta. As an option becomes more in-the-money, the likelihood that it will be in-the-money upon expiry rises. As a result, the option’s delta will rise. The likelihood of an option being in the money at expiry reduces as it moves farther out of the money. What is delta option example? For starters, delta refers to how much an option’s price changes for every $1 movement in the underlying asset. A delta of 0.6, for example, indicates that for every $1 that the underlying stock rises or falls, the option rises or falls by $0.60. How do you hedge a delta? Delta hedging techniques aim to lower the directional risk of a stock or option investment. An investor who buys or sells options and then balances the delta risk by purchasing or selling an equal quantity of stock or ETF shares is the most basic sort of delta hedging. Is delta absolute value? Delta: values that are continually increasing or decreasing. Absolute values are those that cannot change. Why is delta used for change? It’s utilized since the word difference has a latin derivation that likewise begins with the letter “d.” Differ is derived from the Latin word differre, which is derived from the verb ferre, which is sometimes interpreted as “carry.” This is the source of almost every word that includes the word “differ.” Is delta the same as derivative? Typically, d is the complete differential (endlessly tiny change) of a parameter, delta is its finite change, small delta may describe the infinitely small variation of a parameter, partial derivative illustrates the change in the value of a thermodynamic function when one of its parameters is changed What does delta mean in Excel? The DELTA function compares two numeric numbers to see whether they are equal. DELTA yields 1 when all values are equal. DELTA yields 0 when the values are different. DELTA may therefore be used to quickly count pairs of equal integers. What is delta hedging in finance? Delta hedging is an options trading method aimed at mitigating, or hedging, the directional risk associated with price swings in the underlying asset. The strategy use options to mitigate the risk of a single other option holding or a portfolio of assets. Is Lower delta better options? Remember that the lesser the delta, the less the option’s price will be influenced by stock movement. As a result, even if the underlying stock trades higher over time, the call option with a low delta may still trade down, even to zero, even if the stock trades higher. Does delta increase closer to expiration? For near or at-the-money options, Delta tends to rise closer to expiry. Gamma, a measure of delta’s rate of change, is used to further assess delta. Delta may also alter in response to variations in implied volatility. Why delta means difference? Both meanings start with the letter D, the same as the Greek letter, which should make them easier to remember: The most prevalent meaning of the uppercase delta is difference. It is just the difference in an amount, or change in a quantity. When we say delta y, we’re referring to the change in y or the amount by which y changes. Is delta the same as variance? The delta technique is a generic way to calculate the variance of a function of asymptotically normal random variables with known variance. What is lender paid PMI? Lender-paid private mortgage insurance (LPMI) is a sort of PMI that your mortgage lender arranges and pays for. You’ll usually have to pay a higher interest rate for this service. Delta is a measure of the rate of change in an investment. It compares how much the price has changed with the price at some previous time. Delta is positive when the price has gone up, and negative when it’s gone down. Delta can also be used to compare two different investments. This Video Should Help: Delta is the difference between an option’s strike price and its current price. For example, if a stock has a $10 strike price and it is currently trading at $15, then the delta would be +5. Delta can also be used to measure how much an option’s value changes with respect to the underlying asset’s movement. Reference: what is delta in options. Related Tags • price delta meaning • delta formula • how to calculate delta of an option • option chain with delta values • delta trading strategy
{"url":"https://commons-credit-portal.org/what-does-delta-mean-in-finance/","timestamp":"2024-11-09T09:43:02Z","content_type":"text/html","content_length":"98721","record_id":"<urn:uuid:cd82f0b0-c544-4e19-aa60-d69c11e2c1d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00031.warc.gz"}
What are some examples of non-uniform motion? | SolidWorks Assignment Help What are some examples of non-uniform motion? Is this a term you might use, which makes no sense, but I know that motion is a process by definition, rather than being represented by a fixed segment of a curved surface? (Surely this needs an explanation.) Is it just a motion or is it even non-uniform? Yes, your choice would have to be either “differencing without an acceleration” or “motion” or “variando il non avverso” (because there are quite a lot of other examples out there), or something similar. Shameless me! Would it be the same if both were accelerating and not in one direction and moving/no acceleration? So that how? (That is why I added the acceleration to remove the “no velocity” I’m referring to.) If no velocity then what are you saying is velocity is not just an acceleration? Is it that velocity is not an acceleration in one direction? There’s no way you could include acceleration without a velocity component, since velocity is a shape (you’re trying to use a shape component for simplicity). Also, the rule does not mean that when velocity is an axis function is a rule different than acceleration. The rest is just velocity, not acceleration. There’s no way you could include velocity without an acceleration component because velocity is a shape (${\mathbf{U}}_1$). I think the only way you could do that would be if you added an acceleration component from the direction you want, but it’s too shallow. So that would still mean velerind. The rule would say which direction the event was going, not which direction it would take if and how. We can also use velocity as a shorthand for acceleration, but which direction of velocity it should be in. (Like some of these problems used “in” – in which direction should you use it?) Gaius, if we can define a function as the function that graphically indicates the motion of a point in 3D space, then the velocity function you have is gakk Âs that is this – £Ã2Ão¾Ão·1£g And that is exactly the law you talked about (you said something about friction you said). You might need some more understanding, after all, why friction doesn’t have a definition in Physics of Particles: it’s just an intrinsic property of the material surrounding a point (because it’s the surface itself). If it were true, then why did friction to get a meaning in Physics of Particles? Now, if velerind can’t be used to describe the motion of a point in 3D space but just means that you can use an acceleration component, then 3D’s acceleration would have to be there. But that’s essentially what friction does for a point in 3D-space. There’s also the issue of theWhat are some examples of non-uniform motion? The problem is as easy as picking up a uniform surface and moving it on a horizontal strip. Think of a line segment in space. This is extremely useful if one wants to draw square grid lines on straight lines – if only a few points lie on that grid, then it really is pretty much obvious to just move the grid (eg, moving so much that it becomes impossible to draw square grid lines) to the right with the right hand line, like on a diagonal line. That’s where the extra “theory” is employed, where different “ideas” (like “real” motions) need to be applied. Note that this new physics term “nondecision” has yet to be incorporated into the definition. Pay Someone To Do My Online Class What’s the relation between this term “nondecision” and other tools for understanding and modeling moving? I find it hard to believe that any nonderfore seen physics could describe this situation. But I need some help. If we imagine one set of particles emitting their particles to their desired positions, the equation describes the motion of the particles to specific locations. Without it the interaction term would of course still be force-free and no longer be an intrinsic property. What it does to the particles is something called inertia, the principle of which is that someone does not move to an empty place and then moving back to find another. I don’t find it helpful to say yes, if it wasn’t so simple to explain why the particles would gravitate to certain locations. But it just tells you that something else is moving up or down or sideways. That’s it! (Or is it 2 dimensions? How many systems are here now?) So what does it mean for a particle to move on a horizontal strip, just like that? Another obvious way to look at these structures would be as for individual steps we could move the particle at a rate with its time. This number goes up and down with some time, or even until it needs a bit of breathing. If time goes down, it would become something of this scale and how will it react, moving with equal time? Or if time decreases, it tends to increase to some extent? Or maybe, at what point does it become as simple as a two dimension system? Another question: can the particle’s “energy transfer” is that it keeps it moving up and down, or should the particles be changing position? Maybe. Maybe (no need for an explicit definition) there is a way to figure out where the particles are displaced far enough downstream to form these particles, where I see them move in the direction of the strip? We need to know how much of this energy is coming from how long we have (or how many people have moved or will move through it) to begin moving the particles. If we were to think that moving the particles at one rate, before 2, we get the energy transfer rate (the first time the particles become a particle) of zero. This means, for a classical computer, informative post if we have 2 computers only to move a particle at speed 1 of some given speed and then try to move it at a speed 2 of another computer over that amount of time, the entire time of the computer’s execution becomes zero. Or when with a real particle problem we have 2 computers, whose speed cannot be changed without some change in the position of a second computer, the energy transfer rate falls off. But the algorithm for moving particles up and down in a strip is the same, the paper is different, the particle problem is more simple to solve. It is slightly more complicated but interesting to me. I think the point is that there is a “moving action” – what happen if you do not live in a world like 4 dimensions? The problem is that there is a great deal of nonunWhat are some examples of non-uniform motion? What are some examples of non-uniform motion in a free space. As the linear algebra “H” now stands for Hamming volume, so does this notation. In some discussions throughout the paper we ask about a particular non-uniform motion: Given the potential: f(x,y) is called a uniform motion iff it is uniformly consistent with the corresponding distance. It is not necessarily a diffusion however, as the free space (i. How Do You Get Your Homework Done? e. a compact space) does not have a closed, constant function, hence the equilibrium is not uniform however. To see – how “deterministic” is it’s not that, but can someone tell me if the word “uniform” holds at all? What are some examples of non-uniform motion in a free space. As the linear algebra “H” now stands for Hamming volume, so does this notation. Let us turn… to another reference that i don’t realize that i describe as.. its behavior and is there anywhere Its not a variable outside the domain when we say something we mean to understand…. Maybe they could define it to be something less then the time it takes for it to change… something that is very like the time it takes to move from one state to another…. Online Class Tutors Review Something like an instant. There can also be non-uniform motions. What is a non-uniform motion? If there are numbers of linearly independent states in a given state space we can say on each level, the least amount moves we can make (and we know many ways to do it) so in a non-uniform sense. And why are we doing this if we are in a sense letting people learn to say it “is there any state with properties we don’t know anymore” can we have a way to do that? Some points. 1. If we didn’t have continuous actions we would need to introduce discrete sets to manage the movement. These can be formulated in terms of the dynamic map, on which we are going to define a concept using the dynamic map as a time slice. We’ll try to avoid language-difficulty when thinking about moving from one state to another, because of the time complexity of this idea. 2. We will need to drop a common language-language overhead here – the space of linear operators has the essence of a linear programming language, where all of these operators are in the Euclidean space. Because of these differences the actual concept isn’t very relevant, but just a little bit more of a problem for the general case of linear programming, so a little bit more detail is needed. 3. The more precise we are, i.e. what we call its “hype” is about the way in which that language is presented. Does it have the essence of a discrete-time Turing machine? Not sure if it’s a good idea or not. In fact we simply expect it to use a dynamic language if we attempt to understand a concrete algorithm trying to run it. 4. We will try to define “weak convergence” better. Since to get real-time convergence, we need to have the property that each control in the solution is locally-zero free. Help Me With My Homework Please In this case i don’t imagine a problem involving the solution to ‘loop’ by control of the map. Though it could be that we’re asking for $\left\lbrace \varphi\right\rbrace$ = 0 when the solution does not exist, or the boundary condition on the solution does. Are the speed of light getting faster than what we think it would have been going today? It seems to me that weak convergence isn’t the way we think it’ll be described, although it might indeed seem a small leap from other sense phenomena like Lyapunov exponent. 5. This idea is not so hard. Let’s view it realistically. A state (a topological space) with $dim \mathbb{T}= N \times 1$ is a bounded linear block of linear operators, where $dim \mathbb{T}= \mathrm{dim}\mathcal{T}$ is the matrix’s dimension and $\mathrm{dim}\mathcal {T}$, its submatrix. A set $L \subset \mathrm{span}\mathcal{S}(\mathbb{T})$, has dimension $\mathrm{dim}\mathcal{T}$ if every linear element of it vanishes rapidly. Because of its direct relation on all of $\mathbb{T}$, this one block doesn’t support one of the two topologies of an entire language $\mathcal{T}$. 6. Why is it that they don’t have a dual meaning to this idea? Because we are already talking about linear
{"url":"https://solidworksaid.com/what-are-some-examples-of-non-uniform-motion-24966","timestamp":"2024-11-02T04:28:12Z","content_type":"text/html","content_length":"159360","record_id":"<urn:uuid:c887951e-0ccc-4549-bacd-7fc739dfc575>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00853.warc.gz"}
How Economic Order Quantity model can help inventory? - Supply Brain What is Economic Order Quantity? Economic Order Quantity is basically the ideal quantity you must order when trying to minimize inventory costs. These costs include holding inventory, shortage and costs involving lead time, such as transportation and logistics. The model is used at the continuous review inventory policy, in which the level of inventory is always monitored, and the order is made at specific times. EOQ assumes a constant demand and a fixed deplete rate. In short terms, the EOQ calculate the proper reorder point and quantity, to make sure an optimal replenishment with no shortages. EOQ was developed in 1913 by Ford W. Harris and is used until now. The purpose of Economic Order Quantity The EOQ is an optimization model that will try to get in the most economic scenario of order. Trying to equalize holding stock and order. So, it’s a model of optimization for order and production It is a model that will help you set how much you should order and how many times. It usually considers a long cycle, such as the annual revenue of a company. The reason to apply optimization models is to arrive at the best possible scenario. EOQ seeks to find the optimal order costs. At the same time, you may economize buying more items, you should for instance consider the holding costs. You can face, for example, a risk of not being able to sell all of these items. Or the risk of loosing items for obsolescence. A series of costs related to maintain items and making order. Therefore, what EOQ does is finding the balance between those points, cutting costs, and increasing profit in each order. So, it will look at the annual demand to find those optimized values. Economic Order Quantity Formula As we established, EOQ is calculated by minimizing total cost per order. If we think in the first order as derivative of zero, other components are costs of holding inventory and ordering. So, let’s pass for each element that composes the EOQ formula before explaining more about the concept: D: Annual Quantity Demanded Q: Volume per Order S: Ordering Cost C: Unit Cost H: Holding Cost I: Carrying Cost K: Transaction Cost At the end we have a basic formula for EOQ: Q = √2DS/H EOQ’s graphic representation When trying to answer the question about the assumption of costs estimations, the specialist in supply chain Nicolas Vandeput proposes two ways: the first one is a graphic representation of supply chain costs and the second a mathematical model. The graph would show that when you are close to the optimal order quantity, the total cost curve is rather flat. So, if you stay close to Q, you’ll get a total cost close to the optimal: EOQ trough time Although the first EOQ model was developed in 1913, a lot of extensions were proposed afterwards. It’s a model that’s been thoroughly discussed in the academic literature and because of that is a very solid model. One of the extensions is the Production Rate will adapt EOQ for a internal production process, instead of considering an external supplier. For this model this model, Vandeput concludes that producing too much and too fast doesn’t help reducing overall costs. Another extension is the Backorders. It proposes that allowing some backorders are actually optimal for supply chains, instead of aiming for 100% service level. In other words, having backorders are good are better than serving every order from on-hand inventory. The extension of discounts incorporates the discounts offered by suppliers. The good thing about this extension is that it will optimize considering lower prices from larger orders. The EOQ model is a good model for small business with a large and variable inventory. It is one of many ways to a company make itself more efficient and profitable. A bad side of EOQ is that it assumes that the demand will be the same over time. Also, the calculation assumes that ordering prices and costs remain constant. With an artificial intelligence tool, the definition of the Economic Order Quantity aligned to the stock policy can be more assertive. With Supply Brain we apply a daily review methodology and optimize the SLA. Thus, we avoid excess and rupture of stock, taking into consideration freight costs and costs to keep a product in stock. Want to know our tool? Get in touch with us!
{"url":"https://supplybrain.ai/en/how-economic-order-quantity-model-can-help-inventory/","timestamp":"2024-11-10T20:44:51Z","content_type":"text/html","content_length":"74845","record_id":"<urn:uuid:7111b8c7-67c5-4ea6-b0f3-e640ee5cc1a7>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00592.warc.gz"}
The Math Behind the DRS Stars @vRobM and I were discussing an unrelated topic when DRS and the mysterious stars came up. (**)? Our first instinct was this was the “special sauce” between the two all beef patties. However after some creative KB searching we came across a formula that describes it quite well: DRS Recommendations To understand the stars, you need to understand where to find them and what they mean. The following is taken from the VI3 docs: Priority for this recommendation, as a number of stars. Five stars, the maximum, indicate a mandatory move because of a host entering maintenance mode or affinity rule violations. Other ratings denote how much the recommendation would improve the cluster’s performance; from four stars (significant improvement) to one star (slight). So that explains what the stars indicate as far as recommendations go. But what makes a 3 star and what makes a 4 star recommendation? Calculating the Priority Turns out that this isn’t exactly the special sauce we thought it was. Rather, there is some carefully reasoned out math that goes into this. This KB article spells it out in detail, but we’ll hit the high points. First the formula: 6 – ceil(LoadImbalanceMetric / 0.1 * sqrt(NumberOfHostsInCluster)). There are two variables in there and only one of which is obvious, “LoadImbalanceMetric” however can be interesting: LoadImbalanceMetric is the current host load standard deviation shown on the cluster’s Summary page of the vSphere Client. For each host, compute the load on the host as sum(expected VM loads)) / (capacity of host). Then compute the standard deviation of the host load metric across all hosts to determine So where does one find this standard deviation? Select your cluster, then summary and look for the following section: In our particular case, not much to look at, as well, she is seemingly a well balanced cluster. However let’s work through the formula with the assumption that we have a 2 node cluster and a standard deviation of 0.282 (the “target” from above): 6 – ceil(0.282 / 0.1 * sqrt(2)). What is Ceilias brother Ceil doing with my numbers? Turns out, it is not a family of mathematicians busily flicking their abacus’. ‘Ceil’ here, represents the Ceiling function, which I’m sure you’ve heard of before. According to Wikipedia the Ceiling function is: In mathematics and computer science, the floor and ceiling functions map a real number to the largest previous or the smallest following integer, respectively. More precisely, floor(x) = ⌊x⌋ is the largest integer not greater than x and ceiling(x) = ⌈x⌉ is the smallest integer not less than x. 6 – the smallest number not less than (0.282 / .1 * sqrt(2)) which equals 2, hence, a 2 star recommendation. (**)! More info: Duncan Epping @ Yellow-Bricks has put together an excellent Deep-dive page for DRS, where you can get into much greater detail. WolframAlpha to the Rescue While this can easily be done in any random calculator (or in the heads of some folk) I used WolframAlpha to good avail. It’s quick and gives you a graphical breakdown of the formula. Just plug in new values and go! The recommendations and mathematical bits take place behind the scenes seamlessly. After all, that is the magic of DRS. However, it helps to have an understanding of the actual logic and math that goes into those recommendations so you can better understand your cluster, and better plan for new hosts and workloads. As always drop a line in the comments or tweet to either @vRobM or myself on Update: Thanks for the excellent deep-dive page Duncan! 11 thoughts on “The Math Behind the DRS Stars” • Pingback: Tweets that mention The Math Behind the DRS Stars -- Topsy.com • Wrote a couple of words around that bit as well in my deepdive. • Pingback: All things virtual XI « TheSaffaGeek • Excellent page. Not sure how I missed it the first time around. I've updated my post to include a link. Thanks! • Perfect punch! • Pingback: Technology Short Takes #1 - blog.scottlowe.org - The weblog of an IT pro specializing in virtualization, storage, and servers • Pingback: VCAP5-DCA – Objective 3.3 – Implement and Maintain Complex DRS Solutions – Skills and Abilities | VCDX or Bust • Excellent article and great research. In your calculations did you intend to use the “target host load standard deviation” being 0.282, on purpose? From my understanding the definition of the LoadImbalanceMetric is “current host load standard deviation” which from the image is 0? Or I am I missing something? • hi I want to work in DRS Algorithm for my thesis, I need the source code to simulate this algorithm in cloudsim. if you have any idea about this ,please help me • Hello, first of all – applause.. tremendous one indeed. May be I am too late, but I just am curious about – 1 – how to calculate a VM load & 2 – how to calculate a host’s capacity
{"url":"https://vbrownbag.com/2010/06/the-math-behind-the-drs-stars/","timestamp":"2024-11-13T11:46:27Z","content_type":"text/html","content_length":"81289","record_id":"<urn:uuid:413a0f25-d9a7-446e-9da7-d2a74431a662>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00539.warc.gz"}
3.2.2: Measures of Central Location - Three Kinds of Averages Last updated Page ID \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) ( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\id}{\mathrm{id}}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\kernel}{\mathrm{null}\,}\) \( \newcommand{\range}{\mathrm{range}\,}\) \( \newcommand{\RealPart}{\mathrm{Re}}\) \( \newcommand{\ImaginaryPart}{\mathrm{Im}}\) \( \newcommand{\Argument}{\mathrm{Arg}}\) \( \newcommand{\norm}[1]{\| #1 \|}\) \( \newcommand{\inner}[2]{\langle #1, #2 \rangle}\) \( \newcommand{\Span}{\mathrm{span}}\) \( \newcommand{\AA}{\unicode[.8,0]{x212B}}\) \( \newcommand{\vectorA}[1]{\vec{#1}} % arrow\) \( \newcommand{\vectorAt}[1]{\vec{\text{#1}}} % arrow\) \( \newcommand{\vectorB}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vectorC}[1]{\textbf{#1}} \) \( \newcommand{\vectorD}[1]{\overrightarrow{#1}} \) \( \newcommand{\vectorDt}[1]{\overrightarrow{\text{#1}}} \) \( \newcommand{\vectE}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash{\mathbf {#1}}}} \) \( \newcommand{\vecs}[1]{\overset { \scriptstyle \rightharpoonup} {\mathbf{#1}} } \) \( \newcommand{\vecd}[1]{\overset{-\!-\!\rightharpoonup}{\vphantom{a}\smash {#1}}} \) \(\newcommand{\avec}{\mathbf a}\) \(\newcommand{\bvec}{\mathbf b}\) \(\newcommand{\cvec}{\mathbf c}\) \(\newcommand{\dvec}{\mathbf d}\) \(\newcommand{\dtil}{\widetilde{\mathbf d}}\) \(\newcommand{\ evec}{\mathbf e}\) \(\newcommand{\fvec}{\mathbf f}\) \(\newcommand{\nvec}{\mathbf n}\) \(\newcommand{\pvec}{\mathbf p}\) \(\newcommand{\qvec}{\mathbf q}\) \(\newcommand{\svec}{\mathbf s}\) \(\ newcommand{\tvec}{\mathbf t}\) \(\newcommand{\uvec}{\mathbf u}\) \(\newcommand{\vvec}{\mathbf v}\) \(\newcommand{\wvec}{\mathbf w}\) \(\newcommand{\xvec}{\mathbf x}\) \(\newcommand{\yvec}{\mathbf y} \) \(\newcommand{\zvec}{\mathbf z}\) \(\newcommand{\rvec}{\mathbf r}\) \(\newcommand{\mvec}{\mathbf m}\) \(\newcommand{\zerovec}{\mathbf 0}\) \(\newcommand{\onevec}{\mathbf 1}\) \(\newcommand{\real} {\mathbb R}\) \(\newcommand{\twovec}[2]{\left[\begin{array}{r}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\ctwovec}[2]{\left[\begin{array}{c}#1 \\ #2 \end{array}\right]}\) \(\newcommand{\threevec} [3]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\cthreevec}[3]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \end{array}\right]}\) \(\newcommand{\fourvec}[4]{\left[\begin{array} {r}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\cfourvec}[4]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \end{array}\right]}\) \(\newcommand{\fivevec}[5]{\left[\begin{array}{r}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\cfivevec}[5]{\left[\begin{array}{c}#1 \\ #2 \\ #3 \\ #4 \\ #5 \\ \end{array}\right]}\) \(\newcommand{\mattwo}[4]{\left[\begin{array}{rr}#1 \amp #2 \\ #3 \amp #4 \\ \end{array}\right]}\) \(\newcommand{\laspan}[1]{\text{Span}\{#1\}}\) \(\newcommand{\bcal}{\cal B}\) \(\newcommand{\ccal}{\cal C}\) \(\newcommand{\scal}{\cal S}\) \(\newcommand{\ wcal}{\cal W}\) \(\newcommand{\ecal}{\cal E}\) \(\newcommand{\coords}[2]{\left\{#1\right\}_{#2}}\) \(\newcommand{\gray}[1]{\color{gray}{#1}}\) \(\newcommand{\lgray}[1]{\color{lightgray}{#1}}\) \(\ newcommand{\rank}{\operatorname{rank}}\) \(\newcommand{\row}{\text{Row}}\) \(\newcommand{\col}{\text{Col}}\) \(\renewcommand{\row}{\text{Row}}\) \(\newcommand{\nul}{\text{Nul}}\) \(\newcommand{\var} {\text{Var}}\) \(\newcommand{\corr}{\text{corr}}\) \(\newcommand{\len}[1]{\left|#1\right|}\) \(\newcommand{\bbar}{\overline{\bvec}}\) \(\newcommand{\bhat}{\widehat{\bvec}}\) \(\newcommand{\bperp}{\ bvec^\perp}\) \(\newcommand{\xhat}{\widehat{\xvec}}\) \(\newcommand{\vhat}{\widehat{\vvec}}\) \(\newcommand{\uhat}{\widehat{\uvec}}\) \(\newcommand{\what}{\widehat{\wvec}}\) \(\newcommand{\Sighat}{\ widehat{\Sigma}}\) \(\newcommand{\lt}{<}\) \(\newcommand{\gt}{>}\) \(\newcommand{\amp}{&}\) \(\definecolor{fillinmathshade}{gray}{0.9}\) • To learn the concept of the “center” of a data set. • To learn the meaning of each of three measures of the center of a data set—the mean, the median, and the mode—and how to compute each one. This section is be titled “three kinds of averages” because any kind of average could be used to answer the question "where is the center of the data?". We will see that the nature of the data set, as indicated by a relative frequency histogram, will determine what constitutes a good answer. Different shapes of the histogram call for different measures of central location. The Mean The first measure of central location is the usual “average” that is familiar to everyone: add up all the values, then divide by the number of values. Before writing a formula for the mean let us introduce some handy mathematical notation. The Greek letter \(\sum \), pronounced "sigma", is a handy mathematical shorthand that stands for "add up all the values" or "sum". For example \(\sum x\) means "add up all the values of \(x\)", and \(\sum x^2\) means "add up all the values of \(x^2\)". In these expressions \(x\) usually stands for a value of the data, so \(\sum x\) stands for "the sum of all the data values" and \(\sum x^2\) means "the sum of the squares of all the data values". \(\mathbf{n}\) stands for the sample size, the number of data values. An example will help make this clear. Find \(n\), \(\sum x\), \(\sum x^2\) and \(\sum (x - 1)^2\) for the data: \[1,\, 3,\, 4 \nonumber \] \[\begin{array}{rcl} n & = & 3 \quad \mbox{ because there are three data values} \\ \sum x & = & 1 + 3 + 4 = 8 \\ \sum x^2 & = & 1^2 + 3^2 + 4^2 = 1 + 9 + 16 = 26 \\ \sum {(x - 1)}^2 & = & {(1 - 1)}^ 2 + {(3 - 1)}^2 + {(4 - 1)}^2 = 0^2 + 2^2 + 3^2 = 13\end{array} \nonumber \] Using these handy notations it's easy to write a formula defining the mean \(\bar{x}\) of a sample. The sample mean of a set of \(n\) sample data values is the number \(\bar x\) defined by the formula \[\bar x = \dfrac{\sum x}{n} \label{samplemean} \] Find the mean of the following sample data: \(2\), \(-1\), \(0\), \(2\) This is a application of Equation \ref{samplemean}: \[\bar x = \dfrac{\sum x}{n} = \dfrac{2 + (-1) + 0 + 2}{4} = \dfrac{3}{4} = 0.75 \nonumber \] A random sample of ten students is taken from the student body of a college and their GPAs are recorded as follows: \[1.90, 3.00, 2.53, 3.71, 2.12, 1.76, 2.71, 1.39, 4.00, 3.33\nonumber \] Find the mean. This is a application of Equation \ref{samplemean}: \[\begin{array}{rcl}\bar x = \dfrac{\sum x}{n} = \dfrac{1.90 + 3.00 + 2.53 + 3.71 + 2.12 + 1.76 + 2.71 + 1.39 + 4.00 + 3.33}{10} = \dfrac{26.45}{10} = 2.645\end{array} \nonumber \] A random sample of \(19\) women beyond child-bearing age gave the following data, where \(x\) is the number of children and \(f\) is the frequency, or the number of times it occurred in the data set. \[\begin{array}{c|cccc}x & 0 & 1 & 2 & 3 & 4 \\ \hline f & 3 & 6 & 6 & 3 & 1\end{array} \nonumber \] Find the sample mean. In this example the data are presented by means of a data frequency table, introduced in Chapter 1. Each number in the first line of the table is a number that appears in the data set; the number below it is how many times it occurs. Thus the value \(0\) is observed three times, that is, three of the measurements in the data set are \(0\), the value \(1\) is observed six times, and so on. In the context of the problem this means that three women in the sample have had no children, six have had exactly one child, and so on. The explicit list of all the observations in this data set is \[0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4 \nonumber \] The sample size can be read directly from the table, without first listing the entire data set, as the sum of the frequencies: \(n = 3 + 6 + 6 + 3 + 1 = 19\). The sample mean can be computed directly from the table as well: \[\bar x = \dfrac{\sum x}{n} = \dfrac{0 \times 3 + 1 \times 6 + 2 \times 6 + 3 \times 3 + 4 \times 1}{19} = \dfrac{31}{19} = 1.6316 \nonumber \] In the examples above the data sets were described as samples. Therefore the means were sample means \(\bar x\). If the data come from a census, so that there is a measurement for every element of the population, then the mean is calculated by exactly the same process of summing all the measurements and dividing by how many of them there are, but it is now the population mean and is denoted by \(\mu\), the lower case Greek letter mu. The population mean of a set of \(N\) population data is the number \(\mu\) defined by the formula: \[\displaystyle \mu=\frac{\sum x}{N}. \nonumber \] The mean of two numbers is the number that is halfway between them. For example, the average of the numbers \(5\) and \(17\) is \((5 + 17) ∕ 2 = 11\), which is \(6\) units above \(5\) and \(6\) units below \(17\). In this sense the average \(11\) is the “center” of the data set \(\{5,17\}\). For larger data sets the mean can similarly be regarded as the “center” of the data. The Median To see why another concept of average is needed, consider the following situation. Suppose we are interested in the average yearly income of employees at a large corporation. We take a random sample of seven employees, obtaining the sample data (rounded to the nearest hundred dollars, and expressed in thousands of dollars). \[24.8, 22.8, 24.6, 192.5, 25.2, 18.5, 23.7 \nonumber \] The mean (rounded to one decimal place) is \(\bar x = 47.4\), but the statement “the average income of employees at this corporation is \($47,400\)” is surely misleading. It is approximately twice what six of the seven employees in the sample make and is nowhere near what any of them makes. It is easy to see what went wrong: the presence of the one executive in the sample, whose salary is so large compared to everyone else’s, caused the numerator in the formula for the sample mean to be far too large, pulling the mean far to the right of where we think that the average “ought” to be, namely around \($24,000\) or \($25,000\). The number \(192.5\) in our data set is called an outlier, a number that is far removed from most or all of the remaining measurements. Many times an outlier is the result of some sort of error, but not always, as is the case here. We would get a better measure of the “center” of the data if we were to arrange the data in numerical order: \[18.5, 22.8, 23.7, 24.6, 24.8, 25.2, 192.5 \nonumber \] then select the middle number in the list, in this case \(24.6\). The result is called the median of the data set, and has the property that roughly half of the measurements are larger than it is, and roughly half are smaller. In this sense it locates the center of the data. If there are an even number of measurements in the data set, then there will be two middle elements when all are lined up in order, so we take the mean of the middle two as the median. Thus we have the following definition. The sample median\(\tilde{x}\) of a set of sample data for which there are an odd number of measurements is the middle measurement when the data are arranged in numerical order. The sample median of a set of sample data for which there are an even number of measurements, is the mean of the two middle measurements when the data are arranged in numerical order. The population medianis defined in the same way as the sample median except for the entire population. The median is a value that divides the observations in a data set so that \(50\%\) of the data are on its left and the other \(50\%\) on its right. In accordance with Figure \(\PageIndex{7}\), therefore, in the curve that represents the distribution of the data, a vertical line drawn at the median divides the area in two, area \(0.5\) (\(50\%\) of the total area \(1\)) to the left and area \(0.5\) (\(50\%\) of the total area \(1\)) to the right, as shown in Figure \(\PageIndex{1}\). In our income example the median, \($24,600\), clearly gave a much better measure of the middle of the data set than did the mean \($47,400\). This is typical for situations in which the distribution is skewed. (Skewness and symmetry of distributions are discussed at the end of this subsection.) Compute the sample median for the data from Example \(\PageIndex{2}\) The data in numerical order are \(−1, 0, 2, 2\). The two middle measurements are \(0\) and \(2\), so \(\tilde{x}= (0+2)/2 = 1\). Compute the sample median for the data from Example \(\PageIndex{3}\) The data in numerical order are \[1.39, 1.76, 1.90, 2.12, 2.53, 2.71, 3.00, 3.33, 3.71, 4.00 \nonumber \] The number of observations is ten, which is even, so there are two middle measurements, the fifth and sixth, which are \(2.53\) and \(2.71\). Therefore the median of these data is \(\tilde{x} = (2.53+2.71)/2 = 2.62\). Compute the sample median for the data from Example \(\PageIndex{4}\) The data in numerical order are: \[0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 4 \nonumber \] The number of observations is \(19\), which is odd, so there is one middle measurement, the tenth. Since the tenth measurement is \(2\), the median is \(\tilde{x} = 2\). In the last example it is important to note that we could have computed the median directly from the frequency table, without first explicitly listing all the observations in the data set. We already saw in Example \(\PageIndex{4}\) how to find the number of observations directly from the frequencies listed in the table \(n = 3+6+6+3+1 = 19\). Thus the median is the tenth observation. The second line of the table in Example \(\PageIndex{4}\) shows that when the data are listed in order there will be three \(0s\) followed by six \(1s\), so the tenth observation, the median, is \(2\). The relationship between the mean and the median for several common shapes of distributions is shown in Figure \(\PageIndex{2}\). The distributions in panels (a) and (b) are said to be symmetric because of the symmetry that they exhibit. The distributions in the remaining two panels are said to be skewed. In each distribution we have drawn a vertical line that divides the area under the curve in half, which in accordance with Figure \(\PageIndex{1}\) is located at the median. The following facts are true in general: • When the distribution is symmetric, as in panels (a) and (b) of Figure \(\PageIndex{2}\), the mean and the median are equal. • When the distribution is as shown in panel (c), it is said to be skewed right. The mean has been pulled to the right of the median by the long “right tail” of the distribution, the few relatively large data values. • When the distribution is as shown in panel (d), it is said to be skewed left. The mean has been pulled to the left of the median by the long “left tail” of the distribution, the few relatively small data values. The Mode Perhaps you have heard a statement like “The average number of automobiles owned by households in the United States is \(1.37\),” and have been amused at the thought of a fraction of an automobile sitting in a driveway. In such a context the following measure for central location might make more sense. The sample mode of a set of sample data is the most frequently occurring value. On a relative frequency histogram, the highest point of the histogram corresponds to the mode of the data set. Figure \(\PageIndex{3}\) illustrates the mode. Figure \(\PageIndex{3}\): Mode For any data set there is always exactly one mean and exactly one median. This need not be true of the mode; several different values could occur with the highest frequency, as we will see. It could even happen that every value occurs with the same frequency, in which case the concept of the mode does not make much sense. Find the mode of the following data set: \(-1,\; 0,\; 2,\; 0\). The value \(0\) is most frequently observed in the data set, so the mode is \(0\). Compute the sample mode for the data of Example \(\PageIndex{4}\) The two most frequently observed values in the data set are \(1\) and \(2\). Therefore mode is a set of two values: \(\{1,2\}\). The mode is a measure of central location since most real-life data sets have more observations near the center of the data range and fewer observations on the lower and upper ends. The value with the highest frequency is often in the middle of the data range. Key Takeaway • The mean, the median, and the mode each answer the question “Where is the center of the data set?” The nature of the data set, as indicated by a relative frequency histogram, determines which one gives the best answer.
{"url":"https://socialsci.libretexts.org/Workbench/Linguistics_as_LabScience/03%3A_How_do_we_make_sense_of_the_data_we_collect/3.02%3A_Descriptive_Statistics/3.2.02%3A_Measures_of_Central_Location_-_Three_Kinds_of_Averages","timestamp":"2024-11-13T03:14:13Z","content_type":"text/html","content_length":"149488","record_id":"<urn:uuid:61f24b35-5c40-4c7b-8a8f-799769ba1bf4>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00181.warc.gz"}
Linear Constraints Linear Constraints Many methods can make use of linear equality or inequality constraints. As the name implies, linear constraints are constraints that are linear functions of the variables. Constraints that are nonlinear functions of variables are specified using the topic-nonlinear_constraints family of keywords. From a Dakota usage point of view, the most important difference between linear and nonlinear constraints is that the former are specified entirely within the Dakota input file and calculated by Dakota itself, while the latter must be calculated by the user’s simulation and returned as responses to Dakota. The Optimization chapter of the User’s Manual states which methods support linear constraints. Of those methods, a subset strictly obey linear constraints; that is, no candidate points are generated by the optimizer that violate the constraints. These include method-asynch_pattern_search, the optpp_* family of optimizers (with the exception of optpp_fd_newton), and method-npsol_sqp. The other methods seek feasible solutions (i.e. solutions that satisfy the linear constraints), but may violate the constraints as they run. Linear constraints may also be violated, even when using an optimizer that itself strictly respects them, if responses-numerical_gradients are used. In this case, Dakota may request evaluations that lie outside of the feasible region when computing a gradient near the boundary. One final limitation that bears mentioning is that linear constraints are compatible only with continuous variables. No discrete types are permitted when using linear constraints.
{"url":"https://snl-dakota.github.io/docs/6.20.0/users/usingdakota/topics/linear_constraints.html","timestamp":"2024-11-02T08:00:15Z","content_type":"text/html","content_length":"16314","record_id":"<urn:uuid:d00b326a-f27e-4ea5-a2a4-2edb9826f5a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00827.warc.gz"}
Sensitivity Analysis Sensitivity Analysis quantifies the impact of variable changes on a specific outcome within a model. Employed across various disciplines, it aids in risk assessment, model validation, and decision-making, offering metrics to represent sensitivity. Quantitative technique to assess how different values of an independent variable impact a particular dependent variable under a given set of assumptions. Types of Variables • Input Variables: Factors being manipulated in the analysis. • Output Variables: Resultant variables that change based on input variations. Types of Sensitivity Analysis • Local Sensitivity Analysis: Examines change over a small range of parameter values. • Global Sensitivity Analysis: Covers a wide range of parameter values. • Deterministic: Uses specific set values for inputs. • Probabilistic: Incorporates randomness in inputs and/or outputs. • One-at-a-Time (OAT): Changes one variable while keeping others constant. • Monte Carlo Simulation: Uses random sampling to obtain numerical results. • Factorial Analysis: Investigates the effects of multiple variables at once. • Risk Assessment: Used to estimate uncertainties in outcomes. • Model Validation: Helps in refining models by comparing results to real-world data. • Decision Support: Assists in choosing between different strategies or scenarios. Metrics and Indicators • Elasticity: Measure of sensitivity, often expressed as a percentage change. • Tornado Diagrams: Graphical representation ranking variables by their impact. • Sobol Indices: Quantify the contribution of each input to the output variance. • Computational Complexity: Especially relevant for high-dimensional models. • Assumptions: Results as good as the assumptions they are based on. Interdisciplinary Usage • Finance: Option pricing, portfolio optimization. • Engineering: System reliability, material selection. • Medicine: Epidemiological models, treatment effectiveness. Ethical Considerations • Transparency: Clear methodology essential for validity. • Misuse: Risk of cherry-picking data or manipulating variables for desired outcomes.
{"url":"https://thebasics.guide/sensitivity-analysis/","timestamp":"2024-11-07T20:20:17Z","content_type":"text/html","content_length":"84995","record_id":"<urn:uuid:5ef6a0f3-5ceb-4d2b-addc-8a923ed42f3e>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00886.warc.gz"}
5 Binomial Logistic Regression for Binary Outcomes | Handbook of Regression Modeling in People Analytics: With Examples in R, Python and Julia 5 Binomial Logistic Regression for Binary Outcomes In the previous chapter we looked at how to explain outcomes that have continuous scale, such as quantity, money, height or weight. While there are a number of typical outcomes of this type in the people analytics domain, they are not the most common form of outcomes that are typically modeled. Much more common are situations where the outcome of interest takes the form of a limited set of classes. Binary (two class) problems are very common. Hiring, promotion and attrition are often modeled as binary outcomes: for example ‘Promoted’ or ‘Not promoted’. Multi-class outcomes like performance ratings on an ordinal scale, or survey responses on a Likert scale are often converted to binary outcomes by dividing the ratings into two groups, for example ‘High’ and ‘Not High’. In any situation where our outcome is binary, we are effectively working with likelihoods. These are not generally linear in nature, and so we no longer have the comfort of our inputs being directly linearly related to our outcome. Therefore direct linear regression methods such as Ordinary Least Squares regression are not well suited to outcomes of this type. Instead, linear relationships can be inferred on transformations of the outcome variable, which gives us a path to building interpretable models. Hence, binomial logistic regression is said to be in a class of generalized linear models or GLMs. Understanding logistic regression and using it reliably in practice is not straightforward, but it is an invaluable skill to have in the people analytics domain. The mathematics of this chapter is a little more involved but worth the time investment in order to build a competent understanding of how to interpret these types of models. 5.1 When to use it 5.1.1 Origins and intuition of binomial logistic regression The logistic function was first introduced by the Belgian mathematician Pierre François Verhulst in the mid-1800s as a tool for modeling population growth for humans, animals and certain species of plants and fruits. By this time, it was generally accepted that population growth could not continue exponentially forever, and that there were environmental and resource limits which place a maximum limit on the size of a population. The formula for Verhulst’s function was: \[ y = \frac{L}{1 + e^{-k(x - x_0)}} \] where \(e\) is the exponential constant, \(x_0\) is the value of \(x\) at the midpoint, \(L\) is the maximum value of \(y\) (known as the ‘carrying capacity’) and \(k\) is the maximum gradient of the curve. The logistic function, as shown in Figure 5.1, was felt to accurately capture the theorized stages of population growth, with slower growth in the initial stage, moving to exponential growth during the intermediate stage and then to slower growth as the population approaches its carrying capacity. In the early 20th century, starting with applications in economics and in chemistry, the logistic function was adopted in a wide array of fields as a useful tool for modeling phenomena. In statistics, it was observed that the logistic function has a similar S-shape (or sigmoid) to a cumulative normal distribution of probability, as depicted in Figure 5.2^25, where the \(x\) scale represents standard deviations around a mean. As we will learn, the logistic function gives rise to a mathematical model where the coefficients are easily interpreted in terms of likelihood of the outcome. Unsurprisingly, therefore, the logistic model soon became a common approach to modeling probabilistic phenomena. 5.1.2 Use cases for binomial logistic regression Binomial logistic regression can be used when the outcome of interest is binary or dichotomous in nature. That is, it takes one of two values. For example, one or zero, true or false, yes or no. These classes are commonly described as ‘positive’ and ‘negative’ classes. There is an underlying assumption that the cumulative probability of the outcome takes a shape similar to a cumulative normal distribution. Here are some example questions that could be approached using binomial logistic regression: • Given a set of data about sales managers in an organization, including performance against targets, team size, tenure in the organization and other factors, what influence do these factors have on the likelihood of the individual receiving a high performance rating? • Given a set of demographic, income and location data, what influence does each have on the likelihood of an individual voting in an election? • Given a set of statistics about the in-game activity of soccer players, what relationship does each statistic have with the likelihood of a player scoring a goal? 5.1.3 Walkthrough example You are an analyst for a large company consisting of regional sales teams across the country. Twice every year, this company promotes some of its salespeople. Promotion is at the discretion of the head of each regional sales team, taking into consideration financial performance, customer satisfaction ratings, recent performance ratings and personal judgment. You are asked by the management of the company to conduct an analysis to determine how the factors of financial performance, customer ratings and performance ratings influence the likelihood of a given salesperson being promoted. You are provided with a data set containing data for the last three years of salespeople considered for promotion. The salespeople data set contains the following • promoted: A binary value indicating 1 if the individual was promoted and 0 if not • sales: the sales (in thousands of dollars) attributed to the individual in the period of the promotion • customer_rate: the average satisfaction rating from a survey of the individual’s customers during the promotion period • performance: the most recent performance rating prior to promotion, from 1 (lowest) to 4 (highest) Let’s take a quick look at the data. # if needed, download salespeople data url <- "http://peopleanalytics-regression-book.org/data/salespeople.csv" salespeople <- read.csv(url) # look at the first few rows of data ## promoted sales customer_rate performance ## 1 0 594 3.94 2 ## 2 0 446 4.06 3 ## 3 1 674 3.83 4 ## 4 0 525 3.62 2 ## 5 1 657 4.40 3 ## 6 1 918 4.54 2 The data looks as expected. Let’s get a summary of the data. ## promoted sales customer_rate performance ## Min. :0.0000 Min. :151.0 Min. :1.000 Min. :1.0 ## 1st Qu.:0.0000 1st Qu.:389.2 1st Qu.:3.000 1st Qu.:2.0 ## Median :0.0000 Median :475.0 Median :3.620 Median :3.0 ## Mean :0.3219 Mean :527.0 Mean :3.608 Mean :2.5 ## 3rd Qu.:1.0000 3rd Qu.:667.2 3rd Qu.:4.290 3rd Qu.:3.0 ## Max. :1.0000 Max. :945.0 Max. :5.000 Max. :4.0 ## NA's :1 NA's :1 NA's :1 First we see a small number of missing values, and we should remove those observations. We see that about a third of individuals were promoted, that sales ranged from $151k to $945k, that as expected the average satisfaction ratings range from 1 to 5, and finally we see four performance ratings, although the performance categories are numeric when they should be an ordered factor, and promoted is numeric when it should be categorical. Let’s convert these, and then let’s do a pairplot to get a quick view on some possible underlying relationships, as in Figure 5.3. # remove NAs salespeople <- salespeople[complete.cases(salespeople), ] # convert performance to ordered factor and promoted to categorical salespeople$performance <- ordered(salespeople$performance, levels = 1:4) salespeople$promoted <- as.factor(salespeople$promoted) # generate pairplot We can see from this pairplot that there are clearly higher sales for those who are promoted versus those who are not. We also see a moderate relationship between customer rating and sales, which is intuitive (if the customer doesn’t think much of you, sales wouldn’t likely be very high). So we can see that some relationships with our outcome may exist here, but it’s not clear how to tease them out and quantify them relative to each other. Let’s explore how binomial logistic regression can help us do this. 5.2 Modeling probabilistic outcomes using a logistic function Imagine that you have an outcome event \(y\) which either occurs or does not occur. The probability of \(y\) occurring, or \(P(y = 1)\), obviously takes a value between 0 and 1. Now imagine that some input variable \(x\) has a positive effect on the probability of the event occurring. Then you would naturally expect \(P(y = 1)\) to increase as \(x\) increases. In our salespeople data set, let’s plot our promotion outcome against the sales input. This can be seen in Figure 5.4. It’s clear that promotion is more likely with higher sales levels. As we move along the \(x\) axis from left to right and gradually include more and more individuals with higher sales, we know that the probability of promotion is gradually increasing overall. We could try to model this probability using our logistic function, which we learned about in Section 5.1.1. For example, let’s plot the logistic function \[ P(y = 1) = \frac{1}{1 + e^{-k(x - x_{0})}} \] on this data, where we set \(x_0\) to the mean of sales and \(k\) to be some maximum gradient value. In Figure 5.5 we can see these logistic functions for different values of \(k\). All of these seem to reflect the pattern we are observing to some extent, but how do we determine the best-fitting logistic function? 5.2.1 Deriving the concept of log odds Let’s look more carefully at the index of the exponential constant \(e\) in the denominator of our logistic function. Note that, because \(x_{0}\) is a constant, we have: \[ -k(x - x_{0}) = -(-kx_{0} + kx) = -(\beta_{0} + \beta_1x) \] where \(\beta_0 = -kx_0\) and \(\beta_{1} = k\). Therefore, \[ P(y = 1) = \frac{1}{1 + e^{-(\beta_0 + \beta_1x)}} \] This equation makes intuitive sense. As the value of \(x\) increases, the value \(e^{-(\beta_0 + \beta_1x)}\) gets smaller and smaller towards zero, and thus \(P(y = 1)\) approaches its theoretical maximum value of 1. As the value of \(x\) decreases towards zero, we see that the value of \(P(y = 1)\) approaches a minimum value of \(\frac{1}{1 + e^{-\beta_0}}\). Referring back to our salespeople example, we can thus see that \(\beta_0\) helps determine the baseline probability of promotion assuming no sales at all. If \(\beta_0\) has an extremely negative value, this baseline probability will approach its theoretical minimum of zero. Let’s formalize the role of \(\beta_0\) and \(\beta_1\) in the likelihood of a positive outcome. We know that for any binary event \(y\), \(P(y = 0)\) is equal to \(1 - P(y = 1)\), so \[ \begin{aligned} P(y = 0) &= 1 - \frac{1}{1 + e^{-(\beta_0 + \beta_1x)}} \\ &= \frac{1 + e^{-(\beta_0 + \beta_1x)} - 1}{1 + e^{-(\beta_0 + \beta_1x)}} \\ &= \frac{e^{-(\beta_0 + \beta_1x)}}{1 + e^ {-(\beta_0 + \beta_1x)}} \end{aligned} \] Putting these together, we find that \[ \begin{aligned} \frac{P(y = 1)}{P(y = 0)} &= \frac{\frac{1}{1 + e^{-(\beta_0 + \beta_1x)}}}{\frac{e^{-(\beta_0 + \beta_1x)}}{1 + e^{-(\beta_0 + \beta_1x)}}} \\ &= \frac{1}{e^{-(\beta_0 + \ beta_1x)}} \\ &= e^{\beta_0 + \beta_1x} \end{aligned} \] or alternatively, if we apply the natural logarithm to both sides \[ \ln\left(\frac{P(y = 1)}{P(y = 0)}\right) = \beta_0 + \beta_1x \] The right-hand side should look familiar from the previous chapter on linear regression, meaning there is something here we can model linearly. But what is the left-hand side? \(P(y = 1)\) is the probability that the event will occur, while \(P(y = 0)\) is the probability that the event will not occur. You may be familiar from sports like horse racing or other gambling situations that the ratio of these two represents the odds of an event. For example, if a given horse has odds of 1:4, this means that there is a 20% probability they will win and an 80% probability they will not^26. Therefore we can conclude that the natural logarithm of the odds of \(y\)—usually termed the log odds of \(y\)—is linear in \(x\), and therefore we can model the log odds of \(y\) using similar linear regression methods to those studied in Chapter 4^27. 5.2.2 Modeling the log odds and interpreting the coefficients Let’s take our simple case of regressing the promoted outcome against sales. We use a standard binomial GLM function and our standard formula notation which we learned in the previous chapter. # run a binomial model sales_model <- glm(formula = promoted ~ sales, data = salespeople, family = "binomial") # view the coefficients ## (Intercept) sales ## -21.77642020 0.03675848 We can interpret the coefficients as follows: 1. The (Intercept) coefficient is the value of the log odds with zero input value of \(x\)—it is the log odds of promotion if you made no sales. 2. The sales coefficient represents the increase in the log odds of promotion associated with each unit increase in sales. We can convert these coefficients from log odds to odds by applying the exponent function, to return to the identity we had previously \[ \frac{P(y = 1)}{P(y = 0)} = e^{\beta_0 + \beta_1x} = e^{\beta_0}(e^{\beta_1})^x \] From this, we can interpret that \(e^{\beta_0}\) represents the base odds of promotion assuming no sales, and that for every additional unit sales, those base odds are multiplied by \(e^{\beta_1}\). Given this multiplicative effect that \(e^{\beta_1}\) has on the odds, it is known as an odds ratio. # convert log odds to base odds and odds ratio ## (Intercept) sales ## 3.488357e-10 1.037442e+00 So we can see that the base odds of promotion with zero sales is very close to zero, which makes sense. Note that odds can only be precisely zero in a situation where it is impossible to be in the positive class (that is, nobody gets promoted). We can also see that each unit (that is, every $1000) of sales multiplies the base odds by approximately 1.04—in other words, it increases the odds of promotion by 4%. 5.2.3 Odds versus probability It is worth spending a little time understanding the concept of odds and how it relates to probability. It is extremely common for these two terms to be used synonymously, and this can lead to serious misunderstandings when interpreting a logistic regression model. If a certain event has a probability of 0.1, then this means that its odds are 1:9, or 0.111. If the probability is 0.5, then the odds are 1, if the probability is 0.9, then the odds are 9, and if the probability is 0.99, the odds are 99. As we approach a probability of 1, the odds become exponentially large, as illustrated in Figure 5.6: The consequence of this is that a given increase in odds can have a different effect on probability depending on what the original probability was in the first place. If the probability was already quite low, for example 0.1, then a 4% increase in odds translates to odds of 0.116, which translates to a new probability of 0.103586, representing an increase in probability of 3.59%, which is very close to the increase in odds. If the probability was already high, say 0.9, then a 4% increase in odds translates to odds of 9.36, which translates to a new probability of 0.903475 representing an increase in probability of 0.39%, which is very different from the increase in odds. Figure 5.7 shows the impact of a 4% increase in odds according to the original probability of the event. We can see that the closer the base probability is to zero, the similar the effect of the increase on both odds and on probability. However, the higher the probability of the event, the less impact the increase in odds has. In any case, it’s useful to remember the formulas for converting odds to probability and vice versa. If \(O\) represents odds and \(P\) represents probability then we have: \[ \begin{aligned} O &= \frac{P}{1 - P} \\ P &= \frac{O}{1 + O} \end{aligned} \] 5.3 Running a multivariate binomial logistic regression model The derivations in the previous section extend to multivariate data. Let \(y\) be a dichotomous outcome, and let \(x_1, x_2, \dots, x_p\) be our input variables. Then \[ \ln\left(\frac{P(y = 1)}{P(y = 0)}\right) = \beta_0 + \beta_1x_1 + \beta_2x_2 + \dots + \beta_px_p \] for coefficients \(\beta_0, \beta_1,\dots, \beta_p\). As before: • \(\beta_0\) represents the log odds of our outcome when all inputs are zero • Each \(\beta_i\) represents the increase in the log odds of our outcome associated with a unit change in \(x_i\), assuming no change in other inputs. Applying an exponent as before, we have \[ \begin{aligned} \frac{P(y = 1)}{P(y = 0)} &= e^{\beta_0 + \beta_1x_1 + \beta_2x_2 + \dots + \beta_px_p} \\ &= e^{\beta_0}(e^{\beta_1})^{x_1}(e^{\beta_2})^{x_2}\dots(e^{\beta_p})^{x_p} \end {aligned} \] Therefore we can conclude that: • \(e^{\beta_0}\) represents the odds of the outcome when all inputs are zero. • Each \(e^{\beta_i}\) represents the odds ratio associated with a unit increase in \(x_i\) assuming no change in the other inputs (that is, a unit increase in \(x_i\) multiplies the odds of our outcome by \(e^{\beta_i}\)). Let’s put this into practice. 5.3.1 Running and interpreting a multivariate binomial logistic regression model Let’s use a binomial logistic regression model to understand how each of the three inputs in our salespeople data set influence the likelihood of promotion. First, as we learned previously, it is good practice to convert the categorical performance variable to a dummy variable^28. ## promoted sales customer_rate performance_2 performance_3 performance_4 ## 1 0 594 3.94 1 0 0 ## 2 0 446 4.06 0 1 0 ## 3 1 674 3.83 0 0 1 ## 4 0 525 3.62 1 0 0 ## 5 1 657 4.40 0 1 0 ## 6 1 918 4.54 1 0 0 Now we can run our model (using the formula promoted ~ . to mean regressing promoted against everything else) and view our coefficients. # run binomial glm full_model <- glm(formula = "promoted ~ .", family = "binomial", data = salespeople_dummies) # get coefficient summary (coefs <- summary(full_model)$coefficients) ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -19.85893195 3.444078811 -5.7661085 8.112287e-09 ## sales 0.04012425 0.006576429 6.1012212 1.052611e-09 ## customer_rate -1.11213130 0.482681585 -2.3040682 2.121881e-02 ## performance_2 0.26299953 1.021980179 0.2573431 7.969139e-01 ## performance_3 0.68495453 0.982166998 0.6973911 4.855581e-01 ## performance_4 0.73449340 1.071963758 0.6851849 4.932272e-01 Note how only three of the performance dummies have displayed. This is because everyone is in one of the four performance categories, so the model is using performance_1 as the reference case. We can interpret each performance coefficient as the effect of a move to that performance category from performance_1. We can already see from the last column of our coefficient summary—the coefficient p-values—that only sales and customer_rate meet the significance threshold of less than 0.05. Interestingly, it appears from the Estimate column that customer_rate has a negative effect on the log odds of promotion. For convenience, we can add an extra column to our coefficient summary to create the exponents of our estimated coefficients so that we can see the odds ratios. We can also remove columns that are less useful to us if we wish. # create coefficient table with estimates, p-values and odds ratios (full_coefs <- cbind(coefs[ ,c("Estimate", "Pr(>|z|)")], odds_ratio = exp(full_model$coefficients))) ## Estimate Pr(>|z|) odds_ratio ## (Intercept) -19.85893195 8.112287e-09 2.373425e-09 ## sales 0.04012425 1.052611e-09 1.040940e+00 ## customer_rate -1.11213130 2.121881e-02 3.288573e-01 ## performance_2 0.26299953 7.969139e-01 1.300826e+00 ## performance_3 0.68495453 4.855581e-01 1.983682e+00 ## performance_4 0.73449340 4.932272e-01 2.084426e+00 Now we can interpret our model as follows: • All else being equal, sales have a significant positive effect on the likelihood of promotion, with each additional thousand dollars of sales increasing the odds of promotion by 4% • All else being equal, customer ratings have a significant negative effect on the likelihood of promotion, with one full rating higher associated with 67% lower odds of promotion • All else being equal, performance ratings have no significant effect on the likelihood of promotion The second conclusion may appear counter-intuitive, but remember from our pairplot in Section 5.1.3 that there is already moderate correlation between sales and customer ratings, and this model will be controlling for that relationship. Recall that our odds ratios act assuming all other variables are the same. Therefore, if two individuals have the same sales and performance ratings, the one with the lower customer rating is more likely to have been promoted. Similarly, if two individuals have the same level of sales and the same customer rating, their performance rating will have no significant bearing on the likelihood of promotion. Many analysts will feel uncomfortable with stating these conclusions with too much precision, and therefore exponent confidence intervals can be calculated to provide a range for the odds ratios. ## 2.5 % 97.5 % ## (Intercept) 7.879943e-13 7.385387e-07 ## sales 1.029762e+00 1.057214e+00 ## customer_rate 1.141645e-01 7.793018e-01 ## performance_2 1.800447e-01 1.061602e+01 ## performance_3 3.060299e-01 1.547188e+01 ## performance_4 2.614852e-01 1.870827e+01 Therefore we can say that—all else being equal—every additional unit of sales increases the odds of promotion by between 3.0% and 5.7%, and every additional point in customer rating decreases the odds of promotion by between 22% and 89%. Similar to other regression models, the unit scale needs to be taken into consideration during interpretation. On first sight, a decrease of up to 89% in odds seems a lot more important than an increase of up to 5.7% in odds. However, the increase of up to 5.7% is for one unit ($1000) in many thousands of sales units, and over 10 or 100 additional units can have a substantial compound effect on odds of promotion. The decrease of up to 89% is on a full customer rating point on a scale of only 4 full points. 5.3.2 Understanding the fit and goodness-of-fit of a binomial logistic regression model Understanding the fit of a binomial logistic regression model is not straightforward and is sometimes controversial. Before we discuss this, let’s simplify our model based on our learning that the performance data has no significant effect on the outcome. # simplify model simpler_model <- glm(formula = promoted ~ sales + customer_rate, family = "binomial", data = salespeople) As in the previous chapter, again we have the luxury of a three-dimensional model, so we can visualize it in Interactive Figure 5.8, revealing a 3D sigmoid curve which ‘twists’ to reflect the relative influence of sales and customer_rate on the outcome. Now let’s look at the summary of our simpler_model. ## Call: ## glm(formula = promoted ~ sales + customer_rate, family = "binomial", ## data = salespeople) ## Coefficients: ## Estimate Std. Error z value Pr(>|z|) ## (Intercept) -19.517689 3.346762 -5.832 5.48e-09 *** ## sales 0.040389 0.006525 6.190 6.03e-10 *** ## customer_rate -1.122064 0.466958 -2.403 0.0163 * ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Dispersion parameter for binomial family taken to be 1) ## Null deviance: 440.303 on 349 degrees of freedom ## Residual deviance: 65.131 on 347 degrees of freedom ## AIC: 71.131 ## Number of Fisher Scoring iterations: 8 Note that, unlike what we saw for linear regression in Section 4.3.3, our summary does not provide a statistic on overall model fit or goodness-of-fit. The main reason for this is that there is no clear unified point of view in the statistics community on a single appropriate measure for model fit in the case of logistic regression. Nevertheless, a number of options are available to analysts for estimating fit and goodness-of-fit for these models. Pseudo-\(R^2\) measures are attempts to estimate the amount of variance in the outcome that is explained by the fitted model, analogous to the \(R^2\) in linear regression. There are numerous variants of pseudo-\(R^2\) with some of the most common listed here: • McFadden’s \(R^2\) works by comparing the likelihood function of the fitted model with that of a random model and using this to estimate the explained variance in the outcome. • Cox and Snell’s \(R^2\) works by applying a ‘sum of squares’ analogy to the likelihood functions to align more closely with the precise methodology for calculating \(R^2\) in linear regression. However, this usually means that the maximum value is less than 1 and in certain circumstances substantially less than 1, which can be problematic and unintuitive for an \(R^2\). • Nagelkerke’s \(R^2\) resolves the issue with the upper bound for Cox and Snell by dividing Cox and Snell’s \(R^2\) by its upper bound. This restores an intuitive scale with a maximum of 1, but is considered somewhat arbitrary with limited theoretical foundation. • Tjur’s \(R^2\) is a more recent and simpler concept. It is defined as simply the absolute difference between the predicted probabilities of the positive observations and those of the negative Standard modeling functions generally do not offer the calculation of pseudo-\(R^2\) as standard, but numerous methods are available for their calculation. For example: ## McFadden CoxSnell Nagelkerke Tjur ## 0.8520759 0.6576490 0.9187858 0.8784834 We see that the Cox and Snell variant is notably lower than the other estimates, which is consistent with the known issues with its upper bound. However, the other estimates are reasonably aligned and suggest a strong fit. Goodness-of-fit tests for logistic regression models compare the predictions to the observed outcome and test the null hypothesis that they are similar. This means that, unlike in linear regression, a low p-value indicates a poor fit. One commonly used method is the Hosmer-Lemeshow test, which divides the observations into a number of groups (usually 10) according to their fitted probabilities, calculates the proportion of each group that is positive and then compares this to the expected proportions based on the model prediction using a Chi-squared test. However, this method has limitations. It is particularly problematic for situations where there is a low sample size and can return highly varied results based on the number of groups used. It is therefore recommended to use a range of goodness-of-fit tests, and not rely entirely on any one specific approach. In R, the generalhoslem package can perform the popular Hosmer-Lemeshow test of goodness of fit for logistic regression models, and is recommended for exploration. Here is an example using the logitgof() function for assessing goodness-of-fit, which uses 10 groups as default. # run Hosmer-Lemeshow GOF test on observed versus fitted values simpler_model_diagnostics <- generalhoslem::logitgof( # view results ## Hosmer and Lemeshow test (binary model) ## data: salespeople$promoted, fitted(simpler_model) ## X-squared = 3.4458, df = 8, p-value = 0.9034 The non-significant result of the Hosmer-Lemeshow test suggests a good fit for our model. Various measures of predictive accuracy can also be used to assess a binomial logistic regression model in a predictive context, such as precision, recall and ROC-curve analysis. These are particularly suited for implementations of logistic regression models as predictive classifiers in a Machine Learning context, a topic which is outside the scope of this book. However, a recommended source for a deeper treatment of goodness-of-fit tests for logistic regression models is Hosmer, Lemeshow, and Sturdivant (2013). 5.3.3 Model parsimony We saw that in both our linear regression and our logistic regression approach, we decided to drop variables from our model when we determined that they had no significant effect on the outcome. The principle of Occam’s Razor states that—all else being equal—the simplest explanation is the best. In this sense, a model that contains information that does not contribute to its primary inference objective is more complex than it needs to be. Such a model increases the communication burden in explaining its results to others, with no notable analytic benefit in return. Parsimony describes the concept of being careful with resources or with information. A model could be described as more parsimonious if it can achieve the same (or very close to the same) fit with a smaller number of inputs. The Akaike Information Criterion or AIC is a measure of model parsimony that is computed for log-likelihood models like logistic regression models, with a lower AIC indicating a more parsimonious model. AIC is often calculated as standard in summary reports of logistic regression models but can also be calculated independently. Let’s compare the different iterations of our model in this chapter using AIC. # sales only model ## [1] 76.49508 # sales and customer rating model ## [1] 71.13145 # model with all inputs ## [1] 76.37433 We can see that the model which is limited to our two significant inputs—sales and customer rating—is determined to be the most parsimonious model according to the AIC. Note that the AIC should not be used to interpret model quality or confidence—it is possible that the lowest AIC might still be a very poor fit. Model parsimony becomes a substantial concern when there is a large number of input variables. As a general rule, the more input variables there are in a model the greater the chance that the model will be difficult to interpret clearly, and the greater the risk of measurement problems, such as multicollinearity. Analysts who are eager to please their customers, clients, professors or bosses can easily be tempted to think up new potential inputs to their model, often derived mathematically from measures that are already inputs in the model. Before long the model is too complex, and in extreme cases there are more inputs than there are observations. The primary way to manage model complexity is to exercise caution in selecting model inputs. When large numbers of inputs are unavoidable, coefficient regularization methods such as LASSO regression can help with model parsimony. 5.4 Other considerations in binomial logistic regression To predict from new data, just use the predict() function as in the previous chapter. This function recognizes the type of model being used—in this case a generalized linear model—and adjusts its prediction approach accordingly. In particular, if you want to return the probability of the new observations being promoted, you need to use type = "response" as an argument. # define new observations (new_data <- data.frame(sales = c(420, 510, 710), customer_rate = c(3.4, 2.3, 4.2))) ## sales customer_rate ## 1 420 3.4 ## 2 510 2.3 ## 3 710 4.2 # predict probability of promotion predict(simpler_model, new_data, type = "response") ## 1 2 3 ## 0.00171007 0.18238565 0.98840506 Many of the principles covered in the previous chapter on linear regression are equally important in logistic regression. For example, input variables should be managed in a similar way. Collinearity and multicollinearity should be of concern. Interaction of input variables can be modeled. For the most part, analysts should be aware of the fundamental mathematical transformations which take place in a logistic regression model when they consider some of these issues (another reason to ensure that the mathematics covered earlier in this chapter is well understood). For example, while coefficients in linear regression have a direct additive impact on \(y\), in logistic regression they have a direct additive impact on the log odds of \(y\), or alternatively their exponents have a direct multiplicative impact on the odds of \(y\). Therefore coefficient overestimation such as that which can occur when collinearity is not managed can result in inferences that could substantially overstate the importance or effect of input variables. Because of the binary nature of our outcome variable, the residuals of a logistic regression model have limited direct application to the problem being studied. In practical contexts the residuals of logistic regression models are rarely examined, but they can be useful in identifying outliers or particularly influential observations and in assessing goodness-of-fit. When residuals are examined, they need to be transformed in order to be analyzed appropriately. For example, the Pearson residual is a standardized form of residual from logistic regression which can be expected to have a normal distribution over large-enough samples. We can see in Figure 5.9 that this is the case for our simpler_model, but that there are a small number of substantial underestimates in our model. A good source of further learning on diagnostics of logistic regression models is Menard (2010). 5.5 Learning exercises 5.5.1 Discussion questions 1. Draw the shape of a logistic function. Describe the three population growth phases it was originally intended to model. 2. Explain why the logistic function is useful to statisticians in modeling. 3. In the formula for the logistic function in Section 5.1.1, what might be a common value for \(L\) in probabilistic applications? Why? 4. What types of problems are suitable for logistic regression modeling? 5. Can you think of some modeling scenarios in your work or studies that could use a logistic regression approach? 6. Explain the concept of odds. How do odds differ from probability? How do odds change as probability increases? 7. Complete the following: 1. If an event has a 1% probability of occurring, a 10% increase in odds results in an almost __% increase in probability. 2. If an event has a 99% probability of occurring, a 10% increase in odds results in an almost __% increase in probability. 8. Describe how the coefficients of a logistic regression model affect the fitted outcome. If \(\beta\) is a coefficient estimate, how is the odds ratio associated with \(\beta\) calculated and what does it mean? 9. What are some of the options for determining the fit of a binomial logistic regression model? 10. Describe the concept of model parsimony. What measure is commonly used to determine the most parsimonious logistic regression model? 5.5.2 Data exercises A nature preservation charity has asked you to analyze some data to help them understand the features of those members of the public who donated in a given month. Load the charity_donation data set via the peopleanalyticsdata package or download it from the internet^29. It contains the following data: • n_donations: The total number of times the individual donated previous to the month being studied. • total_donations: The total amount of money donated by the individual previous to the month being studied • time_donating: The number of months between the first donation and the month being studied • recent_donation: Whether or not the individual donated in the month being studied • last_donation: The number of months between the most recent previous donation and the month being studied • gender: The gender of the individual • reside: Whether the person resides in an Urban or Rural Domestic location or Overseas • age: The age of the individual 1. View the data and obtain statistical summaries. Ensure data types are appropriate and there is no missing data. Determine the outcome and input variables. 2. Using a pairplot or by plotting or correlating selected fields, try to hypothesize which variables may be significant in explaining who recently donated. 3. Run a binomial logistic regression model using all input fields. Determine which input variables have a significant effect on the outcome and the direction of that effect. 4. Calculate the odds ratios for the significant variables and explain their impact on the outcome. 5. Check for collinearity or multicollinearity in your model using methods from previous chapters. 6. Experiment with model parsimony by reducing input variables that do not have a significant impact on the outcome. Decide on the most parsimonious model. 7. Calculate a variety of Pseudo-\(R^2\) variants for your model. How would you explain these to someone with no statistics expertise? 8. Report the conclusions of your modeling exercise to the charity by writing a simple explanation that assumes no knowledge of statistics. 9. Extension: Using a variety of methods of your choice, test the hypothesis that your model fits the data. How conclusive are your tests?
{"url":"https://peopleanalytics-regression-book.org/bin-log-reg.html","timestamp":"2024-11-07T09:05:06Z","content_type":"text/html","content_length":"104196","record_id":"<urn:uuid:04a9a157-547e-4686-afb8-a14bad9fdf9d>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00142.warc.gz"}
Forecasting Methods This section explains the forecasting methods used by PROC FORECAST. In the STEPAR method, PROC FORECAST first fits a time trend model to the series and takes the difference between each value and the estimated trend. (This process is called detrending.) Then, the remaining variation is fit by using an autoregressive model. The STEPAR method fits the autoregressive process to the residuals of the trend model by using a backwards-stepping method to select parameters. Because the trend and autoregressive parameters are fit in sequence rather than simultaneously, the parameter estimates are not optimal in a statistical sense. However, the estimates are usually close to optimal, and the method is computationally The STEPAR method consists of the following computational steps: 1. Fit the trend model as specified by the TREND= option by using ordinary least-squares regression. This step detrends the data. The default trend model for the STEPAR method is TREND=2, a linear trend model. 2. Take the residuals from step 1 and compute the autocovariances to the number of lags specified by the NLAGS= option. 3. Regress the current values against the lags, using the autocovariances from step 2 in a Yule-Walker framework. Do not bring in any autoregressive parameter that is not significant at the level specified by the SLENTRY= option. (The default is SLENTRY=0.20.) Do not bring in any autoregressive parameter that results in a nonpositive-definite Toeplitz matrix. 4. Find the autoregressive parameter that is least significant. If the significance level is greater than the SLSTAY= value, remove the parameter from the model. (The default is SLSTAY=0.05.) Continue this process until only significant autoregressive parameters remain. If the OUTEST= option is specified, write the estimates to the OUTEST= data set. 5. Generate the forecasts by using the estimated model and output to the OUT= data set. Form the confidence limits by combining the trend variances with the autoregressive variances. Missing values are tolerated in the series; the autocorrelations are estimated from the available data and tapered if necessary. This method requires at least three passes through the data: two passes to fit the model and a third pass to initialize the autoregressive process and write to the output data set. Default Value of the NLAGS= Option If the NLAGS= option is not specified, the default value of the NLAGS= option is chosen based on the data frequency specified by the INTERVAL= option and on the number of observations in the input data set, if this can be determined in advance. (PROC FORECAST cannot determine the number of input observations before reading the data when a BY statement or a WHERE statement is used or if the data are from a tape format SAS data set or external database. The NLAGS= value must be fixed before the data are processed.) If the INTERVAL= option is specified, the default NLAGS= value includes lags for up to three years plus one, subject to the maximum of 13 lags or one-third of the number of observations in your data set, whichever is less. If the number of observations in the input data set cannot be determined, the maximum NLAGS= default value is 13. If the INTERVAL= option is not specified, the default is NLAGS=13 or one-third the number of input observations, whichever is less. If the Toeplitz matrix formed by the autocovariance matrix at a given step is not positive definite, the maximal number of autoregressive lags is reduced. For example, for INTERVAL=QTR, the default is NLAGS=13 (that is, ) provided that there are at least 39 observations. The NLAGS= option default is always at least 3. Exponential smoothing is used when the METHOD=EXPO option is specified. The term exponential smoothing is derived from the computational scheme developed by Brown and others (Brown and Meyers 1961; Brown 1962). Estimates are computed with updating formulas that are developed across time series in a manner similar to smoothing. The EXPO method fits a trend model such that the most recent data are weighted more heavily than data in the early part of the series. The weight of an observation is a geometric (exponential) function of the number of periods that the observation extends into the past relative to the current period. The weight function is where is the observation number of the past observation, t is the current observation number, and is the weighting constant specified with the WEIGHT= option. You specify the model with the TREND= option as follows: • TREND=1 specifies single exponential smoothing (a constant model) • TREND=2 specifies double exponential smoothing (a linear trend model) • TREND=3 specifies triple exponential smoothing (a quadratic trend model) The single exponential smoothing operation is expressed by the formula where is the smoothed value at the current period, t is the time index of the current period, and is the current actual value of the series. The smoothed value is the forecast of and is calculated as the smoothing constant times the value of the series, , in the current period plus () times the previous smoothed value , which is the forecast of computed at time . Double and triple exponential smoothing are derived by applying exponential smoothing to the smoothed series, obtaining smoothed values as follows: Missing values after the start of the series are replaced with one-step-ahead predicted values, and the predicted value is then applied to the smoothing equations. The polynomial time trend parameters CONSTANT, LINEAR, and QUAD in the OUTEST= data set are computed from , , and , the final smoothed values at observation T, the last observation used to fit the model. In the OUTEST= data set, the values of , , and are identified by _TYPE_=S1, _TYPE_=S2, and _TYPE_=S3, respectively. Exponential smoothing forecasts are forecasts for an integrated moving-average process; however, the weighting parameter is specified by the user rather than estimated from the data. Experience has shown that good values for the WEIGHT= option are between 0.05 and 0.3. As a general rule, smaller smoothing weights are appropriate for series with a slowly changing trend, while larger weights are appropriate for volatile series with a rapidly changing trend. If unspecified, the weight defaults to , where trend is the value of the TREND= option. This produces defaults of WEIGHT=0.2 for TREND= 1, WEIGHT=0.10557 for TREND=2, and WEIGHT=0.07168 for TREND=3. The ESM procedure can be used to forecast time series by using exponential smoothing with smoothing weights that are optimized automatically. See Chapter 14: The ESM Procedure. The Time Series Forecasting System provides for exponential smoothing models and enables you to either specify or optimize the smoothing weights. See Chapter 45: Getting Started with Time Series Forecasting, for details. The confidence limits for exponential smoothing forecasts are calculated as they would be for an exponentially weighted time trend regression, using the simplifying assumption of an infinite number of observations. The variance estimate is computed by using the mean square of the unweighted one-step-ahead forecast residuals. More detailed descriptions of the forecast computations can be found in Montgomery and Johnson (1976) and Brown (1962). The WINTERS method uses updating equations similar to exponential smoothing to fit parameters for the model where a and b are the trend parameters and the function s (t ) selects the seasonal parameter for the season that corresponds to time t. The WINTERS method assumes that the series values are positive. If negative or zero values are found in the series, a warning is printed and the values are treated as missing. The preceding standard WINTERS model uses a linear trend. However, PROC FORECAST can also fit a version of the WINTERS method that uses a quadratic trend. When TREND=3 is specified for METHOD= WINTERS, PROC FORECAST fits the following model: The quadratic trend version of the Winters method is often unstable, and its use is not recommended. When TREND=1 is specified, the following constant trend version is fit: The default for the WINTERS method is TREND=2, which produces the standard linear trend model. The notation s (t) represents the selection of the seasonal factor used for different time periods. For example, if INTERVAL=DAY and SEASONS=MONTH, there are 12 seasonal factors, one for each month in the year, and the time index t is measured in days. For any observation, t is determined by the ID variable and s (t) selects the seasonal factor for the month that t falls in. For example, if t is 9 February 1993 then s (t) is the seasonal parameter for February. When there are multiple seasons specified, s (t) is the product of the parameters for the seasons. For example, if SEASONS=(MONTH DAY), then s (t) is the product of the seasonal parameter for the month that corresponds to the period t and the seasonal parameter for the day of the week that corresponds to period t. When the SEASONS= option is not specified, the seasonal factors s (t) are not included in the model. See the section Specifying Seasonality for more information about specifying multiple seasonal factors. This section shows the updating equations for the Winters method. In the following formula, is the actual value of the series at time t; is the smoothed value of the series at time t; is the smoothed trend at time t; is the smoothed quadratic trend at time t; selects the old value of the seasonal factor that corresponds to time t before the seasonal factors are updated. The estimates of the constant, linear, and quadratic trend parameters are updated by using the following equations: For TREND=3, For TREND=2, For TREND=1, In this updating system, the trend polynomial is always centered at the current period so that the intercept parameter of the trend polynomial for predicted values at times after t is always the updated intercept parameter . The predicted value for periods ahead is The seasonal parameters are updated when the season changes in the data, using the mean of the ratios of the actual to the predicted values for the season. For example, if SEASONS=MONTH and INTERVAL= DAY, then when the observation for the first of February is encountered, the seasonal parameter for January is updated by using the formula where t is February 1 of the current year, is the seasonal parameter for January updated with the data available at time t, and is the seasonal parameter for January of the previous year. When multiple seasons are used, is a product of seasonal factors. For example, if SEASONS=(MONTH DAY) then is the product of the seasonal factors for the month and for the day of the week: . The factor is updated at the start of each month by using a modification of the preceding formula that adjusts for the presence of the other seasonal by dividing the summands by the that corresponds to day of the week effect . Similarly, the factor is updated by using the following formula: where is the seasonal factor for the same day of the previous week. Missing values after the start of the series are replaced with one-step-ahead predicted values, and the predicted value is substituted for x and applied to the updating equations. The parameters are normalized so that the seasonal factors for each cycle have a mean of 1.0. This normalization is performed after each complete cycle and at the end of the data. Thus, if INTERVAL= MONTH and SEASONS=MONTH are specified and a series begins with a July value, then the seasonal factors for the series are normalized at each observation for July and at the last observation in the data set. The normalization is performed by dividing each of the seasonal parameters, and multiplying each of the trend parameters, by the mean of the unnormalized seasonal parameters. The weight for updating the seasonal factors, , is given by the third value specified in the WEIGHT= option. If the WEIGHT= option is not used, then defaults to 0.25; if the WEIGHT= option is used but does not specify a third value, then defaults to . The weight for updating the linear and quadratic trend parameters, , is given by the second value specified in the WEIGHT= option; if the WEIGHT = option does not specify a second value, then defaults to . The updating weight for the constant parameter, , is given by the first value specified in the WEIGHT= option. As a general rule, smaller smoothing weights are appropriate for series with a slowly changing trend, while larger weights are appropriate for volatile series with a rapidly changing trend. If the WEIGHT= option is not used, then defaults to (), where trend is the value of the TREND= option. This produces defaults of WEIGHT=0.2 for TREND=1, WEIGHT=0.10557 for TREND=2, and WEIGHT=0.07168 for TREND=3. The ESM procedure and the Time Series Forecasting System provide for generating forecast models that use Winters Method and enable you to specify or optimize the weights. (See Chapter 14: The ESM Procedure, and Chapter 45: Getting Started with Time Series Forecasting, for details.) A method for calculating exact forecast confidence limits for the WINTERS method is not available. Therefore, the approach taken in PROC FORECAST is to assume that the true seasonal factors have small variability about a set of fixed seasonal factors and that the remaining variation of the series is small relative to the mean level of the series. The equations are written where is the mean level and I(t ) are the fixed seasonal factors. Assuming that and are small, the forecast equations can be linearized and only first-order terms in and kept. In terms of forecasts for , this linearized system is equivalent to a seasonal ARIMA model. Confidence limits for are based on this ARIMA model and converted into confidence limits for using as estimates of I(t ). The exponential smoothing confidence limits are based on an approximation to a weighted regression model, whereas the preceding Winters confidence limits are based on an approximation to an ARIMA model. You can use METHOD=WINTERS without the SEASONS= option to do exponential smoothing and get confidence limits for the EXPO forecasts based on the ARIMA model approximation. These are generally more pessimistic than the weighted regression confidence limits produced by METHOD=EXPO. The ADDWINTERS method is like the WINTERS method except that the seasonal parameters are added to the trend instead of multiplied with the trend. The default TREND=2 model is as follows: The WINTERS method for updating equation and confidence limits calculations described in the preceding section are modified accordingly for the additive version. Holt Two-Parameter Exponential Smoothing If the seasonal factors are omitted (that is, if the SEASONS= option is not specified), the WINTERS (and ADDWINTERS) method reduces to the Holt two-parameter version of exponential smoothing. Thus, the WINTERS method is often referred to as the Holt-Winters method. Double exponential smoothing is a special case of the Holt two-parameter smoother. The double exponential smoothing results can be duplicated with METHOD=WINTERS by omitting the SEASONS= option and appropriately setting the WEIGHT= option. Letting and , the following statements produce the same forecasts: proc forecast method=expo trend=2 weight= …; proc forecast method=winters trend=2 weight=( , ) …; Although the forecasts are the same, the confidence limits are computed differently. Choice of Weights for EXPO, WINTERS, and ADDWINTERS Methods For the EXPO, WINTERS, and ADDWINTERS methods, properly chosen smoothing weights are of critical importance in generating reasonable results. There are several factors to consider in choosing the The noisier the data, the lower should be the weight given to the most recent observation. Another factor to consider is how quickly the mean of the time series is changing. If the mean of the series is changing rapidly, relatively more weight should be given to the most recent observation. The more stable the series over time, the lower should be the weight given to the most recent observation. Note that the smoothing weights should be set separately for each series; weights that produce good results for one series might be poor for another series. Since PROC FORECAST does not have a feature to use different weights for different series, when forecasting multiple series with the EXPO, WINTERS, or ADDWINTERS method it might be desirable to use different PROC FORECAST steps with different WEIGHT= options. For the Winters method, many combinations of weight values might produce unstable noninvertible models, even though all three weights are between 0 and 1. When the model is noninvertible, the forecasts depend strongly on values in the distant past, and predictions are determined largely by the starting values. Unstable models usually produce poor forecasts. The Winters model can be unstable even if the weights are optimally chosen to minimize the in-sample MSE. See Archibald (1990) for a detailed discussion of the unstable region of the parameter space of the Winters model. Optimal weights and forecasts for exponential smoothing models can be computed by using the ESM and ARIMA procedures and by the Time Series Forecasting System. Starting Values for EXPO, WINTERS, and ADDWINTERS Methods The exponential smoothing method requires starting values for the smoothed values , , and . The Winters and additive Winters methods require starting values for the trend coefficients and seasonal By default, starting values for the trend parameters are computed by a time trend regression over the first few observations for the series. Alternatively, you can specify the starting value for the trend parameters with the ASTART=, BSTART=, and CSTART= options. The number of observations used in the time trend regression for starting values depends on the NSTART= option. For METHOD=EXPO, NSTART= beginning values of the series are used, and the coefficients of the time trend regression are then used to form the initial smoothed values , , and . For METHOD=WINTERS or METHOD=ADDWINTERS, n complete seasonal cycles are used to compute starting values for the trend parameter, where n is the value of the NSTART= option. For example, for monthly data the seasonal cycle is one year, so NSTART=2 specifies that the first 24 observations at the beginning of each series are used for the time trend regression used to calculate starting values. The starting values for the seasonal factors for the WINTERS and ADDWINTERS methods are computed from seasonal averages over the first few complete seasonal cycles at the beginning of the series. The number of seasonal cycles averaged to compute starting seasonal factors is controlled by the NSSTART= option. For example, for monthly data with SEASONS=12 or SEASONS=MONTH, the first n January values are averaged to get the starting value for the January seasonal parameter, where n is the value of the NSSTART= option. The seasonal parameters are set to the ratio (for WINTERS) or difference (for ADDWINTERS) of the mean for the season to the overall mean for the observations used to compute seasonal starting values. For example, if METHOD=WINTERS, INTERVAL=DAY, SEASON=(MONTH DAY), and NSTART=2 (the default), the initial seasonal parameter for January is the ratio of the mean value over days in the first two Januarys after the start of the series (that is, after the first nonmissing value) to the mean value for all days read for initialization of the seasonal factors. Likewise, the initial factor for Sundays is the ratio of the mean value for Sundays to the mean of all days read. For the ASTART=, BSTART=, and CSTART= options, the values specified are associated with the variables in the VAR statement in the order in which the variables are listed (the first value with the first variable, the second value with the second variable, and so on). If there are fewer values than variables, default starting values are used for the later variables. If there are more values than variables, the extra values are ignored.
{"url":"http://support.sas.com/documentation/cdl/en/etsug/65545/HTML/default/etsug_forecast_details03.htm","timestamp":"2024-11-10T05:42:20Z","content_type":"application/xhtml+xml","content_length":"68724","record_id":"<urn:uuid:4ad94a07-260b-4ef2-a2af-ab88ba7e0bd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00192.warc.gz"}
Hopf-like bifurcation in mixed mode oscillation in a fractional-order FitzHugh-Nagumo model Since two decades, Fractional Differential Equations (FDE) are more and more used to model a large variety of phenomena in the nature. Their ability to model better than Ordinary Differential Equations (ODE) is due in particular to the ‘memory’ of the initial conditions. The counterpart of this ‘memory’ is that FDE cannot exhibit exact periodic solutions and hence Hopf bifurcation. However, in some situations numerical simulations show similarities with such bifurcation. Therefore we introduce the concept of Hopf-like bifurcation to study the emergence of mixed-mode oscillations and canard explosion, in a planar fractional order FitzHugh-Nagumo model (FFHN). In this aim an algorithm, called Global-Local Canard Explosion Search Algorithm (GLCESA) is developed and used to investigate the existence of canard oscillations in the neighborhoods of Hopf-like bifurcation points. The appearance of various patterns of solutions is revealed, with an increasing number of small-amplitude oscillations when two key parameters of the FFHN model are varied. The numbers of such oscillations versus the two parameters, respectively, are perfectly fitted using exponential functions. Finally, it is conjectured that chaos could occur in a 2-dimensional fractional-order autonomous dynamical system, with a fractional order close to one.
{"url":"http://www.atomosyd.net/index.php/%29http:/www.davidsauzay.com/www.davidsauzay.com/%3Ca%20href=%22https:/www.researchgate.net/profile/Christophe_Letellier%22%3EChristophe%20Letellier%20on%20ResearchGate%3C/spip.php?article247","timestamp":"2024-11-06T08:55:31Z","content_type":"text/html","content_length":"12613","record_id":"<urn:uuid:e0d625ee-e472-4b67-b842-a14435780402>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00384.warc.gz"}
Three real numbers $x$, $y$, and $z$ are chosen independently on the interval from $[0,n]$. Write some code that will find the smallest $n$ that will make the probability of 0.5 that: no two of the three numbers are within 1 unit of one another. Use to find a random number from 0 to $n$. (Ans: 10) (Ref: ACM 2012 10A, #25.)
{"url":"https://www.codebymath.com/index.php/welcome/challenge/three-random-one-unit","timestamp":"2024-11-09T10:45:29Z","content_type":"text/html","content_length":"10983","record_id":"<urn:uuid:e01e246f-e6b4-4f81-b766-719f3fbc0000>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00281.warc.gz"}
Is It Worth Buying Points For Mortgage It's easy to get fooled by super low interest rates only to discover that you're paying more for the low rate by purchasing points. This is a sales tactic used. Each point is equal to 1 percent of the loan amount, for instance 2 points on a $, loan would cost $ You can buy up to 5 points. Interest Rate with. Paying points on a mortgage means that if you plan on living in your new home for a long time, you will most likely save money over the life of the loan. · It. Q: Is it worth it to buy points on a mortgage? A: Maybe — it just depends on your situation. Do you have available cash up front to purchase mortgage points? Discount points are a form of prepaid interest that you can buy to lower your interest rate. · Discount points are a one-time fee, paid up front when a mortgage. Technically, you can buy as many as you want. However, the more you buy the more they cost and the less the interest rate drops. For example, one point might. But each "point" will cost you 1% of your mortgage balance. The mortgage points calculator helps you determine if you should pay for points, or use the money to. Mortgage points, also known as discount points, are fees a homebuyer pays directly to the lender (usually a bank) in exchange for a reduced interest rate. I wouldn't buy points now, but I wouldn't count on mortgage rates going down either. Points usually pay off something like 5 to 10 years, if you. Use the mortgage points calculator to see how buying points can reduce your interest rate, which in turn reduces your monthly payment. The idea behind mortgage points is that you pay a one-time and usually optional fee to reduce the rate. That way, you pay less in the long run. The amount you can save on your interest rate by paying for points will vary by lender. However, for each loan point you purchase, you can typically reduce the. When you buy points (also known as discount points), you're paying your way to a lower mortgage interest rate. Think of it as pre-paid interest. Buying mortgage points when you close can reduce the interest rate, which in turn reduces the monthly payment. But each point will cost 1 percent of your. Buying mortgage points can help you earn a lower interest rate on your mortgage. Having a lower rate, in turn, helps you save money over the life of the loan. Typically, 1 point will cost about 1% of the loan amount—so buying a point on a $, loan would cost about $3,—but the exact financial impact of points. You can think of points as a way of paying some interest up-front in exchange for a lower interest rate over the life of your loan. The longer you plan to own. Mortgage lenders benefit from discount points by receiving cash up front rather than waiting, thus making their loans more profitable. Cash payments also. The idea behind mortgage points is that you pay a one-time and usually optional fee to reduce the rate. That way, you pay less in the long run. If you intend to live in the house until the mortgage is paid off, then buying points might be a good idea since it will save you interest over. Mortgage points are a way to save on your monthly payments by putting up more money than required towards interest during closing. You pay these fees directly. Key facts about mortgage points · The lender and marketplace determine the interest rate reduction you receive for purchasing points so it's never fixed. When you buy points (also known as discount points), you're paying your way to a lower mortgage interest rate. Think of it as pre-paid interest. Buying mortgage points—also called “discount points”—is a simple way to potentially save thousands over the life of your loan. Here's why it could make sense to. The amount you can save on your interest rate by paying for points will vary by lender. However, for each loan point you purchase, you can typically reduce the. Mortgage lenders benefit from discount points by receiving cash up front rather than waiting, thus making their loans more profitable. Cash payments also. But each "point" will cost you 1% of your mortgage balance. The mortgage points calculator helps you determine if you should pay for points, or use the money to. If you are buying points to refinance your home, the IRS considers this prepaid interest. That means you will have to deduct them over the life of the loan. % and%. It's also worth keeping in mind that mortgages with points carry a lower interest rate but have higher closing costs since points are paid at. Should you buy points? Use the mortgage points calculator to see how buying points can reduce your interest rate, which in turn reduces your monthly payment. Buying mortgage points—also called “discount points”—is a simple way to potentially save thousands over the life of your loan. Here's why it could make sense to. While it is a great way to knock down the interest rate, it does require more upfront money at closing, which is already a sore spot for many home buyers. Buying mortgage points can help you earn a lower interest rate on your mortgage. Having a lower rate, in turn, helps you save money over the life of the loan. Should you buy points? Use the mortgage points calculator to see how buying points can reduce your interest rate, which in turn reduces your monthly payment. For some people, buying mortgage points can be a great way to reduce long-term interest costs. However, it's essential to consider your break-even point to see. Mortgage points can lower your interest rate, reducing monthly payments and total cost of the loan. · To negotiate lower points and closing costs, apply for. A mortgage point equals 1 percent of your total loan amount — for example, on a $, loan, one point would be $1, Mortgage points are essentially a. Each mortgage discount point usually costs one percent of your total loan amount, and lowers the interest rate on your monthly payments by percent. For. If you are buying points to refinance your home, the IRS considers this prepaid interest. That means you will have to deduct them over the life of the loan. Mortgage points — also known as discount points — are upfront fees you pay to your lender to “buy” a lower interest rate. Even if buying points makes sense over the life of the loan, it requires extra cash up front. Do you have that extra cash available? Do you expect to sell. Each point is equal to 1 percent of the loan amount, for instance 2 points on a $, loan would cost $ You can buy up to 5 points. Interest Rate with. Buying points is a great way to get a better interest rate and more manageable monthly payments, but if you're currently in the home purchase process and. Your loan officer can help you determine whether or not points are worth purchasing. To do this, you'll need to have a good idea of how long you plan on staying. Buying points is a great way to get a better interest rate and more manageable monthly payments, but if you're currently in the home purchase process and. In a nutshell, mortgage points are something you pay for upfront in order to save yourself money down the line. It works because your overall interest rate will. But each "point" will cost you 1% of your mortgage balance. The mortgage points calculator helps you determine if you should pay for points, or use the money to. It is a cost due at the closing of the loan used to buy a lower interest rate. The “point” itself represents a percentage of the loan amount. Calculating that. You agree to pay more in points for a lower interest rate. A point is 1% of the loan amount. By “buying” points, you agree to pay the lender the. Is It Worth It? While some mortgage experts say that the cash you would spend on discount points should be used instead to invest in something with a better. % and%. It's also worth keeping in mind that mortgages with points carry a lower interest rate but have higher closing costs since points are paid at. Technically, you can buy as many as you want. However, the more you buy the more they cost and the less the interest rate drops. For example, one point might. There are two kinds of mortgage points: origination points and discount points. · Buyers pay origination points to the lender as a type of fee for processing the. With a larger down payment, the income is the reduction in monthly payment that results from the smaller loan and mortgage insurance premium. With points, the. A mortgage point equals 1 percent of your total loan amount — for example, on a $, loan, one point would be $1, Mortgage points are essentially a. Typically, 1 point will cost about 1% of the loan amount—so buying a point on a $, loan would cost about $3,—but the exact financial impact of points. By purchasing that discount point, you would typically reduce your loan interest rate by %. So, if you were offered an interest rate of % on a year. Paying points on a mortgage means that if you plan on living in your new home for a long time, you will most likely save money over the life of the loan. · It. Mortgage points — also known as discount points — are upfront fees you pay to your lender to “buy” a lower interest rate. Mortgage points, also known as discount points, are fees a homebuyer pays directly to the lender (usually a bank) in exchange for a reduced interest rate. Real Property Definition | Using Crypto As A Savings Account
{"url":"https://infoteo.ru/recently-added/is-it-worth-buying-points-for-mortgage.php","timestamp":"2024-11-05T19:53:36Z","content_type":"text/html","content_length":"16238","record_id":"<urn:uuid:40f6d2c0-a65b-4ebf-85f1-92ec87b8afa5>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00387.warc.gz"}
Baccarat Policies and Strategy Sep 08 2019 Punto Banco Rules Baccarat is gambled on with 8 decks in a dealer’s shoe. Cards below ten are worth their printed number and with 10, J, Q, K are zero, and A is one. Wagers are made on the ‘banker’, the ‘player’, or for a tie (these aren’t actual people; they just represent the 2 hands to be dealt). Two cards are given to both the ‘bank’ and ‘gambler’. The value for each hand is the sum of the two cards, although the beginning number is dropped. For instance, a hand of five and six has a score of 1 (five plus six equals 11; drop the first ‘one’). A third card could be given out using the following rules: - If the gambler or banker achieves a value of 8 or nine, the two players hold. - If the gambler has five or less, she takes a card. Players holds otherwise. - If the gambler stands, the banker takes a card on a total lower than five. If the gambler takes a card, a table is employed to see if the banker stands or takes a card. Baccarat Banque Odds The greater of the 2 totals wins. Winning wagers on the bank payout 19:20 (equal cash less a 5% rake. Commission are recorded and cleared out once you leave the game so be sure to still have funds left just before you depart). Winning wagers on the player pay one to one. Winning bets for a tie frequently pays eight to one but on occasion 9 to 1. (This is a awful wager as ties occur lower than one in every 10 rounds. Be wary of wagering on a tie. Although odds are substantially better for 9:1 vs. eight to one) Wagered on correctly punto banco gives pretty good odds, aside from the tie bet of course. Baccarat Method As with all games baccarat banque has a handful of general false impressions. One of which is the same as a absurdity in roulette. The past is not a prophecy of future actions. Recording previous results at a table is a poor use of paper and an insult to the tree that was cut down for our paper needs. The most common and definitely the most accomplished scheme is the 1-3-2-6 tactic. This method is deployed to build up winnings and minimizing losses. Begin by betting 1 unit. If you win, add another to the two on the table for a total of three chips on the second bet. Should you succeed you will have 6 on the table, take away 4 so you have 2 on the 3rd bet. If you succeed on the 3rd bet, put down two on the four on the game table for a sum total of six on the 4th bet. If you lose on the 1st round, you take a hit of 1. A win on the 1st bet followed by a hit on the 2nd creates a loss of 2. Success on the initial 2 with a hit on the 3rd provides you with a profit of 2. And success on the first three with a loss on the 4th means you balance the books. Winning at all four wagers leaves you with 12, a take of 10. This means you will be able to give up the 2nd bet 5 instances for every favorable run of 4 wagers and in the end, experience no loss. You must be logged in to post a comment.
{"url":"http://fastplayingaction.com/2019/09/08/baccarat-policies-and-strategy/","timestamp":"2024-11-07T21:38:27Z","content_type":"application/xhtml+xml","content_length":"27484","record_id":"<urn:uuid:ad8f12d9-0ffe-445a-8517-cdbd56d6a48b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00550.warc.gz"}
Public-key encryption in the bounded-retrieval model We construct the first public-key encryption scheme in the Bounded-Retrieval Model (BRM), providing security against various forms of adversarial "key leakage" attacks. In this model, the adversary is allowed to learn arbitrary information about the decryption key, subject only to the constraint that the overall amount of "leakage" is bounded by at most ℓ bits. The goal of the BRM is to design cryptographic schemes that can flexibly tolerate arbitrarily leakage bounds ℓ (few bits or many Gigabytes), by only increasing the size of secret key proportionally, but keeping all the other parameters - including the size of the public key, ciphertext, encryption/decryption time, and the number of secret-key bits accessed during decryption - small and independent of ℓ. As our main technical tool, we introduce the concept of an Identity-Based Hash Proof System (IB-HPS), which generalizes the notion of hash proof systems of Cramer and Shoup [CS02] to the identity-based setting. We give three different constructions of this primitive based on: (1) bilinear groups, (2) lattices, and (3) quadratic residuosity. As a result of independent interest, we show that an IB-HPS almost immediately yields an Identity-Based Encryption (IBE) scheme which is secure against (small) partial leakage of the target identity's decryption key. As our main result, we use IB-HPS to construct public-key encryption (and IBE) schemes in the Bounded-Retrieval Model. Publication series Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Volume 6110 LNCS ISSN (Print) 0302-9743 ISSN (Electronic) 1611-3349 Conference 29th in the Series of EuropeanConferences on the Theory and Application of Cryptographic Techniques, Eurocrypt 2010 Country/Territory France City French Riviera Period 30/05/10 → 3/06/10 Dive into the research topics of 'Public-key encryption in the bounded-retrieval model'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/public-key-encryption-in-the-bounded-retrieval-model","timestamp":"2024-11-01T20:04:47Z","content_type":"text/html","content_length":"52792","record_id":"<urn:uuid:5b10883b-15b6-4fba-b4e8-9389bbd614de>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00115.warc.gz"}
There are 3 item/s. Title Date Views Brief Description Zagier’s reduction theory for indefinite binary 2020 568 We give an in depth description of indefinite binary quadratic forms with a particular emphasis on Zagier’s reduction theory for such quadratic forms and the Fermat-Pell Equation forms. We also connect this theory with the theory of minus continued fractions and as a further application we offe... Quadratic reciprocity for the rational integers and 2010 4951 This thesis begins by giving a brief time line of the origins of Number Theory. It highlights the big theorems that have been the Gaussian integers constructed in this subject, along with the mathematicians who constructed them. The thesis, then, goes on to prove the Law ... Binary quadratic forms and genus theory 2013 15979 The study of binary quadratic forms arose as a natural generalization of questions about the integers posed by the ancient Greeks. A major milestone of understanding occurred with the publication of Gauss's Disquisitiones Arithmeticae
{"url":"http://libres.uncg.edu/ir/uncg/clist-etd.aspx?fn=Brett&ln=Tangedal&org=uncg","timestamp":"2024-11-14T11:33:04Z","content_type":"application/xhtml+xml","content_length":"10656","record_id":"<urn:uuid:d81d680b-d52b-439d-84aa-63a6da17df1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00411.warc.gz"}
College Physics College Physics Science and Technology Induction is the process in which an emf is induced by changing magnetic flux. Many examples have been discussed so far, some more effective than others. Transformers, for example, are designed to be particularly effective at inducing a desired voltage and current with very little loss of energy to other forms. Is there a useful physical quantity related to how “effective” a given device is? The answer is yes, and that physical quantity is called inductance. Mutual inductance is the effect of Faraday’s law of induction for one device upon another, such as the primary coil in transmitting energy to the secondary in a transformer. See [link], where simple coils induce emfs in one another. These coils can induce emfs in one another like an inefficient transformer. Their mutual inductance M indicates the effectiveness of the coupling between them. Here a change in current in coil 1 is seen to induce an emf in coil 2. (Note that "$E 2 size 12{E rSub { size 8{2} } } {}$ induced" represents the induced emf in coil 2.) In the many cases where the geometry of the devices is fixed, flux is changed by varying current. We therefore concentrate on the rate of change of current, $ΔI/Δt size 12{ΔI} {}$, as the cause of induction. A change in the current $I1 size 12{I rSub { size 8{1} } } {}$ in one device, coil 1 in the figure, induces an $emf2 size 12{"emf" rSub { size 8{2} } } {}$ in the other. We express this in equation form as $emf2=−MΔI1Δt, size 12{"emf" rSub { size 8{2} } = - M { {ΔI rSub { size 8{1} } } over {Δt} } } {}$ where $M size 12{M} {}$ is defined to be the mutual inductance between the two devices. The minus sign is an expression of Lenz’s law. The larger the mutual inductance $M size 12{M} {}$, the more effective the coupling. For example, the coils in [link] have a small $M size 12{M} {}$ compared with the transformer coils in [link]. Units for $M size 12{M} {}$ are $(V⋅s)/A=Ω⋅s size 12{ \( V cdot s \) "/A"= %OMEGA cdot s} {}$, which is named a henry (H), after Joseph Henry. That is, $1 H=1Ω⋅s size 12{1`H=1` %OMEGA cdot s} {}$. Nature is symmetric here. If we change the current $I2 size 12{I rSub { size 8{2} } } {}$ in coil 2, we induce an $emf1 size 12{"emf" rSub { size 8{1} } } {}$ in coil 1, which is given by $emf1=−MΔI2Δt, size 12{"emf" rSub { size 8{1} } = - M { {ΔI rSub { size 8{2} } } over {Δt} } } {}$ where $M size 12{M} {}$ is the same as for the reverse process. Transformers run backward with the same effectiveness, or mutual inductance $M size 12{M} {}$. A large mutual inductance $M size 12{M} {}$ may or may not be desirable. We want a transformer to have a large mutual inductance. But an appliance, such as an electric clothes dryer, can induce a dangerous emf on its case if the mutual inductance between its coils and the case is large. One way to reduce mutual inductance $M size 12{M} {}$ is to counterwind coils to cancel the magnetic field produced. (See [link].) The heating coils of an electric clothes dryer can be counter-wound so that their magnetic fields cancel one another, greatly reducing the mutual inductance with the case of the dryer. Self-inductance, the effect of Faraday’s law of induction of a device on itself, also exists. When, for example, current through a coil is increased, the magnetic field and flux also increase, inducing a counter emf, as required by Lenz’s law. Conversely, if the current is decreased, an emf is induced that opposes the decrease. Most devices have a fixed geometry, and so the change in flux is due entirely to the change in current $ΔI size 12{ΔI} {}$ through the device. The induced emf is related to the physical geometry of the device and the rate of change of current. It is given by $emf=−LΔIΔt, size 12{"emf"= - L { {ΔI} over {Δt} } } {}$ where $L size 12{L} {}$ is the self-inductance of the device. A device that exhibits significant self-inductance is called an inductor, and given the symbol in [link]. The minus sign is an expression of Lenz’s law, indicating that emf opposes the change in current. Units of self-inductance are henries (H) just as for mutual inductance. The larger the self-inductance $L size 12{L} {}$ of a device, the greater its opposition to any change in current through it. For example, a large coil with many turns and an iron core has a large $L size 12{L} {}$ and will not allow current to change quickly. To avoid this effect, a small $L size 12{L} {}$ must be achieved, such as by counterwinding coils as in [link]. A 1 H inductor is a large inductor. To illustrate this, consider a device with $L=1.0 H size 12{L=1 "." 0`H} {}$ that has a 10 A current flowing through it. What happens if we try to shut off the current rapidly, perhaps in only 1.0 ms? An emf, given by $emf=−L(ΔI/Δt) size 12{"emf"= - L \( ΔI/Δt \) } {}$, will oppose the change. Thus an emf will be induced given by $emf=−L(ΔI/Δt)=(1.0 H)[(10 A)/(1.0 ms)]=10,000 V$. The positive sign means this large voltage is in the same direction as the current, opposing its decrease. Such large emfs can cause arcs, damaging switching equipment, and so it may be necessary to change current more slowly. There are uses for such a large induced voltage. Camera flashes use a battery, two inductors that function as a transformer, and a switching system or oscillator to induce large voltages. (Remember that we need a changing magnetic field, brought about by a changing current, to induce a voltage in another coil.) The oscillator system will do this many times as the battery voltage is boosted to over one thousand volts. (You may hear the high pitched whine from the transformer as the capacitor is being charged.) A capacitor stores the high voltage for later use in powering the flash. (See Through rapid switching of an inductor, 1.5 V batteries can be used to induce emfs of several thousand volts. This voltage can be used to store charge in a capacitor for later use, such as in a camera flash attachment. It is possible to calculate $L size 12{L} {}$ for an inductor given its geometry (size and shape) and knowing the magnetic field that it produces. This is difficult in most cases, because of the complexity of the field created. So in this text the inductance $L size 12{L} {}$ is usually a given quantity. One exception is the solenoid, because it has a very uniform field inside, a nearly zero field outside, and a simple shape. It is instructive to derive an equation for its inductance. We start by noting that the induced emf is given by Faraday’s law of induction as $emf=−N(ΔΦ/Δt) size 12 {"emf"= - N \( ΔΦ/Δt \) } {}$ and, by the definition of self-inductance, as $emf=−L(ΔI/Δt) size 12{"emf"= - L \( ΔI/Δt \) } {}$. Equating these yields $emf=−NΔΦΔt=−LΔIΔt. size 12{"emf"= - N { {ΔΦ} over {Δt} } = - L { {ΔI} over {Δt} } } {}$ Solving for $L size 12{L} {}$ gives $L=NΔΦΔI. size 12{L=N { {ΔΦ} over {ΔI} } } {}$ This equation for the self-inductance $L size 12{L} {}$ of a device is always valid. It means that self-inductance $L size 12{L} {}$ depends on how effective the current is in creating flux; the more effective, the greater $ΔΦ size 12{ΔΦ} {}$/ $ΔI size 12{ΔI} {}$ is. Let us use this last equation to find an expression for the inductance of a solenoid. Since the area $A$ of a solenoid is fixed, the change in flux is $Δ Φ = Δ ( B A ) = A Δ B$. To find $Δ B$, we note that the magnetic field of a solenoid is given by $B=μ0nI=μ0NIℓ size 12{B=μ rSub { size 8{0} } ital "nI"=μ rSub { size 8{0} } { { ital "NI"} over {ℓ} } } {}$. (Here $n=N/ℓ size 12{n=N/ℓ} {}$, where $N$ is the number of coils and $ℓ$ is the solenoid’s length.) Only the current changes, so that $ΔΦ=AΔB=μ0NAΔIℓ size 12{ΔΦ=AΔB=μ rSub { size 8{0} } ital "NA" { {ΔI} over {ℓ} } } {}$. Substituting $Δ Φ$ into $L=NΔΦΔI size 12{L=N { {ΔΦ} over {ΔI} } } {}$ gives $L=NΔΦΔI=Nμ0NAΔIℓΔI. size 12{L=N { {ΔΦ} over {ΔI} } =N { {μ rSub { size 8{0} } ital "NA" { {ΔI} over {ℓ} } } over {ΔI} } } {}$ This simplifies to $L=μ0N2Aℓ(solenoid). size 12{L= { {μ rSub { size 8{0} } N rSup { size 8{2} } A} over {ℓ} } } {}$ This is the self-inductance of a solenoid of cross-sectional area $A$ and length $ℓ$. Note that the inductance depends only on the physical characteristics of the solenoid, consistent with its Calculating the Self-inductance of a Moderate Size Solenoid Calculate the self-inductance of a 10.0 cm long, 4.00 cm diameter solenoid that has 200 coils. This is a straightforward application of $L=μ0N2Aℓ size 12{L= { {μ rSub { size 8{0} } N rSup { size 8{2} } A} over {ℓ} } } {}$, since all quantities in the equation except $L size 12{L} {}$ are Use the following expression for the self-inductance of a solenoid: $L=μ0N2Aℓ. size 12{L= { {μ rSub { size 8{0} } N rSup { size 8{2} } A} over {ℓ} } } {}$ The cross-sectional area in this example is $A=πr2=(3.14...)(0.0200 m)2=1.26×10−3m2 size 12{A=πr rSup { size 8{2} } = \( 3 "." "14" "." "." "." \) \( 0 "." "0200"`m \) rSup { size 8{2} } =1 "." "26" times "10" rSup { size 8{ - 3} } `m rSup { size 8{2} } } {}$, $N$ is given to be 200, and the length $ℓ$ is 0.100 m. We know the permeability of free space is $μ0=4π×10−7T⋅m/A$. Substituting these into the expression for $L$ gives $L = (4π×10−7 T⋅m/A)(200)2(1.26×10−3 m2)0.100 m = 0.632 mH.$ This solenoid is moderate in size. Its inductance of nearly a millihenry is also considered moderate. One common application of inductance is used in traffic lights that can tell when vehicles are waiting at the intersection. An electrical circuit with an inductor is placed in the road under the place a waiting car will stop over. The body of the car increases the inductance and the circuit changes sending a signal to the traffic lights to change colors. Similarly, metal detectors used for airport security employ the same technique. A coil or inductor in the metal detector frame acts as both a transmitter and a receiver. The pulsed signal in the transmitter coil induces a signal in the receiver. The self-inductance of the circuit is affected by any metal object in the path. Such detectors can be adjusted for sensitivity and also can indicate the approximate location of metal found on a person. (But they will not be able to detect any plastic explosive such as that found on the “underwear bomber.”) See [link]. The familiar security gate at an airport can not only detect metals but also indicate their approximate height above the floor. (credit: Alexbuirds, Wikimedia Commons) Energy Stored in an Inductor We know from Lenz’s law that inductances oppose changes in current. There is an alternative way to look at this opposition that is based on energy. Energy is stored in a magnetic field. It takes time to build up energy, and it also takes time to deplete energy; hence, there is an opposition to rapid change. In an inductor, the magnetic field is directly proportional to current and to the inductance of the device. It can be shown that the energy stored in an inductor $Eind size 12{E rSub { size 8{"ind"} } } {}$ is given by $Eind=12LI2. size 12{E rSub { size 8{"ind"} } = { {1} over {2} } ital "LI" rSup { size 8{2} } } {}$ This expression is similar to that for the energy stored in a capacitor. Calculating the Energy Stored in the Field of a Solenoid How much energy is stored in the 0.632 mH inductor of the preceding example when a 30.0 A current flows through it? The energy is given by the equation $Eind=12LI2 size 12{E rSub { size 8{"ind"} } = { {1} over {2} } ital "LI" rSup { size 8{2} } } {}$, and all quantities except $Eind size 12{E rSub { size 8{"ind"} } } {}$ are known. Substituting the value for $L size 12{L} {}$ found in the previous example and the given current into $Eind=12LI2 size 12{E rSub { size 8{"ind"} } = { {1} over {2} } ital "LI" rSup { size 8{2} } } {} $ gives $Eind = 12LI2 = 0.5(0.632×10−3 H)(30.0 A)2=0.284 J.$ This amount of energy is certainly enough to cause a spark if the current is suddenly switched off. It cannot be built up instantaneously unless the power input is infinite. Section Summary • Inductance is the property of a device that tells how effectively it induces an emf in another device. • Mutual inductance is the effect of two devices in inducing emfs in each other. • A change in current $ΔI1/Δt size 12{ΔI rSub { size 8{1} } /Δt} {}$ in one induces an emf $emf2 size 12{"emf" rSub { size 8{2} } } {}$ in the second: $emf2=−MΔI1Δt, size 12{"emf" rSub { size 8{2} } = - M { {ΔI rSub { size 8{1} } } over {Δt} } } {}$ where $M$ is defined to be the mutual inductance between the two devices, and the minus sign is due to Lenz’s law. • Symmetrically, a change in current $ΔI2/Δt size 12{ΔI rSub { size 8{2} } /Δt} {}$ through the second device induces an emf $emf1 size 12{"emf" rSub { size 8{1} } } {}$ in the first: $emf1=−MΔI2Δt, size 12{"emf" rSub { size 8{1} } = - M { {ΔI rSub { size 8{2} } } over {Δt} } } {}$ where $M$ is the same mutual inductance as in the reverse process. • Current changes in a device induce an emf in the device itself. • Self-inductance is the effect of the device inducing emf in itself. • The device is called an inductor, and the emf induced in it by a change in current through it is $emf=−LΔIΔt, size 12{"emf"= - L { {ΔI} over {Δt} } } {}$ where $L size 12{L} {}$ is the self-inductance of the inductor, and $ΔI/Δt size 12{ΔI/Δt} {}$ is the rate of change of current through it. The minus sign indicates that emf opposes the change in current, as required by Lenz’s law. • The unit of self- and mutual inductance is the henry (H), where $1 H=1 Ω⋅s size 12{1`H=1` %OMEGA cdot s} {}$. • The self-inductance $L size 12{L} {}$ of an inductor is proportional to how much flux changes with current. For an $N size 12{N} {}$-turn inductor, $L=NΔΦΔI . size 12{L=N { {ΔΦ} over {ΔI} } } {}$ • The self-inductance of a solenoid is $L=μ0N2Aℓ(solenoid), size 12{L= { {μ rSub { size 8{0} } N rSup { size 8{2} } A} over {ℓ} } } {}$ where $N size 12{N} {}$ is its number of turns in the solenoid, $A size 12{A} {}$ is its cross-sectional area, $ℓ size 12{ℓ} {}$ is its length, and $μ0=4π×10−7T⋅m/A size 12{μ rSub { size 8{0} } = 4π times "10" rSup { size 8{"-7"} } `T cdot "m/A"} {}$ is the permeability of free space. • The energy stored in an inductor $Eind size 12{E rSub { size 8{"ind"} } } {}$ is $Eind=12LI2. size 12{E rSub { size 8{"ind"} } = { {1} over {2} } ital "LI" rSup { size 8{2} } } {}$ Conceptual Questions How would you place two identical flat coils in contact so that they had the greatest mutual inductance? The least? How would you shape a given length of wire to give it the greatest self-inductance? The least? Verify, as was concluded without proof in [link], that units of $T⋅m2/A=Ω⋅s=H size 12{T cdot m rSup { size 8{2} } /A= %OMEGA cdot s=H} {}$. Problems & Exercises Two coils are placed close together in a physics lab to demonstrate Faraday’s law of induction. A current of 5.00 A in one is switched off in 1.00 ms, inducing a 9.00 V emf in the other. What is their mutual inductance? If two coils placed next to one another have a mutual inductance of 5.00 mH, what voltage is induced in one when the 2.00 A current in the other is switched off in 30.0 ms? The 4.00 A current through a 7.50 mH inductor is switched off in 8.33 ms. What is the emf induced opposing this? A device is turned on and 3.00 A flows through it 0.100 ms later. What is the self-inductance of the device if an induced 150 V emf opposes this? Starting with $emf2=−MΔI1Δt size 12{"emf" rSub { size 8{2} } = - M { {ΔI rSub { size 8{1} } } over {Δt} } } {}$, show that the units of inductance are $(V⋅s)/A=Ω⋅s size 12{ \( V cdot s \) "/A"= %OMEGA cdot s} {}$. Camera flashes charge a capacitor to high voltage by switching the current through an inductor on and off rapidly. In what time must the 0.100 A current through a 2.00 mH inductor be switched on or off to induce a 500 V emf? A large research solenoid has a self-inductance of 25.0 H. (a) What induced emf opposes shutting it off when 100 A of current through it is switched off in 80.0 ms? (b) How much energy is stored in the inductor at full current? (c) At what rate in watts must energy be dissipated to switch the current off in 80.0 ms? (d) In view of the answer to the last part, is it surprising that shutting it down this quickly is difficult? (a) 31.3 kV (b) 125 kJ (c) 1.56 MW (d) No, it is not surprising since this power is very high. (a) Calculate the self-inductance of a 50.0 cm long, 10.0 cm diameter solenoid having 1000 loops. (b) How much energy is stored in this inductor when 20.0 A of current flows through it? (c) How fast can it be turned off if the induced emf cannot exceed 3.00 V? A precision laboratory resistor is made of a coil of wire 1.50 cm in diameter and 4.00 cm long, and it has 500 turns. (a) What is its self-inductance? (b) What average emf is induced if the 12.0 A current through it is turned on in 5.00 ms (one-fourth of a cycle for 50 Hz AC)? (c) What is its inductance if it is shortened to half its length and counter-wound (two layers of 250 turns in opposite directions)? (a) 1.39 mH (b) 3.33 V (c) Zero The heating coils in a hair dryer are 0.800 cm in diameter, have a combined length of 1.00 m, and a total of 400 turns. (a) What is their total self-inductance assuming they act like a single solenoid? (b) How much energy is stored in them when 6.00 A flows? (c) What average emf opposes shutting them off if this is done in 5.00 ms (one-fourth of a cycle for 50 Hz AC)? When the 20.0 A current through an inductor is turned off in 1.50 ms, an 800 V emf is induced, opposing the change. What is the value of the self-inductance? How fast can the 150 A current through a 0.250 H inductor be shut off if the induced emf cannot exceed 75.0 V? Integrated Concepts A very large, superconducting solenoid such as one used in MRI scans, stores 1.00 MJ of energy in its magnetic field when 100 A flows. (a) Find its self-inductance. (b) If the coils “go normal,” they gain resistance and start to dissipate thermal energy. What temperature increase is produced if all the stored energy goes into heating the 1000 kg magnet, given its average specific heat is $200 J/ Unreasonable Results A 25.0 H inductor has 100 A of current turned off in 1.00 ms. (a) What voltage is induced to oppose this? (b) What is unreasonable about this result? (c) Which assumption or premise is responsible?
{"url":"https://voer.edu.vn/c/inductance/0e60bfc6/3cb34de7","timestamp":"2024-11-01T22:22:21Z","content_type":"text/html","content_length":"218870","record_id":"<urn:uuid:4167ff93-e3f2-43b3-9c2e-704e61673aff>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00146.warc.gz"}
What Every Young Man Should Understand About the Power of Compound Interest “Compound interest is the Eighth Wonder of the World. He who understands it, earns it; he who doesn’t, pays it.” Albert Einstein supposedly said that. Lots of quotes get attributed to him that he didn’t actually say, and this may be one of them; I personally don’t see the guy who imagined riding a light beam to figure out the Theory of Relativity waxing poetic about compound interest. But even if Einstein really didn’t say compound interest was the Eighth Wonder of the World, it’s still a good point. Compound interest is pretty dang awesome. It’s a powerful concept — one that can mightily strengthen, or weaken, your finances. The man who understands it will have a tool to increase his net worth; the man who doesn’t will go through life stuck in a paycheck mentality. My seven-year-old son recently opened up a savings account, and it offered me the chance to explain compound interest to him. It didn’t go well. It’s one of those financial concepts that’s so simple that you take it for granted. Consequently, when you’re forced to explain it to a child, you realize you don’t have as much of a grasp on it as you thought you did. Einstein also supposedly said, “If you can’t explain it to a six-year-old, you don’t understand it yourself.” Again, even if he didn’t say that, it’s a good point. If your dad never sat you down to talk compound interest, you’re in luck; having honed my explanation on Gus, I’ll now pass it along to you. What Is Compound Interest? To understand compound interest, it’s useful to understand simple interest first. Simple interest is calculated on the principal or the original amount of a deposit or loan. It’s really, well, simple to figure out. Let’s say you take out a loan for $10,000 at a simple interest rate of 5%. The duration of the loan is four years. To calculate the interest that’ll accumulate on the loan, you’d use the following formula: Principal x interest rate x term of the loan Plugging in our numbers, it would be: $10,000 x .05 x 4 = $2,000 So that $10,000 loan will cost you $2,000 in simple interest. Car loans, home mortgages, and student loans use simple interest. A loan you take from a family member or friend will likely use simple interest (if they charge you interest at all). Now that you understand simple interest, we can move to compound interest. Compound interest is calculated on the principal amount and — this is key — also on the accumulated interest of previous periods. It’s interest on interest. Here’s what the compound interest formula looks like: P (1 + r/n) ^(nt) – P [P = Principal; r = annual interest rate in percentage terms; n = number of compounding periods for a year; t = number of years money is invested or borrowed] Yeah, it looks confusing, but let’s plug in our numbers from the simple interest example to see what we’d pay if the interest was compounded. So we got a $10,000 loan that compounds annually at 5%. The duration of the loan is 4 years. What would we pay in interest? Let’s look at the progression of the math: $10,000 (1+.05/1)^(1×4) – $10,000 → $10,000 (1+.05/1)^(4) – $10,000 → $10,000 (1.21550625) – $10,000 → $12,155.0625 – $10,000 = $2,155.06 So on a four-year loan that’s compounded annually, we’d pay $2,155.06 in compound interest. That’s $155.06 more than a loan issued on simple interest. Calculating interest on the interest already accrued on a principal can really add up. And add up fast as we’ll see in an example below. If you’d rather not do the math yourself, there are plenty of compound interest calculators online. Credit cards calculate balances on compound interest. Instead of compounding annually, credit card companies compound monthly. The high interest rates of credit cards coupled with their monthly compounding is why pretty much every single personal finance guru out there says “Don’t carry a balance on your credit cards!” You end up paying a lot for that extended credit. For example, a credit card balance of $10,000 carried at an interest rate of 20% (compounded monthly) would result in total compound interest of $2,193.91 over one year, or about $183 per month. Imagine what you could do with an extra $183 a month. Compound interest can work in your favor, though. Big time. When you sock your money into a savings account, banks typically pay compound interest daily on the money you keep with them. Granted, the interest rate you get is pretty crappy — somewhere between .03% and 1% depending on the bank — but when you compound at that rate daily and you keep that money in there for a long time, things can add up. If you invest in an index fund, you can leverage the power of compound interest by re-investing your earnings into buying more of the index fund which will allow you to earn even more, which you then re-invest, and so on and so forth. Compounding Periods Have a Big Effect on Earnings Looking at the compound interest formula, you’ll likely notice that the frequency of compounding periods can have a big effect on your earnings or how much you have to pay in interest. The more compounding periods, the more interest that is accrued. You’ll earn more in interest from a bank that compounds daily compared to a bank that only compounds monthly; you’ll pay more in interest on a loan that compounds monthly compared to one that compounds annually. So when looking at interest rates for a savings account or loan, make sure to pay attention to how often interest is compounded. Time Is Your Friend The real magic of compounding reveals itself over long periods of time. The longer you let your money sit in an account and compound itself, the more money you make. This is the big point I’ve been trying to make to my son. What’s helped flip on the light bulb in his head is this example from personal finance expert Beth Kobliner: If you were to save $1,000 a year from age 25 to 34 in a retirement account earning 8% a year, and never invest a penny more, your $10,000 investment would grow to $157,435 by age 65. But if you don’t start saving until you’re 35 years old and then invest $1,000 a year for the next 30 years (that’s a total investment of $30,000), you’ll have only $122,346 by age 65. The bottom line: Start early, so your money has enough time to pile up. Understanding this concept has helped turn Gus into a tightfisted Scrooge McDuck. “Man, imagine how much interest I can earn since I’m starting at seven years old!” At the beginning of each month, he loves to check his savings account to see how the interest he earns is going up little by little thanks to the magic of compound interest. Use the Power of Compound Interest to Your Advantage Understanding compound interest can really help you move ahead with your personal finances. Knowing that credit card companies compound the interest on your balance on a monthly basis should act as an incentive to pay off credit card debt as quickly as possible. Knowing that you can make money from your money should act as an incentive to sock away as much dough as you can and to not touch it for as long as you can. The key is to get started today. If you’ve got credit card debt, start paying it off now so compound interest doesn’t devour you. If you don’t have a savings or retirement account, start one today so you can leverage the power of this Eighth Wonder of the World. Now that we have a basic understanding of compound interest, we can start exploring things like APR and APY. We’ll do that in a future article.
{"url":"https://www.artofmanliness.com/career-wealth/wealth/what-every-young-man-should-understand-about-the-power-of-compound-interest/","timestamp":"2024-11-05T07:43:20Z","content_type":"text/html","content_length":"191849","record_id":"<urn:uuid:70e5ca99-3358-41c4-8ee0-141ca9e62f70>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00827.warc.gz"}
Learning developmental mode dynamics from single-cell trajectories Embryogenesis is a multiscale process during which developmental symmetry breaking transitions give rise to complex multicellular organisms. Recent advances in high-resolution live-cell microscopy provide unprecedented insights into the collective cell dynamics at various stages of embryonic development. This rapid experimental progress poses the theoretical challenge of translating high-dimensional imaging data into predictive low-dimensional models that capture the essential ordering principles governing developmental cell migration in complex geometries. Here, we combine mode decomposition ideas that have proved successful in condensed matter physics and turbulence theory with recent advances in sparse dynamical systems inference to realize a computational framework for learning quantitative continuum models from single-cell imaging data. Considering pan-embryo cell migration during early gastrulation in zebrafish as a widely studied example, we show how cell trajectory data on a curved surface can be coarse-grained and compressed with suitable harmonic basis functions. The resulting low-dimensional representation of the collective cell dynamics enables a compact characterization of developmental symmetry breaking and the direct inference of an interpretable hydrodynamic model, which reveals similarities between pan-embryo cell migration and active Brownian particle dynamics on curved surfaces. Due to its generic conceptual foundation, we expect that mode-based model learning can help advance the quantitative biophysical understanding of a wide range of developmental structure formation processes. This work proposes a method to obtain a reduced description of the collective dynamics of thousands of cells moving together during zebrafish gastrulation as a few fundamental modes, and to derive effective dynamics for these modes. This well-written work enables a simplified picture of the key features of cellular collective motion, that will be useful to physicists and biologists looking for a quantitative understanding of morphogenesis. Embryogenesis, the development of a multicellular organism from a single fertilized egg cell, requires coordinated collective motions of thousands of cells across a wide range of length and time scales (Gilbert and Barresi, 2016; Solnica-Krezel, 2005). Understanding how a highly reproducible and robust tissue organization arises from the dynamics and interactions of individual cells presents a major interdisciplinary challenge (Collinet and Lecuit, 2021). Recent advances in high-resolution live imaging make it possible to track the internal biological states and physical movements of many individual cells on pan-embryonic scales throughout various stages of development (Stelzer, 2015; Power and Huisken, 2017; Hartmann et al., 2019; Shah et al., 2019). This unprecedented wealth of data poses two intertwined compression problems of equal practical and conceptual importance. The first concerns the efficient reduction of high-dimensional tracking data without loss of relevant information; the second relates to inferring predictive low-dimensional models for the developmental dynamics. Mathematical solutions to the first problem are aided by taking into account the geometry and symmetries of the developing embryo, which suggest suitable basis functions for a coarse-grained and sparse mode representation of raw data (Levy, 2006). Efficient algorithmic approaches tackling the second problem appear within reach thanks to recent advances in the direct inference of dynamical systems equations from data (Brunton et al., 2016; Rackauckas et al., 2021). Building on these ideas, we construct and demonstrate here a computational framework that translates developmental single-cell trajectory data on curved surfaces into quantitative models for the dominant hydrodynamic modes. Widely applied in physics (Kac, 1966; Goldenfeld and Woese, 2011; Kantsler and Goldstein, 2012; Bhaduri et al., 2020), engineering (Soong and Grigoriu, 1993; Heydari et al., 2021), and spectral computing (Driscoll et al., 2014; Burns et al., 2020; Fortunato et al., 2021), mode representations (Schmid, 2010; Tu et al., 2014) provide a powerful tool to decompose and study system dynamics at and across different energetic, spatial and temporal scales. In quantum systems, for example, mode representations in the form of carefully constructed eigenstates are used to characterize essential energetic system properties (Slater and Koster, 1954; Jaynes and Cummings, 1963). Similarly, turbulence theory has seen significant progress by studying the coupling between Fourier modes that represent dynamics at different length scales. This approach enabled a better understanding of energy cascades (Kolmogorov, 1941; Wang et al., 2021) and provided insights into the nature of turbulence in non-living (Kraichnan and Montgomery, 1980; Pope, 2000) and in living (Dunkel et al., 2013; Bratanov et al., 2015; Ramaswamy and Jülicher, 2016; Alert et al., 2020) systems. Additionally, the multi-scale nature of many biological processes make them particularly amenable to a representation in terms of spatial and temporal modes (Marchetti et al., 2013). Despite this fact, however, mode representations are not yet widely used to characterize and compress cell tracking data, or to infer dynamic models from such data. To demonstrate the practical potential of mode representations for the description of multicellular developmental processes, we develop here a computational framework that takes cell tracking data as inputs, translates these data into a sparse mode representation by exploiting symmetries of the biological system, and utilizes recently developed ODE inference techniques (Rackauckas et al., 2021) to infer a predictive dynamical model. The model will be specified in terms of a learned Green’s function that propagates initial cell density and flux data forward in time. To validate the approach, we demonstrate that it correctly recovers the hydrodynamic equations for active Brownian particle (ABP) dynamics on curved surfaces. Subsequently, as a first example application to experimental single-cell tracking data, we consider the pan-embryonic cell migration during early gastrulation in zebrafish (Shah et al., 2019), an important vertebrate model system for studying various morphogenetic events (Solnica-Krezel, 2005; Krieg et al., 2008; Morita et al., 2017). During gastrulation, complex migratory cell movements organize several thousand initially undifferentiated cells into different germlayers that lay out the primary body plan (Rohde and Heisenberg, 2007). The underlying high-dimensional single-cell data make this process a prototypical test problem for illustrating how spatio-temporal information can be efficiently compressed to analyze and model biological structure formation. Broadly, our goal is to translate experimentally measured single-cell trajectories on a curved surface into a quantitative model of collective cell migration dynamics. As a specific example, we consider recently published lightsheet microscopy data that captures the individual movements of thousands of cells during early zebrafish development from epiboly onset at 4 hours post-fertilization (hpf) to about 18 hpf (Shah et al., 2019). This developmental period is characterized by a collective symmetry breaking event during which cells collectively migrate over the yolk cell surface (Rohde and Heisenberg, 2007). Namely, they rearrange from an initial localization around the animal pole (AP) (Figure 1A, left) into a more elongated configuration that already indicates the basic geometry of the fully developed zebrafish larva (Figure 1A, right). Working with a two-dimensional (2D) sphere projection of the experimental data, we first describe a coarse-graining approach that faithfully captures cell-mass transport on a curved surface. We then construct a sparse mode representation of the resulting hydrodynamic fields in terms of scalar and vector spherical harmonic basis functions, discuss mode signatures of morphogenetic symmetry breaking events, and connect them to the dynamics of topological defects in the cellular flux. We validate this mode representation framework and the subsequent model inference using synthetic data of ABPs on a sphere, for which coarse-grained fields and learned models can be directly compared against analytical predictions. Finally, we infer a linear model for the mode dynamics of the experimental zebrafish data, which enables us to study the characteristics of cell interactions through kernels that couple cell density and flux and compare their features with the hydrodynamic mean-field signatures of ABPs on a sphere. From single-cell tracking data to sparse mode amplitude representations. Coarse-graining of cellular dynamics on a spherical surface The experimentally observed cell motions are approximately two-dimensional (2D): The radius of the yolk cell surface on which the dynamics takes place is much larger than the average height changes of the evolving cell mass (Shah et al., 2019). We therefore adopt a thin film approximation, in which the cellular motion is represented on an effective spherical mid-surface (gray surface in Figure 1B); refined future models should aim to account for the full 3D dynamics. Focusing here on the in-plane dynamics, we project all cell positions and velocities onto a spherical mid-surface $S$ of radius $Rs=300μm$. On this spherical surface, each cell $α=1,2,…,N$ has a position $rα⁢(t)$ and in-plane velocity $vα⁢(t)=d⁢rα/d⁢t$. As a second processing step, a coarse-grained representation of the single-cell dynamics on a spherical surface is determined. To facilitate the applicability of our framework to a wide range of experimental inputs, we propose a coarse-graining approach that can flexibly integrate cell number variations stemming from cell divisions, but also those from experimental uncertainties in cell imaging and tracking. Consequently, we first consider an idealized scenario in which the total cell number is approximately constant. In this case, mass conservation informs the construction of self-consistent coarse-graining kernels on a spherical surface. In a second step, we describe how this approach generalizes when there are variations in the total cell number. Consistent coarse-graining of idealized microscopic data Our specific aim is to translate microscopic cell positions $rα⁢(t)$ and velocities $vα⁢(t)$ into a continuous cell surface density $ρ⁢(r,t)$ and an associated flux $J⁢(r,t)$ at any point $r$ of the spherical mid-surface. For an approximately constant total number of cells, the fields $ρ$ and $J$ are related by the mass conservation equation (1) $\frac{\partial \rho }{\partial t}+{abla }_{S}\cdot J=0.$ Here, $∇S⋅J$ denotes the in-plane divergence of the cell number flux. To convert cell position $rα⁢(t)$ and velocities $vα⁢(t)$ into a normalized cell surface density $ρ⁢(r,t)$ and an associated normalized flux $J⁢(r,t)$, we consider a kernel coarse-graining of the form (Appendix 1) (2a) $\rho \left(r,t\right)=\frac{1}{N}\sum _{\alpha =1}^{N}K\left[r,{r}_{\alpha }\left(t\right)\right]$ (2b) $J\left(r,t\right)=\frac{1}{N}\sum _{\alpha =1}^{N}K\left[r,{r}_{\alpha }\left(t\right)\right]\cdot {\overline{v}}_{\alpha },$ where $N$ is the total number of cells and $v¯α=vα/|rα|$ is the angular velocity of a given cell on a reference unit sphere (Appendix 1). The kernels $K⁢(r,r′)$ and $K⁢(r,r′)$ are given by a scalar and a matrix-valued function, respectively. The matrix kernel $K⁢(r,r′)$ takes into account contributions of a particle with velocity $vα$ at $r′=rα$ to nearby points $r$ on the sphere, which involves an additional projection to ensure that $J⁢(r,t)$ is everywhere tangent to the spherical surface (Appendix 1). Importantly, the mass conservation Equation 1 implies a non-trivial consistency relation between the kernels $K⁢(r,r′)$ and $K⁢(r,r′)$ in Equation 2a, Equation 2b. The kernels that obey this condition represent different coarse-graining length scales (Appendix 1—figure 2). Throughout, we fix an intermediate coarse-graining length scale to enable a sparse representation of the experimental data, while ensuring that spatial details of the dynamics remain sufficiently well resolved. The final surface density $ρ⁢(r,t)$ and the associated normalized flux $J⁢(r,t)$, computed from Equation 2a and Equation 2b using a kernel with an effective great-circle coarse-graining width of $∼70⁢μ⁢m$, are shown in Figure 1C (see also Video 1). This video cannot be played in place because your browser does support HTML5 video. You may still download the video for offline viewing. Time evolution of the pre-processed cell tracking data (point cloud, see Materials and methods), and of the density field $ρ(r,t)$ (colormap) and associated flux $J(r,t)$ (streamlines) corresponding to the harmonic modes ${ρlm,jlm(1),jlm(2)}$ shown in Figure 1D. Consequences of cell number variations in experimental data Because cell divisions are essential to most developmental processes, total cell numbers will in many cases – including early zebrafish gastrulation (Kobitski et al., 2015) – vary over time. True cell numbers and cell number changes are often difficult to measure due to experimental uncertainties arising from single-cell imaging and tracking within dense cellular aggregates. We therefore merely assume here that single cells are tracked in a representative fashion so that local relative surface densities found from Equation 2a reflect the probability that cells are present at a given point $r$. In the absence of further information on cell deaths and cell divisions, we additionally make the more restrictive assumption that cell appearances or disappearances are everywhere proportional to the local cell density. With these assumptions, we can define a cell number surface density $ρ~⁢(r,t)=N⁢(t)⁢ρ⁢(r,t)$, where $N⁢(t)$ is the cell number at time $t$ and $ρ⁢(r,t)$ is the normalized surface density given in Equation 2a. Similarly, a cell number flux is given by $J~⁢(r,t)=N⁢(t)⁢J⁢(r,t)$, where the flux $J⁢(r,t)$ is computed from the data as described by Equation 2b. Using these definitions in Equation 1, we find that the fields $ρ~⁢(r,t)$ and $J~⁢(r,t)$ obey a continuity equation (3) $\frac{\partial \stackrel{~}{\rho }}{\partial t}+{abla }_{S}\cdot \stackrel{~}{J}=k\left(t\right)\stackrel{~}{\rho },$ where $k⁢(t)=N˙⁢(t)/N⁢(t)$ denotes a time-dependent effective growth rate. Importantly, under the two above assumptions, Equation 3 encodes for any time-dependent total cell number $N⁢(t)§gt;0$ the same information as Equation 1 for coarse-grained normalized surface density $ρ⁢(r,t)$ and associated flux $J⁢(r,t)$ given by Equation 2a and Equation 2b, respectively. In the following analysis, we hence focus on these normalized fields. Spatial mode representation on a spherical surface To obtain a sparse mode representation of the hydrodynamic fields $ρ⁢(r,t)$ and $J⁢(r,t)$ on the spherical surface, we expand them in terms of scalar and vector spherical harmonics (SHs) (Arfken et al., 2013; Sandberg, 1978) (Appendix 2.A). SHs are defined on points $r^=r/Rs$ of the unit sphere, where $Rs=300$ μm is the mid-surface radius. In this basis, the scalar density field is represented (4) $\rho \left(\mathbf{r},t\right)=\sum _{l=0}^{{l}_{\mathrm{m}\mathrm{a}\mathrm{x}}}\sum _{m=-1}^{l}{\rho }_{lm}\left(t\right){Y}_{lm}\left(\stackrel{^}{\mathbf{r}}\right),$ which conveniently separates the time- and space-dependence of $ρ⁢(r,t)$ into mode amplitudes $ρl⁢m⁢(t)$ and scalar harmonic functions $Yl⁢m⁢(r^)$, respectively. The maximal mode number $lmax$ is a proxy for the maximal spatial resolution at which $ρ⁢(r,t)$ is faithfully represented. Similarly, the vector-valued flux $J⁢(r,t)$ can be decomposed into time-dependent mode amplitudes $jl⁢m(1)⁢(t)$ and $jl⁢m(2)⁢(t)$, while its spatial dependence is described by vector SHs $Ψlm(r^)$ and $Φlm(r^)$ (Sandberg, 1978) (Appendix 2, Video 2). (5) $\mathbf{J}\left(\mathbf{r},t\right)=\sum _{l=1}^{{l}_{\text{max}}}\sum _{m=-l}^{l}\left({j}_{lm}^{\left(1\right)}\left(t\right){\mathbf{\Psi }}_{lm}\left(\stackrel{^}{\mathbf{r}}\right)+{j}_{lm} ^{\left(2\right)}\left(t\right){\mathbf{\Phi }}_{lm}\left(\stackrel{^}{\mathbf{r}}\right)\right).$ This video cannot be played in place because your browser does support HTML5 video. You may still download the video for offline viewing. Reconstruction of the hydrodynamics fields in real space by adding consecutive scalar and vector spherical harmonic modes of progressively higher order $l$. Besides the in-plane divergence $∇S⋅J$ that leads to local density changes (see Equation 1), the cell number flux $J⁢(r,t)$ also contains an in-plane curl component $∇S×J$ that is associated with locally rotational cell flux. The two sets of vector SHs ${Ψlm}$ and ${Φlm}$ conveniently decompose the flux into these contributions: Because $∇S⋅Φlm=∇S×Ψlm=0$, and $r^⋅(∇S×Φlm)=∇S⋅Ψlm=−l(l+1)Ylm/ Rs$ (Sandberg, 1978), we see from Equation 5 that $jl⁢m(1)⁢(t)$ corresponds to modes that drive density changes and $jl⁢m(2)⁢(t)$ represents modes of local rotational cell motion that change relative cell positions but do not change local density. Indeed, using harmonic mode representations of the cell number density Equation 4 and the cell number flux Equation 5 directly in the continuity Equation 1, we find a set of ordinary differential equation in mode space (6) $\frac{\mathrm{d}}{\mathrm{d}t}{\rho }_{lm}\left(t\right)=\frac{l\left(l+1\right)}{{R}_{s}}{j}_{lm}^{\left(1\right)}\left(t\right),$ where $l=0,1,…,lmax$ and for each value of $l$, $m=-l,-l+1,…,l-1,l$. Equation 6 offers an alternative way of determining the modes $jl⁢m(1)⁢(t)$ directly from the modes $ρl⁢m⁢(t)$ of the coarse-grained cell number density (see Equation 4 and Equation 2a), while ensuring that the resulting fields obey mass conservation exactly. In practice, the modes $jl⁢m(1)⁢(t)$ found from a vector harmonic representation of the coarse-grained cell number flux (Equation 2b) will often deviate from modes $jl⁢m(1)⁢(t)$ determined from Equation 6, even if cell numbers are expected to be conserved. This can be, for example, due to limited accuracy in determining velocities $vα⁢(t)$ from noisy single-cell trajectories $rα⁢(t)$, or due to spatially inhomogeneous appearances and disappearances of cells in tracking data. Consistent with our simplifying assumption that cell number changes in the data can be sufficiently well approximated by a globally homogeneous growth rate (compare Equation 1 with Equation 3), the subsequent analysis uses the modes $jl⁢m(1)⁢(t)$ as determined from the density modes $ρl⁢m⁢(t)$ via Equation 6, together with modes $jl⁢m(2)⁢(t)$ from the explicit velocity coarse-graining Equation 2b. The complete construction is detailed in Appendix 2 and the full coarse-grained dynamics is shown in Video 1. The representation of $ρ⁢(r,t)$ and $J⁢(r,t)$ in terms of spherical harmonic modes with $l≤lmax$ leads in total to $3⁢(lmax+1)2$ mode amplitude trajectories, displaying only a few dominant contributions (Figure 1D) with almost no signal remaining for $l≥5$ (Figure 1—figure supplement 1, Video 2). This demonstrates that the underlying coarse-grained experimental data is sufficiently smooth and implies that a spectral representations is indeed meaningful. Thus, the coarse-graining approach outlined above provides a sparse spectral representation of high-dimensional microscopic single-cell data. The associated harmonic basis functions and vectors have an intuitive physical meaning, convenient algebraic properties and, as we will see, encode information about the length scales and symmetries of the collective dynamics. Temporal mode representation We further compress the dynamical information by representing the time series of the modes in terms of Chebyshev polynomial basis functions $Tn⁢(t)$ (Driscoll et al., 2014; Mason and Handscomb, 2002 ). To simplify notation, we define a dynamic mode vector $a⁢(t)=[ρl⁢m⁢(t),jl⁢m(1)⁢(t),jl⁢m(2)⁢(t)]⊤$ that collects all the modes up to $l=lmax$ determined in the previous section and consider an (7) $\mathbf{a}\left(t\right)=\sum _{n=0}^{{n}_{\text{max}}}{T}_{n}\left(t\right)\phantom{\rule{thinmathspace}{0ex}}{\stackrel{^}{\mathbf{a}}}_{n}$ in terms of the spatio-temporal mode coefficients $a^n$ with temporal mode number $n$ (Appendix 2). This compression allows us to accurately evaluate time derivatives of the mode amplitudes (Supekar et al., 2021), an important step when using Equation 6 to determine flux modes $jl⁢m(1)⁢(t)$ directly from density modes $ρl⁢m$. Fixing $lmax=4$ and $nmax=30$ in the remainder, the initial single-cell data set of about 1.4 million recorded cell position entries, or 4.2 million degrees of freedom, has thus been reduced to 2250 mode coefficients, corresponding to a compression ratio Characterization of the developmental mode dynamics A harmonic mode decomposition naturally integrates the geometry of the underlying domain and simultaneously provides useful insights into spatial scales and symmetries of the dynamics. For each mode $(l⁢m)$ in the sets of SHs ${Yl⁢m}$, ${Ψlm}$ and ${Φlm}$, the integer index $l$ indicates the spatial scale of the harmonic, with $l=0$ being a constant and larger $l$ indicating progressively finer spatial scales. The second index $m∈{-l,-l+1,…,l}$ provides additional information about the orientation of the harmonic scalar function or vector field. The modes $l=1$ and $l=2$ are particularly useful for characterizing the symmetry of spatial patterns on a spherical surface (Mietke et al., 2019; Scholich et al., 2020): Modes with $l=1$ indicate patterns with a global polar symmetry, whereas modes with $l=2$ represent spatial patterns with a global nematic symmetry. We now exploit these features for a detailed characterization of the symmetry breaking that takes place during cellular rearrangements and to study the properties of the cellular flux in more detail. To this end, we discuss spatial averages (8) $⟨O{⟩}_{s}\left(t\right)=\frac{1}{{A}_{s}}{\int }_{\mathcal{S}}d{A}_{s}\phantom{\rule{thickmathspace}{0ex}}O\left(\mathbf{r},t\right)$ of different real-space observables $O(r,t)$ over the mid-surface $S$. Mode signatures of developmental symmetry breaking To study how different developmental stages and their associated symmetry breaking events are reflected in the mode representation, we first consider the average cell surface density fluctuations (9) ${⟨{\left(\rho -{⟨\rho ⟩}_{s}\right)}^{2}⟩}_{s}=\sum _{l=1}^{{l}_{\text{max}}}\sum _{m=-l}^{l}{\rho }_{lm}^{2}\left(t\right).$ For each mode $l$, the power spectrum $Pρ,l⁢(t)=∑m=-llρl⁢m2⁢(t)$ in Equation 9 provides a rotationally invariant quantity (Çetingül et al., 2012; Schwab et al., 2013) that can effectively serve as an order parameter to characterize the symmetry of cell density patterns on the spherical surface. The dynamics of the density fluctuations given in Equation 9 broken down into contributions $Pρ,l⁢(t)$ from each mode $l≤lmax=4$ is shown in Figure 2B. Several features of this representation are particularly striking and can be directly related to specific developmental stages. First, patterns of cell surface density fluctuations evolve from a dominantly polar symmetry ($l=1$) into density patterns with a prominent nematic symmetry ($l=2$). These mode signatures intuitively reflect the essential symmetry breaking that takes place when cells collectively reorganize from an initially localized cell dome (Figure 1B, 52 min) into an elongated shape that wraps in an open ring-like pattern around the yolk cell (Figure 1B, 760 min). Second, during this transition at around 300 min (9 hpf) (black triangle in Figure 2B), the cell surface density is most homogeneous as fluctuations become minimal for all modes $l$. Interestingly, this time point approximately marks the completion of epiboly, when the different cell layers have fully engulfed the yolk. Finally, although in a less pronounced manner, the power spectrum of the mode $l=4$ also exhibits an increased amplitude towards later times, indicating the formation of structures at finer spatial scales as development progresses. We find that mode signatures of the symmetry breaking and progression through developmental stages are robust (Figure 2—figure supplement 1B, D), illustrating that mode-based analysis can provide a systematic and meaningful characterization of developmental symmetry breaking events. Figure 2 with 4 supplements see all Mode signatures of developmental symmetry breaking and topological defects in cellular flux. Mode signatures of emergent topological defects in cellular flux The vectorial nature of the cell number flux $J⁢(r,t)$ on a spherical surface implies the presence of topological defects (colored circles in Figure 2A, see Materials and methods) (Kamien, 2002). Several recent experimental results pertaining to the self-organization of multicellular systems suggest an important role of such topological defects in organizing morphogenetic events ( Doostmohammadi et al., 2016; Saw et al., 2017; Guillamat et al., 2020; Copenhagen et al., 2020; Meacock et al., 2020; Maroudas-Sacks et al., 2020). We therefore analyze how defects within the cell number flux $J⁢(r,t)$ are dynamically organized during early zebrafish gastrulation and if signatures of defect formation and annihilation are present in the mode representation Equation 5. We first consider the average squared divergence and curl of the cell number flux given by (10a) ${⟨{\left({abla }_{S}\cdot J\right)}^{2}⟩}_{s}=\sum _{l=1}^{{l}_{\text{max}}}\sum _{m=-l}^{m}{\left[\frac{l\left(l+1\right)}{{R}_{s}}{j}_{lm}^{\left(1\right)}\left(t\right)\right]}^{2},$ (10b) ${⟨{\left({abla }_{S}×J\right)}^{2}⟩}_{s}=\sum _{l=1}^{{l}_{\text{max}}}\sum _{m=-l}^{m}{\left[\frac{l\left(l+1\right)}{{R}_{s}}{j}_{lm}^{\left(2\right)}\left(t\right)\right]}^{2},$ which are shown in Figure 2C (top). The two contributions to the collective cellular dynamics – locally compressible, divergent flux quantified by the divergence $∇S⋅J$ and locally incompressible, rotational cell motion characterized by the curl $∇S×J$ – are independently determined by the modes $jl⁢m(1)⁢(t)$ and $jl⁢m(2)⁢(t)$. Therefore, each contribution can be evaluated conveniently and with high accuracy from a representation of $J⁢(r,t)$ in terms of vector SHs. From Figure 2C (top), we see that the most significant divergent flux (blue curve) occurs around 300 min at the transition from epiboly towards the convergence and extension stage. A quantification of the incompressible rotational flux relative to the total cell number flux is shown in Figure 2C (bottom), where we plotted the relative curl amplitude (11) ${S}_{\text{curl}}\left(t\right)=\frac{{\sum }_{l,m}{\left[{j}_{lm}^{\left(2\right)}\left(t\right)\right]}^{2}}{{\sum }_{l,m}{\left[{j}_{l,m}^{\left(1\right)}\left(t\right)\right]}^{2}+{\sum }_ This measure suggests a correlation between incompressible rotational cell motion and the occurrence of topological defects (circles in Figure 2A) in the cell flux $J⁢(r,t)$. The total number of topological defects present at any time point is depicted in Figure 2C (bottom, blue curve). Because the vector-valued flux is defined on a sphere, we observe that the total topological charge always sums to +2 (Kamien, 2002), while additional defect pairs with opposite charge (red +1 and white -1 circles in Figure 2A) can be created, resulting in total defect numbers greater than two (see Figure 2C, bottom). Interestingly, the relative curl amplitude $Scurl$ defined in Equation 11 indicates that increased contributions from incompressible rotational flux are associated with the formation of topological defects in the cell number flux, a feature that is robustly identified by our framework (Figure 2—figure supplement 1A, C, Figure 2—figure supplement 3, Figure 2—figure supplement 4). The appearance of additional defects at the end of epiboly, when the developing embryo begins to extrude more significantly in the radial direction, suggests that topological defects in the 2D projected cellular flux fields could signal the start of the formation of more complex structures in three dimensions. Learning a linear hydrodynamic model of the developmental mode dynamics The results in Figure 2 confirm that a low-dimensional mode representation can capture essential characteristics of developmental symmetry breaking processes. The mode representation therefore provides a natural starting point for the inference of hydrodynamic models from coarse-grained cell-tracking data. For a given time-dependent mode vector $a⁢(t)=[ρl⁢m⁢(t),jl⁢m(1)⁢(t),jl⁢m(2)⁢(t)]⊤$ that contains all modes up to $l=lmax$, the simplest hydrodynamic model corresponds to the linear dynamical equation (12) $\frac{\mathrm{d}a\left(t\right)}{\mathrm{d}t}=M\cdot a\left(t\right),$ where the constant coefficient matrix $M$ encodes the couplings between different modes. Intuitively, Equation 12 aims to describe an experimentally observed density and flux dynamics in terms of a relaxation process, starting from inhomogeneous initial conditions represented by $a⁢(0)$. The mathematical learning problem is then to find a coefficient matrix $M$ such that the linear model Equation 12 holds for the mode vector time series $a⁢(t)$ that was determined from the coarse-graining procedure described in the previous sections. Validation of the learning framework using active Brownian particle dynamics Before applying the combined coarse-graining and inference framework to experimental data, we illustrate and validate the learning approach on synthetic data for which coarse-graining results and hydrodynamic mean-field equations are analytically tractable. To this end, we consider the stochastic dynamics of non-interacting active Brownian particles (ABPs) on the unit sphere of radius $R0=1$ (Sknepnek and Henkes, 2015; Fily et al., 2016; Castro-Villarreal and Sevilla, 2018). Similar to a migrating cell, an ABP at position $x⁢(t)$ moves across the unit sphere at constant speed v[0] in the direction of its fluctuating orientation unit vector $u⁢(t)$. The strength of the orientational Gaussian white noise is characterized by a rotational diffusion constant $Dr$ (Figure 3A, Appendix 3). Learning active Brownian particle (ABP) dynamics on a sphere. This video cannot be played in place because your browser does support HTML5 video. You may still download the video for offline viewing. Coarse-grained dynamics of active Brownian particles on the unit sphere in the low-noise ($Dr=0.5$) and high-noise ($Dr=10$) regime. Compared with conventional passive Brownian motion, self-propulsion of an ABP along its orientation direction $u$ introduces a persistence to the particle’s motion that is reduced as rotational noise $Dr$ is increased. Additionally, the topology of the spherical surface implies that in the low-noise regime, $R0⁢Dr/v0§lt;1$, particles are expected to return to the vicinity of their starting points after a duration $Δ⁢t≈2⁢π⁢R0/v0$. The conjunction of persistent motion and topology then leads to oscillatory dynamics in the positional correlation $⟨x⁢(t)⋅x⁢(0)⟩$ (blue dots in Figure 3B, Appendix 3). Comparing correlations from stochastic ABP simulations in different noise regimes with theoretical predictions (solid lines in Figure 3B) validates our numerical ABP simulation scheme. To generate a test data set for our coarse-graining and inference framework, we simulated non-interacting ABPs in both the low-noise ($R0⁢Dr/v0§lt;1$) and the high-noise ($R0⁢Dr/v0§gt;1$) regime with initial positions drawn from the experimental data shown in Figure 1. Specifically, at each cell position present in the data, we generated 60 particles with random orientation, amounting to approximately $1.2×105$ particles in total, and simulated their dynamics on a unit sphere. The resulting trajectory data were coarse-grained following the procedure outlined in the previous sections, yielding dynamic density fields $ρ⁢(r,t)$ and fluxes $J⁢(r,t)$ (Video 3), together with their mode representations $ρl⁢m⁢(t),jl⁢m(1)⁢(t)$ and $jl⁢m(2)⁢(t)$. In the second ‘learning’ step, we infer a sparse mode coupling matrix $M$ that approximates the dynamics Equation 12 for the dynamical mode vectors $a⁢(t)=[ρl⁢m,jl⁢m(1),jl⁢m(2)]⊤$ obtained from the coarse-grained simulated ABP data. Our inference algorithm combines adjoint techniques (Rackauckas et al., 2021) and a multi-step sequential thresholding approach inspired by the Sparse Identification of Nonlinear Dynamics (SINDy) algorithm introduced by Brunton et al., 2016. The full algorithm is detailed in Appendix 4 and illustrated in the summary flowchart Appendix 4—figure 1. Importantly, we perform the sparse regression using dynamical mode vectors $a⁢(t)$ rescaled by their median absolute deviation (MAD) to compensate for substantial scale variations between different modes. The final output matrix $M$ of this learning algorithm is shown in the right panel of Figure 3C and can be compared against the analytically coarse-grained dynamics of ABPs on curved surfaces (Fily et al., 2016; Castro-Villarreal and Sevilla, 2018). Under suitable closure assumptions (Appendix 3), the mean-field dynamics of ABPs on a unit sphere is given in harmonic mode space by (13a) $\frac{\mathrm{d}{j}_{lm}^{\left(2\right)}}{\mathrm{d}t}=-{D}_{r}{j}_{lm}^{\left(2\right)},$ (13b) $\frac{\mathrm{d}{j}_{lm}^{\left(1\right)}}{\mathrm{d}t}=-\frac{{v}_{0}^{2}}{2{R}_{0}}{\rho }_{lm}-{D}_{r}{j}_{lm}^{\left(1\right)}$ (13c) $\frac{\mathrm{d}{j}_{lm}^{\left(2\right)}}{\mathrm{d}t}=-{D}_{r}{j}_{lm}^{\left(2\right)}$ from which we can read off the mode coupling matrix $M$ shown in the left panel of Figure 3C. A direct comparison between the theoretical and the inferred matrices shows that our framework recovers both the structure and the quantitative values of $M$ with good accuracy. Due to the finite number of ABPs used to determine the coarse-grained fields, we do not expect that the theoretically predicted coupling matrix is recovered perfectly from the data. Instead, some mode couplings suggested by Equation 13a may not be present or modified in the particular realization of the ABP dynamics that was coarse-grained. Indeed, direct simulation of the learned model projected in real space (Figure 3E) reveals a density and flux dynamics that agrees very well with the dynamics of the the coarse-grained input data (Figure 3D). Altogether, these results demonstrate that the proposed inference framework enables us to to faithfully recover expected mean-field dynamics from coarse-grained fields of noisy particle-based data. Learning developmental mode dynamics from experimental data The same inference framework can now be directly applied to the coarse-grained experimental zebrafish embryo data shown in Figure 1C and D, yielding a sparse coefficient matrix $M$ (Figure 4A and B) that encodes the dynamics of the developmental mode vector $a⁢(t)=[ρl⁢m⁢(t),jl⁢m(1)⁢(t),jl⁢m(2)⁢(t)]⊤$ according to Equation 12. The inferred coupling between the time derivative of density modes $ρl⁢m$ and flux modes $jl⁢m(1)$ faithfully recovers mass conservation (Figure 4C; see Equation 6). Overall, the learned matrix $M$ has 395 non-zero elements, effectively providing further compression of the experimental data, which required 2,250 spatio-temporal mode coefficients collected in $a^n$ (see Equation 7) for its representation. Using the mode vector $a(t=0)$ of the first experimental time point as the initial condition, the inferred minimal model Equation 12 with $M$ shown in (Figure 4A and B) faithfully recovers both the mode and real-space dynamics seen in the coarse-grained fields of the experimental input data (Figure 4E–G, Video 4). Model learning for experimental data of collective cell motion during early zebrafish development. This video cannot be played in place because your browser does support HTML5 video. You may still download the video for offline viewing. Comparison of dynamics of the experimental and learned density $ρ⁢(r,t)$ (colormap) and flux fields $J(r,t)$ (streamlines) represented in a Mollweide projection. It is instructive to analyze the inferred matrix $M$ and the linear model it encodes in more detail. Comparing the MAD-rescaled matrix (see Appendix 4) learned for the experimental zebrafish data ( Figure 4B) with the non-dimensionalized matrix learned for the active Brownian particle dynamics (Figure 3C), we find similar patterns of prominent diagonal and block-diagonal couplings. Consistent with the analysis of single cell trajectories (Shah et al., 2019), this suggests that a random, but persistent movement of cells akin to ABPs moving on a sphere partially contributes to the early gastrulation process in zebrafish. This is complemented in the minimal model of the experimental dynamics by significant off-diagonal contributions (Figure 4B), which are absent in the non-interacting ABP model. Such off-diagonal contributions represent effective linear approximations of cell-cell interactions, environmental influences or other external stimuli reflected in the experimental time-series data. Ultimately, such contributions to the mode coupling matrix $M$ help realize the symmetry breaking process observed in the underlying experimental data (Figure 2). The inferred mode coupling matrix $M$ shown in Figure 4B together with Equation 12 provides a highly robust minimal model. Specifically, despite being linear, it is numerically stable over a period approximately four times as long as the input data from which the matrix $M$ was learned. Furthermore, simulations with modified initial conditions (see Figure 4—figure supplement 1) still exhibit a characteristic symmetry breaking and lead to the emergence of density and flux patterns similar to those seen in Figure 4F and G. For example, simulating Equation 12 using the initial condition of a different experimental data set (Figure 2—figure supplement 1) leads to final patterns with the same symmetry as in the original training data, further corroborating that the observed symmetry breaking is directly encoded in the interactions represented by the matrix $M$. A similar robustness is observed under moderate perturbations of the initial condition, such as a rotation of initial cell density patterns relative to the coordinate system in which $M$ was inferred, or a local depletion of the initial density, emulating a partial removal of cells as experimentally realized in Morita et al., 2017. Taken together, these numerical experiments demonstrate that the inferred mode coupling matrix $M$ meaningfully captures the dynamics and interactions of cells that facilitate the symmetry breaking observed during early zebrafish development. Green’s function representation of learned models in real space To characterize the inferred spatial interactions in more detail, we can analyze the real-space representation of the learned mode coupling matrix $M$. While the density dynamics represented by $M$ (the first row in Figure 4AB) simply reflects mass conservation Equation 1 in real space, the dynamics of the flux (the second and third row in Figure 4A and B) corresponds in real space to the integral equation (Appendix 4) (14) $\frac{\partial }{\partial t}J\left(r,t\right)=\int \mathrm{d}{\mathrm{\Omega }}^{\prime }\left[{m}^{\rho }\left(r,{r}^{\prime }\right)\rho \left({r}^{\prime },t\right)+M{}^{J}\left(r,{r}^{\ prime }\right)\cdot J\left({r}^{\prime },t\right)\right],$ where $d⁢Ω′=sin⁡θ′⁢d⁢θ′⁢d⁢ϕ′$ is the spherical surface area element. The vector-valued kernel $mρ⁢(r,r′)$ in Equation 14 connects the distribution of cell density $ρ$ across the surface to dynamic changes of the flux $J$ at a given point $r$. Similarly, the matrix-valued kernel $MJ⁢(r,r′)$ describes how the distribution of cell fluxes at $r′$ affects temporal changes of the flux at $r$. To analyze the spatial range of interactions between points $r$ and $r′$, we use the fact that the matrix-valued kernel $MJ⁢(r,r′)$ has only one non-zero eigenvalue (Appendix 4—figure 2). Consequently, the trace $tr⁢(MJ)$ serves as a proxy for the distance-dependent interaction strength mediated by $MJ$. Averages of $tr⁢(MJ)$ over point-pairs with the same angular distance $ω=acos⁢ (r⋅r′)$ are shown for the ABP dynamics and for the minimal model inferred from experimental data in Figure 4D. Note that to make the models amenable to comparison, we compute $MJ⁢(r,r′)$ from the known mean-field model of ABPs (Equation 13a) using the same finite number of modes as used to represent the ABP and the zebrafish data ($lmax=4$). In theory, one expects for the ABP dynamics a highly localized, homogeneous kernel $tr⁢(MJ)∼δ⁢(r-r′)$, so that an exact spectral representation would require an infinite number of modes (see Appendix 4). In practice, using a finite number of modes leads to a wider kernel range (Figure 4D ’ABP theory’) and introduces an apparent spatial inhomogeneity, as indicated by the non-zero standard deviation of $tr⁢(MJ)$ at fixed distance $ω$ (blue shades). Both the quantitative profile of $tr⁢(MJ)$ and its variation are successfully recovered by applying the inference framework to stochastic simulations of ABPs (Figure 4D ’ABP simulation’) where $MJ⁢(r,r′)$ was computed from the learned mode coupling matrix $M$ shown in Figure 3C. For the inferred minimal model of the cell dynamics (Figure 4D ’Zebrafish experiment’), we find a similar short-ranged flux-flux coupling mediated by $MJ$. However, the increased variability of $tr⁢(MJ)$ at fixed distances $ω$ indicates more substantial spatial inhomogeneities of the corresponding interactions. These inhomogeneities are absent in a non-interacting system of ABPs and represent an interpretable real-space signature of the symmetry-breaking mechanisms built into the underlying mode coupling matrix $M$. A similar analysis can be performed for the kernel $mρ⁢(r,r′)$ that couples the density at position $r′$ to dynamics of fluxes at position $r$ (see Equation 14), where we average the magnitude $|mρ⁢ (r,r′)|$ over pairs ($r$, $r′$) with the same angular distance $ω$ (Figure 4D insets). Using a finite number of modes to compute this kernel in the different scenarios again introduces apparent spatial inhomogeneities in all cases. Additionally, all kernel profiles exhibit a distinct maximum at short range, indicating a coupling between density gradients and the flux dynamics that emerges microscopically from a persistent ABP and cell motion (see Appendix 3 and 4) – an observations that is consistent with the similar block-diagonal structure of both inferred matrices $M$ (compare Figures 3C and 4B). In conclusion, the real-space analysis and comparison of inferred interaction kernels further highlights potential ABP-like contributions to the collective cellular organization during early zebrafish development and reveals an effectively non-local coupling between density and flux dynamics. The latter could result, for example, from unresolved fast-evolving morphogens (Hannezo and Heisenberg, 2019), through mechanical interactions with the surrounding material (Münster et al., 2019) or due to other relevant degrees of freedom that are not explicitly captured in this linear hydrodynamic model. More generally, a real-space representation of kernels provides an alternative interpretable way to study the interactions and symmetry-breaking mechanisms encoded by models directly learned in mode space. Leveraging a sparse mode representation of collective cellular dynamics on a curved surface, we have presented a learning framework that translates single-cell trajectories into quantitative hydrodynamic models. This work complements traditional approaches to find quantitative continuum models of complex multicellular processes (Etournay et al., 2015; Hannezo et al., 2015; Morita et al., 2017; Streichan et al., 2018; Münster et al., 2019) that match problem-specific constitutive relations of active materials in real-space with experimental observations. We have demonstrated here that length scales and symmetries associated with a mode representation can directly inform about the character of symmetry breaking transitions and topological features of collective cellular motion even before a model is specified. The successful applications to synthetic ABP simulation data and experimental zebrafish embryo data show that model learning in mode space provides a promising and computationally feasible approach to infer quantitative interpretable models in complex geometries. The learned linear minimal model for cell migration during early zebrafish morphogenesis quantitatively recapitulates the spatiotemporal dynamics of a complex developmental process (Figure 4F and G), and highlights similarities between collective cell migration and analytically tractable ABP dynamics on a curved surface. An extension to nonlinear mode-coupling models or an integration of additional, experimentally measured degrees of freedom, such as concentration fields of morphogens involved in mechanochemical feedbacks (Hannezo and Heisenberg, 2019), is in principle straightforward by including nonlinear terms in Equation 12. Furthermore, the above framework could be generalized to describe the dynamics within a spherical shell of finite height by complementing the surface vector SHs used in this work by their radial counterpart (Barrera et al., 1985). To provide a concrete example, we focused here on applying the model learning framework to single-cell tracking data of early zebrafish morphogenesis. However, the essentially spherical organization of cells during gastrulation observed in zebrafish is shared by many species whose early development occurs through a similar discoidal cleavage (Gilbert and Barresi, 2016), and the framework introduced here is directly applicable once tracking data becomes available for these systems. More generally, as novel imaging technologies are being developed (Keller et al., 2010; Royer et al., 2016; Shah et al., 2019), we expect that even larger and more detailed imaging data will further facilitate the exploration of finer scales and length-scale bridging processes (Lenne et al., 2021) through learning approaches that directly built on mode-based data representations. We obtained two single-cell tracking data sets from the experiments described in Shah et al., 2019. These data consist of the Cartesian coordinates of each cell together with a tracking ID. Some of the data is accessible at https://idr.openmicroscopy.org with ID number idr0068. We first denoised each cell trajectory using MATLAB’s (Matlab, 2019) wavelet denoiser function wdenoise, and centered the cloud of cells by least-squares fitting a spherical surface through it and shifting the origin at each time to coincide with the center of this sphere. We then computed the velocity of each cell by using Tikhonov-regularized differentiation as described in Knowles and Renka, 2014 and implemented in the MATLAB third-party module rdiff (Wagner, 2020). After examination of the cells’ velocity distribution, we further removed outlier cells whose speed is in the 95th percentile or above and verified that this operation only removes aberrant cells. Finally, we rotated the data to align the animal pole of the embryo with the $z$-axis, as determined by the direction of the center of mass of the initial cell distribution. The resulting single cell data are shown as point clouds in Figure 1B and in Video 1. Topological defect tracking Request a detailed protocol We have developed a defect tracker that identifies topological defects in vector fields tangent to a spherical surface via integration along suitable Burger circuits. The corresponding software is available at (https://github.com/NicoRomeo/surf-vec-defects; Romeo, 2022, copy archived at swh:1:rev:6dc742c376b0d085e19ece65f932ac6935342aba). Consistent coarse-graining on curved surfaces We describe the derivation of self-consistent coarse-graining kernels that are used in the main text to convert single cell information into a continuous density field and its associated fluxes on a spherical surface. We first motivate this problem for a flat surface and then proceed with a detailed derivation for the case of a spherical surface. Kernel consistency in Euclidean space It is instructive to first consider a set of particles $α=1,2,3,…$ at positions $Xα⁢(t)$ moving with velocities $Vα(t)=dXα/dt$, where capitalized vectors indicate position and velocity in Euclidean space, e.g. particles move on a flat surface or within some three-dimensional volume. A coarse-grained density $ρ⁢(X,t)$ and a mass flux $J⁢(X,t)$ can be defined from this microscopic information by (15a) $\begin{array}{ll}\rho \left(\mathbf{X},t\right)& =\sum _{\alpha }{K}_{e}\left[\mathbf{X},{\mathbf{X}}_{\alpha }\left(t\right)\right],\end{array}$ (15b) $\begin{array}{ll}\mathbf{J}\left(\mathbf{X},t\right)& =\sum _{\alpha }{\mathcal{K}}_{e}\left[\mathbf{X},{\mathbf{X}}_{\alpha }\left(t\right)\right]\cdot {\mathbf{V}}_{\alpha }\left(t\right),\ where $Ke⁢(X,X′)$ and $Ke⁢(X,X′)$ represent a scalar-valued and a matrix-valued kernel function, respectively. At the same time, in a system with a constant number of particles, mass conservation implies, in general, (16) ${\mathrm{\partial }}_{t}\rho \left(\mathbf{X},t\right)+{\mathrm{abla }}_{\mathbf{X}}\cdot \mathbf{J}\left(\mathbf{X},t\right)=0,$ relating the density $ρ⁢(X,t)$ and the mass flux $J⁢(X,t)$ of particles. Using the coarse-graining prescriptions Equation 15a and Equation 15b directly in Equation 16 and assuming the resulting relation must hold for any set of particle trajectories, one finds a general kernel consistency relation (17) ${abla }_{{X}^{\prime }}{K}_{e}\left(X,{X}^{\prime }\right)+{abla }_{X}\cdot {K}_{e}\left(X,{X}^{\prime }\right)=0.$ This condition is automatically satisfied for any translationally invariant and isotropic pair of kernels $Ke⁢(X,X′)=Ke⁢(X-X′)$ and $Ke⁢(X,X′)=Ke⁢(X-X′)⁢I$, where $I$ is the unit matrix. Coarse-graining with such kernels is frequently employed in practice: Positions and velocities can be, for example, simply convolved with a Gaussian function of mean zero (Supekar et al., 2021). Kernel consistency on a curved surface For a surface parameterized by $r⁢(s1,s2)∈ℝ3$ with generalized coordinates $s1,s2$, two tangential basis vectors are defined by $ei=∂⁡r/∂⁡si$ ($i=1,2$). Partial derivatives are, in the following, denoted $∂i:=∂/∂⁡si$. The metric tensor is given by $gi⁢j=ei⋅ej$. The mean curvature is defined by $H⁢n=-∇i⁡ei/2$, where $n=e1×e2/|e1×e2|$ denotes the unit surface normal and the Einstein summation convention is used. The covariant form of mass conservation Equation 1 (main text) on a curved surface reads (18) ${\partial }_{t}\rho +{abla }_{i}{J}^{i}=0,$ with $Ji=ei⋅J$ and $∇i$ denotes the covariant derivative. In general, we are interested in describing an effective dynamics for cell positions and velocities that are projected onto a common reference sphere of radius $Rs$. Such a description can be found by first formulating the coarse-graining approach for a unit sphere, on which particle positions and velocities are fully determined by angular coordinates and corresponding angular velocities, and finally rescaling the density and flux fields by suitable factors of $Rs$. The corresponding coarse-graining Equation 2b (main text) of in-plane angular velocities $v¯α⁢(t)=vα⁢(t)/|rα⁢(t)|$ for particles $α$ on a unit sphere reads covariantly (19) ${J}^{i}=\sum _{\alpha }K{\left(r,{r}_{\alpha }\right)}_{{j}^{\prime }}^{i}{\overline{v}}_{\alpha }^{{j}^{\prime }},$ where $v¯αi=ei⋅v¯α$ and we drop the dependence on time to simplify the notation. The two-point kernel tensor $K⁢(r,r′)i⁢j′$ (a ‘bitensor’) is evaluated in the tangent space of $r$ for its first index and in the tangent space of $r′$ at the second, primed index (Appendix 1—figure 1). Mass conservation on a curved surface, Equation 18, together with the coarse-graining prescriptions Equation 2a (main text) and Equation 19 then implies a covariant kernel consistency relation (20) ${\partial }_{{j}^{\prime }}K\left(r,{r}^{\prime }\right)+{abla }_{i}K{\left(r,{r}^{\prime }\right)}_{{j}^{\prime }}^{i}=0.$ Illustration of the action of the coarse-graining tensor kernel $K⁢(r,r′)i⁢j′$Equation 19. Solving the kernel consistency relation on a sphere We solve Equation 20 in the following on the unit sphere, such that $r=n$ corresponds to the surface normal. The final result can simply be rescaled to any spherical surface of radius $Rs$. Furthermore, we note that the parameter (21) $x=r\cdot {r}^{\prime }$ provides a measure for the great circle distance $ω⁢(x)=acos⁢(x)$ between two points on a sphere. Hence, we consider an ansatz for the kernels in Equation 20 of the form (22a) $K\left(r,{r}^{\prime }\right)=f\left(x\right)$ (22b) $K{\left(r,{r}^{\prime }\right)}_{i{j}^{\prime }}=g\left(x\right){e}_{i}\cdot {e}_{{j}^{\prime }},$ with two unknown scalar functions $f⁢(x)$ and $g⁢(x)$. The relevant derivatives of the ansatz Equation 22a and Equation 22b can readily be evaluated to (23a) ${\partial }_{{j}^{\prime }}K\left(r,{r}^{\prime }\right)=\frac{\mathrm{d}f\left(x\right)}{\mathrm{d}x}r\cdot {e}_{{j}^{\prime }}$ (23b) ${abla }_{i}K{\left(r,{r}^{\prime }\right)}_{{j}^{\prime }}^{i}=\frac{\mathrm{d}g\left(x\right)}{\mathrm{d}x}{r}^{\prime }\cdot \left({e}_{i}\otimes {e}^{i}\right)\cdot {e}_{{j}^{\prime }}-2g\ left(x\right)r\cdot {e}_{{j}^{\prime }}.$ Here, ⊗ denotes a dyadic product and we use $∂i⁡x=r′⋅ei$ and $∂i′⁡x=r⋅ei′$, which follows from Equation 21, as well as $∇i⁡ei=-2⁢r$ in the second equation, which holds on a unit sphere and follows from the definition of the mean curvature. We then use the expansion of the identity matrix in $ℝ3$ on the spherical basis $I=ei⊗ei+n⊗n$, such that in our case with $r=n$ we have $ei⊗ei=I-r⊗r$. Hence, Equation 23b becomes (24) ${abla }_{i}K{\left(r,{r}^{\prime }\right)}_{{j}^{\prime }}^{i}=-\frac{\mathrm{d}g\left(x\right)}{\mathrm{d}x}\left({r}^{\prime }\cdot r\right)\left(r\cdot {e}_{{j}^{\prime }}\right)-2g\left(x\ right)r\cdot {e}_{{j}^{\prime }}.$ Using Equation 24 and Equation 23a in the kernel consistency relation Equation 20 and dividing by $r⋅ej′$ (at $r=r′$, for which $r⋅ej′=0$, Equation 20 is obeyed for any $f⁢(x)$, $g⁢(x)$), we find that the scalar functions in the kernel ansatz Equation 22a and Equation 22b have to obey Hence, the general covariant consistency relation Equation 20 implies for the kernel ansatz Equation 22a and Equation 22b that the weighting functions $g⁢(x)$ and $f⁢(x)$ must be related by (25) $g\left(x\right)=\frac{1}{{x}^{2}}{\int }_{0}^{x}duu\frac{\mathrm{d}f\left(u\right)}{\mathrm{d}u}.$ Kernel functions with compact support In the last step, we determine a family of kernel functions $g⁢(x)$ and $f⁢(x)$ defined on the interval $x∈[-1,1]$ that satisfy Equation 25, along with the requirements: 1. $f⁢(x)$and $g⁢(x)$ must be $C1$ regular on$[-1,1]$ 2. $f≥0$on$[-1,1]$ 3. $f$is normalized to one on the unit sphere. Recalling $x=cos⁡[ω⁢(r,r′)]$ with angular distance $ω$ between $r$ and $r′$, a family of functions fulfilling these conditions is given by (26a) ${f}_{k}\left(\omega \right)=\frac{k+1}{2\pi }\left(\mathrm{cos}\omega {\right)}^{k}{\mathbf{1}}_{\left\{\mathrm{cos}\omega >0\right\}}$ (26b) ${g}_{k}\left(\omega \right)=\frac{k}{2\pi }\left(\mathrm{cos}\omega {\right)}^{k-1}{\mathbf{1}}_{\left\{\mathrm{cos}\omega >0\right\}},$ where $1{cos⁡ω>0}$ is an indicator function that is one if $cos⁡ω§gt;0$ and vanishes otherwise (Appendix 1—figure 2). In this work, we have chosen the kernels Equation 22a and Equation 22b with $f= fk$ and $g=gk$ for $k=6$. For these kernels derived here, densities $ρ⁢(r,t)$ and associated fluxes $J⁢(r,t)$ that are coarse-grained on a unit sphere can be converted into effective densities and fluxes on a spherical surface of radius $Rs$ through the rescaling $ρ→ρ/Rs2$ and $J→J/Rs$. Equivalently, rescaled kernels $K⁢(r,r′)→K⁢(r,r′)/Rs2$ and $K⁢(r,r′)i⁢j′→K⁢(r,r′)i⁢j′/Rs$ can be used directly, as was done in Equation 2a and Equation 2b of the main text to generate the data shown in Figure 1 (main text). Family of kernel functions $fk⁢(ω)$ and $gk⁢(ω)$ given in Equation 26a. Spatio-temporal mode decomposition In this section, we provide explicit expressions for the scalar and spherical harmonic basis functions (‘spatial modes’), as well as for the Chebyshev basis functions (‘temporal modes’) used in this work. Additionally, we describe a systematic approach to determine the minimal number of modes needed to describe the coarse-grained data, while preserving a high level of accuracy in the Spatial basis: spherical harmonics In this work, we use the real spherical harmonics defined in spherical coordinates $(θ,ϕ)$ by Arfken et al., 2013 as (27) ${Y}_{lm}\left(\theta ,\varphi \right)=\sqrt{\frac{2l+1}{4\pi }\frac{\left(l-|m|\right)!}{\left(l+|m|\right)!}}{P}_{l}^{|m|}\left(\mathrm{cos}\theta \right){N}_{m}\left(\varphi \right)\text{ },$ where $Pl|m|⁢(x)$ is the associated Legendre polynomial of degree $l$ and order $|m|$, and (28) ${N}_{m}\left(\varphi \right)=\left\{\begin{array}{cc}\hfill \sqrt{2}\mathrm{cos}\left(m\varphi \right)\hfill & \hfill \text{if}m§gt;0\hfill \\ \hfill 1\hfill & \hfill \text{if}m=0\hfill \\ \ hfill \sqrt{2}\mathrm{sin}\left(|m|\varphi \right)\hfill & \hfill \text{if}m§lt;0\hfill \end{array}.$ Vector spherical harmonics can be defined and expressed as vector fields in 3D or covariantly as (Sandberg, 1978; Mietke et al., 2019) (29a) ${\Psi }_{lm}={abla }_{S}{Y}_{lm}⇔{\mathrm{\Psi }}_{\left(lm\right)}^{i}={g}^{ij}{\partial }_{j}{Y}_{lm}$ (29b) ${\Phi }_{lm}=\stackrel{^}{r}×{\Psi }_{lm}⇔{\mathrm{\Phi }}_{\left(lm\right)}^{i}={ϵ}^{ji}{\partial }_{j}{Y}_{lm}$ where $∇S=eθ⁢∂θ+eϕ⁢sin-1⁡θ⁢∂ϕ$ denotes the gradient operator an a unit sphere, $ϵi⁢j$ is the covariant Levi-Civita tensor, and $gi⁢j$ the metric tensor. Scalar harmonics $Yl⁢m$ and either vector harmonic $Λlm∈{Ψlm,Φlm}$ are orthogonal: (30a) $\int d\mathrm{\Omega }{Y}_{lm}{Y}_{{l}^{\prime }{m}^{\prime }}={\delta }_{l{l}^{\prime }}{\delta }_{m{m}^{\prime }}$ (30b) $\int \mathrm{d}\mathrm{\Omega }\phantom{\rule{thinmathspace}{0ex}}{\mathbf{\Lambda }}_{lm}\cdot {\mathbf{\Lambda }}_{{l}^{\mathrm{\prime }}{m}^{\mathrm{\prime }}}=l\left(l+1\right){\delta }_{l {l}^{\mathrm{\prime }}}{\delta }_{m{m}^{\mathrm{\prime }}},$ where $dΩ=sin⁡θdθdϕ$. The increasing complexity of patterns and accuracy of reconstruction with larger $l$ is illustrated in Appendix 2—figure 1 and Video 2. Sequentially adding vector spherical harmonics $Ψl⁢m$ and $Φl⁢m$ – equivalent to increasing $lmax$ in Equation 5 – resolves increasing levels of details present in experimental flux fields ("Data"). Temporal basis: Chebyshev polynomials Chebyshev polynomials of the first kind $Tn$ are defined by Arfken et al., 2013. (31) ${T}_{n}\left(\mathrm{cos}x\right)=\mathrm{cos}\left(nx\right).$ Chebyshev polynomials form an orthogonal basis of continuous functions on the interval $[-1,1]$, such that an expansion (32) $f\left(t\right)=\sum _{n=0}^{{n}_{\text{max}}}{c}_{n}{T}_{n}\left(t\right)$ uniformly converges as $nmax→∞$ (Driscoll et al., 2014). This representation also allows computing derivatives spectrally from (33) ${f}^{\mathrm{\prime }}\left(t\right)=\sum _{n=0}^{{n}_{\text{max}}}{c}_{n}{T}_{n}^{\mathrm{\prime }}\left(t\right).$ Information loss through coarse-graining Coarse-graining microscopic data into smooth fields is an irreversible operation, during which some of the original particle information is irretrievably lost. The choice of coarse-graining scale is thus dictated by a trade-off between smoothness and information content - choosing larger coarse-graining scales leads to smoother fields but blurs finer scale structures which may be of interest. To inform our choice of coarse-graining scale, we quantify the loss of information incurred by the coarse-graining operation. The measure we introduce to quantify information loss is based on the the well-known relationship between the smoothness of functions in real space and Fourier space (Stein and Shakarchi, 2011): A smooth function in real space should have a peaked, quickly decaying spectrum in Fourier space while a collection of point-like objects such as delta functions should have a uniform non-decaying spectrum. Specifically, we describe a uniformly sampled field as a $M×N$ matrix with components being the field values $Xi,j=X⁢(θi,ϕj)$. In our case, $Xi,j$ represents either the density field $ρ$ or any of the Cartesian components of the flux vector field $J$ at a given time point. We find the complex discrete Fourier spectrum $X^i,j$ of this matrix using the two-dimensional fast Fourier transform. We then calculate the power spectral density (PSD) of the Fourier spectrum as $Ri,j=|X^i,j|2$ and interpret the normalized PSD. ${P}_{i,j}=\frac{{R}_{i,j}}{{\sum }_{a,b}{R}_{a,b}}$ as a discrete probability distribution. The spectral entropy $S$ characterizing the information content of the field $X$ is then defined by (34) $S\left(X\right)=-\frac{1}{{\mathrm{log}}_{2}NM}\sum _{i,j}{P}_{i,j}{\mathrm{log}}_{2}{P}_{i,j}.$ Smooth fields are sharply peaked in Fourier space and have a low spectral entropy, whereas fields that resolve discrete single particle information are rather flat in Fourier space and have a large spectral entropy. The difference in entropy between particle data and smoothed fields then measures the information eliminated by the coarse-graining procedure. If we additionally normalize by the entropy of the spectral entropy $S0⁢(X)$ of the raw particle data, we finally obtain a relative measure of the information that is lost in the coarse-graining process. In general, a measure as given in Equation 34 can be defined for any transform with the property that smoothness in real space leads to a fast decaying spectrum in transform space. We compute the spectral entropy of density and flux component fields at a representative time point and for varying coarse-graining length scales (Appendix 2—figure 2). Specifically, we coarse-grain density and flux through the procedure described in the main text and in Appendix 1 for different values of the kernel parameter $k$ (see Equation 26a). Large values of $k$ correspond to small coarse-graining length scales, with the effective half-width at half-maximum (HWHM) of the kernels given in Equation 22a and Equation 22b with weight functions Equation 26a and Equation 26b scaling as HWHM $=arccos⁡(2-1/k)$. Normalized spectral entropies $S⁢(X)/S0⁢(X)$ with $X∈{ρ,J}$ are then computed using Equation 34. For the flux field, we define $S⁢(J):=S⁢(Jx)+S⁢(Jy)+S⁢(Jz)$ ("Flux sum" in Appendix 2—figure 2) and interpret the sum of these three contributions ("Flux x", "Flux y", "Flux z" in Appendix 2—figure 2) as the total information contained in the flux field. We find that the spectral entropies of all fields show similar features. In particular, an increasing coarse-graining width first results in a sharp loss of information as individual particle positions are blurred, followed by less steep information loss as continuous fields progressively lose details of finer structures. In this work, we use an intermediate value of the coarse-graining parameter $k=6$ (yellow data in Appendix 2—figure 2). Normalized spectral entropy as a function of the coarse-graining kernel width (top) computed for density $ρ$ and flux field $J$ using Equation 34. Optimal compression in space and time Spectral representations are exact in the limit of an infinite number of modes. In practice, we choose a maximal harmonic mode number $lmax$ and maximal Chebyshev mode number $nmax$. A too large value of $lmax$ and $nmax$ provides little compression benefit, while too small values suffer accuracy penalties. Hence, there is a compression-accuracy trade-off that we seek to optimize. To evaluate the trade-off quantitatively, we define a heuristic compression metric $C$ by (35) $1/C=\frac{{n}_{\text{max}}}{{N}_{t}}+\frac{{\left({l}_{\text{max}}+1\right)}^{2}}{{N}_{s}},$ where $Nt$ is the number of sampled time steps and $Ns$ is the number of spatial grid points used for coarse-graining. Larger values of $C$ correspond to a higher compression factor. To define accuracy metrics, we consider the norm $‖f{‖}^{2}=\sum _{i=1}^{{N}_{t}}f\left({t}_{i}{\right)}^{2}$ where the sum runs over $Nt$ regularly sampled time points t[i]. We denote a particular mode representation ${ρ~l⁢m⁢(t),j~l⁢m(1)⁢(t),j~l⁢m(2)⁢(t)}$ of the data that was coarse-grained via Equation 2a and Equation 2b (main text) for $l=0,…,lmaxref=20$ as the ‘uncompressed’ reference. A measure to characterize the accuracy of a mode-truncated ‘compressed’ data representation is then given by a relative average mode reconstruction error (36) ${E}_{\text{modes}}\left({n}_{\text{max}},{l}_{\text{max}}\right)=\frac{1}{2\left({l}_{\text{max}}^{\text{ref}}+1{\right)}^{2}}\sum _{l=0}^{{l}_{\text{max}}}\sum _{m=-l}^{m=l}{\left(\frac{{‖{\ rho }_{lm}-{\stackrel{~}{\rho }}_{lm}‖}^{2}}{‖{\stackrel{~}{\rho }}_{lm}{‖}^{2}}+\frac{{‖{j}_{lm}^{\left(2\right)}-{\stackrel{~}{j}}_{lm}^{\left(2\right)}‖}^{2}}{{‖{\stackrel{~}{j}}_{lm}^{\left(2\ This measure compares the compressed mode representation ${ρl⁢m⁢(t),jl⁢m(2)⁢(t)}$, truncated at maximal Chebychev mode number $nmax$ (temporal representation Equation 32, Appendix 2) and maximal harmonic mode number $lmax$ (spatial representation, Equations 4; 5, main text) with the reference modes ${ρ~l⁢m⁢(t),j~l⁢m(2)⁢(t)}$. To find a compromise between accuracy, characterized by $Emodes⁢ (nmax,lmax)$, and compression $C$ defined in Equation 35, the aim is to find a pair $(nmax,lmax)$ on the Pareto front (Jin and Sendhoff, 2008) of $Emodes$ vs. $1/C$ (red dots in Appendix 2—figure 3). Relative average mode reconstruction error $Emodes⁢(nmax,lmax)$Equation 36. Note that the modes $j~l⁢m(1)⁢(t)$ and $jl⁢m(1)⁢(t)$ are so far omitted from this analysis, because the latter are in practice found directly from density modes via Equation 6 (main text). However, taking temporal derivatives of $ρl⁢m⁢(t)$ using Equation 33 to determine $jl⁢m(1)⁢(t)$ introduces undesirable oscillations for too large Chebychev cut-offs $nmax$. This implies an additional trade-off between the need for accuracy (higher $nmax$) and stability (lower $nmax$). In practice, we wish to find values of $(nmax,lmax)$ such that relative amplitudes of pairs $(j~l⁢m(1),j~l⁢m(2))$ and $(jl⁢m(1),jl⁢m(2))$ are preserved by the compression. This can be achieved by comparing the relative curl amplitude ${S}_{\text{curl}}\left(t\right)=\frac{{\sum }_{lm}{\left[{j}_{lm}^{\left(2\right)}\left(t\right)\right]}^{2}}{{\sum }_{lm}{\left[{j}_{lm}^{\left(1\right)}\left(t\right)\right]}^{2}+{\left[{j}_{lm}^ to the analog quantity $S~curl⁢(t)$ computed from the reference modes ${j~l⁢m(1)⁢(t),j~l⁢m(2)⁢(t)}$ and analyzing the curl reconstruction error (37) ${E}_{\text{curl}}=\frac{\parallel {S}_{\text{curl}}-{\stackrel{~}{S}}_{\text{curl}}\parallel }{\parallel {\stackrel{~}{S}}_{\text{curl}}\parallel }$ as a function of $nmax$ and $lmax$ (Appendix 2—figure 4). From this, we find a region of low error around $lmax=4,nmax=30$, which also is on the Pareto front of the accuracy vs. compression trade-off (orange circles in Appendix 2—figures 3 and 4) and represents the final values used throughout this work. $Scurl$reconstruction error landscape (log scale) as a function of $lmax$ and $nmax$. Active Brownian particles on the sphere In this section, we describe the stochastic dynamics of non-interacting, active Brownian particles (ABPs) (Romanczuk et al., 2012) on curved surfaces and derive analytically coarse-grained mean-field equations, as well as a kernel representation of ABP dynamics. These results are used in the main text to validate our coarse-graining and inference framework. We consider active Brownian particles at position $x∈ℝ3$ that move with speed v[0] on the surface of a unit sphere (radius $R0=1$) in the direction of their unit orientation vector $u∈ℝ3$. Since $|x| =1$ at all times, we can interpret v[0] as the particle’s angular speed on the unit sphere. The orientation vector is at all times tangential to the surface, but is subject to random in-plane fluctuations characterized by a rotational diffusion coefficient $Dr$. The corresponding dynamics of $x⁢(t)$ and $u⁢(t)$ is given by the stochastic differential equations (in units $R0=1$) (38a) $\mathrm{d}x={v}_{0}u\mathrm{d}t$ (38b) $\mathrm{d}u=-{v}_{0}x\mathrm{d}t+\left(x×u\right)\sqrt{2{D}_{r}}\circ \mathrm{d}\xi ,$ where the stochastic differential Equation 38b is interpreted in the Stratonovich sense, as denoted by the symbol "o" (Braumann, 2007). It follows from Equation 38a that $x⁢(t)$ and $u⁢(t)$ are normalized at all times. In the absence of rotational diffusion ($Dr=0$), the vectors $x$ and $u$ rotate over time by an angle $v0⁢t$ around the axis $u×x$. Consequently, particle trajectories in the absence of noise trace out great circles in the plane defined by $(u×x)$. Spatial correlation of APBs on a sphere $C⁢(t)=⟨x⁢(t)⋅x⁢(0)⟩$To illustrate how ABPs on a sphere differ from ABPs in Euclidean space, we study first the correlation function , where the angled brackets denote a Gaussian white-noise average. To this end, we rewrite the ABP dynamics Equation 38a in their equivalent Itô form given by (39a) $\mathrm{d}x={v}_{0}u\mathrm{d}t$ (39b) $\mathrm{d}u=-\left({v}_{0}x+{D}_{r}u\right)\mathrm{d}t+\sqrt{2{D}_{r}}\left(x×u\right)\mathrm{d}\xi .$ In the Itô formulation any smooth function $f⁢(x,u)$ obeys $⟨f⁢(x,u)⁢d⁢ξ⟩=0$, such that (Winkler et al., 2015) $\frac{\mathrm{d}}{\mathrm{d}t}⟨x\left(t\right)\cdot x\left(0\right)⟩={v}_{0}⟨u\left(t\right)\cdot x\left(0\right)⟩$ $\frac{\mathrm{d}}{\mathrm{d}t}⟨u\left(t\right)\cdot x\left(0\right)⟩=-{v}_{0}⟨x\left(t\right)\cdot x\left(0\right)⟩-{D}_{r}⟨u\left(t\right)\cdot x\left(0\right)⟩,$ which yields a damped harmonic oscillator equation for the correlation function (40) $\frac{{\mathrm{d}}^{2}}{\mathrm{d}{t}^{2}}C\left(t\right)+{D}_{r}\frac{\mathrm{d}}{\mathrm{d}t}C\left(t\right)+{v}_{0}^{2}C\left(t\right)=0.$ Normalization and orthogonality of $x⁢(t)$ and $u⁢(t)$ imply the initial conditions $C=1$ and $d⁢C/d⁢t=0$ at $t=0$. The behavior of solutions of Equation 40 is a function of the rotational Péclet number $Per:=v0/Dr$ that quantifies the ratio between active motion and orientational diffusion. For $Per§lt;1$, (‘high-noise regime’), the position correlation function $C⁢(t)=⟨x⁢(t)⋅x⁢(0)⟩$ decays according to Equation 40 monotonically to zero. For $Per§gt;1$, (‘low -noise regime’) position correlations exhibit damped oscillations. To validate our simulation method (described in the following section), analytic predictions for $C⁢(t)$ are in Figure 3B (main text) compared against the ensemble average $⟨x⁢(t)⋅x⁢(0)⟩$ over $3×104$ simulated ABPs. Stochastic simulation of active Brownian particles on the sphere To ensure a numerically exact normalization of the particle’s position and orientation vectors on the unit sphere, we simulated the dynamics (41a) $\mathrm{d}x=\frac{{v}_{0}}{|u|}\left(u-\frac{u\cdot x}{{|x|}^{2}}x\right)\mathrm{d}t$ (41b) $\mathrm{d}u=-{v}_{0}\frac{x}{{|x|}^{2}}\mathrm{d}t+\frac{\left(x×u\right)}{|x×u|}\sqrt{2{D}_{r}}\circ \mathrm{d}\xi .$ We numerically solve the Itô formulation of this system using the Euler-Mayurama scheme (Higham., 2001), and confirm that this system reproduces the correlation dynamics predicted by Equation 40 ( Figure 3B, main text). To study the continuum dynamics of a large number of non-interacting ABPs on a sphere, we determine the dynamics of the probability density $p⁢(x,u,t)$ of particle positions $x$ and orientations $u$ at time $t$. To do so, it is convenient to express particle positions in terms of a parameterisation $x⁢(t)=x⁢[x1⁢(t),x2⁢(t)]$ that defines tangential basis vectors by $ei=∂⁡x/∂⁡xi$ ($i=1,2$) and a metric tensor $gi⁢j=ei⋅ej$. By definition, we have $d⁢x=ei⁢d⁢xi$ and Equation 38a can be rewritten as (42) $\mathrm{d}{x}^{i}={v}_{0}{u}^{i}\mathrm{d}t.$ General tangential vectors on the surface can be written as $u=ui⁢ei$ and on a unit sphere the surface normal can be identified with particle positions $n=e1×e2/|e1×e2|=x$. Hence, on the unit sphere the Gauss-Weingarten relation reads $∂i⁡ej=-Ci⁢j⁢x+Γi⁢jk⁢ek$, where $Γi⁢jk$ denote Christoffel symbols and $Ci⁢j$ is the curvature tensor. This implies together with Equation 42 the geometric $\begin{array}{rl}\mathrm{d}\mathbf{u}& ={\mathbf{e}}_{i}\mathrm{d}{u}^{i}+{u}^{i}\left({\mathrm{\partial }}_{j}{\mathbf{e}}_{i}\right)\mathrm{d}{x}^{j}\\ & ={\mathbf{e}}_{i}\mathrm{d}{u}^{i}-{C}_ {ij}{u}^{i}{u}^{j}{v}_{0}\mathbf{x}\mathrm{d}t+{v}_{0}{u}^{i}{u}^{j}{\mathrm{\Gamma }}_{ij}^{k}{\mathbf{e}}_{k}\mathrm{d}t.\end{array}$ Comparing this identity with the stochastic dynamics $d⁢u$ in Equation 38b and using that $Ci⁢j⁢ui⁢uj=gi⁢j⁢ui⁢uj=|u|2=1$ for unit vectors $u$ on the unit sphere, we find the covariant stochastic differential equation (43) $\mathrm{d}{u}^{i}=-{v}_{0}{u}^{j}{u}^{k}{\mathrm{\Gamma }}_{jk}^{i}\mathrm{d}t+{ϵ}_{\text{ }k}^{i}{u}^{k}\sqrt{2{D}_{r}}\circ \phantom{\rule{thinmathspace}{0ex}}\mathrm{d}\xi .$ In Equation 43, $ϵi⁢j=x⋅(ei×ej)$ denotes the Levi-Civita tensor on the unit sphere. In this covariant basis, we define the scalar probability density (44) $p\left(x,u,t\right)=⟨\frac{1}{\sqrt{g\left(x\right)}}\prod _{i}\delta \left[{x}^{i}-{x}^{i}\left(t\right)\right]\delta \left[{u}^{i}-{u}^{i}\left(t\right)\right]⟩,$ where $δ⁢(x)$ denotes a Dirac function. Combining Equations 42; 43, standard methods (Fily et al., 2016; Castro-Villarreal and Sevilla, 2018) allow us to obtain the Fokker-Planck equation for $p⁢ (x,u,t)$ as (45) $\frac{\partial }{\partial t}p\left(x,u,t\right)={D}_{r}\frac{\partial }{\partial {u}^{i}}\left[{ϵ}_{k}^{i}{u}^{k}\frac{\partial }{\partial {u}^{j}}\left({ϵ}_{l}^{j}{u}^{l}p\right)\right]-{abla }_{i}\left({v}_{0}{u}^{i}p\right)+\frac{\partial }{\partial {u}^{i}}\left({v}_{0}{u}^{j}{u}^{k}{\mathrm{\Gamma }}_{jk}^{i}p\right)$ Using the identity $ϵki⁢ϵlj=gi⁢j⁢gk⁢l-δli⁢δkj$, the dynamics of the probability density is finally given by (46) $\frac{\partial }{\partial t}p\left(x,u,t\right)={D}_{r}\frac{\partial }{\partial {u}^{i}}\left[\left({g}^{ij}-{u}^{i}{u}^{j}\right)\frac{\partial p}{\partial {u}^{j}}\right]-{v}_{0}{u}^{i}{abla }_{i}p+\frac{\partial }{\partial {u}^{i}}\left({v}_{0}{u}^{j}{u}^{k}{\mathrm{\Gamma }}_{jk}^{i}p\right),$ which agrees with the result in Castro-Villarreal and Sevilla, 2018. To connect the Fokker-Planck dynamics given in Equation 46 to hydrodynamic fields, we define (probability) density and fluxes by $ρ⁢(x,t)=∫d2⁢u⁢p⁢(x,u,t)$, and $Ji⁢(x,t)=v0⁢∫d2⁢u⁢ui⁢p⁢(x,u,t)$. Their dynamics on the unit sphere is given by Castro-Villarreal and Sevilla, 2018. (47a) $\frac{\partial \rho }{\partial t}=-{abla }_{i}{J}^{i}$ (47b) $\frac{\partial {J}^{i}}{\partial t}=-\frac{{v}_{0}^{2}}{2}{abla }^{i}\rho -{D}_{r}{J}^{i},$ where couplings to higher order fields are neglected, as they vanish at shorter time-scales due to the presence of rotational noise. Expressing Equation 47a and Equation 47b in terms of scalar and vector spherical harmonics (see Appendix 2) for an arbitrary sphere radius R[0] yields the mode dynamics Equation 13a, Equation 13b and Equation 13c the main text. Learning and interpreting the linear model We describe details about the inference procedure used to learn the linear ordinary differential equation (ODE) model considered in the main text. We then discuss how the matrix $M$ found by this procedure can be further studied in terms of its real-space kernel representation and derive this kernel for the ABP dynamics introduced in Appendix 4. Inference of the dynamical mode coupling matrix M Given a dynamical mode vector $a⁢(t)=[ρl⁢m⁢(t),jl⁢m(1)⁢(t),jl⁢m(2)⁢(t)]⊤$, the goal is to learn a linear minimal model (48) $\frac{\mathrm{d}a\left(t\right)}{\mathrm{d}t}=M\cdot a\left(t\right)$ of the mode dynamics. Here, $M$ is an unknown $n×n$ mode coupling matrix, where generally $n=3⁢(lmax+1)2-2$. In systems with global mass conservation, as considered in this work, one can additionally use that the mode $ρ00$ is constant and eliminate the corresponding couplings from $M$. To describe the algorithm that was used to infer the mode coupling matrix $M$, we parameterize $M$ by a vector $p$ that contains all non-zero entries and introduce a function $ℳ$ that represents the underlying matrix structure. Together, they generate the explicit form $M=ℳ⁢(p)$ of the mode coupling matrix. Imposing structure on the matrix, such as rank constraints, or sparsity leads to a shorter vector $p$ and modifies the definition of $ℳ$ accordingly. Denoting $A(t;M,p,a0)$ as the result of numerically integrating the system of ODEs Equation 48 up to time $t$ from initial condition $a0$ with $M=ℳ⁢(p)$, we define the loss function (49) $L\left(\mathbf{p};\phantom{\rule{thinmathspace}{0ex}}\mathcal{M},\phantom{\rule{thinmathspace}{0ex}}{t}_{I},\phantom{\rule{thinmathspace}{0ex}}{t}_{N}\right)=\frac{1}{N-I}\sum _{i=I}^{N}‖\ where the t[i] are time points in an interval $[tI,tN]$ at which the data and the ODE solution are sampled. Using the ODE solvers and optimization functions provided by the Julia modules DifferentialEquations.jl and DiffEqFlux.jl (Rackauckas et al., 2021), we can differentiate through the ODE solver to calculate derivatives of the loss function Equation 49 with respect to parameters $p$ and subsequently apply gradient-based optimization algorithms. The loss function is minimized using the ADAM algorithm (Kingma and Ba, 2017), followed by the Broyden-Fletcher-Goldfarb-Shannon (BFGS) algorithm (Nocedal and Wright, 2006). To increase the robustness of the optimization and promote sparsity, we use a sequentially thresholded algorithm (Supekar et al., 2021; Brunton et al., 2016; Reinbold, 2020). A complete overview of this procedure is shown in Appendix 4—figure 1 and the details of the specific design decisions made in the algorithm are discussed in the following: 1. To account for the variation in scale between the different modes in the data $a⁢(t)$, each mode is normalized by its median absolute deviation (MAD) across the full time-span in which the data are available. Specifically, we scale each mode by (50) $\mathrm{m}\mathrm{a}\mathrm{d}\left({a}_{i}\right)={\mathrm{m}\mathrm{e}\mathrm{d}\mathrm{i}\mathrm{a}\mathrm{n}}_{k}\phantom{\rule{thinmathspace}{0ex}}\left(|{a}_{i}\left({t}_{k}\right)-{\ where $a¯i=mediank⁢[ai⁢(tk)]$ and the median is taken over all time-points, giving rise to a scaled mode vector $a~⁢(t)$. Losses analogous to Equation 49 that are computed using scaled data are denoted in the following by $L~$. 2. To prevent over-fitting, we divide the data into two regions, a learning region from $tI$ to $tN$ and a validation region from $tN$ to $tF$. Only data from the learning region is used in the optimization of the loss function Equation 49. However, the model is integrated into the validation region, and a corresponding validation loss using only the data in the validation region is calculated. During each optimization run, we choose the model with the lowest loss in the validation region, lowering the likelihood of over-fitting to the specific data in the learning region. 3. To prevent the optimization from getting stuck in local minima, we incrementally increase the time-span of the data included in the optimization objective (blue box in Appendix 4—figure 1). We increase the time window backward from a fixed endpoint $t1=tF$, choosing each iteration an earlier initial condition at time $ti§lt;ti−1$. The advantage of stepping backward rather than forward from a fixed initial condition is twofold: first, the validation region stays unchanged throughout the optimization, making comparisons of the validation loss easy. Second, because the initial condition changes with each run, the learned matrix tends to be more robust to fluctuations in the initial condition. 4. After the optimization step, sparsity is promoted by thresholding the elements in the matrix (Brunton et al., 2016), removing small magnitude elements that do not noticeably contribute to the mode dynamics (purple box in Appendix 4—figure 1). The optimization procedure is then repeated until the thresholding converges. The threshold is chosen to generate a sparse matrix that still reproduces the dynamics faithfully. 5. Once the sparsity pattern is obtained from the sequential thresholding and optimization procedure a final run of the optimization is performed on the unscaled mode data to find the final dynamical matrix $M$, which removes any potential slight bias the MAD scaling might have introduced in the parameter values $p$. Finally, the numerical stability of the model can be checked by examining the eigenvalues of the learned matrix. For the ABP test data, we learn a matrix $M$ for which the largest real part of its eigenvalues is at machine precision. For the experimental data, the largest real part in the eigenvalues is $7.4×10-4$, which corresponds to a time scale of around 675 mins. While the corresponding dynamics will eventually become unstable, solutions remain bound over a period of approximately 45 hours, which is four times as long as the input data from which the mode coupling matrix was Learning and validation regions used in this work For the ABP data, the first 15 frames are excluded, so that – consistent with coarse-graining assumptions (see Appendix 3, Equation 47a, Equation 47b) any remnants of higher orientational order introduced by the initial conditions have decayed. The subsequent 140 frames are used as the learning region, followed by a validation region of 20 frames. Each frame corresponds to a time interval of approximately 0.06 in units of $R0/v0=1$. We exclude the first and last 10 frames of the experimental zebrafish data and split the remaining data into a learning region of 360 frames, with the remaining 40 frames used for validation. Each frame corresponds to a time interval of 2 min. Initially the data is rescaled using the median absolute deviation (MAD) defined in Equation 50 to account for variation in scales across the modes. Green’s function representation of the learned matrix The learned matrix $M$ consists of 9 blocks each with $[(lmax+1)2-1]×[(lmax+1)2-1]$ entries. Each block relates a mode family to time derivatives of another and we write $M=\left(\begin{array}{ccc}\hfill {M}^{\rho \rho }\hfill & \hfill {M}^{\rho 1}\hfill & \hfill {M}^{\rho 2}\hfill \\ \hfill {M}^{1\rho }\hfill & \hfill {M}^{11}\hfill & \hfill {M}^{12}\hfill \\ \hfill {M}^{2\rho }\hfill & \hfill {M}^{21}\hfill & \hfill {M}^{22}\hfill \end{array}\right).$ We denote the components of each block by $(Mm1⁢m2)l⁢m,l′⁢m′≡Mα⁢βm1⁢m2$, where $m1,m2∈{ρ,1,2}$, and $α$, $β$ are multi-indices that represent the harmonic modes $(l⁢m)$. Using the mode representation Equation 5 and the form of the linear minimal model Equation 48, we find (51) $\begin{array}{ll}\frac{\mathrm{\partial }}{\mathrm{\partial }t}\mathbf{J}\left(\mathbf{r},t\right)& =\sum _{\alpha =lm}\left(\frac{\mathrm{d}{j}_{\alpha }^{\left(1\right)}\left(t\right)}{\ mathrm{d}t}{\mathbf{\Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)+\frac{\mathrm{d}{j}_{\alpha }^{\left(2\right)}\left(t\right)}{\mathrm{d}t}{\mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\ mathbf{r}}\right)\right)\\ & =\sum _{\alpha =lm}\sum _{\beta ={l}^{\mathrm{\prime }}{m}^{\mathrm{\prime }}}\left[{M}_{\alpha \beta }^{1\rho }{\rho }_{\beta }\left(t\right)+{M}_{\alpha \beta }^{11}{j} _{\beta }^{\left(1\right)}\left(t\right)+{M}_{\alpha \beta }^{12}{j}_{\beta }^{\left(2\right)}\left(t\right)\right]{\mathbf{\Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\\ & +\left[{M}_{\ alpha \beta }^{2\rho }{\rho }_{\beta }\left(t\right)+{M}_{\alpha \beta }^{21}{j}_{\beta }^{\left(1\right)}\left(t\right)+{M}_{\alpha \beta }^{22}{j}_{\beta }^{\left(2\right)}\left(t\right)\right]{\ mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right).\end{array}$ Using Equation 30a, Equation 51 can be cast into the dynamic kernel Equation 14 given in the main text, where we defined the vector kernel (52) ${\mathbf{m}}^{\rho }\left(\mathbf{r},{\mathbf{r}}^{\mathrm{\prime }}\right)=\sum _{\alpha =lm}\sum _{\beta ={l}^{\mathrm{\prime }}{m}^{\mathrm{\prime }}}{M}_{\alpha \beta }^{1\rho }{\mathbf{\ Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right){Y}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)+{M}_{\alpha \beta }^{2\rho }{\mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\ mathbf{r}}\right){Y}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)$ and the matrix kernel (53) $\begin{array}{ll}{M}^{J}\left(\mathbf{r},{\mathbf{r}}^{\mathrm{\prime }}\right)& =\sum _{\alpha =lm}\sum _{\beta ={l}^{\mathrm{\prime }}{m}^{\mathrm{\prime }}}\frac{1}{l\left(l+1\right)}\left [{M}_{\alpha \beta }^{11}{\mathbf{\Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Psi }}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)\\ & \phantom{\rule {1em}{0ex}}+{M}_{\alpha \beta }^{12}{\mathbf{\Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Phi }}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)+{M}_{\ alpha \beta }^{21}{\mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Psi }}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)\\ & \phantom{\rule{1em} {0ex}}+{M}_{\alpha \beta }^{22}{\mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Phi }}_{\beta }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)\right],\end where ⊗ denotes a dyadic product. The matrix $MJ⁢(r,r′)$ has a 0 eigenvalue with right eigenvector $r^′$ and left eigenvector $r^$, which implies $det⁡(MJ)=0$. Numerical analysis of the matrix invariants shows that a second eigenvalue is 0 (Appendix 4—figure 2), leaving only a single non-zero eigenvalue that can be conveniently found from $tr⁢[MJ⁢(r,r′)]$ and is shown in Figure 4D (main Real-space kernels of active Brownian particle dynamics In the following we determine a real-space kernel representation in the form Equation 14 for the flux dynamics of ABPs given in Equation 47b. We can read off the kernel coefficients in Equation 52 and in Equation 53 from the coarse-grained ABP dynamics in mode space, given in Equation 13b and Equation 13c. For the kernel $mρ⁢(r,r′)$, we have $Mα⁢β1⁢ρ=-v022⁢δα⁢β$ and $Mα⁢β2⁢ρ=0$ ($α,β=(l⁢m)$), such that Equation 52 becomes (54) ${m}^{\rho }\left(r,{r}^{\prime }\right)=-\frac{{v}_{0}^{2}}{2}{abla }_{S}\sum _{\alpha =lm}{Y}_{\alpha }\left(\stackrel{^}{r}\right){Y}_{\alpha }\left({\stackrel{^}{r}}^{\prime }\right)=-\frac {{v}_{0}^{2}}{2}{abla }_{S}\delta \left(r-{r}^{\prime }\right).$ Here, we have used in the first step the definition of $Ψlm(r^)$ given in Equation 29a and in the second step the completeness of the spherical harmonic basis functions $Yl⁢m⁢(r^)$, where $δ⁢(r-r′)= δ⁢(ϕ-ϕ′)⁢δ⁢(cos⁡θ-cos⁡θ′)$ denotes the delta function on a sphere. Note that a unit sphere was considered throughout this analysis, such that $r=r^$. Similarly, Equation 13b and Equation 13c imply for the kernel coefficients in Equation 53 that $Mα⁢β11=Mα⁢β22=-Dr⁢δα⁢β$ and $Mα⁢β12=Mα⁢β21=0$. Consequently, we have (55) ${M}^{J}\left(\mathbf{r},{\mathbf{r}}^{\mathrm{\prime }}\right)=-{D}_{r}\sum _{\alpha =lm}\frac{1}{l\left(l+1\right)}\left[{\mathbf{\Psi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Psi }}_{\alpha }\left({\stackrel{^}{\mathbf{r}}}^{\mathrm{\prime }}\right)+{\mathbf{\Phi }}_{\alpha }\left(\stackrel{^}{\mathbf{r}}\right)\otimes {\mathbf{\Phi }}_{\alpha }\left({\stackrel {^}{\mathbf{r}}}^{\mathrm{\prime }}\right)\right]=-{D}_{r}\delta \left(\mathbf{r}-{\mathbf{r}}^{\mathrm{\prime }}\right){P}_{\parallel },$ where $P∥=I-r⊗r$ is the tangential projector on the unit sphere. The hydrodynamic flux Equation 47b of ABPs on a sphere can therefore be written in the equivalent integral kernel form (56) ${\partial }_{t}J\left(r,t\right)=\int d{\mathrm{\Omega }}^{\prime }\left[-\frac{{v}_{0}^{2}}{2}{abla }_{S}\delta \left(r-{r}^{\prime }\right)\rho \left({r}^{\prime },t\right)-{D}_{r}\delta \ left(r-{r}^{\prime }\right)J\left({r}^{\prime },t\right)\right].$ To make analytic kernel properties comparable to practical inference scenarios in which we work with a finite number of harmonic modes, we computed the sums in Equations 54; 55 up to a maximum mode number $lmax=4$. The resulting kernels – depicted in Figure 4D (main text) – approximate the Dirac delta function $δ⁢(r-r′)$ and its derivative, leading to the finite range of $tr⁢(MJ)$ with amplitude maximum at $ω=0$, while $|mρ|$ vanishes at and peaks away from $ω=0$. Additionally, finite mode representations introduce an apparent kernel inhomogeneity across the spherical surface as evident from the non-zero standard deviation depicted in Figure 4D of the main text (blue shades). The $3×3$-matrix invariant $I2=12⁢(tr⁢[(MJ)2]-(tr⁢[MJ])2)$ sampled for pairs of positions $r$, $r′$ vanishes to machine precision for the dynamical matrix $M$ learned on the zebrafish data. 14. Book Chebfun Guide Oxford: Oxford University Press. 19. Book Developmental Biology Oxford: Oxford University Press. 44. Book Natick, MA, USA: Matlab. 55. Conference 2018 IEEE High Performance Extreme Computing Conference Interactive Supercomputing on 40,000 Cores for Machine Learning and Data Analysis. pp. 1–6. 69. Book Random Vibration of Mechanical and Structural Systems NASA STI/Recon Technical Report A. 70. Book Fourier Analysis: An Introduction New Jersey, United States: Princeton University Press. 76. Book Emergence of puffs, weak and strong slugs from a stochastic predator-prey model for transitional turbulence with stream-wise shear interactions In: Shih HY, editors. Bulletin of the American Physical Society. New York, United States: Simons Foundation. pp. 65–69. Article and author information Author details European Molecular Biology Organization (ALTF 528-2019) Deutsche Forschungsgemeinschaft (431144836) James S. McDonnell Foundation (Complex Systems Scholar Award) Alfred P. Sloan Foundation (G-2021-16758) • Nicolas Romeo • Alasdair Hastewell Robert E Collins Distinguished Scholarship Fund The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. We thank Nico Scherf and Ghopi Shah for providing single-cell tracking data and for giving helpful advice on zebrafish development and we thank Paul Matsudaira for discussions. We thank the MIT SuperCloud (Reuther et al., 2018) for providing us access to HPC resources. This work was supported by a MathWorks Science Fellowship (NR and ADH), a Longterm Fellowship from the European Molecular Biology Organization (ALTF 528–2019, AM), a Postdoctoral Research Fellowship from the Deutsche Forschungsgemeinschaft (Project 431144836, AM), a Complex Systems Scholar Award from the James S McDonnell Foundation (JD), the Robert E Collins Distinguished Scholarship Fund (JD) and the Alfred P Sloan Foundation (G-2021–16758, JD). © 2021, Romeo et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited. Views, downloads and citations are aggregated across all versions of this paper published by eLife. A two-part list of links to download the article, or parts of the article, in various formats. Downloads (link to download the article as PDF) Open citations (links to open the citations from this article in various online reference manager services) Cite this article (links to download the citations from this article in formats compatible with various reference manager tools) 1. Nicolas Romeo 2. Alasdair Hastewell 3. Alexander Mietke 4. Jörn Dunkel Learning developmental mode dynamics from single-cell trajectories eLife 10:e68679. Further reading 1. Computational and Systems Biology 2. Physics of Living Systems Explaining biodiversity is a fundamental issue in ecology. A long-standing puzzle lies in the paradox of the plankton: many species of plankton feeding on a limited variety of resources coexist, apparently flouting the competitive exclusion principle (CEP), which holds that the number of predator (consumer) species cannot exceed that of the resources at a steady state. Here, we present a mechanistic model and demonstrate that intraspecific interference among the consumers enables a plethora of consumer species to coexist at constant population densities with only one or a handful of resource species. This facilitated biodiversity is resistant to stochasticity, either with the stochastic simulation algorithm or individual-based modeling. Our model naturally explains the classical experiments that invalidate the CEP, quantitatively illustrates the universal S-shaped pattern of the rank-abundance curves across a wide range of ecological communities, and can be broadly used to resolve the mystery of biodiversity in many natural ecosystems. 1. Computational and Systems Biology 2. Physics of Living Systems Planar cell polarity (PCP) – tissue-scale alignment of the direction of asymmetric localization of proteins at the cell-cell interface – is essential for embryonic development and physiological functions. Abnormalities in PCP can result in developmental imperfections, including neural tube closure defects and misaligned hair follicles. Decoding the mechanisms responsible for PCP establishment and maintenance remains a fundamental open question. While the roles of various molecules – broadly classified into “global” and “local” modules – have been well-studied, their necessity and sufficiency in explaining PCP and connecting their perturbations to experimentally observed patterns have not been examined. Here, we develop a minimal model that captures the proposed features of PCP establishment – a global tissue-level gradient and local asymmetric distribution of protein complexes. The proposed model suggests that while polarity can emerge without a gradient, the gradient not only acts as a global cue but also increases the robustness of PCP against stochastic perturbations. We also recapitulated and quantified the experimentally observed features of swirling patterns and domineering non-autonomy, using only three free model parameters - the rate of protein binding to membrane, the concentration of PCP proteins, and the gradient steepness. We explain how self-stabilizing asymmetric protein localizations in the presence of tissue-level gradient can lead to robust PCP patterns and reveal minimal design principles for a polarized system. 1. Computational and Systems Biology 2. Physics of Living Systems Synthetic genetic oscillators can serve as internal clocks within engineered cells to program periodic expression. However, cell-to-cell variability introduces a dispersion in the characteristics of these clocks that drives the population to complete desynchronization. Here, we introduce the optorepressilator, an optically controllable genetic clock that combines the repressilator, a three-node synthetic network in E. coli, with an optogenetic module enabling to reset, delay, or advance its phase using optical inputs. We demonstrate that a population of optorepressilators can be synchronized by transient green light exposure or entrained to oscillate indefinitely by a train of short pulses, through a mechanism reminiscent of natural circadian clocks. Furthermore, we investigate the system’s response to detuned external stimuli observing multiple regimes of global synchronization. Integrating experiments and mathematical modeling, we show that the entrainment mechanism is robust and can be understood quantitatively from single cell to population level.
{"url":"https://elifesciences.org/articles/68679","timestamp":"2024-11-07T05:54:27Z","content_type":"text/html","content_length":"709581","record_id":"<urn:uuid:fda1ee66-7208-4bd1-8971-edb95e3a2d20>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00105.warc.gz"}
Explore D1 Maths Book Pdf 7th Edition for Success - O/A Level BooksExplore D1 Maths Book Pdf 7th Edition for Success Explore D1 Maths Book Pdf 7th Edition for Success: Despite its reputation as a difficult topic, math is essential to many other subjects and professions. Access to quality materials is crucial to learners who want to achieve academic success. Between these resources, the D1 Math’s Book Pdf 7th Edition has grown in favor with students. This thorough book is now the go-to resource for anybody wishing to improve their mathematics knowledge and abilities. We will go over the attributes, advantages, and ways that the D1 Math’s Book Pdf 7th Edition may help you succeed in this post. Understanding the D1 Maths Book Pdf 7th Edition: Pupils facing sophisticated mathematical challenges, especially in subjects like continuous math and statistical analysis, are the target audience for the D1 Math’s Book Pdf, 7th Edition. It is carefully designed to make learning easier with lots of activities, real-world examples, and short explanations. Recently updated knowledge reflecting contemporary educational standards is brought to learners with this version, guaranteeing they are allowed to the newest approaches and strategies. Key Features for D1 Maths Book Pdf 7th Edition: 1. Extensive Coverage: a term used diagrams, and optimization are just a few of the many subjects covered in this book. Because every chapter builds on the one before it, learning may go smoothly from start to finish. 2. User-friendly Style: The D1 Math’s Book Pdf 7th Edition has an entertaining and easy-to-use layout. By utilizing well-organized headers, numbered lists, and illustrations, pupils may effortlessly grasp intricate subjects. 3. Practice Problems: This book’s vast array of difficulties in practice represents one of its best qualities. There are many opportunities for pupils to assess their learning because these tasks span in difficulty from simple to complex. 4. Solved Examples: To demonstrate how to address various problem types, instances of operation are included in each chapter. This methodical advice promotes autonomy to solve issues and demystifies difficult ideas. 5. Supplementary Supplies: Tools that interact and online activities are among more supplies that are frequently included with the D1 Math’s Book Pdf, 7th Edition. These extra resources have the potential to significantly improve the educational process. Why Choose the D1 Maths Book Pdf 7th Edition? The choice of books may have a big impact on the academic life of a pupil. This is the D1 Math Book Pdf 7th Edition’s unique selling point: Purity and Accuracy: The writers have made sure that all of the descriptions are clear and accurate, which is crucial for understanding ideas related to math. Revised Content: The seventh edition is now more pertinent and centered around users thanks to input from instructors and pupils. Availability: Pupils may study while on the go because the format of a PDF is simple to use on a variety of devices. This adaptability is essential in the hectic classroom of nowadays. Strategies for Utilizing the D1 Maths Book Pdf 7th Edition: Make a Study Program: Come up with a regular study schedule that includes going over the material again. Organizing the information into digestible chunks might help you avoid being distracted. Interact with the Description: Take a meaningful role in the subject matter instead of just reading it. To reinforce the knowledge acquired, work with the illustrations, do the exercises and challenges, and comment on the material. Create Study Groups: Working together with peers helps improve comprehension. Clarifying concerns and gaining fresh insights might result from discussing issues and answers found in the D1 Math Book Pdf, 7th Edition. Make Use of Extra Supplies: Make use of any additional assets that come with the book. Video clips, discussion boards, and online tests can offer additional instruction. and various clarifications. Seek Help When Desired: Don’t be afraid to ask instructors, instructors, or online forums for assistance if you’re having trouble understanding a certain idea. Learning communities and blogs may be quite helpful. Real-World Applications of D1 Mathematics: Comprehending D1 math is not limited to the learning environment; it is essential for several practical uses. Here are some instances: Computer science: The cornerstone of machine software development and programming is discrete mathematics, which includes concepts like data structures and algorithmic techniques. Architecture: To increase productivity and cut expenses, optimization techniques are crucial in disciplines like systems planning and operations research research. Economy: Critical company’s choices are informed by statistical techniques used in modeling, financier modeling, and forecasting. These techniques are developed from D1 math. Testimonials from Users: A lot of pupils have used the D1 Math’s Book Pdf 7th Edition with success. Here are some of our endorsements: Emily J.: “This book completely changed the way I thought about math.” I aced my examinations thanks to the review questions and the excellent descriptions.” Michael T.: When finding the D1 Math’s Book Pdf 7th Edition, I had trouble understanding discrete math. It was far easier to understand because of the samples and design.” Sarah K.: “I like having access to the book’s online materials. They offer additional practice and nicely balance the material.” The D1 Maths Book Pdf 7th Edition is, in summary, a priceless tool for learners who want to succeed in math. For students of all skill stages, its thorough information, intuitive design, and abundance of possibilities for practice render it an exceptional option. Through active engagement with this text, learners may establish a solid mathematical basis, leading to a multitude of academic and professional prospects. The D1 Maths Book Pdf 7th Edition will help you succeed whether you’re studying for tests or want to get a deeper knowledge of complex mathematical ideas. Go ahead and peruse the sections, and you’ll see a dramatic increase in your mathematical ability!
{"url":"https://olevelalevelbooks.com/d1-maths-book-pdf-7th-edition/","timestamp":"2024-11-05T03:33:01Z","content_type":"text/html","content_length":"102327","record_id":"<urn:uuid:7ff56bd8-a555-4f1b-98e6-30337fee46d6>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00486.warc.gz"}
Weight Test of the Concrete, Weight and Dry Weight of Fresh Concrete: – The weight of fresh concrete is depended on mix design. Concrete content cement, sand, fine and coarse aggregates, Water, admixtures etc. Total weight of these mixtures is the total weight of concrete. To calculate weight in certain volume, density must be known. Fresh Concrete density varies between 2400 Kg/CUM to 2550 Kg/CUM. Weight of Dry Concrete: – The difference in weight between wet runny concrete just mixed and the concrete in a solid state? Count with ~5% weight reduction once water evaporates. There is the concrete shrinkage aspect to consider. At first it is plastic and soft. Later it sets and hardens. After concrete had fully cured it keeps around ~95% of its original weight compared to the wet state. Significance: – Knowing the concrete density is very important. If change in density is more then concrete may be not of desired strength. If density varies more then must check the quality of concrete. Much difference in density indicates faults in mix design or fault in the time of site mixing. Apparatus: – Weight scale: – To calculate quantity up to 5000kg 1CUM Pot:- To pour concrete Procedure: – 1. Take a desired mix design and mix concrete according to it. 2. Pour This concrete in Measurement pot 3. Weight the fresh concrete. This is weight volume 4. Weight for 24 Hours then calculate weight again. this is dry volume calculations: – Weight Weight of Concrete V1= Dry weight of the Concrete V2= Retain weight in percent= V2/V1 x 100
{"url":"https://constropedia.com/weight-test-of-the-concrete-weight-and-dry/","timestamp":"2024-11-11T06:13:01Z","content_type":"text/html","content_length":"63115","record_id":"<urn:uuid:bc5bfddd-11b6-408c-8307-268e46bec9f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00394.warc.gz"}
Simulating isotropic vector-valued Gaussian random fields on the sphere through finite harmonics approximations The paper tackles the problem of simulating isotropic vector-valued Gaussian random fields defined over the unit two-dimensional sphere embedded in the three-dimensional Euclidean space. Such random fields are used in different disciplines of the natural sciences to model observations located on the Earth or in the sky, or direction-dependent subsoil properties measured along borehole core samples. The simulation is obtained through a weighted sum of finitely many spherical harmonics with random degrees and orders, which allows accurately reproducing the desired multivariate covariance structure, a construction that can actually be generalized to the simulation of isotropic vector random fields on the d-dimensional sphere. The proposed algorithm is illustrated with the simulation of bivariate random fields whose covariances belong to the F, spectral Matérn and negative binomial classes of covariance functions on the two-dimensional sphere. • Addition theorem • Bivariate spectral Matérn covariance • Multivariate Schoenberg sequence • Sphere • Spherical harmonics Dive into the research topics of 'Simulating isotropic vector-valued Gaussian random fields on the sphere through finite harmonics approximations'. Together they form a unique fingerprint.
{"url":"https://khazna.ku.ac.ae/en/publications/simulating-isotropic-vector-valued-gaussian-random-fields-on-the-","timestamp":"2024-11-12T20:07:16Z","content_type":"text/html","content_length":"56426","record_id":"<urn:uuid:dc902f7e-0c1d-413a-a288-ef313bd42d2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00189.warc.gz"}
Ozone depletion - Wikipedia Ozone depletion consists of two related events observed since the late 1970s: a steady lowering of about four percent in the total amount of ozone in Earth's atmosphere,[citation needed] and a much larger springtime decrease in stratospheric ozone (the ozone layer) around Earth's polar regions.[1] The latter phenomenon is referred to as the ozone hole. There are also springtime polar tropospheric ozone depletion events in addition to these stratospheric events.
{"url":"https://www.petrbroz.eu/wp-admin/admin-ajax.php?action=embed_extended_iframe&url=https://en.wikipedia.org/wiki/Ozone_depletion","timestamp":"2024-11-04T01:15:39Z","content_type":"text/html","content_length":"2401","record_id":"<urn:uuid:c31383b5-4eac-4d77-8bbd-b76c60bc61cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00271.warc.gz"}
The Sloan Corporation is trying to choose between the following two mutually exclusive design projects: The Sloan Corporation is trying to choose between the following two mutually exclusive design projects: Cash... The Sloan Corporation is trying to choose between the following two mutually exclusive design projects: Cash Flow Year (I) Cash Flow Cash Flow (II) 0 –$70,000 -17,400 1 31,500 9400 2 31,500 9400 3 31,500 9400 a.If the required return is 11 percent, what is the profitability index for both projects? Project A and B b.What is the NPV for both projects? Need Online Homework Help? Get Answers For Free Most questions answered within 1 hours. Ask a Question
{"url":"https://justaaa.com/finance/87349-the-sloan-corporation-is-trying-to-choose-between","timestamp":"2024-11-13T21:18:38Z","content_type":"text/html","content_length":"47245","record_id":"<urn:uuid:8388b9d6-4d86-4319-b928-7d648e01a55f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00078.warc.gz"}
Coq devs & plugin devs For just a day or two, I've seen many OOM failures with Perennial. cc @Tej Chajed Is there some recent change that would explain this? Perennial has been growing a lot lately, as the big case study they are working on is nearing completion on my 16GB RAM system I had to go down to -j3 to avoid OOM situations most of the files towards the end of the compilation process need 3-4GB of RAM per coqc so I guess the answer is -- yes, there is such a recent change, namely as they keep working up the stack of what they verify, coq needs more and more RAM (using third person pronouns here as I am not directly involved with that verification, just helping with some of the infrastructure) I don't think there was a single change that made a big difference though. just a general growth of the case study. A good candidate for in-depth profiling of the memory use then... would be interesting to know how memory use correlates with lines-per-file for the project. Could be multiple thousands-line files in there My educated guess is rather that it's correlated to lines per proof. The bigger your proof, the more stuff you'll get in there that's not garbage collected (evars, universes, etc.) when it was added to CI (https://github.com/coq/coq/pull/10770) it was said Perennial takes about 30 minutes to build, including all the verified examples (a replicated disk, a mail server, as well as some smaller ones). ah, but those tend to be coupled (lines overall vs. proof script lines) Yes, there is a correlation, but not always. Some developments have a lot of short lemmas, i.e. four-color Gaëtan Gilbert said: when it was added to CI (https://github.com/coq/coq/pull/10770) it was said Perennial takes about 30 minutes to build, including all the verified examples (a replicated disk, a mail server, as well as some smaller ones). depending on the machine people use, I think it currently takes between 20 and 40min so factoring lemma statements/proofs is not only good for reuse, but also for avoiding garbage collector woes (and possibly also for improving machine learning) Karl Palmskog said: would be interesting to know how memory use correlates with lines-per-file for the project. Could be multiple thousands-line files in there the case studies (program_proof folder) sum up to 10k lines of specs and 20k lines of proofs (according to coqwc) that's a whole lot of case studies together so, this does not seem completely out of the ordinary right, but one would need to know distribution over files of those lines spec proof comments 169 460 1 src/program_proof/addr/addr_proof.v 452 1071 23 src/program_proof/buf/buf_proof.v 57 51 0 src/program_proof/buf/defs.v 162 727 5 src/program_proof/buftxn/buftxn_proof.v 48 80 6 src/program_proof/buftxn_replication/buftxn_replication_proof.v 199 343 133 src/program_proof/buftxn/sep_buftxn_proof.v 10 0 0 src/program_proof/examples/all_examples.v 46 204 4 src/program_proof/examples/alloc_addrset.v 289 526 61 src/program_proof/examples/alloc_crash_proof.v 105 165 3 src/program_proof/examples/alloc_proof.v 320 1004 33 src/program_proof/examples/async_inode_proof.v 301 819 46 src/program_proof/examples/dir_proof.v 135 877 55 src/program_proof/examples/indirect_inode_append_proof.v 361 886 17 src/program_proof/examples/indirect_inode_proof.v 268 624 34 src/program_proof/examples/inode_proof.v 23 12 2 src/program_proof/examples/print_assumptions.v 135 308 54 src/program_proof/examples/replicated_block_proof.v 211 465 39 src/program_proof/examples/single_async_inode_proof.v 126 262 20 src/program_proof/examples/single_inode_proof.v 40 94 1 src/program_proof/examples/toy_proof.v 89 100 8 src/program_proof/kvs/specs.v 97 158 10 src/program_proof/lockservice/bank_proof.v 175 297 20 src/program_proof/lockservice/common_proof.v 36 40 2 src/program_proof/lockservice/fmcounter_map.v 86 34 3 src/program_proof/lockservice/kv_durable.v 115 152 5 src/program_proof/lockservice/kv_proof.v 353 0 32 src/program_proof/lockservice/lockservice_crash.v 139 283 31 src/program_proof/lockservice/lockservice_proof.v 313 0 30 src/program_proof/lockservice/lockservice.v 50 102 12 src/program_proof/lockservice/movers.v 5 0 1 src/program_proof/lockservice/nondet.v 147 181 31 src/program_proof/lockservice/rpc.v 62 104 1 src/program_proof/lockservice/scratch.v 316 1272 195 src/program_proof/simple/simple.v 13 53 0 src/program_proof/simple/spec.v 714 468 0 src/program_proof/txn/commit_proof.v 128 78 2 src/program_proof/txn/invariant.v 33 187 2 src/program_proof/txn/load_proof.v 15 51 0 src/program_proof/txn/map_helpers.v 17 2 5 src/program_proof/txn/recovery_proof.v 2 0 0 src/program_proof/txn/txn_proof.v 107 100 6 src/program_proof/wal/abstraction.v 71 285 56 src/program_proof/wal/circ_proof_crash.v 372 812 8 src/program_proof/wal/circ_proof.v 83 279 0 src/program_proof/wal/common_proof.v 56 262 4 src/program_proof/wal/flush_proof.v 338 952 17 src/program_proof/wal/heapspec_list.v 463 1284 12 src/program_proof/wal/heapspec.v 82 122 9 src/program_proof/wal/highest.v 643 1037 23 src/program_proof/wal/installer_proof.v 551 269 67 src/program_proof/wal/invariant.v 147 349 0 src/program_proof/wal/lib.v 88 568 6 src/program_proof/wal/logger_proof.v 8 0 0 src/program_proof/wal/proof.v 97 266 4 src/program_proof/wal/read_proof.v 137 396 191 src/program_proof/wal/recovery_proof.v 376 822 6 src/program_proof/wal/sliding_proof.v 77 116 4 src/program_proof/wal/sliding.v 15 0 0 src/program_proof/wal/specs.v 34 30 2 src/program_proof/wal/thread_owned.v 87 77 3 src/program_proof/wal/transitions.v 213 619 52 src/program_proof/wal/write_proof.v 10407 21185 1397 total the new stuff is mostly in wal,txn, simple, buf* you have at least 5 files or so with 1000+ lines of proof scripts, I would look at memory use for those Anyway, we absolutely need a lighter target to run in Coq's CI. @Tej Chajed any proposal for how to best exclude some of these proofs for Coq's CI? Perennial is now more like 100 minutes to build single-threaded on a fast machine, which is indeed way to much for every Coq CI run. More seriously the memory usage isn't something we track and it doesn't affect me locally so I don't notice. I think we can fix this pretty easily while still testing the core of Perennial - I'll write a new .v file that depends on a subset of Perennial and we'll switch Coq CI to build that instead of the default target Couple of comments: • as discussed in Github, setting OCAMLRUNPARAM to some GC magic could help with mem comsuption • using simple compiler and compare could also be interesting, actually you can use my branch which does replace coqc with the simple compiler and do a bench for perennial only. But my investigations in other developments showed the real problem was the evar_map in the proofs, but indeed I'm unsure if the stm may have some leak. opened a PR switching to a lite target: https://github.com/coq/coq/pull/13402 Just looking at memory allocations, the lia cache seems to have an important share of the memory consumption in perennial. You could try Unset Lia Cache to see what happens. @Emilio Jesús Gallego Arias haven't I heard you complaining about the Lia cache recently as well? It has a crazy implementation, that's essentially a database that we fully load in memory when we fetch a key-value assignment. That is, one single key-value assignment can joyfully thrash your memory Indeed @Pierre-Marie Pédrot , we found some weird stuff a while ago, but didn't investigate more. I am currently writing a stupid patch that is slightly less stupid than the current key-value store implementation. Lia cache in perennial gobbles up several hundreds of megabytes in one go, it might help quite a bit to not clutter the memory with it. Last updated: Oct 13 2024 at 01:02 UTC
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/Perennial.20in.20CI.html","timestamp":"2024-11-05T17:14:24Z","content_type":"text/html","content_length":"28671","record_id":"<urn:uuid:124f62a5-4c98-4a9c-aeeb-e7324a7ac2c9>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00223.warc.gz"}
The time to ruin and the number of claims until ruin for phase-type claims We consider a renewal risk model with phase-type claims, and obtain an explicit expression for the joint transform of the time to ruin and the number of claims until ruin, with a penalty function applied to the deficit at ruin. The approach is via the duality between a risk model with phase-type claims and a particular single server queueing model with phase-type customer interarrival times; see Frostig (2004). This result specializes to one for the probability generating function of the number of claims until ruin. We obtain explicit expressions for the distribution of the number of claims until ruin for exponentially distributed claims when the inter-claim times have an Erlang- n distribution. Bibliographical note Funding Information: The research of Esther Frostig was supported by the Israel Science Foundation grant 606/09 . • Deficit at ruin • Duality • Extended Negative Binomial distribution • Number of claims to ruin • Renewal risk model • Ruin time ASJC Scopus subject areas • Statistics and Probability • Economics and Econometrics • Statistics, Probability and Uncertainty Dive into the research topics of 'The time to ruin and the number of claims until ruin for phase-type claims'. Together they form a unique fingerprint.
{"url":"https://cris.haifa.ac.il/en/publications/the-time-to-ruin-and-the-number-of-claims-until-ruin-for-phase-ty","timestamp":"2024-11-02T06:09:35Z","content_type":"text/html","content_length":"54003","record_id":"<urn:uuid:f281ee30-0ad0-4040-906a-27daea054c49>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00852.warc.gz"}
The Master of Engineering in Instrumentation and Applied Physics is an intensive professional degree program that can be completed in nine months while on campus. The program is designed as a unique discovery experience, offering greater technical depth than is possible in an undergraduate program, while providing the tools you'll need to successfully work within multidisciplinary teams. The coursework provides opportunities to become proficient within your technical discipline at the graduate level, and the background to become an effective professional physicist. Students need to complete the following coursework requirements: Required coursework (16 credit hours) Course Title Meeting times Hours Physics 503 Instrumentation Physics: Applications of Machine Learning Two 75-minute classes per week 4 Physics 523 Instrumentation and Applied Physics Project (two semesters) Two 3-hour classes each week 8 Physics 524 Survey of Instrumentation and Laboratory Techniques Two 50-minute classes per week 2 Physics 525 Survey of Fundamental Device Physics Two 50-minute classes per week 2 Total 16 Elective coursework (12 credit hours) Some courses may specify prerequisites, and some may require the permission of the instructor for students to enroll in the course. Please see the Course Explorer for more information. Course Title PHYS 427 Thermal & Statistical Physics PHYS 436 Electromagnetic Fields II PHYS 460 Condensed Matter Physics PHYS 487 Quantum Physics II PHYS 505 Classical Electromagnetism PHYS 508 Mathematical Physics I PHYS 509 Mathematical Physics II PHYS 514 Modern Atomic Physics PHYS 540 Astrophysics PHYS 560 Condensed Matter Physics I PHYS 561 Condensed Matter Physics II PHYS 565 Theory of Semiconductors and Semiconductor Devices PHYS 580 Quantum Mechanics I PHYS 598 CPA Computational Physics and Astrophysics Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Aerospace Engineering. Course Title AE 412 Viscous Flow & Heat Transfer AE 416 Applied Aerodynamics AE 420 Finite Element Analysis AE 433 Aerospace Propulsion AE 452 Introduction to Nonlinear Dynamics and Vibrations AE 485 Spacecraft Environment and Interactions AE 504 Optimal Aerospace Systems AE 512 Molecular Gas Dynamics Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Astronomy. Course Title ASTR 404 Stellar Astrophysics ASTR 501 Radiative Processes ASTR 502 Astrophysical Dynamics ASTR 503 Observational Astronomy ASTR 504 Theoretical Stellar Physics ASTR 505 Star Formation ASTR 506 Galaxies ASTR 507 Physical Cosmology Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Atmospheric Sciences. Course Title ATMS 404 Risk Analysis in Earth Science ATMS 410 Radar Meteorology ATMS 411 Satellite Remote Sensing ATMS 500 Dynamic Meteorology ATMS 535 Aerosol Sampling and Analysis No focus area credential available. Course Title BIOE/ECE 414 Biomedical Instrumentation Civil and Environmental Engineering Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Civil and Environmental Engineering. Course Title CEE 401 Concrete Materials CEE 405 Asphalt Materials I CEE 406 Pavement Design I CEE 408 Railroad Transportation Engrg CEE 409 Railroad Track Engineering CEE 434 Environmental Systems I CEE 437 Water Quality Engineering CEE 441 Air Pollution Sources, Transport and Control CEE 442 Environmental Engineering Principles, Physical CEE 450 Surface Hydrology CEE 451 Environmental Fluid Mechanics CEE 457 Groundwater CEE 458 Water Resources Field Methods CEE 461 Reinforced Concrete I CEE 465 Design of Structural Systems CEE 470 Structural Analysis CEE 471 Structural Mechanics CEE 472 Structural Dynamics I CEE 473 Wind Effects on Structures CEE 483 Soil Mechanics and Behavior CEE 545 Aerosol Sampling and Analysis CEE 582 Consolidation of Clays CEE 586 Rock Mechanics and Behavior CEE 588 Geotechnical Earthquake Engrg Chemical and Biomolecular Engineering No focus area credential available. Course Title CHBE 421 Momentum and Heat Transfer Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Computer Science. Course Title CS/ECE 407 Cryptography CS 411 Database Systems CS 424 Real-Time Systems CS 425 Distributed Systems CS 431 Embedded Systems CS 437 Topics in Internet of Things CS 439 Wireless Networks CS 450 Numerical Analysis CS 461/ ECE 422 Computer Security I CS 473 Algorithms CS 543 Computer Vision CS 555 Numerical Methods for PDEs CS 565 Human-Computer Interaction Electrical and Computer Engineering Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Electrical and Computer Engineering. Course Title ECE 401 Signal Processing ECE 402 Electronic Music Synthesis ECE 414 Biomedical Instrumentation ECE 416 Biosensors ECE 418 Image & Video Processing ECE 420 Embedded DSP Laboratory ECE 421 Neural Interface Engineering ECE 422 Computer Security I ECE 431 Electric Machinery ECE 437 Sensors and Instrumentation ECE 443 LEDs and Solar Cells ECE 455 Optical Electronics ECE 457 Microwave Devices & Circuits ECE 464 Power Electronics BIOE/ECE 467 Biophotonics ECE 470 Introduction to Robotics ECE 476 Power System Analysis ECE 481 Nanotechnology ECE 486 Control Systems ECE 489 Robot Dynamics and Control ECE 491 Numerical Analysis ECE 520 EM Waves & Radiating Systems ECE 522 Emerging Memory and Storage Systems ECE 523 Plasma Technology of Gaseous Electronics ECE 528 Analysis of Nonlinear Systems ECE 535 Theory of Semiconductors & Devices ECE 538 2D Material Electronics and Photonics ECE 540 Computational Electromagnetics ECE 558 Digital Imaging ECE 570 Nonlinear Optics ECE 572 Quantum Opto-Electronics Food Science and Human Nutrition Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Food Science and Human Nutrition. Course Title FSHN 414 Food Chemistry FSHN 418 Food Analysis FSHN 460 Food Processing Engineering FSHN 464 Beverage Science & Technology FSHN 465 Principles of Food Technology FSHN 466 Food Product Development FSHN 480 Basic Toxicology FSHN 574 Value added biotransformation FSHN 576 Food Safety for Global Food Security Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Geology. Course Title GEOL 411 Structural Geol and Tectonics GEOL 432 Mineralogy and Mineral Optics GEOL 436 Petrology and Petrography GEOL 440 Sedimentology and Stratigraphy GEOL 470 Introduction to Hydrogeology GEOL 561 Geomicrobiology & Geochemistry GEOL 562 Isotope Geology GEOL 571 Contaminant Fate and Transport GEOL 573 River Morphodynamics Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Health Technology. Course Title HT 502 Human Factors Methods for Health Technology HT 503 Hardware Engineering for Health Technology HT 504 Software Engineering for Health Technology Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Information Sciences. Course Title IS 445 Data Visualization IS 464 Information Assurance IS 477 Data Management, Curation & Reproducibility IS 515 Information Modeling IS 525 Data Warehousing and Business Intelligence IS 537 Theory & Practice of Data Cleaning IS 541 Copyright for Information Professions IS 543 Digital Preservation Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Mathematics. Course Title MATH 413 Intro to Combinatorics MATH 415 Applied Linear Algebra MATH 441 Differential Equations MATH 442 Intro Partial Diff Equations MATH 446 Applied Complex Variables MATH 461 Probability Theory MATH 463 Statistics and Probability I MATH 473 Algorithms MATH 481 Vector and Tensor Analysis MATH 541 Functional Analysis MATH 545 Harmonic Analysis MATH 552 Numerical Methods for PDEs Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Mechanical Engineering. Course Title ME 400 Energy Conversion Systems ME 402 Design of Thermal Systems ME 403 Internal Combustion Engines ME 404 Intermediate Thermodynamics ME 410 Intermediate Gas Dynamics ME 411 Viscous Flow & Heat Transfer ME 412 Numerical Thermo-Fluid Mechs ME 420 Intermediate Heat Transfer ME 430 Failure of Engrg Materials ME 445 Introduction to Robotics ME 447 Computational Design and Dynamics of Soft Systems ME 452 Num Control of Mfg Processes ME 458 Additive Manufacturing and Product Design ME 460 Industrial Control Systems ME 461 Computer Cntrl of Mech Systems ME 471 Finite Element Analysis ME 472 Introduction to Tribology ME 481 Whole-Body Musculoskel Biomech ME 482 Musculoskel Tissue Mechanics ME 483 Mechanobiology ME 487 MEMS-NEMS Theory & Fabrication ME 501 Combustion Fundamentals ME 502 Thermal Systems ME 504 Multiphase Systems & Processes ME 522 Thermal Radiation ME 530 Fatigue Analysis ME 540 Control System Theory & Design ME 546 Analysis of Nonlinear Systems ME 586 Mechanics of MEMS Materials Science and Engineering No focus area credential available. Course Title MSE 402 Kinetic Processes in Materials MSE 403 Synthesis of Materials MSE 405 Microstructure Determination MSE 406 Thermal-Mech Behavior of Matls MSE 420 Ceramic Materials & Properties MSE 422 Electrical Ceramics MSE 440 Mechanical Behavior of Metals MSE 441 Metals Processing MSE 443 Design of Engineering Alloys MSE 450 Polymer Science & Engineering MSE 455 Macromolecular Solids MSE 456 Mechanics of Composites MSE 458 Polymer Physics MSE 460 Electronic Materials I MSE 464 Magnetic Materials and their Applications MSE 466 Materials in Electrochem Syst MSE 470 Design and Use of Biomaterials MSE 473 Biomolecular Materials Science MSE 474 Biomaterials and Nanomedicine MSE 480 Surfaces and Colloids MSE 485 Atomic Scale Simulations MSE 487 Materials for Nanotechnology MSE 488 Optical Materials MSE 501 Kinetic Processes in Materials MSE 581 Advanced Electron Microscopy MSE 582 Surface Physics MSE 583 Dynamics of Complex Fluids MSE 584 Point and Line Defects Nuclear, Plasma, and Radiological Engineering Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Nuclear Technology. Course Title NPRE 402 Nuclear Power Engineering NPRE 421 Plasma and Fusion Science NPRE 429 Plasma Engineering NPRE 435 Radiological Imaging NPRE 441 Radiation Protection NPRE 442 Radioactive Waste Management NPRE 445 Interaction of Radiation with Matter NPRE 455 Neutron Diffusion & Transport NPRE 470 Fuel Cells & Hydrogen Sources NPRE 475 Wind Power Systems NPRE 526 Plasma-Material Interactions Natural Resources and Environmental Sciences Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Natural Resources and Environmental Sciences. Course Title NRES 401 Watershed Hydrology NRES 403 Watersheds and Water Quality NRES 406 Fluvial Geomorphology NRES 474 Soil and Water Conservation NRES 480 Human-Wildlife Interactions NRES 488 Soil Fertility and Fertilizers Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Statistics. Course Title STAT 400 Statistics and Probability I STAT 408 Actuarial Statistics I STAT 420 Methods of Applied Statistics STAT 424 Analysis of Variance STAT 425 Statistical Modeling I STAT 428 Statistical Computing STAT 429 Time Series Analysis STAT 431 Applied Bayesian Analysis STAT 433 Stochastic Processes STAT 440 Statistical Data Management STAT 530 Bioinformatics STAT 558 Risk Modeling and Analysis STAT 571 Multivariate Analysis STAT 575 Large Sample Theory STAT 587 Hierarchical Linear Models Theoretical and Applied Mechanics Students graduating with eight or more credit hours of electives will be considered to have completed a focus area in Instrumentation and Theoretical and Applied Mechanics. Course Title TAM 412 Intermediate Dynamics TAM 413 Fund of Engrg Acoustics TAM 416 Introduction to Nonlinear Dynamics and Vibrations TAM 424 Mechanics of Structural Metals TAM 428 Mechanics of Composites TAM 435 Intermediate Fluid Mechanics TAM 445 Continuum Mechanics TAM 451 Intermediate Solid Mechanics TAM 456 Experimental Stress Analysis TAM 470 Computational Mechanics TAM 514 Elastodynamics and Vibrations TAM 524 Micromechanics of Materials TAM 531 Inviscid Flow TAM 532 Viscous Flow TAM 534 Non-Newtonian Fluid Mechanics & Rheology TAM 549 Asymptotic Methods TAM 551 Solid Mechanics I TAM 555 Fracture Mechanics Professional development coursework (4 credit hours) Course Title Hours TE 450 Startups: incorporation, funding, contracts, and intellectual property 3 TE 460 Lectures in engineering entrepreneurship 1 TE 461 Technology entrepreneurship 3 TE 466 High-tech venture marketing 2 TE 565 Technological innovation and strategy 2 TE 566 Finance for engineering management 2 ENG 598 CPD Seminar on topics for Professional Master's students interested in an industry position 1 Choose 4 credit hours from the courses listed above, or with approval of the program coordinator, other courses in Business, Law, or Economics. Total 4 Sample programs Students graduating with eight or more credit hours of electives in a single area (such as Aerospace Engineering) will be considered to have completed a focus area. Keep in mind that some courses may specify prerequisites, and some may require the permission of the instructor for students to enroll in the course.
{"url":"https://physics.illinois.edu/academics/masters/curriculum","timestamp":"2024-11-14T04:05:14Z","content_type":"text/html","content_length":"87809","record_id":"<urn:uuid:79bfc5f1-de04-4b58-b2df-a62a47d9bbbd>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00785.warc.gz"}
Implementing Support Vector Machines (SVMs) from Scratch Support Vector Machines (SVMs) are a powerful set of supervised learning methods used for classification, regression, and outlier detection. This tutorial will guide you through implementing SVMs from scratch, focusing on classification. By the end, you’ll understand the theory behind SVMs and how to code them without relying on external libraries. This tutorial assumes you have a good grasp of Python and linear algebra. 1. Introduction to Support Vector Machines What is a Support Vector Machine? A Support Vector Machine (SVM) is a supervised machine learning algorithm that can be used for classification or regression challenges. It works by finding the hyperplane that best divides a dataset into classes. In a two-dimensional space, this hyperplane is a line dividing a plane into two parts where each class lies on either side. Why Use SVM? • Effective in high-dimensional spaces: Particularly useful when the number of dimensions exceeds the number of samples. • Memory efficient: Uses a subset of training points in the decision function (called support vectors). • Versatile: Different kernel functions can be specified for the decision function. Common kernels include linear, polynomial, RBF (Gaussian), and sigmoid. 2. Mathematical Foundation of SVM Hyperplanes and Support Vectors A hyperplane in an $n$-dimensional space is a flat affine subspace of dimension $n-1$. For a 2D space, the hyperplane is a line. In SVM, we aim to find a hyperplane that maximizes the margin between the two classes. The points lying closest to the hyperplane are called support vectors. The margin is the distance between the hyperplane and the closest data points from either class. Maximizing the margin helps improve the model’s generalization ability. Mathematical Formulation Given a training dataset $\{(x_i, y_i)\}_{i=1}^n$ where $x_i \in \mathbb{R}^n$ and $y_i \in \{-1, 1\}$, the decision function for a linear SVM is: $f(x) = \mathbf{w}^T \mathbf{x} + b$ The goal is to find $\mathbf{w}$ and $b$ such that the margin is maximized. This can be formulated as a constrained optimization problem: Minimize $\frac{1}{2} ||\mathbf{w}||^2$ Subject to $y_i (\mathbf{w}^T \mathbf{x}_i + b) \geq 1$ for all $i$. The Dual Problem Using Lagrange multipliers, the above problem can be converted into its dual form, which is easier to solve: $L(\alpha) = \sum_{i=1}^n \alpha_i - \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \alpha_j y_i y_j \mathbf{x}_i^T \mathbf{x}_j$ Subject to: $\sum_{i=1}^n \alpha_i y_i = 0$ $\alpha_i \geq 0 \text{ for all } i$ where $\alpha_i$ are the Lagrange multipliers. Kernel Trick The kernel trick allows SVM to create non-linear decision boundaries by transforming the input space into a higher-dimensional space where a linear separation is possible. Common kernels include: • Linear Kernel: $K(\mathbf{x}, \mathbf{x}') = \mathbf{x}^T \mathbf{x}'$ • Polynomial Kernel: $K(\mathbf{x}, \mathbf{x}') = (\mathbf{x}^T \mathbf{x}' + c)^d$ • Radial Basis Function (RBF) Kernel: $K(\mathbf{x}, \mathbf{x}') = \exp(-\gamma ||\mathbf{x} - \mathbf{x}'||^2)$ 3. Implementing SVM from Scratch Data Preprocessing Before we start coding the SVM, let’s preprocess the data. We’ll use the Iris dataset for simplicity. import numpy as np from sklearn.datasets import load_iris from sklearn.model_selection import train_test_split from sklearn.preprocessing import StandardScaler # Load the Iris dataset iris = load_iris() X = iris.data[:100, :2] # We will use only the first two features and two classes for simplicity y = iris.target[:100] # Convert the labels to {-1, 1} y = np.where(y == 0, -1, 1) # Split the data into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Standardize the features scaler = StandardScaler() X_train = scaler.fit_transform(X_train) X_test = scaler.transform(X_test)Code language: Python (python) Kernel Functions Let’s implement some kernel functions: def linear_kernel(x1, x2): return np.dot(x1, x2) def polynomial_kernel(x1, x2, degree=3, coef0=1): return (np.dot(x1, x2) + coef0) ** degree def rbf_kernel(x1, x2, gamma=0.1): return np.exp(-gamma * np.linalg.norm(x1 - x2) ** 2)Code language: Python (python) Optimization Problem The optimization problem involves finding the values of $latex \alpha) that maximize the dual problem. We will use the Sequential Minimal Optimization (SMO) algorithm to solve this. Sequential Minimal Optimization (SMO) SMO breaks the problem into smaller subproblems, which are then solved analytically. This approach significantly simplifies the optimization process. class SVM: def __init__(self, kernel=linear_kernel, C=1.0, tol=1e-3, max_passes=5): self.kernel = kernel self.C = C self.tol = tol self.max_passes = max_passes def fit(self, X, y): n_samples, n_features = X.shape self.alpha = np.zeros(n_samples) self.b = 0 self.X = X self.y = y passes = 0 while passes < self.max_passes: num_changed_alphas = 0 for i in range(n_samples): E_i = self._decision_function(X[i]) - y[i] if (y[i] * E_i < -self.tol and self.alpha[i] < self.C) or (y[i] * E_i > self.tol and self.alpha[i] > 0): j = np.random.randint(0, n_samples) while j == i: j = np.random.randint(0, n_samples) E_j = self._decision_function(X[j]) - y[j] alpha_i_old = self.alpha[i] alpha_j_old = self.alpha[j] if y[i] != y[j]: L = max(0, self.alpha[j] - self.alpha[i]) H = min(self.C, self.C + self.alpha[j] - self.alpha[i]) L = max(0, self.alpha[j] + self.alpha[i] - self.C) H = min(self.C, self.alpha[j] + self.alpha[i]) if L == H: eta = 2 * self.kernel(X[i], X[j]) - self.kernel(X[i], X[i]) - self.kernel(X[j], X[j]) if eta >= 0: self.alpha[j] -= y[j] * (E_i - E_j) / eta self.alpha[j] = np.clip(self.alpha[j], L, H) if abs(self.alpha[j] - alpha_j_old) < 1e-5: self.alpha[i] += y[i] * y[j] * (alpha_j_old - self.alpha[j]) b1 = self.b - E_i - y[i] * (self.alpha[i] - alpha_i_old) * self.kernel(X[i], X[i]) - y[j] * (self.alpha[j] - alpha_j_old) * self.kernel(X[i], X[j]) b2 = self.b - E_j - y[i] * (self.alpha[i] - alpha_i_old) * self.kernel(X[i], X[j]) - y[j] * (self.alpha[j] - alpha_j_old) * self.kernel(X[j], X[j]) if 0 < self.alpha[i] < self.C: self.b = b1 elif 0 < self.alpha[j] < self.C: self.b = b2 self.b = (b1 + b2) / 2 num_changed_alphas += 1 if num_changed_alphas == 0: passes += 1 passes = 0 def _decision_function(self, X): return np.dot((self.alpha * self.y), self.kernel(self.X, X)) + self.b def predict(self, X): return np.sign(self._decision_function(X))Code language: Python (python) Training and Testing the SVM Now, let’s train our SVM model and test it on the test data. # Initialize the SVM with a linear kernel svm = SVM(kernel=linear_kernel, C=1.0) # Train the SVM svm.fit(X_train, y_train) # Predict the test data y_pred = svm.predict(X_test) # Calculate the accuracy accuracy = np.mean(y_pred == y_test) print(f'Accuracy: {accuracy * 100:.2f}%')Code language: Python (python) 6. Evaluation Metrics While accuracy is a common metric, it’s important to consider other metrics, especially when dealing with imbalanced datasets. Precision, Recall, and F1-Score from sklearn.metrics import precision_score, recall_score, f1_score precision = precision_score(y_test, y_pred) recall = recall_score(y_test, y_pred) f1 = f1_score(y_test, y_pred) print(f'Precision: {precision:.2f}') print(f'Recall: {recall:.2f}') print(f'F1-Score: {f1:.2f}')Code language: Python (python) Confusion Matrix A confusion matrix provides a detailed breakdown of prediction results. from sklearn.metrics import confusion_matrix conf_matrix = confusion_matrix(y_test, y_pred) print('Confusion Matrix:') print(conf_matrix)Code language: Python (python) 7. Optimizations and Practical Tips Handling Imbalanced Data When dealing with imbalanced data, you can adjust the class weights. class SVM: def __init__(self, kernel=linear_kernel, C=1.0, tol=1e-3, max_passes=5, class_weight=None): self.kernel = kernel self.C = C self.tol = tol self.max_passes = max_passes self.class_weight = class_weight def fit(self, X, y): n_samples, n_features = X.shape self.alpha = np.zeros(n_samples) self.b = 0 self.X = X self.y = y if self.class_weight: weight = np.vectorize(self.class_weight.get)(y) weight = np.ones(n_samples) passes = 0 while passes < self.max_passes: num_changed_alphas = 0 for i in range(n_samples): E_i = self._decision_function(X[i]) - y[i] if (y[i] * E_i < -self.tol and self.alpha[i] < self.C * weight[i]) or (y[i] * E_i > self.tol and self.alpha[i] > 0): j = np.random.randint(0, n_samples) while j == i: j = np.random.randint(0, n_samples) E_j = self._decision_function(X[j]) - y[j] alpha_i_old = self.alpha[i] alpha_j_old = self.alpha[j] if y[i] != y[j]: L = max(0, self.alpha[j] - self.alpha[i]) H = min(self.C * weight[j], self.C * weight[j] + self.alpha[j] - self.alpha[i]) L = max(0, self.alpha[j] + self.alpha[i] - self.C * weight[j]) H = min(self.C * weight[j], self.alpha[j] + self.alpha[i]) if L == H: eta = 2 * self.kernel(X[i], X[j]) - self.kernel(X[i], X[i]) - self.kernel(X[j], X[j]) if eta >= 0: self.alpha[j] -= y[j] * (E_i - E_j) / eta self.alpha[j] = np.clip(self.alpha[j], L, H) if abs(self.alpha[j] - alpha_j_old) < 1e-5: self.alpha[i] += y[i] * y[j] * (alpha_j_old - self.alpha[j]) b1 = self.b - E_i - y[i] * (self.alpha[i] - alpha_i_old) * self.kernel(X[i], X[i]) - y[j] * (self.alpha[j] - alpha_j_old) * self.kernel(X[i], X[j]) b2 = self.b - E_j - y[i] * (self.alpha[i] - alpha_i_old) * self.kernel(X[i], X[j]) - y[j] * (self.alpha[j] - alpha_j_old) * self.kernel(X[j], X[j]) if 0 < self.alpha[i] < self.C * weight[i]: self.b = b1 elif 0 < self.alpha[j] < self.C * weight[j]: self.b = b2 self.b = (b1 + b2) / 2 num_changed_alphas += 1 if num_changed_alphas == 0: passes += 1 passes = 0Code language: Python (python) Feature Scaling Feature scaling can significantly impact the performance of SVMs. Ensure that your data is standardized or normalized before training. Parameter Tuning Tuning parameters like $C$, kernel type, and kernel parameters is crucial for achieving optimal performance. Use grid search or randomized search to find the best parameters. from sklearn.model_selection import GridSearchCV from sklearn.svm import SVC # Define the parameter grid param_grid = { 'C': [0.1, 1, 10], 'kernel': ['linear', 'poly', 'rbf'], 'gamma': [0.001, 0.01, 0.1, 1], 'degree': [2, 3, 4] # Initialize the SVM model svc = SVC() # Initialize the grid search grid_search = GridSearchCV(svc, param_grid, cv=5, scoring='accuracy') # Fit the grid search grid_search.fit(X_train, y_train) # Print the best parameters and score print(f'Best Parameters: {grid_search.best_params_}') print(f'Best Score: {grid_search.best_score_}')Code language: Python (python) 8. Conclusion In this tutorial, we’ve implemented a Support Vector Machine (SVM) from scratch. We’ve covered the mathematical foundations, implemented the SMO algorithm, and explored how to handle real-world challenges like imbalanced data and feature scaling. By understanding the inner workings of SVMs, you can better leverage their power and apply them effectively in various machine learning tasks. Further Reading • Pattern Recognition and Machine Learning by Christopher M. Bishop • The Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman • Support Vector Machines: Concepts and Applications by Ingo Steinwart and Andreas Christmann Practice Problems • Implement SVM with a different kernel (e.g., polynomial or RBF) from scratch. • Apply your SVM implementation to a different dataset (e.g., the digits dataset). • Experiment with different values of $C$ and observe how it affects the decision boundary and performance. By following this tutorial and experimenting further, you’ll gain a deep understanding of SVMs and be well-equipped to use them in your machine learning projects.
{"url":"https://www.w3computing.com/articles/implementing-support-vector-machines-svms-from-scratch/","timestamp":"2024-11-02T20:33:19Z","content_type":"text/html","content_length":"83586","record_id":"<urn:uuid:ac66f140-fb4e-4384-a5d5-6d0415e7ac30>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00552.warc.gz"}
Understanding database [7] : Hash based indexes • Get link • Facebook • Twitter • Pinterest • Email • Other Apps Understanding database [7] : Hash based indexes Understanding database 7 : Hash based indexes When we are to implement hash based indexes, it is key that we should partition our data into buckets. As for the rule to allocate entries, we use each entry’s search key as its bucket id to partition, where each bucket will hold entries that share a same property. In order to determine what is that property we have chosen to partition with, we will have to introduce a hash function h(v), where v ranges over all the search key values. We will consider each bucket as a unit of storage containing one or more records that is stored in a page. It is often considered better to use hash based indexes for equality searches, and it is unable to support range searches due to the use of buckets. The steps of a equality search with hash index is as below : 1. Given v value, compute with h(v) = a 2. Retrieve the bucket a 3. Search the wanted entry from the bucket As we may see from above, the cost of searching is equivalent to the number of pages in the bucket, which is superior compared to B+ trees (if no overflow chains). One variety of hashing that is often used is the static hashing method. The bucket allocation is based on a modulo value that is equivalent to the number of buckets existing primarily. The id of the bucket is being retrieved via h(k) % m, where m is the number of buckets existing. The number of pages are fixed, and are never deallocated by any chance and are sequentially allocated. As a result of the fixed number of pages, overflow pages may be needed for entries outside the capacity, and this may pull the efficiency down. Extendible hashing is another variety which deals with the excessive overflowing chains that may occur. Each time when we have full pages, we will double the number of buckets by 2, and reallocate the entries based on their hashes. We will need a ‘global depth’ variable to indicate the number of bits that we will need to get from the hash function. For example, when we have a hashed result of binary encoding 101, where our global depth is 2, we will take the last 2 bits 01 as the bucket id and will not look at the entire 101 (at least for this global depth). Each bucket will also have a local depth which denotes the number of bits it uses to identify itself from others. The splitting of buckets will occur when the local depth is larger than the global depth, then the directory size (indicator of number of buckets) will be doubled and the entries will be redistributed with the corresponding number of bits needed. It is worth noting that if the distribution of hash value are skew, the directory may end up growing pretty large, and if the directory fits in the memory, equality searches may be answered with 1, or else 2 disk accesses. For deletion in extendible hashing, if the removal of a data entry results in a empty bucket, it can be merged its split image. If we have each directory element pointing to its split image, we can reduce the directory to half. • Get link • Facebook • Twitter • Pinterest • Email • Other Apps Popular Posts • Get link • Facebook • Twitter • Pinterest • Email • Other Apps 3 comments • Get link • Facebook • Twitter • Pinterest • Email • Other Apps 1 comment
{"url":"http://www.gokulab.com/2017/11/understanding-database-7-hash-based.html","timestamp":"2024-11-05T08:55:48Z","content_type":"text/html","content_length":"99008","record_id":"<urn:uuid:01cf611d-d80c-4a4b-9337-bb7f89c061a2>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00555.warc.gz"}