Precalculus - Online Tutor, Practice Problems & Exam Prep Hey, everyone. We just learned how trig functions relate angles to their corresponding point on the unit circle, where x and y values are cosine and sine values of that angle, respectively. And that's where memorization comes in. Now, memorization and math can be really tricky, but here I'm going to walk you through 2 different ways to memorize the trig values of the 3 most common angles, 30, 45, and 60 degrees. With this knowledge, you'll be able to solve pretty much any trig problem that gets thrown your way. So let's go ahead and get started here with what you may hear referred to as the 1, 2, 3 rule. No matter how you choose to memorize these values, you're always going to start in the same way with the square root over 2. Because all of these trig values are the square root of something over 2, and our job is just to memorize what that something is. So memorizing that with the 1, 2, 3 rule, you may be wondering why it's called that. And it's because we're going to start with our x values and count 1, 2, 3 going clockwise. Then for our y values, we're going to count 1, 2, 3 going counterclockwise. Now, what exactly do I mean by that? Well, we're going to start in this upper left corner with the x value of 60 degrees, and we're going to start counting from 1, going 1, 2, 3 clockwise around our unit circle for our x values. Then we're going to go back counterclockwise and count 1, 2, 3 back up for our y values. These are all of our trig values. We've gone 1, 2, 3 clockwise, 1, 2, 3 counterclockwise, and we're done. Now we can do a bit more simplification here because we know that the square root of 1 is just 1. So this x value or this cosine of 60 is really just 1/2, and the sine value of 30 degrees or the y value is also just 1/2. Now, remember that these values, these x and y values, also represent the base and height of the corresponding triangle. 
And we also don't want to forget about our tangent value. Remember that the tangent of any angle can be found by simply dividing sine by cosine, so once we have those sine and cosine values, we can find our tangent pretty easily. So looking at 30 degrees here, if I take my sine value, 1/2, and divide it by my cosine value to get tangent, I'm really just dividing those numerators because they have the same exact denominators. So for my tangent, I get 1 over the square root of 3. Or with my denominator rationalized, I get the square root of 3 over 3 as my tangent. Now you can find the tangent value of these other two angles the same exact way, and you can feel free to pause here and try that on your own. Now this was the 1, 2, 3 method, but remember that I promised you 2 different methods of memorizing this. So let's take a look at one more, a bit more visual, hands-on approach to memorizing these values, referred to as the left-hand rule. So for our left-hand rule, you're going to take, you guessed it, your left hand and put it in front of your face like this. Now to you, your hand will look something like this, and I want you to consider your pinky as being at 0 degrees and your thumb as being at 90 degrees, effectively making your left hand the first quadrant of the unit circle. So your other three fingers are going to represent those three common angles: 30 degrees, 45 degrees, and 60 degrees. Now really imagine your left hand as being that first quadrant of the unit circle. And from here, we can find our trig values by simply counting on our fingers. So let's go ahead and focus in on the angle 30 degrees. To do that, you're going to look at your hand, take that finger closest to your pinky that represents 30 degrees, and fold it inward. Now we're going to count the number of fingers above that folded-in finger and below that folded-in finger in order to find our trig values.
So for our sine and cosine of 30 degrees, we're going to start in that same way. Remember, the square root of something over 2, and our number of fingers is going to tell us what that something is. So with our finger folded in here, we're going to count the number of fingers that are above that folded-in 30-degree finger and put that under our square root in order to get the cosine of 30 degrees. So for the cosine of 30 degrees here, counting those fingers, I have 3 fingers above. So this is the square root of 3 over 2. The cosine of any angle is the square root of your fingers above, divided by 2. Now for our sine, we're instead going to look at the number of fingers below, in this case just one, our pinky. So we get the sine of 30 degrees as being the square root of 1 over 2, or just 1/2. Now for our tangent, remember that we can always just take the sine divided by the cosine. Or here, we can also rely on counting our fingers again. So for the tangent of an angle, we're going to take the square root of the fingers below that folded-in finger and divide it by the square root of the fingers above. So here the fingers below again were 1. So the square root of 1 over the square root of 3. So for the tangent of 30 degrees, we get 1 over the square root of 3. Or with that denominator rationalized, we end up with the square root of 3 over 3. This left-hand rule will work for any angle of the first quadrant: fold in any finger and use this to find your trig values. Now that we've seen these trig values of these common angles, let's get a bit more practice in that first quadrant. Thanks for watching, and I'll see you in the next one.
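The "square root of something over 2" pattern from the lesson can be checked numerically. This is a sketch of my own (the function name and table are not from the transcript); it encodes the 1, 2, 3 rule and compares it against the math library:

```python
import math

# The 1, 2, 3 rule: for the common first-quadrant angles, cosine and sine
# are both of the form sqrt(n)/2 with n counted from {1, 2, 3}.
COMMON = {
    30: (math.sqrt(3) / 2, math.sqrt(1) / 2),  # (cos, sin)
    45: (math.sqrt(2) / 2, math.sqrt(2) / 2),
    60: (math.sqrt(1) / 2, math.sqrt(3) / 2),
}

def trig_values(deg):
    """Return (cos, sin, tan) for 30, 45, or 60 degrees via the pattern.

    Tangent is sine divided by cosine, exactly as in the lesson.
    """
    cos_v, sin_v = COMMON[deg]
    return cos_v, sin_v, sin_v / cos_v
```

For example, `trig_values(30)` reproduces cos 30° = √3/2, sin 30° = 1/2, and tan 30° = 1/√3 = √3/3.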
Thinking Mathematically, Seventh Edition
Chapter 2: Set Theory
Copyright © 2019, 2015, 2011 Pearson Education, Inc. All Rights Reserved

Section 2.1 Basic Set Concepts

Objectives
1. Use three methods to represent sets.
2. Define and recognize the empty set.
3. Use the symbols ∈ and ∉.
4. Apply set notation to sets of natural numbers.
5. Determine a set's cardinal number.
6. Recognize equivalent sets.
7. Distinguish between finite and infinite sets.
8. Recognize equal sets.

Sets
A set is a collection of objects whose contents can be clearly determined. Elements or members are the objects in a set. A set must be well-defined, meaning that its contents can be clearly determined. The order in which the elements of the set are listed is not important.

Methods for Representing Sets
Capital letters are generally used to name sets. A word description can designate or name a set. Use W to represent the set of the days of the week. Use the roster method to list the members of a set. Commas are used to separate the elements of the set. Braces, { }, are used to designate that the enclosed elements form a set.

Example 1: Representing a Set Using a Description
Write a word description of the set P.
Solution: Set P is the set of the first five presidents of the United States.

Example 2: Representing a Set Using the Roster Method
Set C is the set of U.S. coins with a value of less than a dollar. Express this set using the roster method.
Solution: C = {penny, nickel, dime, quarter, half-dollar}.

Set-Builder Notation
W = {x | x is a day of the week}. We read this notation as "Set W is the set of all elements x such that x is a day of the week." Before the vertical line is the variable x, which represents an element in general. After the vertical line is the condition x must meet in order to be an element of the set.

Example 3: Converting from Set-Builder to Roster Notation
Express the set using the roster method.
Solution: There are two months, namely March and May.

The Empty Set
The empty set, also called the null set, is the set that contains no elements. The empty set is represented by { } or ∅. These are examples of empty sets: the set of all numbers less than 4 and greater than 10.

Example 4: Recognizing the Empty Set (1 of 2)
Which of the following is the empty set?
a. No. This is a set containing one element.
b. 0. No. This is a number, not a set.

Example 4: Recognizing the Empty Set (2 of 2)
Which of the following is the empty set?
c. No. This set contains all numbers that are either less than 4, such as 3, or greater than 10, such as 11.
d. Yes. There are no squares with three sides.

Notations for Set Membership
The symbol ∈ is used to indicate that an object is an element of a set; it is used to replace the words "is an element of." The symbol ∉ is used to indicate that an object is not an element of a set; it is used to replace the words "is not an element of."
Example 5: Using the Symbols ∈ and ∉
Determine whether each statement is true or false:
a. True
b. True
c. False. The object is a set, and that set is not an element of the given set.

Sets of Natural Numbers
The set of natural numbers is N = {1, 2, 3, 4, 5, ...}. The three dots, or ellipsis, after the 5 indicate that there is no final element and that the list goes on forever.

Example 6: Representing Sets of Natural Numbers
Express each of the following sets using the roster method:
a. Set A is the set of natural numbers less than 5.
b. Set B is the set of natural numbers greater than or equal to 25.
c.

Inequality Notation and Sets (1 of 2)

Inequality Notation and Sets (2 of 2)

Example 7: Representing Sets of Natural Numbers
Express each of the following sets using the roster method:
a. Solution:
b. Solution:

Cardinality and Equivalent Sets
Definition of a Set's Cardinal Number: the cardinal number of a set A, represented by n(A), is the number of distinct elements in set A. The symbol n(A) is read "n of A." Repeating elements in a set neither adds new elements to the set nor changes its cardinality.

Example 8: Cardinality of Sets
Find the cardinal number of each of the following sets:
a.
b.
c.
Equivalent Sets (1 of 3)
Definition of Equivalent Sets: set A is equivalent to set B means that set A and set B contain the same number of elements. For equivalent sets, n(A) = n(B).

Equivalent Sets (2 of 3)
These are equivalent sets: the lines with arrowheads indicate that each element of set A can be paired with exactly one element of set B, and each element of set B can be paired with exactly one element of set A.

Equivalent Sets (3 of 3)
One-to-One Correspondences and Equivalent Sets
1. If set A and set B can be placed in a one-to-one correspondence, then A is equivalent to B: n(A) = n(B).
2. If set A and set B cannot be placed in a one-to-one correspondence, then A is not equivalent to B: n(A) ≠ n(B).

Example 9: Determining If Sets Are Equivalent (1 of 3)
This figure shows the preferred age difference in a mate in five selected countries.
A = the set of five countries shown.
B = the set of the average number of years women in each of these countries prefer men who are older than themselves.
Are these sets equivalent? Explain.

Example 9: Determining If Sets Are Equivalent (2 of 3)
Method 1: Trying to Set Up a One-to-One Correspondence.
Solution: The lines with the arrowheads indicate that the correspondence between the sets is not one-to-one. The elements Poland and Italy from set A are both paired with the element 3.3 from set B. These sets are not equivalent.
Example 9: Determining If Sets Are Equivalent (3 of 3)
Method 2: Counting Elements.
Solution: Set A contains five distinct elements; set B contains four distinct elements. Because the sets do not contain the same number of elements, they are not equivalent.

Finite and Infinite Sets
Set A is a finite set if n(A) = 0 (that is, A is the empty set) or n(A) is a natural number. A set whose cardinality is not 0 or a natural number is called an infinite set.

Equal Sets
Definition of Equality of Sets: set A is equal to set B means that set A and set B contain exactly the same elements, regardless of order or possible repetition of elements. We symbolize the equality of sets A and B using the statement A = B.

Example 10: Determining Whether Sets Are Equal
Determine whether each statement is true or false:
a. True
b. False
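The distinction between equal sets, equivalent sets, and cardinality can be sketched in Python. This is an illustration of my own (not from the slides); lists stand in for sets so that repetition and order are visible:

```python
def cardinal_number(elements):
    """n(A): the number of *distinct* elements; repetition adds nothing."""
    return len(set(elements))

def are_equivalent(a, b):
    """A is equivalent to B: same number of elements, i.e. n(A) = n(B)."""
    return cardinal_number(a) == cardinal_number(b)

def are_equal(a, b):
    """A = B: exactly the same elements, regardless of order or repetition."""
    return set(a) == set(b)

# {1, 2, 3} and {3, 2, 1, 1} are equal (and therefore equivalent),
# while {1, 2, 3} and {4, 5, 6} are equivalent but not equal.
```

Note that equal sets are always equivalent, but equivalent sets need not be equal, exactly as in Examples 9 and 10.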
Multiplication Table Worksheet Help

Number Set
The Number Set option lets you select how large the multiplication table will be. The larger the table, the more fact families it uses. For example, a number set of "1 to 5" includes the fact families from 1 to 5 (25 total facts), whereas the number set "1 to 12" includes fact families up through 12, for a total of 144 facts.

Difficulty Level
The difficulty level indicates how many facts you want the student to complete in the multiplication table. The more difficult the level, the more blank facts the student will have to fill in on the worksheet. If you select the "Wow!" level, the student will be required to fill in almost the entire multiplication table.

The following are options for creating a custom multiplication worksheet:
• Randomize Table - With this option, the fact families in your multiplication table appear in random order. For example, rather than the fact families being ordered 1-2-3-4-etc., the randomized worksheet might be 3-1-4-2-etc. A randomized worksheet is considered more difficult to complete.

Fact Families
When a multiplication table worksheet is generated, the facts to be completed by the student are randomly selected across all the fact families. By selecting individual fact families, you can create worksheets tailored to practicing specific fact families. For example, if you want a worksheet that focuses on the 3 and 7 fact families, select only the "3" and "7" checkboxes.

For each set of options you select on the build page, you will get the same problems each time you click the Generate button. If you are not satisfied with the problems generated for you, click the Reshuffle button; once you click Reshuffle, you can re-click Generate to create a new set of problems. There is no limit to the number of times you can reshuffle your worksheet.
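The generator described above can be sketched in a few lines of Python. This is purely illustrative (the function and parameter names are my own, not Math Fact Cafe's): it builds a fact grid for a number set, optionally shuffles the fact-family order, and blanks out a difficulty-dependent number of cells:

```python
import random

def make_worksheet(low=1, high=5, blanks=10, randomize=False, seed=None):
    """Build a multiplication-table worksheet as described in the help text.

    Returns (families, grid) where grid maps (row, col) fact-family pairs
    to products, with `blanks` randomly chosen cells set to None for the
    student to fill in. `randomize` shuffles the fact-family order.
    """
    rng = random.Random(seed)
    families = list(range(low, high + 1))
    if randomize:
        rng.shuffle(families)
    grid = {(r, c): r * c for r in families for c in families}
    for cell in rng.sample(sorted(grid), k=min(blanks, len(grid))):
        grid[cell] = None  # blank fact for the student to complete
    return families, grid
```

A "1 to 5" number set yields the 25 total facts the help text mentions; passing the same `seed` reproduces the same problems, and changing it plays the role of the Reshuffle button.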
Excel Formula for Time Comparison and Cell Color In this tutorial, we will learn how to write an Excel formula that compares time values and turns a cell green if the time is greater than a specific value. This can be achieved using the IF function and conditional formatting in Excel. By following the step-by-step explanation provided below, you will be able to implement this formula in your own Excel spreadsheets. To begin, we will use the TIME function in Excel to create a time value of 00:06:59. This function takes three arguments: hours, minutes, and seconds. In our case, we will set the hours to 0, minutes to 6, and seconds to 59. Next, we will use the IF function to perform the comparison. The IF function checks if the time in a specified cell is greater than the time value we created using the TIME function. If the condition is true, the formula returns 'yes'. If the condition is false, it returns an empty string. To visually indicate the result, we can apply conditional formatting to the cell. This can be done by selecting the cell, going to the Home tab, and choosing the 'Conditional Formatting' option. Then, select 'New Rule' and choose the option to format only cells that contain specific text. Enter 'yes' as the text and choose the green fill color. By following these steps, you can create an Excel formula that compares time values and turns a cell green if the time is greater than a specific value. This can be useful for various applications where time comparisons are required, such as tracking deadlines or monitoring time-based events. An Excel formula =IF(A1>TIME(0,6,59), "yes", "") Formula Explanation This formula uses the IF function to check if the time in cell A1 is greater than 00:06:59. If it is, it returns "yes", otherwise it returns an empty string. Step-by-step explanation 1. The TIME function is used to create a time value of 00:06:59. The arguments for the TIME function are hours, minutes, and seconds. 2. 
The IF function is used to perform the comparison. It checks if the time in cell A1 is greater than the time value created by the TIME function. 3. If the condition is true (the time is greater than 00:06:59), the formula returns "yes". If the condition is false, it returns an empty string. 4. You can apply conditional formatting to the cell to turn it green if the formula returns "yes". This can be done by selecting the cell, going to the Home tab, and choosing the "Conditional Formatting" option. Then, select "New Rule" and choose the option to format only cells that contain specific text. Enter "yes" as the text and choose the green fill color. For example, if cell A1 contains the time 00:07:30, the formula =IF(A1>TIME(0,6,59), "yes", "") would return "yes". The cell can be formatted to turn green using conditional formatting. If cell A1 contains the time 00:05:30, the formula would return an empty string and the cell would not be turned green.
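The comparison logic can also be checked outside Excel. The following Python snippet is a sketch of my own that mirrors the formula, with `datetime.time` standing in for Excel's TIME function:

```python
from datetime import time

THRESHOLD = time(0, 6, 59)  # the Excel value TIME(0, 6, 59)

def flag(cell_value):
    """Mirror of =IF(A1>TIME(0,6,59), "yes", "").

    Returns "yes" when the time is strictly greater than 00:06:59,
    otherwise an empty string (so the conditional-format rule matching
    the text "yes" would leave the cell unfilled).
    """
    return "yes" if cell_value > THRESHOLD else ""

# time(0, 7, 30) -> "yes"; time(0, 5, 30) -> ""
```

As in the tutorial's examples, 00:07:30 is flagged while 00:05:30 is not; note the comparison is strict, so exactly 00:06:59 also returns the empty string.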
Blackjack is more than just a game of guessing, contrary to what many gamblers might believe. The majority of casino games are guessing games. There are two ways to play blackjack, and only one of them is correct: basic strategy, a mathematically optimized strategy that maximizes your winnings while minimizing your losses over the course of time.

Many other card games do not have a basic strategy. There is, for instance, no basic strategy for poker. Poker players choose their plays according to whether they believe their opponent has a strong or weak hand. There is no basic strategy for any card game in which your opponent can decide what to do with his hand. For hundreds of years, there was no basic strategy for blackjack, because it was not a casino game that required the dealer to display a single card and play his hand according to house rules. It was more of a poker-style game in which both the dealer's and the player's cards were hidden. The dealer could play however he wanted, and the players could attempt to bluff him.

The American casinos introduced significant changes to the rules of Twenty-One: they required the dealer to expose one of his cards and to follow a fixed hit/stand rule. This fundamentally transformed the game from a poker-style contest based on psychology into a purely mathematical game, as far as the player's strategy was concerned.

The "Odds," and Why Basic Strategy Works

For our purposes, we'll begin with the assumption that today's dealers are playing a fair game. No sleight of hand, no chicanery.
We're not likely to forget the First Rule of Professional Gamblers, but we'll set it aside temporarily so we can understand the logic of the game and lay out the strategy that eliminates most of the house's mathematical edge. The majority of the games dealt in casinos are fair and honest. If you are faced with a game that isn't on the level, don't even try to beat it.

Mathematicians used computers to study every possible player hand against every dealer upcard in order to figure out the basic strategy for the honestly dealt game. What shocked the mathematicians who first used computers for this analysis was that a nearly perfect basic strategy had already been worked out, by four GIs with desk jobs in the 1950s and plenty of time. They had no computers; they spent three years running the possibilities on mechanical adding machines. It may have been the best value Uncle Sam ever got for four GIs' wages!

It is also known that decent approximations of correct basic strategy were worked out by a number of professional gamblers in Nevada decades before computers came into the picture. These players figured out the strategy by dealing hands to each other at their kitchen tables. Some decisions required dealing out thousands, sometimes hundreds of thousands, of hands. Like most professional gamblers, they never published their strategies. Blackjack was their livelihood, and they had spent hundreds of hours working it out. Why would they tell anyone else what they had learned?

One thing is certain: the casinos did not know the correct strategy for the game, and neither did players who read the most highly regarded books on the subject. The old Hoyle's guidebooks advised players to stand on totals of 15 and 16 regardless of the dealer's upcard, to split tens, never to split nines, and not to hold on to a soft 17.
The "smart" players of the time, meaning those who had read one of these gambling books by an "authority" of this kind, routinely made all sorts of plays that we now recognize as extremely costly.

A lot of people don't grasp the logic of basic strategy. Let me give an example. If my hand totals 14 and the dealer shows a 10 upcard, basic strategy says to hit. This is the mathematically correct play. Sometimes you'll hit the 14, draw an 8, 9, or 10, and bust. Then the dealer turns over his hole card, a 6, which means that had you held on to your 14, the dealer would have been forced to hit his 16 and would have busted with that 10. So by making the "mathematically correct" play, you lost a hand you would have won had you violated basic strategy.

Some players argue from hands like this that no one strategy can be correct all the time; blackjack, according to them, is an art of guesswork.

To understand basic strategy, you need to start thinking like a professional player, and that means you have to be aware of "the blackjack odds."

Let me explain the fundamental logic behind the strategy with a different scenario that illustrates how the mathematics of probability works. Let's say I have a jar that contains 100 marbles. Fifty of the marbles are white, and fifty are black. You must reach in blindfolded and pull out one marble, but before doing so, you have to place a $1 bet on whether the marble you pull out will be black or white. You win $1 if you pick the right color; if not, you lose $1.

Are you guessing? Absolutely. How could you know ahead of time what color marble you're going to pick out? If you win, it's just good luck, and if you lose, it's bad luck.

But what if you found out that 90 percent of the marbles in the jar are black and 10 percent are white? Would you rather put your money on white or black before drawing? A smart player would pick black.
There is a chance, of course, of pulling out a white marble, but it is much less likely than pulling out a black one. You're still guessing, and you'll lose your $1 whenever a white marble comes out, but if you bet on black, the odds are in your favor.

A professional gambler earns his money by thinking in terms of "the odds," placing bets only when the odds favor him. Here, the professional bets on black, where the odds of winning are 9 to 1 in his favor. If you choose to bet on white, the odds are 9 to 1 against you.

In the same way, there is a chance you will lose by hitting your 14 when the dealer has an upcard of 10, but the odds are against you if you stand.

If you play by intuition, you may get lucky now and then, but you'll lose more hands in the long run. Mathematics is the only way to make the right decision about any play; the laws of probability determine your expectation for every possibility.
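The marble example is easy to verify with a quick simulation. This sketch is my own addition (not from the article): it estimates the expected value of a $1 bet on black when 90 percent of the marbles are black:

```python
import random

def marble_edge(p_black=0.9, trials=100_000, seed=1):
    """Estimate the expected win per $1 bet on black.

    Each trial draws a marble: black with probability p_black. A bet on
    black wins $1 on black and loses $1 on white.
    """
    rng = random.Random(seed)
    winnings = sum(1 if rng.random() < p_black else -1
                   for _ in range(trials))
    return winnings / trials  # near 0.9 - 0.1 = +0.80 per $1 bet
```

With 90 black and 10 white, betting black is worth about +$0.80 per dollar wagered, and betting white about -$0.80; with the original fifty-fifty jar, the expectation is zero, which is exactly what "pure guessing" means.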
An object of the class Gmpzf is a multiple-precision floating-point number which can represent numbers of the form \( m \cdot 2^e \), where \( m \) is an arbitrary-precision integer based on the GMP library, and \( e \) is of type long. This type can be considered exact, even if the exponent is not a multiple-precision number. This number type offers functionality very similar to MP_Float but is generally faster. The significand \( m \) of a Gmpzf is a Gmpz and is reference counted. The exponent \( e \) of a Gmpzf is a long.

Related functions (note that these are not member functions):

std::ostream & operator<< (std::ostream &out, const Gmpzf &f)
    Writes a double approximation of f to the ostream out.

std::ostream & print (std::ostream &out, const Gmpzf &f)
    Writes an exact representation of f to the ostream out.

std::istream & operator>> (std::istream &in, Gmpzf &f)
    Reads a double from in, then converts it to a Gmpzf.
Afrina Islam

Following on from my last article, we'll now have a look at some of the more difficult and prominent topics in the Year 12 Mathematics Extension 1 syllabus, with some hot tips and my personal recommendations on how to master them. So without further ado, let's dive right in!

Topic 1: Mathematical Induction

What this topic involves:
• You will be asked to prove statements using mathematical induction – an inductive method following a series of steps to prove a statement is true
• These questions can ask you to prove sum results or divisibility results
• You can also be asked to find the mistake in a line of proof/identify where it doesn't work

How to approach these questions:
• The key to proof questions is really to set out your working in the best way possible, making sure the marker is able to see your reasoning along the way
• I like to think of proof questions as equal parts logic and equal parts maths – include sentences/conclusions that show you have understood the process and what it really means
• Set out your proof using the following 3 steps:
• Step 1. Prove the statement is true for the first possible case
• Step 2. Assume the statement is true for some n = k, where k is any positive integer
• Step 3. Prove the statement is true for the next term, n = k + 1, using the assumption
• The key to these questions is also the assumption that you make – ensure you make the assumption for the correct number of terms, and similarly prove step 1 for one or more terms as needed
• Don't forget the final conclusive statement in order to complete the proof!
• Note: even if you are struggling with a proof question, I highly recommend you do as many steps as you can, as you can get one mark just for showing step 1, and so on. So attempt these questions even if you can't see the answer right away.
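As an illustration of the three steps (an example of my own, not taken from a past paper), here is the classic sum result \(1 + 2 + \dots + n = \frac{n(n+1)}{2}\) set out in exactly that format:

```latex
\textbf{Step 1.} For $n = 1$: LHS $= 1$ and RHS $= \frac{1(1+1)}{2} = 1$,
so the statement is true for the first case.

\textbf{Step 2.} Assume the statement is true for some positive integer $n = k$:
\[ 1 + 2 + \dots + k = \frac{k(k+1)}{2}. \]

\textbf{Step 3.} Prove the statement for $n = k + 1$ using the assumption:
\[ 1 + 2 + \dots + k + (k+1)
   = \frac{k(k+1)}{2} + (k+1)
   = \frac{(k+1)(k+2)}{2}, \]
which is the stated result with $n = k + 1$.

Hence, by mathematical induction, the statement is true for all
positive integers $n$.
```

Notice the final conclusive statement: leaving it off is exactly the mark-losing habit warned about above.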
Topic 2: Trigonometric Equations

What this topic involves:
• Solving trig equations using auxiliary angles
• Solving quadratic trigonometric equations
• Using the compound angle/double angle formulae and t-formulae to solve trig equations

How to approach these questions:
• Firstly, familiarise yourself with the trigonometry section of the formula sheet: this contains each of the formulas you will be using
• When using the t-formulae, remember to check x = π separately as a possible solution, since t = tan(x/2) is undefined there (tan 90° is undefined)
• When solving quadratic equations involving trigonometry, be aware of the specified domain in which to list your answers, and sub each one back in to ensure it truly satisfies the equation
• Rookie mistake, but ensure your calculator is in radian mode, not degrees!

Topic 3: Binomial Distribution

What this topic involves:
• Bernoulli trials
• The binomial distribution and finding the mean and variance
• The normal approximation for the sample proportion and finding the mean and variance

How to approach these questions:
• Firstly, understand that a Bernoulli trial is simply an experiment with only two possible outcomes, a success or a failure
• A success is assigned the value 1 and a failure the value 0
• A binomial distribution builds on the Bernoulli trial: it models the probability distribution of the number of successes if the experiment is conducted n times
• When calculating probabilities, first convert to z-scores, and if you aren't given a standardised probability chart, think about the normal curve as shown on the formula sheet and use it to estimate probabilities within a certain number of standard deviations from the mean

And that's all from me, folks! Once again, I hope these tips have been helpful and that you're cruising steadily through your revision and preparation in the lead-up to the exam. In the next article, I'll be covering some tips on exactly that!
Contribute to Denoise profiles?

I have an EOS R8 and I'm kinda bummed out that Denoise (profiled) is missing presets for that model, especially when that module is working amazingly well for my other cameras. What should I do to provide samples to developers?

Sweet. My Saturday is saved! Thanks in advance for your contribution!

P.S. Please note that you'll likely have to replace "Canon EOS R8" by just "EOS R8" as the model string in the produced output…

I can assure anyone who reads this post that the linked article is old as… Just getting the noise tools in place was a nice tour in cmake and make, and there are more things needed to get things compiled than in the guide. The masking of a screen to get proper photos works, but the open slit must be way larger nowadays for the noise tools to accept the files as input. I got it working with an open window of about 10 cm (4") to get enough overexposed area in the images. And prepare to take several batches of photos, as the tools are kinda picky about them. But after quite a lot of fiddling, I can now contribute a denoise profile for R8s!

Congratulations on your results and thanks for the contribution! Can you document the process, while you still have the details on building etc. fresh in your head?

This is the template I ended up with to get enough overexposure in the images. This is one of the accepted images; the white area takes up quite a chunk (believe it or not, I did use a tripod!). All done in a completely dark room.

I set my lens to f/2.8, infinity focus and ISO 100, approx 1.5 m from the monitor (that is 0.8 Eagles for you Muricans, or 5'). I lowered the shutter to a value where the white was fully overexposed, which is obvious with a histogram check in the camera preview. I ended up with 1/8 for ISO 100. I then took one image at ISO 100, then raised the ISO one step and the shutter one step faster, i.e. the next image was ISO 125 and 1/10.
Just keep an eye on the preview that the overexposure is there and the histogram has a peak pushed all the way to the right. It should be fairly consistent over the ISO range. Do this for every ISO until you get to the top value.

With Ubuntu 24.04:

```
sudo apt install clang clang-18 clang-tools g++-13 libstdc++-14-dev
git clone https://github.com/darktable-org/darktable
cd darktable
git submodule init
# Following command takes an awful lot of time
git submodule update
mkdir build
cd build
# If the following cmake command complains about missing packages, sudo apt install them and run the cmake command again
cmake -DCMAKE_INSTALL_PREFIX=/opt/darktable -DBUILD_NOISE_TOOLS=ON ..
cd share/darktable/tools
make && sudo make install
```

This should hopefully give you a stack of tools in /opt/darktable/… Now go to the folder where your noise images are and execute

```
/opt/darktable/libexec/darktable/tools/darktable-gen-noiseprofile -d $(pwd)
```

If your images are OK, this will proceed and give you all the instructions you need. If the images are not good enough, there will be warnings and you need to re-take all or some of the images until the process is happy with the entire ISO range.

There you go, denoise profiles anno 2024 for ya.

What should I do to get the profile imported as a default in DT? I would like to have the profiles included without starting with the --noiseprofiles parameter. I added the JSON data from my config into the /opt/darktable/share/darktable/noiseprofiles.json file (under Canon as a new model) but it doesn't take effect.

You should make a PR to the project to get it included, but in the meantime you need to make sure the metadata is matching the camera name and make as read by exiv2.

It is already in an accepted PR ready for the next release, and the config does match the camera. The question was how to get the profiles working with my already installed DT, but without any extra command line params. Editing the default profiles config makes no difference.
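For anyone wondering what that JSON data looks like: the shape below is only a rough illustration from memory of the darktable source — the nesting and field names are approximate and every numeric value is a placeholder, so always paste the exact block that darktable-gen-noiseprofile prints rather than hand-writing one:

```json
{
  "maker": "Canon",
  "models": [
    {
      "model": "EOS R8",
      "profiles": [
        { "name": "EOS R8 iso 100", "iso": 100, "a": [0.0, 0.0, 0.0], "b": [0.0, 0.0, 0.0] }
      ]
    }
  ]
}
```

The maker/model strings here must match what exiv2 reports for your raw files, which is exactly the mismatch discussed above.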
Pull your PR patch and build it yourself.

I can't tell my non-tech friends to do that, just to get a config accessible.

If configuring your noiseprofiles.json doesn't work, then something is wrong. For example, darktable doesn't read that file and is using a noiseprofiles.json file somewhere else, or you need to change the maker/model name in the config file. I had to change the names for my OnePlus: Noise Profile OnePlus Nord 3 5G · Issue #16192 · darktable-org/darktable · GitHub

The name is fine, it is matching the camera. But I found the problem. If you apt install DT, /opt/darktable is not the config location. The config resides in /usr/share/darktable. Merging the new profile into that config made it pop up in DT. All good in da neibahood!

That's actually the correct location for system-installed software; /opt/darktable is for "optional" software. The difference is that a system update can overwrite /usr/share/darktable, but should leave /opt/darktable alone. That also means that adding your additions in /usr/share/darktable may not be optimal… Ideally, darktable should also look in /usr/local/share or somewhere under your home directory for this kind of case. But I have no idea if darktable actually looks in any of those places for additional noise profiles.

Yeah, I don't think you're meant to edit that file, that's why it is where it is. Of course you can edit it, because freedom.

Hi, I've been trying to generate the noise profile for a Canon G1 X for weeks, but unfortunately without success (I don't have much experience with the terminal). I kindly ask if anyone can generate it; I have already taken the images. Thank you in advance.

If you can upload the images somewhere so I can download them, I can give it a shot.
Understanding RSA - part 1

RSA, introduced by Ron Rivest, Adi Shamir, and Leonard Adleman in this paper, is a well-known algorithm for secure data transmission. It's straightforward, requiring minimal mathematical knowledge to understand its workings and a bit more to grasp its principles. In this series, I will discuss both and explain all the necessary mathematical concepts along the way. Anyone with foundational mathematical knowledge can follow along.

RSA, like most encryption schemes, consists of 2 functions:
1. The encryption function, which uses an encryption key to produce a ciphertext from the message.
2. The decryption function, which utilizes the decryption key to recover the original message from the encrypted text. (A key is a secret used to encrypt/decrypt the data.)

RSA is an example of an asymmetric key encryption algorithm, in which the encryption key (also called a public key) differs from the decryption key (called a private key). To understand the encryption and decryption functions as well as the key generation process, one needs to understand a few notions from number theory.

Divisibility

We say that a number a is a divisor of b if there exists an integer k such that b = a * k. We denote this relation as a|b. Examples: 3|12 and 5|10. Note that, although the symbol is symmetric, the relation is clearly not, e.g. 5 divides 10 but not the other way around.

Prime numbers

A number is prime if it has exactly two divisors: 1 and itself. Examples of prime numbers are 2, 3, 5, 31, 1000003. Note that 1 is not a prime number, as it has only one divisor.

Greatest common divisor

A number g is referred to as the greatest common divisor of a and b (abbreviated as gcd(a, b)) if it is the largest number that divides both a and b. Examples:
gcd(6, 12) = 6
gcd(6, 9) = 3
gcd(100, 52) = 4
If gcd(a, b) = 1, we say that a and b are coprime. Example: 13 and 24 are coprime.
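These gcd examples are easy to check with a few lines of Python (a sketch of mine, not from the post), using Euclid's algorithm:

```python
from math import gcd as builtin_gcd  # stdlib version, used only as a cross-check

def gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

def coprime(a, b):
    return gcd(a, b) == 1

print(gcd(6, 12))       # 6
print(gcd(6, 9))        # 3
print(gcd(100, 52))     # 4
print(coprime(13, 24))  # True
assert gcd(100, 52) == builtin_gcd(100, 52)
```

In practice you would just call math.gcd, but the loop makes the definition concrete.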
Modular arithmetic

Modular arithmetic is a system of arithmetic for integers, where two integers are considered equal if they have the same remainder when divided by some fixed number (called the modulus). In this system, a (mod n) represents the remainder when a is divided by n. So, for example, 5 (mod 3) is 2, and 7 (mod 6) is 1.

Two numbers are considered equivalent (congruent) under a modulus n if they have the same remainder when divided by n. This is written as a ≡ b (mod n). For example:
7 ≡ 12 (mod 5) as 7 = 1 * 5 + 2 and 12 = 2 * 5 + 2
6 ≡ 0 (mod 6) as 6 = 1 * 6 + 0
5 + 2 ≡ 1 (mod 6)
3 * 4 ≡ 0 (mod 12)

You can think of modular arithmetic as moving around a circle by some fixed angle, as illustrated in the picture below (the picture illustrates arithmetic modulo 7).

Below I list three useful properties of numbers congruent modulo n:
1. If a ≡ b (mod n) then n | a - b
2. If a ≡ b (mod n) then there exists a (not necessarily positive) number k such that a = k * n + b
3. If 0 < a < n then a (mod n) is just a

That's all of the math needed to understand the implementation of RSA.

The algorithm

The algorithm consists of three parts: key generation, and the encryption and decryption functions.

Key generation

To generate both keys, follow the steps below:
1. Pick two prime numbers p and q and let N = p * q
2. Choose a number e > 1 such that gcd(e, (p-1)(q-1)) = 1
3. Find a number d such that e * d ≡ 1 (mod (p-1)(q-1))

Then the keys can be represented as tuples:
public_key = (e, N)
private_key = (d, N)

Encryption and decryption

The encryption and decryption functions are given by:
enc(public_key, x) = x^e (mod N)
dec(private_key, y) = y^d (mod N)

The algorithm ensures that dec(private_key, enc(public_key, x)) ≡ x (mod N). We would be more interested in encrypting text than numbers; however, this doesn't require additional work.
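To make the three key-generation steps concrete, here is a toy example with p = 3 and q = 11 (my numbers, not the article's — real RSA uses enormous primes):

```python
p, q = 3, 11             # toy primes, far too small for real use
N = p * q                # 33
phi = (p - 1) * (q - 1)  # 20

e = 3                    # step 2: gcd(3, 20) = 1
d = 7                    # step 3: 3 * 7 = 21 ≡ 1 (mod 20)
assert (e * d) % phi == 1

m = 5                    # a "message"
c = pow(m, e, N)         # encrypt: 5^3 mod 33 = 26
assert pow(c, d, N) == m # decrypt: 26^7 mod 33 recovers 5
print(m, c, pow(c, d, N))
```

Python's three-argument pow does modular exponentiation directly, which is also what the full implementation below relies on.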
Every character is represented as a number, so if we choose N bigger than the highest number that can represent a character, we get that dec(private_key, enc(public_key, x)) = x.

Simple implementation

Below I present the simplest implementation of the RSA algorithm in Python (it differs from the one presented in the original paper and works efficiently only with small prime numbers):

```python
from math import gcd

def key_gen(p, q):
    n = p * q
    phi = (p - 1) * (q - 1)
    e = 2
    while gcd(e, phi) != 1:
        e += 1
    d = 2
    while (d * e) % phi != 1:
        d += 1
    return {'public_key': (e, n), 'private_key': (d, n)}

# 3rd argument of the pow() function is the modulus
# https://www.w3schools.com/python/ref_func_pow.asp
def encrypt(public_key, message):
    e, n = public_key
    return [pow(ord(c), e, n) for c in message]

def decrypt(private_key, cipher_text):
    d, n = private_key
    return ''.join([chr(pow(c, d, n)) for c in cipher_text])

def main():
    p = 47
    q = 59
    keys = key_gen(p, q)
    print("private_key: ", keys['private_key'])  # private_key: (1779, 2773)
    print("public_key: ", keys['public_key'])    # public_key: (3, 2773)
    message = "Hello, World!"
    print("Encrypted message: ", encrypt(keys['public_key'], message))
    # Encrypted message: [1666, 1518, 770, 770, 542, 1994, 2265, 1302, 542, 762, 770, 1720, 2661]
    print("Decrypted message: ", decrypt(keys['private_key'], encrypt(keys['public_key'], message)))
    # Decrypted message: Hello, World!

if __name__ == '__main__':
    main()
```

We've explored the basics of RSA, including key generation, encryption, and decryption, along with the necessary number theory. This foundation helps us understand how RSA secures communication through asymmetric key encryption. In the next posts, we will prove the correctness of the algorithm, explore the original implementation from the RSA paper, and discuss its security, explaining all the necessary mathematics along the way. Stay tuned for a deeper dive into the workings of RSA.
Linear programming model to maximize budget

Reference no: EM1321799

Q1) Betty Malloy, owner of the Eagle Tavern in Pittsburgh, is preparing for Super Bowl Sunday, and she must determine how much beer to stock. Betty stocks three brands of beer - Yodel, Shotz, and Rainwater. The cost per gallon (to the tavern owner) of each brand is as follows:

Brand      Cost/gallon
Yodel      $1.50
Shotz      $0.90
Rainwater  $0.50

The tavern has a budget of $2,000 for beer for Super Bowl Sunday. Betty sells Yodel at a rate of $3.00 per gallon, Shotz at $2.50 per gallon, and Rainwater at $1.75 per gallon. Based on past football games, Betty has determined the maximum customer demand to be 400 gallons of Yodel, 500 gallons of Shotz, and 300 gallons of Rainwater. The tavern has the capacity to stock 1,000 gallons of beer, and Betty wishes to stock up completely. Betty wants to determine the number of gallons of each brand of beer to order so as to maximize profit.

a) Create a linear programming model for this problem.
b) Solve this problem using a computer.
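One possible formulation for part (a), with a quick brute-force check for part (b) (my sketch: the variable names are mine, and I read "stock up completely" as an equality constraint on total gallons):

```python
# Decision variables: gallons of Yodel (y), Shotz (s), Rainwater (r).
# Maximize profit = (3.00 - 1.50)y + (2.50 - 0.90)s + (1.75 - 0.50)r
# subject to:  1.50y + 0.90s + 0.50r <= 2000   (budget)
#              y + s + r = 1000                 (stock to full capacity)
#              0 <= y <= 400, 0 <= s <= 500, 0 <= r <= 300  (demand limits)

best = (float('-inf'), None)
for y in range(0, 401):
    for s in range(0, 501):
        r = 1000 - y - s          # capacity constraint fixes r
        if not 0 <= r <= 300:
            continue
        if 1.50 * y + 0.90 * s + 0.50 * r > 2000:  # budget constraint
            continue
        profit = 1.50 * y + 1.60 * s + 1.25 * r
        if profit > best[0]:
            best = (profit, (y, s, r))

print(best)  # (1525.0, (400, 500, 100))
```

A real LP solver (e.g. scipy.optimize.linprog) reaches the same optimum; the enumeration over whole gallons is just an easy way to verify it: stock 400 gallons of Yodel, 500 of Shotz, and 100 of Rainwater for a $1,525 profit.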
How to Delete All Rows Of Group In Pandas If the Group Meets Condition?

To delete all rows of a group in pandas if the group meets a certain condition, you can use the groupby() function to group the data by a specific column or criteria, and then apply a filtering condition to each group using the filter() function. Within the filter condition, you specify the criteria that a group must meet in order for its rows to be deleted. This way, you can remove the rows of every group that satisfies the given condition while preserving the rest of the data.

What is the best way to remove rows from a pandas groupby object that meet a condition?

One way to remove rows from a pandas groupby object that meet a condition is to first filter the original DataFrame using the condition, and then group by the desired column and perform any aggregation or transformation necessary. Here's an example code snippet that demonstrates this approach:

```python
import pandas as pd

# Create a sample DataFrame
data = {'Group': ['A', 'A', 'B', 'B'],
        'Value': [1, 2, 3, 4]}
df = pd.DataFrame(data)

# Groupby the 'Group' column
grouped = df.groupby('Group')

# Remove rows where 'Value' is greater than 2
filtered = df[df['Value'] <= 2]

# Groupby the 'Group' column again on the filtered DataFrame
new_grouped = filtered.groupby('Group')

# Now you can perform any aggregation or transformation on the new_grouped object
# For example, you can calculate the sum of 'Value' for each group
sum_value = new_grouped['Value'].sum()

print(sum_value)
```

In this example, we first create a sample DataFrame and group it by the 'Group' column. Then, we filter the DataFrame using the condition 'Value' <= 2. Finally, we group the filtered DataFrame by the 'Group' column again and perform any necessary aggregation or transformation.

How to drop rows from a pandas groupby object if they satisfy multiple criteria?
You can drop rows from a pandas groupby object that satisfy multiple criteria by using the filter method. Note that filter keeps or drops whole groups, and the function you pass must return a single boolean per group, not a Series. Here's an example of how you can do this:

```python
import pandas as pd

# Create a sample DataFrame
data = {'A': ['foo', 'bar', 'foo', 'bar', 'foo', 'bar'],
        'B': [1, 2, 3, 4, 5, 6],
        'C': [7, 8, 9, 10, 11, 12]}
df = pd.DataFrame(data)

# Group by column 'A'
grouped = df.groupby('A')

# Criterion: does the group contain any row where 'B' < 4 and 'C' > 8?
# filter() needs a scalar boolean per group, hence the .any()
def criteria(x):
    return ((x['B'] < 4) & (x['C'] > 8)).any()

# Keep only the groups that do NOT contain such a row
filtered = grouped.filter(lambda x: not criteria(x))

print(filtered)
```

In this example, we're grouping the DataFrame by the column 'A' and then using the filter method to drop every group that contains a row where column 'B' is less than 4 and column 'C' is greater than 8. (If you want to drop only the offending rows rather than their whole groups, plain boolean indexing such as `df[~((df['B'] < 4) & (df['C'] > 8))]` does that without a groupby.) After running this code, the filtered DataFrame will contain only the groups that do not satisfy the criteria.

How to filter out groups from a pandas dataframe based on a given condition?

To filter out groups from a pandas dataframe based on a given condition, you can use the groupby function in combination with the filter method. Here is an example of how you can do this:

```python
import pandas as pd

# create a sample dataframe
data = {'group': ['A', 'A', 'B', 'B', 'C', 'C'],
        'value': [1, 2, 3, 4, 5, 6]}
df = pd.DataFrame(data)

# keep only the groups where the sum of values is greater than 5
filtered_df = df.groupby('group').filter(lambda x: x['value'].sum() > 5)

print(filtered_df)
```

In this example, we group the dataframe df by the 'group' column and apply a lambda function to each group to check if the sum of values in that group is greater than 5. The filter method then returns only the groups that satisfy this condition.
You can replace the lambda function with any custom condition that you want to use for filtering the groups in your dataframe.

What is the simplest way to drop rows from a pandas groupby object if they meet multiple criteria?

One way to drop rows from a pandas groupby object that meet multiple criteria is to use the filter() method along with a lambda function that reduces the criteria to a single boolean per group. For example, if we have a pandas DataFrame called df grouped by a column called 'group', and we want to keep only the groups in which every row has a value of at least 5 in column 'A' and at most 10 in column 'B', we can do the following:

```python
df_filtered = df.groupby('group').filter(lambda x: ((x['A'] >= 5) & (x['B'] <= 10)).all())
```

This drops the rows of every group that fails the criteria. Note the .all(): without it the lambda would return a boolean Series, and filter() raises a TypeError when the function does not return a scalar.

How to eliminate rows from a pandas groupby object if they meet a certain condition?

You can eliminate rows from a pandas GroupBy object based on a certain condition by using the filter() function. The filter() function applies a function to each group in the GroupBy object and returns only the groups that satisfy the condition. Here's an example of how to eliminate rows from a pandas GroupBy object if they meet a certain condition:

```python
import pandas as pd

# create a sample DataFrame
df = pd.DataFrame({
    'group': ['A', 'A', 'B', 'B', 'C', 'C'],
    'value': [1, 2, 3, 4, 5, 6]
})

# group the DataFrame by the 'group' column
grouped = df.groupby('group')

# define a function to keep only the groups where the sum of 'value' is at most 5
def filter_func(x):
    return x['value'].sum() <= 5

# apply the filter function to each group
filtered_groups = grouped.filter(filter_func)

# print the filtered DataFrame
print(filtered_groups)
```

In this example, the filter_func function checks if the sum of values in each group is less than or equal to 5.
The filter() function is then applied to the grouped object, which returns a new DataFrame containing only the groups that satisfy the condition. You can modify the filter_func function to define any condition you want to filter rows by, based on your specific criteria.

What is the simplest way to drop rows from a pandas groupby object if they meet a specific criteria?

One way to drop rows from a pandas groupby object if they meet a specific criteria is to use the filter function. Here's an example:

```python
import pandas as pd

# Create sample data
data = {'A': ['foo', 'bar', 'foo', 'bar'],
        'B': [1, 2, 3, 4],
        'C': [5, 6, 7, 8]}
df = pd.DataFrame(data)

# Group by column A
grouped = df.groupby('A')

# filter() KEEPS the groups for which the function returns True,
# so to drop the groups whose 'B' values sum to less than 5, return the opposite
def criteria(x):
    return x['B'].sum() >= 5

filtered_grouped = grouped.filter(criteria)

print(filtered_grouped)
```

In this example, a group's rows are dropped if the sum of column 'B' within that group is less than 5. You can adjust the criteria function to meet your specific requirements.
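A common alternative worth knowing (my addition, not from the article) answers the title question directly without filter(): use transform to broadcast the group aggregate back onto each row, then mask the DataFrame in one step:

```python
import pandas as pd

df = pd.DataFrame({'group': ['A', 'A', 'B', 'B', 'C'],
                   'value': [1, 2, 3, 4, 10]})

# Broadcast each group's sum back onto its rows
group_sums = df.groupby('group')['value'].transform('sum')

# Delete every row belonging to a group whose sum exceeds 5
result = df[group_sums <= 5]

print(result)  # only group A remains (sum 3)
```

Because transform returns a Series aligned with the original index, the boolean mask applies row-by-row, which tends to be faster than filter() on DataFrames with many groups.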
AP Exam Review - Week 3 - Wednesday | Calc Medic

AP Exam Review

Week 3 - Wednesday (Day 13)

Focus Area
• Concepts from Unit 5 and Unit 6 (analytical applications of derivatives; basic integrals)

Teaching Tips
Students will work in small groups to go over their Unit 5 and Unit 6 tests. We will have them use the "Going over Tests Protocol" document to guide their work. For more information about how we maximize student learning when looking at past assessments, check out our previous post here.

Review Activity 2: Flashcards
Materials: Notecards or slips of paper

Teaching Tips
Unit 5 and Unit 6 are FULL of key ideas that students need to master before the AP Exam. For this reason, we have students create flashcards to review justifications, theorems, and integral rules. We've found that some students have never made flashcards before, so they might need some instructions about what goes on the front side, what goes on the back side, and how to quiz themselves using them.

Although flashcards are used primarily for memorizing vocabulary or formulas, they are excellent for remembering justifications. For example, one side might say "relative minimum" and the other side would say "f' changes from negative to positive". This will help students internalize exactly what is needed to justify these function features in a free response question.

Basic antiderivative rules are another great item for flashcards, where one side has the indefinite integral expression and the other has the antiderivative (+C of course!). You could even have students write an integral expression on one side, and its interpretation on the other. Have students quiz themselves and each other. To make this activity competitive, have students time themselves and see how many they can get through in one minute.

We generally use 3x5 notecards for this activity, but you may wish to simply cut up pieces of computer paper or use cardstock, as long as both sides can be written on.
Numerical Mathematics

The course is not on the list. Without time-table.

Code: 2011049 | Completion: Z,ZK | Credits: 4 | Range: 2P+2C | Language: Czech

Course guarantor:

Numerical solution of systems of linear equations, iterative methods. Numerical solution of nonlinear algebraic equations. Least squares method. Numerical solution of ordinary differential equations, initial and boundary value problems. Numerical solution of basic linear partial differential equations by the finite difference method.

Syllabus of lectures:
1. Norm and spectral radius of matrices. Principle of iterative methods. Fixed point iterative method.
2. Jacobi and Gauss-Seidel iteration methods. Convergence.
3. Minimization of functions and gradient methods. Steepest descent method. Least squares method and the system of normal equations.
4. Systems of nonlinear equations, existence and uniqueness of solutions. Contraction mapping and fixed point iterations. Newton's method.
5. Numerical solution of ordinary differential equations (ODE). Explicit and implicit Euler method. Collatz method.
6. One-step methods, local discretization error, global error, order of the method.
7. One-step methods of Runge-Kutta type. Higher order methods.
8. Boundary value problem for 2nd order linear ODEs in selfadjoint form. Existence and uniqueness of the solution. Numerical solution of the problem by the finite difference method.
9. Principle of 2D finite difference methods. Taylor expansion. Boundary value problem for the Poisson equation with Dirichlet condition.
10.-11. Initial boundary value problem for the heat conduction equation. Numerical solution of the heat conduction problem, explicit and implicit scheme. Convergence and stability of the method.
12.-13. Initial boundary value problem for the wave equation. Numerical solution by the finite difference method, explicit and implicit scheme. Convergence and stability of the method.

Syllabus of tutorials:
1. Norm and spectral radius of matrices. Principle of iterative methods.
Fixed point iterative method.
2. Jacobi and Gauss-Seidel iteration methods. Convergence.
3. Minimization of functions and gradient methods. Steepest descent method. Least squares method and the system of normal equations.
4. Systems of nonlinear equations, existence and uniqueness of solutions. Contraction mapping and fixed point iterations. Newton's method.
5. Numerical solution of ordinary differential equations (ODE). Explicit and implicit Euler method. Collatz method.
6. One-step methods, local discretization error, global error, order of the method.
7. One-step methods of Runge-Kutta type. Higher order methods.
8. Boundary value problem for 2nd order linear ODEs in selfadjoint form. Existence and uniqueness of the solution. Numerical solution of the problem by the finite difference method.
9. Principle of 2D finite difference methods. Taylor expansion. Boundary value problem for the Poisson equation with Dirichlet condition.
10.-11. Initial boundary value problem for the heat conduction equation. Numerical solution of the heat conduction problem, explicit and implicit scheme. Convergence and stability of the method.
12.-13. Initial boundary value problem for the wave equation. Numerical solution by the finite difference method, explicit and implicit scheme. Convergence and stability of the method.

Study Objective:
1. Matrices; Systems of linear equations - direct methods; Gauss elimination for tri-diagonal systems; Principle of iterative methods; norms and spectral radius.
2. Simple and Jacobi iterative method; Gauss-Seidel method; convergence conditions.
3. Systems of nonlinear equations; Problems of existence and uniqueness of the solution; Iterative methods - Newton's method; Analogy of the 1D problem.
4. Principle of interpolation; Interpolation by algebraic polynomials; Existence and uniqueness of the polynomial; Interpolation by spline functions; Advantages of this interpolation; Practical applications.
5.
Least squares approximation - principle of approximation by an algebraic polynomial; Derivation of the system of normal equations.
6.-8. Numerical solution of the Cauchy problem for a 1st order equation and for a system in normal form; Cauchy problem for the nth order equation; Principle of the one-step methods of Euler & Runge-Kutta; Convergence; Practical application.
9.-10. The solution of boundary value problems for a 2nd order ordinary differential equation, comparison with the Cauchy problem; Existence and uniqueness; Dirichlet problem; Principle of the mesh methods (finite difference methods), convergence; Existence and uniqueness of the solution of the associated system of linear equations; Shooting method.
11.-13. Numerical solution of linear partial differential 2nd order equations in 2D (mesh methods); Classes of equations; Formulation of elementary problems for the equations of mathematical physics (Laplace and Poisson equation; Heat transfer equation; Wave equation); Difference substitutions of the first and second derivatives, order of the approximation; Principle of the mesh method for the solution of individual types of problems; Convergence and stability.

Study materials:
1. Mathews, J. H.: Numerical Methods for Mathematics, Science and Engineering, Prentice Hall International, 2nd edition, 1992
2. Gerald, C. F., Wheatley, P. O.: Applied Numerical Analysis, Addison Wesley, 6th edition, 1999

Further information:
No time-table has been prepared for this course.

The course is a part of the following study plans:
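As a brief taste of the iterative methods covered in lectures 1-2 (a sketch of mine, not course material), Jacobi iteration for a small diagonally dominant system converges to the exact solution:

```python
# Jacobi iteration for A x = b. A is strictly diagonally dominant,
# so the iteration matrix has spectral radius < 1 and the method converges.
A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [1.0, 2.0]

x = [0.0, 0.0]
for _ in range(50):
    # Each component uses only values from the PREVIOUS iterate (unlike Gauss-Seidel)
    x = [(b[i] - sum(A[i][j] * x[j] for j in range(2) if j != i)) / A[i][i]
         for i in range(2)]

print(x)  # converges to the exact solution (1/11, 7/11)
```

Replacing the list comprehension with an in-place update that reuses freshly computed components would turn this into Gauss-Seidel, the other method from lecture 2.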
A manufacturer claims that the calling range (in feet) of its 900-MHz cordless telephone is greater than that of its leading competitor. A sample of 11 phones from the manufacturer had a mean range of 1240 feet with a standard deviation of 24 feet. A sample of 18 similar phones from its competitor had a mean range of 1230 feet with a standard deviation of 28 feet. Do the results support the manufacturer's claim? Let μ1 be the true mean range of the manufacturer's cordless telephone and μ2 be the true mean range of the competitor's cordless telephone. Use a significance level of α = 0.01 for the test. Assume that the population variances are equal and that the two populations are normally distributed.
Step 1. State the null and alternative hypotheses for the test.
Step 2. Compute the value of the t test statistic. Round your answer to three decimal places.
Step 3. Determine the decision rule for rejecting the null hypothesis H0. Round your answer to three decimal places.
Step 4. State the test's conclusion. A) Reject Null Hypothesis B) Fail to Reject Null Hypothesis
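The pooled two-sample t computation this problem asks for can be sketched directly; the critical value 2.473 (one-tailed, df = 27, α = 0.01) is an assumed t-table lookup, not computed here.

```python
import math

# Pooled two-sample t statistic for H0: mu1 = mu2 vs Ha: mu1 > mu2,
# assuming equal population variances (as the problem states).
n1, xbar1, s1 = 11, 1240.0, 24.0   # manufacturer
n2, xbar2, s2 = 18, 1230.0, 28.0   # competitor

sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)  # pooled variance
t = (xbar1 - xbar2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

t_crit = 2.473  # assumed table value: one-tailed, df = n1 + n2 - 2 = 27, alpha = 0.01
print(round(t, 3), "reject H0" if t > t_crit else "fail to reject H0")
```

The statistic comes out to about 0.983, well below the critical value, so the data fail to reject H0 at α = 0.01.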
Consequences of the null result of the Michelson-Morley experiment: the demise of the stationary aether, the rise of Special Relativity, and the heuristic concept of the photon
The first principle of SR posits that there is no absolute motion referenced to an unchanging frame of Space, since all translatory motion is relative to an observer at rest in its own inertial frame of reference. As the covariant complement to the contraction of length in the direction of motion is the objective dilation of time, SR is constrained to 'spatialize Time' by reducing it to a length of Space. Subsequently, once all lengths are treated as covariant intervals, the simultaneity of events is no longer invariant, nor, for that matter, is the notion of spatial coincidence. Synchronicity is considered to be an actual impossibility, and the function for a spatio-temporal continuum remains dissociated from any concept of energy, being flattened onto four dimensions.
SR's second principle postulates that the speed of light is a constant for every inertial frame of reference, that is, it is the same in all directions and for all observers, and independent of the motion of the source of light or the motion of the receiver, for as long as we are considering solely 'substantial translatory motions'. SR's position in this respect is somewhat paradoxical: one can say that it satisfies Machian relationism by positing all electromagnetically valid observers as being at rest in inertial frames of translation, their speeds being all relative and none absolute. But with the second principle, it explicitly recognizes some form of absolute velocity, an absolute speed of radiation in vacuo which is constant for all inertial frames in 'sufficient translation'.
Where SR had proposed that one should conclude from the MM experiment that there is no stationary aether, and that the propagation of light is independent of the inertial frame of the observer, Einstein's photon theory proposes a new model in which there is no need to take recourse to an aether in order to explain the propagation of light. The authors propose that, whereas SR was correct in positing the abolition of the stationary aether and in postulating as invariant the electromagnetic speed c, its requirements for adoption of the Lorentz transformations rely entirely upon the classical electrodynamic interpretation of the Kaufmann-Bucherer-Bertozzi-type experiments. Since there are critical alternative evaluations of these experiments, this is a tenuous foundation for the abolition of synchronicity. Furthermore, the authors propose that Einstein's heuristic hypothesis be taken as factual - the result being that electromagnetic radiation becomes secondary to an energy continuum that is neither electromagnetic nor amenable to a four-dimensional reduction. It follows that the second principle of SR only applies to photon production, which is always and only a local discontinuity. It does not apply to non-electromagnetic radiation, nor, a fortiori, to the propagation of energy responsible for local photon production.
Subtract 3 Digit Numbers With Regrouping Worksheets
Subtract 3 Digit Numbers With Regrouping Worksheets serve as foundational tools in the realm of mathematics, offering an organized yet versatile system for learners to explore and grasp mathematical ideas. These worksheets use a structured approach to understanding numbers, supporting a solid foundation on which mathematical proficiency grows. From the simplest counting exercises to the complexities of advanced calculations, Subtract 3 Digit Numbers With Regrouping Worksheets cater to students of diverse ages and skill levels.
Revealing the Essence of Subtract 3 Digit Numbers With Regrouping Worksheets
This page has printables for teaching three-digit subtraction. There are lots of traditional worksheets as well as card games and math riddle puzzles. Step 1: Subtract the Ones digit of the subtrahend from the Ones digit of the minuend. Write the number of Ones below the line in the Ones place. If the Ones digit underneath is greater than the Ones digit above, then take a ten from the minuend and convert it into ten ones.
At their core, Subtract 3 Digit Numbers With Regrouping Worksheets are vehicles for conceptual understanding. They encapsulate a myriad of mathematical concepts, guiding learners through the maze of numbers with a series of engaging and deliberate exercises. These worksheets transcend the borders of conventional rote learning, encouraging active engagement and cultivating an intuitive grasp of numerical relationships.
Supporting Number Sense and Reasoning
Welcome to the Subtracting 3-Digit from 3-Digit Numbers With Some Regrouping (49 Questions) math worksheet from the Subtraction Worksheets page at Math-Drills. This math worksheet was created or last revised on 2023-02-10 and has been viewed 694 times this week and 1,297 times this month.
Subtracting 3-digit numbers with regrouping (Grade 3 Subtraction Worksheet). Find the difference: 1) 90 − 82 = 8; 2) 419 − 12 = 407; 3) 625 − 174 = 451; 4) 664 − 63 = 601; 5) 559 − 416 = 143; 6) 915 − …; 10) 910 − 74 = 836; 11) 882 − 57 = 825; 12) 756 − 510 = 246. Title: Math, 3rd grade subtraction, third grade 3 subtraction worksheet. Author: K5 Learning. Subject: Math, 3rd grade.
The heart of Subtract 3 Digit Numbers With Regrouping Worksheets lies in cultivating number sense: a deep understanding of what numbers mean and how they relate. They encourage exploration, inviting learners to investigate arithmetic operations, analyze patterns, and unlock the mysteries of sequences. Through thought-provoking challenges and logical puzzles, these worksheets become gateways to developing reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
3 Digit Subtraction With Regrouping Printable. Three Digit Subtraction 3 (Interactive Worksheet); Three Digit Subtraction Part 1 (Worksheet); Subtraction Place Value Blocks (Worksheet); 3 Digit Subtraction Using Regrouping (Interactive Worksheet); Subzero: Three Digit Subtraction with Regrouping.
"It's quite simple, really: you add the answer from the subtraction problem to one of the numbers in the subtraction problem, and if the result equals the other number then you did it right. For example, 20 − 15 = 5, so you do 5 + 15 = 20; if this is correct, then your answer is correct." (4 votes)
Subtract 3 Digit Numbers With Regrouping Worksheets serve as conduits linking academic abstractions with the tangible realities of daily life. By embedding practical scenarios into mathematical exercises, students witness the relevance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical data, these worksheets empower pupils to apply their mathematical knowledge beyond the confines of the classroom.
Varied Tools and Techniques
Flexibility is inherent in Subtract 3 Digit Numbers With Regrouping Worksheets, which draw on a range of pedagogical tools to suit diverse learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract concepts. This varied approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Subtract 3 Digit Numbers With Regrouping Worksheets embrace inclusivity. They cross cultural boundaries, incorporating examples and problems that resonate with learners from varied backgrounds.
By integrating culturally relevant contexts, these worksheets foster an environment where every student feels represented and valued, strengthening their connection with mathematical concepts.
Crafting a Path to Mathematical Mastery
Subtract 3 Digit Numbers With Regrouping Worksheets chart a course toward mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, essential attributes not just in mathematics but in many aspects of life. These worksheets empower learners to navigate the intricate terrain of numbers, nurturing a profound appreciation for the elegance and logic inherent in mathematics.
Embracing the Future of Education
In an age marked by technological advancement, Subtract 3 Digit Numbers With Regrouping Worksheets adapt readily to digital platforms. Interactive interfaces and digital resources complement traditional learning, offering immersive experiences that transcend spatial and temporal borders. This integration of conventional approaches with technological developments heralds a promising era in education, fostering a more dynamic and engaging learning environment.
Conclusion: Embracing the Magic of Numbers
Subtract 3 Digit Numbers With Regrouping Worksheets embody the magic inherent in mathematics: an enchanting journey of exploration, discovery, and mastery. They transcend standard pedagogy, acting as catalysts for sparking curiosity and inquiry. Through Subtract 3 Digit Numbers With Regrouping Worksheets, learners embark on an odyssey, unlocking the enigmatic world of numbers, one problem and one solution at a time.
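The digit-by-digit procedure described earlier (subtract in the Ones place first; when the lower digit is larger, take a ten from the next place of the minuend and convert it into ten ones) can be sketched in Python. The function name and the three-digit restriction are my own choices for illustration.

```python
def subtract_with_regrouping(minuend, subtrahend):
    """Three-digit column subtraction, borrowing ('regrouping') as needed."""
    m = [int(d) for d in f"{minuend:03d}"]      # hundreds, tens, ones
    s = [int(d) for d in f"{subtrahend:03d}"]
    diff = [0, 0, 0]
    for i in (2, 1, 0):                         # work right to left
        if m[i] < s[i]:                         # lower digit is larger: regroup
            m[i - 1] -= 1                       # take a ten from the next place...
            m[i] += 10                          # ...and convert it into ten ones
        diff[i] = m[i] - s[i]
    return 100 * diff[0] + 10 * diff[1] + diff[2]

print(subtract_with_regrouping(625, 174))  # 451, matching the worksheet answer
```

Trying 625 − 174 exercises one borrow (2 tens become 12 tens after taking a hundred), which is exactly the regrouping step the worksheets drill.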
Attributes Of Quadratic Functions Worksheet Answers - Function Worksheets
Characteristics Of Quadratic Functions Worksheet 1 – The Characteristics of Quadratic Functions worksheet can help students to learn the properties of quadratic functions. This worksheet is helpful … Read more
Characteristics Of Quadratic Functions New Worksheet Answers – The Characteristics of Quadratic Functions worksheet can help college students to understand the features of quadratic functions. This worksheet … Read more
Mechanical pressure switches: How does one set the switch point?
Mechanical pressure switches are often given a factory-set switch point. Nevertheless, some users want to make this setting themselves, as, for instance, the pressure conditions in the application can change. For pressure switches with a measuring element in a bellows, diaphragm or piston design, the manual setting of the switch point, reset point or hysteresis (depending on the model) requires a certain amount of effort. But it's not witchcraft. This article explains, step by step, how to do it.
Users need the following equipment: a pressure source (e.g. a dead-weight tester), a control pressure gauge, a continuity tester or a control lamp, as well as a tool for the adjustment screw (incidentally, WIKA supplies the correct Allen key with its instruments; a conventional screwdriver is enough for setting the hysteresis).
Setting plan view for the model PSM02 mechanical pressure switch from WIKA. The pin assignment 1 and 3 (small picture) corresponds to the switching function "normally closed"; the assignment 2 and 3 is for "normally open".
Setting of the switch point
First, screw the pressure switch into the pressure source connection and connect its electrical connections to the continuity tester or control lamp. Turn the adjustment screw for the switch point in completely. This sets the switch point to its maximum value. Now apply the required switching pressure to the pressure switch until the control pressure gauge indicates the value. After that, turn the adjustment screw out until the instrument switches and the continuity tester or the control lamp reacts. Then lower the pressure until the pressure switch switches back, and then increase it again to check the switch point. If necessary, the switch point must be corrected by turning the adjustment screw. Repeat the procedure until the desired switch point is reached.
Setting of the reset point
In some applications, the reset point is set instead of the switch point, e.g. for minimum pressure monitoring. Here the procedure is similar to that for the switch point, but without turning in the adjustment screw. Increase the pressure until the instrument switches. Then lower the pressure until the instrument switches back and adjust the position of the adjustment screw accordingly. The rule of thumb here is:
Set point < Actual value: Turn the screw out
Set point > Actual value: Turn the screw in
Setting of the hysteresis (not with all models)
To do this, first set the switch point and check the reset point (see above). Depending on the reset point, the hysteresis screw is turned in or out. The switch point must then be checked, as changes to the hysteresis screw also affect the switch point.
An overview of the various mechanical pressure switches, as well as further technical information, is available on the WIKA website. If you have any questions, your contact person will gladly help you.
Also read our articles:
Mechanical vs. electronic pressure switches: Functionality
Mechanical vs. electronic pressure switches: Application areas
Pressure switches in booster pumps
SIL pressure switch: Safety in tyre manufacturing
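The reset-point rule of thumb above is simple enough to capture in a few lines; this is only an illustrative sketch of the decision logic, not WIKA guidance, and the function name is my own.

```python
def screw_correction(set_point, actual_value):
    """Rule of thumb from the article: which way to turn the adjustment screw."""
    if set_point < actual_value:
        return "turn the screw out"
    if set_point > actual_value:
        return "turn the screw in"
    return "no correction needed"

# Example: desired reset point 4.0 bar, instrument currently resets at 4.5 bar.
print(screw_correction(4.0, 4.5))  # turn the screw out
```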
Definition for: Discounting
Discounting is the calculation of the present value of a sum; it is thus the inverse of capitalisation. Discounting makes it possible to compare sums received or paid out at different dates. Discounting is calculated with the required rate of return of the investor. The discounting formula runs as follows: V0 = Vn / (1 + k)^n, where Vn is the future cash flow, V0 the present value, k the discount rate and n the number of periods. To know more about it, look at what we have already written on this subject.
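The formula translates directly into code; a minimal sketch in Python (the function name is mine):

```python
def present_value(future_cash_flow, discount_rate, periods):
    """V0 = Vn / (1 + k)**n -- discounting a single future sum."""
    return future_cash_flow / (1 + discount_rate) ** periods

# 121 received in 2 years, discounted at a 10% required rate of return:
print(round(present_value(121, 0.10, 2), 6))  # 100.0
```

The example shows the inverse relationship with compounding: 100 invested at 10% grows to 121 in two years, so 121 in two years is worth 100 today.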
Finding Angles Using Trig Ratios - Angleworksheets.com
Finding Angles Using Trig Ratios Worksheet – There are many resources that can help you find angles if you've been having trouble with the concept. These worksheets will walk you through the different ideas involved and build your understanding of angles. Students will be able to identify unknown angles using the vertex, arms and arcs … Read more
HAVAL160,4 Online Calculator
HAVAL160,4 hash calculator. Here you can calculate HAVAL160,4 hashes for your strings online. Put your string into the form below and press "Calculate HAVAL160,4 hash". As a result you will get the HAVAL160,4 hash of your string. If you need other hash calculators, for example HAVAL128-3, HAVAL160-4, SHA1 or TIGER160-4, you can find them in the appropriate section.
About HAVAL hash algorithms
HAVAL is a cryptographic hash function. Unlike MD5, but like most modern cryptographic hash functions, HAVAL can produce hashes of different lengths: 128 bits, 160 bits, 192 bits, 224 bits, and 256 bits. HAVAL also allows users to specify the number of rounds (3, 4, or 5) to be used to generate the hash. HAVAL was broken in 2004. HAVAL was invented by Yuliang Zheng, Josef Pieprzyk, and Jennifer Seberry in 1992. Research has uncovered weaknesses which make further use of HAVAL (at least the variant with 128 bits and 3 passes, with an attack of about 2^6 operations) questionable. On 17 August 2004, collisions for HAVAL (128 bits, 3 passes) were announced by Xiaoyun Wang, Dengguo Feng, Xuejia Lai, and Hongbo Yu. © Wikipedia
About project
You've visited the right place if you want to calculate HAVAL160,4 hashes. Put a string or even a longer text into the "String to encode" field above and press "Calculate HAVAL160,4 hash". You will get the HAVAL160,4 hash of your string in seconds. You can also copy this hash right to your clipboard using the appropriate button.
Keep in mind that our website has a lot of other calculators, like MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512-224, SHA512, RIPEMD128, RIPEMD160, RIPEMD256, RIPEMD320, WHIRLPOOL, SNEFRU, SNEFRU256, GOST, ADLER32, CRC32, CRC32B, CRC32C, FNV132, FNV1A32, FNV164, FNV1A64, JOAAT, MURMUR3A, MURMUR3C, MURMUR3F, XXH32, XXH64, XXH3, XXH128, etc. So all you need to calculate any of these hashes is to remember our web site address - SHA256Calc.com
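HAVAL itself is not in Python's standard library, but the calculate-a-digest workflow the page describes can be sketched with `hashlib` for the algorithms it does ship (MD5, SHA-1, SHA-256, and others); the helper name below is my own.

```python
import hashlib

def digest_hex(text, algorithm="sha256"):
    """Return the hex digest of a UTF-8 string for a hashlib algorithm."""
    h = hashlib.new(algorithm)
    h.update(text.encode("utf-8"))
    return h.hexdigest()

# Same string hashed with three different algorithms gives three digests
# of different lengths, just as the calculator's algorithm list suggests.
for alg in ("md5", "sha1", "sha256"):
    print(alg, digest_hex("pass", alg))
```

`hashlib.new` accepts any algorithm name reported by `hashlib.algorithms_available`, so the same helper covers every digest the local OpenSSL build supports.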
ETIPS - Make Thinking Visible
Added Value
Technology use provides added value to teaching and learning. Educational technology does not possess inherent value, but rather it offers immense potential when intentionally coupled with grounded planning and solid teaching. Technology should not be used as filler for lesson plans or as down time for the teacher, but rather to enhance student learning on a particular topic. This resource area explores specific options for adding value to the classroom through technology but is by no means an exhaustive list.
For learning, educational technology may support the accessing of data, processing of information, or communicating of knowledge by making these processes more feasible. Educational technology can increase access to people, perspectives, or resources and to more current information. Often, a software interface design allows learner interaction or presents information in a multi-sensory format.
Seek/Navigate Information
Added Value:
• Supports connections to other content areas
• Encourages use of non-numerical information to support mathematics problem solving solutions
Tools: Internet, CD/DVD, and PDA's
Gather Real Life Data
Added Value:
• Encourages connection to other content areas
• Provides a more diverse set of problems that are relevant and valuable for students
• Engages students in complex, unpredictable mathematics and problem solving
Tools: Internet databases, TI Interactive software
Educational technology can support students learning-by-doing or aid them in constructing mental models, or making meaning, by scaffolding their thinking. For example, a database can allow students to compare, contrast and categorize information through query features.
Define and Research Problem / Create Concept Maps or Mental Models
Added Value:
• Provides efficient ways to visualize a problem in a flexible environment that allows revisions
• Links visual images with language
• Provides multiple and dynamic representations of the problem
Tools: Dynamic geometry software (Geometer's Sketchpad), data explorer software (TI Interactive), graphing calculators, system modeling tools, manipulatives, graphical organizers (Inspiration), word processing with a drawing component
Gathering, Explaining and Interpreting Data / Predicting or Hypothesizing about Solutions and Events
Added Value:
• Provides ownership of current and/or real life data over time that students gathered using precise instrumentation
• Provides multiple representations of data efficiently and consistently
• Encourages students to explore how changes in data change outcomes
Tools: Spreadsheets, databases, data explorer software (TI Interactive or Table Top), PDA's, video-to-computer software such as Measurement in Motion, graphing calculators, data collection devices (CBR's and CBL's)
Finding and Interpreting Solutions to Problem Situations / Computing
Added Value:
• Increases accuracy of solutions
• Provides multiple representations, including both symbolic and visual models, of problem solutions
• Extends the range of problems accessible to students to more complex real-life problems with messy data
• Encourages students to develop multiple solutions
Tools: Calculators (four function, scientific or graphing), spreadsheets, databases, drawing software, TI Interactive and data explorer software, dynamic geometry software and systems modeling tools
Extend and Evaluate Problem Solutions
Added Value:
• Provides more time for reflection, revision and metacognition about multiple solutions or mathematics generalizations and concepts rather than the doing of the calculations and mathematical procedures
• Encourages connections between language and symbolic representations that deepen
understanding of mathematics
• Encourages connections to related problems both within mathematics and in other content areas
Tools: Word processing, graphical organizers, systems modeling tools, expert systems, Internet resources
Modeling Problem Situations
Added Value:
• Increases opportunities for students to explore related problem solutions
• Provides a formal structure for visualizing and processing information about a complex problem situation
• Encourages creative problem solving in an environment that readily accommodates revision
Tools: Drawing software, graphical organizers, data explorer and dynamic geometry software, systems modeling tools, video, and programmable calculators
Using Simulation
Added Value:
• Allows efficient data generation for large data sets or trials of an experiment
• Provides ready-to-use models of common problem situations
• Increases time spent on interpretation of data rather than gathering and displaying data
Tools: Internet (NCTM's Illuminations), software with pre-prepared simulation components (Measurement in Motion), dynamic geometry software, systems modeling tools, video, CD/DVD resources, programmable calculators with a linking device for sharing programs
Assessing Student Performance
Added Value:
• Increases efficiency of assessment of low-level mathematical skills
• Provides customized assessment for students at different performance levels
• Encourages on-going feedback and revision of products
• Encourages gathering of multiple assessment information that can be stored
Tools: Word processing tools, databases, hypermedia software, PDA's, video, and graphing calculators
With educational technology students are able to create more authentic and professional communication, in the style and format appropriate for the topic, whether to their peers or to outside audiences.
Conducting Asynchronous or Synchronous Discussions
Added Value:
• Increases the amount of discourse that occurs about mathematics
• Develops appropriate use of mathematical language
• Encourages
students to question peers about mathematical thinking
Tools: Internet discussion boards, word processing, sharing features of software, PDA's
Presenting and Publishing Results
Added Value:
• Supports formal, organized presentations of problem solutions
• Provides opportunities for students to receive feedback on solutions to mathematics problems from multiple perspectives
• Encourages students to use multiple media to enhance the communication of problem solutions
Tools: Internet, word processing, spreadsheets, databases, graphical organizers, digital portfolios, TI Interactive, systems modeling tools, video, graphing calculators with links to computers, presentation, hypermedia, data explorer, dynamic geometry and drawing software
Collaborating through Cooperative Groups
Added Value:
• Increases the amount of group time spent discussing concepts and solutions rather than carrying out computations and other processes
• Provides an environment for creating a shared vision of mathematics and problem solving situations
• Encourages generation of ideas and solutions that incorporate the strengths of all members of the group
Tools: Internet, dynamic geometry software, systems modeling tools, graphing calculators
Establishing and Maintaining Parent and Community Communication
Added Value:
• Increases information communicated to the community beyond the school environment
• Provides an alternative method for parents to learn about their student's performance and the teacher's expectations
• Encourages 24-7 connections between school and home
Tools: Internet, databases, PDA's, video
Collecting and Analyzing Data with Spreadsheets
Added Value with Technology:
• Communicate and reason about data using multiple representations
Projector connected to one computer for group presentations; computer for each group of two or three students; spreadsheet software (Excel or ClarisWorks); meter sticks
Introduction: Students will investigate the relationship between two measures.
Is there a relationship between height and arm span? If I know your height, can I guess your arm span? Students work in cooperative groups to measure each other's height and arm span and combine their data with two other groups. Spreadsheet tools are used to record the data and create an X-Y scatter graph. Students describe their data and graph, predict for data not on the graph or table, and provide reasons using the experiment, table and graph.
• Collect, organize and display data and use graphical representations of data
• Describe and make predictions about relationships between two variables
• Use graphs to analyze change
• Use tools to determine measurements
• Reason using multiple representations to model physical phenomena
Discuss how archeologists use clues to determine information about people. Can height be determined from the length of an arm bone? How can we set up an experiment to make a prediction? Students collect data in cooperative groups and use computer spreadsheets to organize and graph the data. Students communicate reasons for predicting the height or arm span for a measurement not on their graph or table. Students conduct their own experiment to answer a new question about the relationship of height to another measurement (foot length, smile width, wrist circumference, etc.).
• Investigate the relationship between height and foot length
• Collect, organize and graph data
• Predict and provide reasons for the prediction using the experiment, table and graph
Sample of Student Work: Contributed by Marcia L. Horn
Math Stories
Added Value with Technology:
• Reflect on thinking using multiple representations
• Communicate math ideas using multiple representations
Computer with microphone and projection system for teacher modeling and student presentation; computer with microphone for each group of three students; KidPix and KidPix Slideshow software
Introduction: Math number stories are a routine part of K-3 mathematics.
Using multimedia software, students tell math stories using pictures, voice, and text. The students' individual math story pages are assembled to make a slideshow of a related set of number stories. These electronic number story books are displayed for all students to enjoy.

- Create and use multiple representations to communicate math ideas
- Use the language of mathematics to express math ideas
- Understand operations

- Model a number story for the students using the KidPix program.
- Assign or have groups select a "teen" number and unit theme (zoo, farm, garden, etc.).
- Have students work in groups and help each other create and save a KidPix story page using the group number and theme.
- After each group member has saved a page, model how to record and save a voice reading the text.
- Have students work in groups and help each other add their voice reading and save it with their story page.
- Model how to assemble story pages and add transitions (but NOT sound) in a KidPix Slideshow.
- Have students work in groups and help each other add their story page and a transition to the group slideshow, then save the completed book.
- Play the slideshows using looping. Move from computer to computer in small groups and view the slideshows. Exchange compliments about each slideshow as they are shown on the group presentation system.

- Peer assessment of group work
- Presentation of the group number story slideshow with text, drawings, and voice

Sample of Student Work: Contributed by Marcia L. Horn

Journey North Using Mathematics
Added Value with Technology:
- Provides access to relevant, real-life data and to scientific experts
- Supports student learning by encouraging continuous analysis and revision of conjectures about a real-life situation based on primary source data, and by organizing and consolidating mathematical and scientific thinking through various communication technologies
- Changes instructional strategies by connecting mathematical understanding to geographical and/or scientific knowledge and data

Grade Level: 6-8

- Regular access to the Internet
- Journey North
- Computer spreadsheet with graphing capability (line graphs, scatterplots, and lines of best fit), or a graphing calculator with the ability to save tables of data and perform linear regression
- Word processing software or journal (optional)
- TI Interactive or similar software to download archived data from the web for easy analysis

Standards/Benchmarks Impacted by Technology
NCTM Principles and Standards for School Mathematics, Data Analysis and Probability:
- Formulate questions that can be addressed with data, and collect, organize, and display relevant data to answer them
- Develop and evaluate inferences and predictions that are based on data
MN Middle Level Chance and Data Handling Standard:
- Formulate a question and design an appropriate data investigation
- Organize raw data and represent it in more than one way
- Analyze data by selecting and applying appropriate data measurement concepts

Prerequisite Skills:
- Uses an electronic spreadsheet or table to organize and display data in different ways
- Understands the concept of a line of best fit
- Uses scatter plots to investigate relationships among data
- Formulates questions to be answered through systematic observation

Setting the Stage: Register at the Journey North website, http://www.learner.org/jnorth/. Journey North is a science education program that uses the Internet to track migration and signs of spring.
You can collect local data about a dozen different migrations and signs of spring to submit to Journey North. You also have access to the same data collected by students across North America during the current season or since 1995 (in the archives). The data can be used to answer questions formulated about migration or spring. This activity can be done collaboratively between mathematics and science classes; geography classes may also be included. Invite outside experts, preferably reachable by email, to help groups of students. Seek out scientists in the community, retired teachers or adults with good questioning skills, organizations such as Women Scientists, etc. The experts may also be students who have had experience with the Journey North data in previous years.

(December or January) Identify different strategies for using the data collected on the website. Brainstorm how the data could be used to make predictions about what is going to happen during the spring season.
(January) Formulate questions and develop a hypothesis. Form two to four student groups based on interests. Each group does the activity "What Do You Know?" (or a similar activity). Have students use a word processing program on the computer to record their brainstorming. Contact outside experts to review and help refine the questions. Each group develops and records a hypothesis (or prediction) about what they think will happen.
(Late January) Design an investigation to help answer the question. Organize small groups of three to four students to investigate one question formulated by the larger groups. Contact outside experts to review and help refine the investigations.
(February to May) Collect and analyze data.
(Late May to early June, or when the data is complete) Summarize and report the investigation.
For the full version, print the Word document.

Assessment of the Added Value: Checklist of Observable Student Behaviors:
- Independently uses alternative graphic representations of data, generated with technology, to show different aspects of the investigation
- Collects and organizes data for the investigation efficiently and independently using appropriate technology
- Revises and refines the plan and hypothesis regularly

Contributed by Margaret Biggerstaff

Play the Factor Game
Added Value with Technology:
- Provides access to an expert that analyzes the best move in a game and reports it back to the student; the student has to reason about why that was the best move
- Supports student learning by explaining the reason some moves are incorrect and by showing how the selection of each number affects the game score for each player
- Enhances the learning climate by motivating students to work towards a winning strategy
For the full version, print the Word document.

Grades: 6-8
- Internet access
- The Factor Game from the NCTM Illuminations site
- Word processing software (optional)
- Concept mapping software (optional) (e.g., Inspiration or Kidspiration)

Standards/Benchmarks Impacted by Technology:
- NCTM Principles and Standards for School Mathematics, Reasoning and Proof: Make and investigate mathematical conjectures
- NCTM Principles and Standards for School Mathematics, Number and Operations: Understand numbers, ways of representing numbers, relationships among numbers, and number systems; Grades 6-8: use factors, multiples, prime factorization, and relatively prime numbers to solve problems
- MN Middle Level Preparatory Standards, Number Sense: Demonstrate understanding of number concepts including place value, exponents, prime and composite numbers, multiples, and factors; fractions, decimals, percents, integers, and numbers in scientific notation by translating among equivalent forms; and compare and order numbers within a set

Prerequisite Skills:
- Know and identify factors of a number
- Uses computer software to construct concept maps and/or flow charts (optional)
- Uses computer software to construct tables (optional)

Setting the Stage: Before beginning this activity, there is one thing that needs to be done as a class and one thing that needs to be an ongoing part of this activity. The Factor Game engages students in a friendly contest in which winning strategies involve distinguishing between numbers with many factors and numbers with few factors. The game is available on the Illuminations site, linked to the NCTM standards as an i-Math Investigation, at http://illuminations.nctm.org/tools/FactorGame/factor.html. i-Math Investigations are ready-to-use, online, interactive, multimedia math investigations. Complete i-Maths include student investigations, teacher notes, answers, and related professional development activities.

Show students the Factor Game and play two or three games against students to show them how it works. After the first game, if students have not made the error of selecting a number that has no factors, the teacher should make this error and discuss what happens without giving away a winning strategy. After each step of instruction, encourage students to revisit the lists started in 1a and 2b of the instruction section. Encourage them to use a word processing or other software format so they can add to and refine their information. Have students play the Factor Game multiple times in various combinations, beginning with student versus computer and student versus student.

One Student vs. Computer: Encourage beginning students to use the smallest playing grid. (Ask students to write about some questions in their journal, or use a word processing program available on the computer to keep a log for this activity.)
Student Pairs vs. Computer: Pairs of students share their strategies before each move.
Student Pair vs. Student Pair: Pairs of students play the Factor Game with two human players.
They should talk about the best move available for each player and try those moves, working together.
Student Pair vs. Student Pair with Larger Grid: Pairs of students play with two human players and a larger grid of numbers. They should try to generalize their strategies from the smaller grid to the larger grids and talk about the best move available for each player during the game.
Student Pair vs. Computer: Students talk about the best move for the human player and try to predict what they think the computer will do next.
Student Team vs. Student Team: Each team tries to beat the other team by discussing moves among team members and predicting their opponent's next move. After playing several games, each team shares its strategies with the other team.

Options for Struggling Students: If students are struggling to identify winning strategies in step 3, use the structure of the student investigations to help guide them. This may be done as a whole class or in pairs of students as needed.

Many student behaviors and products could be used to assess student performance on this activity. The following list of observable student behaviors focuses only on what is related to the added value of technology:
- Identifies a reason for each "Best Move" selected by the computer, based on mathematical reasoning
- Explains an alternative number selection or move when the computer gives an error message
- Adds to, revises, and refines conjectures regularly
For the full version, print the Word document.
Contributed by Margaret Biggerstaff

Lesson Plans and Activities for Mathematical Concepts & Applications

MarcoPolo
URL: http://www.wcom.com/marcopolo
Description: MarcoPolo is a service of the MCI WorldCom Foundation and is designed to provide no-cost, standards-based Internet content for K-12 teachers. A comprehensive search engine allows teachers to search for lessons and content by any combination of subject, grade level, or keyword.
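Stepping back to the Factor Game itself: its scoring rule, as described in the Setting the Stage section (a player claims a number, the opponent claims all of that number's proper factors still on the grid, and a number with no available factors is an illegal pick), can be sketched as follows. This is a hypothetical reimplementation for classroom discussion, not the code of the Illuminations applet:

```python
def proper_factors(n):
    """Factors of n that are smaller than n itself."""
    return [d for d in range(1, n) if n % d == 0]

def score_move(choice, available):
    """Score picking `choice` from the numbers still on the grid.

    Returns (points for the chooser, points for the opponent),
    or None if the move is illegal because no proper factor of
    `choice` remains available.
    """
    factors_left = [d for d in proper_factors(choice) if d in available]
    if not factors_left:
        return None
    return choice, sum(factors_left)

grid = set(range(1, 31))        # the 30-number playing grid
print(score_move(29, grid))     # a prime: the opponent gets only 1
print(score_move(24, grid))     # many factors are left for the opponent
```

Comparing outcomes like these makes concrete why large primes are strong openings and why numbers with many factors can backfire, which is exactly the distinction the winning strategy turns on.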
Sponsoring Organization: MCI WorldCom Foundation

NCTM Illuminations
URL: http://illuminations.nctm.org
Description: The site includes i-Math investigations (online, interactive lessons), Internet-based lesson plans, and selected web resources designed to "illuminate" the NCTM standards.
Sponsoring Organizations: National Council of Teachers of Mathematics (www.nctm.org); MarcoPolo (www.wcom.com/marcopolo)

The AskERIC Lesson Plan Database
URL: http://ericir.syr.edu/Virtual/Lessons
Description: The AskERIC Lesson Plan Database contains over 1100 unique lesson plans in many areas, including mathematics. Browse by subject, search the database, or explore other sources.

APlusMath
URL: http://www.aplusmath.com
Description: Interactive drill and practice activities for addition, subtraction, multiplication, division, square roots, algebra, rounding, and more can be found on APlusMath. Features include flashcards, games, worksheets, and other activity formats.

Science NetLinks
URL: http://www.sciencenetlinks.com
Description: The site strives to be a comprehensive "homepage" for K-12 science educators. It also includes sections on "The Nature of Mathematics" and "The Mathematical World" which may be of interest to mathematics teachers. Users must click into the site and perform a search for desired activities.
Sponsoring Organizations: American Association for the Advancement of Science (www.aaas.org); MarcoPolo (www.wcom.com/marcopolo)

The ArtsEdge Online Teaching Materials Area
URL: http://www.artsedge.kennedy-center.org/teaching_materials/artsedge.html
Description: The ArtsEdge Online Teaching Materials Area includes curriculum units, lesson plans, web links, and other ideas for integrating the arts into classroom teaching across subject areas, including mathematics. The units and lessons are often correlated to national standards.

Math Goodies
URL: http://www.mathgoodies.com
Description: This free educational website featuring interactive math lessons, homework help, puzzles, calculator activities, and more is a great resource for teachers.

Problem of the Week Homepage
URL: http://www.wits.ac.za/ssproule/pow.htm
Description: While we try not to include "lists of lists" on the Ed-U-Tech site, this one is a fabulous resource for mathematics teachers who want to incorporate some problem-solving activities into their classroom using the web. A must-see for middle school or high school teachers.

Mathematics in Minnesota
URL: http://www.scimathmn.org/math_title.htm
Description: View information on Minnesota's K-12 Mathematics Framework, NSF-sponsored curriculum projects, the graduation standards, and more.
Sponsoring Organization: SciMathMN (www.scimathmn.org)

The Geometry Center
URL: http://www.geom.umn.edu
Description: The Geometry Center provides instructional projects, downloadable software, video/animated demonstrations, and more.
Sponsoring Organization: University of Minnesota

PBS Teacher Source
URL: http://www.pbs.org/teachersource
Description: One of the most comprehensive websites ever created for preK-12 educators, PBS Teacher Source aggregates the educational services that PBS and its local stations provide and helps teachers learn effective ways to incorporate video and the web in the classroom. Resources are grouped into five subject areas, are searchable by subject, grade level, and keyword, and are correlated to many sets of national and state educational standards.

Math Central
URL: http://MathCentral.uregina.ca
Description: Math Central contains a searchable database of resources and lessons.
Sponsoring Organization: University of Regina (www.uregina.ca)

The Math Forum
URL: http://forum.swarthmore.edu
Description: A comprehensive mathematics education site developed and hosted by Swarthmore College. This site has areas for students, teachers, and researchers and includes online problems/contests, lessons/activities, and much, much more.

Web Math
URL: http://www.webmath.com
Description: This site will allow students (or teachers) to get answers and explanations to mathematics problems of all types.

Annenberg/CPB Projects Exhibits Collection
URL: http://www.learner.org/exhibits
Description: This site offers high-quality interactive learning experiences with topics such as "Math in Daily Life" and "Renaissance" with mathematical implications. These are excellent multimedia projects that can be used as the basis for lessons.

ENC Lessons and Activities
URL: http://www.enc.org/classroom/lessons/nf_lesmath.htm
Description: Math lessons, activities, and websites indexed by the Eisenhower Clearinghouse are available at ENC Lessons and Activities.

World of Escher
URL: http://www.worldofescher.com/
Strand: Geometry, History
Grades: Middle and High School
Description: This site provides M. C. Escher stories and quotes in the Library and through a monthly newsletter, and offers high-quality products promoting his intriguing work. Professor Roger Penrose is also featured in the library. The homepage suggests questions to ponder while exploring the site.

MAT Tours to Support Learning Through Investigation
URL: http://web.hamline.edu/~lcopes/SciMathMN/
Strand: Discrete Mathematics
Grade Levels: High School
Description: This site delivers a discrete mathematics curriculum in an inquiry-based, hypertext format. It engages learners in the processes of math with graph theory, combinatorics, and recursion. Learners will need instructor guidance, supplemental activities, and other written materials.
Sponsoring Organization: SciMathMN (www.scimathmn.org)

Virtual Polyhedra: The Encyclopedia of Polyhedra
URL: http://www.georgehart.com/virtual-polyhedra/vp.html
Strand: Geometry
Grade Levels: High School and up
Description: This is a tutorial, reference work, and object library for people interested in polyhedra. View virtual polyhedra and read mathematical background info. Directions for constructing paper models are available, as well as problem-solving exercises (with solutions).

Non-Euclid
URL: http://math.rice.edu/~joel/NonEuclid/
Strand: Geometry
Grade Levels: High School and up
Description: Non-Euclid is a Java software simulation offering ruler and compass constructions in both the Poincare disk and the upper half-plane models of hyperbolic geometry. This could provide an opportunity for students to do self-directed activities and exploration in an environment that is not a typical 2-D plane.
Sponsoring Organization: Rice University (http://www.rice.edu/)

Young Investor's Website
URL: http://www.younginvestor.com
Grades: Elementary
Description: Students can choose a character to guide them through a stock-market learning activity online. This is a fun lesson for young children.
Completeness and Geodesic Distance Properties for Fractional Sobolev Metrics on Spaces of Immersed Curves

We investigate the geometry of the space of immersed closed curves equipped with reparametrization-invariant Riemannian metrics; the metrics we consider are Sobolev metrics of possibly fractional order q ∈ [0, ∞). We establish the critical Sobolev index on the metric for several key geometric properties. Our first main result shows that the Riemannian metric induces a metric space structure if and only if q > 1/2. Our second main result shows that the metric is geodesically complete (i.e., the geodesic equation is globally well posed) if q > 3/2, whereas if q < 3/2 then finite-time blowup may occur. The geodesic completeness for q > 3/2 is obtained by proving metric completeness of the space of H^q-immersed curves with the distance induced by the Riemannian metric.

Bibliographical note: Publisher Copyright © The Author(s) 2024.

MSC codes: 35A01, 35G55, 58B20, 58D10
Keywords: Completeness, Fractional Sobolev space, Geodesic distance, Global well-posedness, Immersions, Infinite-dimensional Riemannian geometry
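The abstract does not reproduce the metric itself. For orientation only, a common normalization for a Sobolev metric of order q on immersed curves in this literature is sketched below; this particular form is an assumption about the standard setup, not the paper's exact definition, and the paper may use a different but equivalent normalization:

```latex
% One common form of an H^q-type metric on tangent vectors h, k
% along an immersed curve c : S^1 -> R^2 (convention assumed for
% illustration), with arc-length measure ds and arc-length
% Laplacian Delta_c = -D_s^2:
G_c(h, k) = \int_{S^1} \big\langle (1 + \Delta_c)^{q}\, h,\; k \big\rangle \, \mathrm{d}s,
\qquad \mathrm{d}s = |c'(\theta)|\,\mathrm{d}\theta .
```

Read this way, the thresholds in the abstract become statements about how much smoothing the operator (1 + Δ_c)^q applies: q > 1/2 is the amount needed for the induced geodesic distance to be non-degenerate, and q > 3/2 for global well-posedness of the geodesic equation.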
101 Careers in Mathematics (PDF)

Martha Ruttle, Bridges math program, The Math Learning Center. See updates on Hunter's response to the coronavirus from President Raab. Bridges in Mathematics is a standards-based K-5 curriculum that provides a unique blend of concept development and skills practice in the context of problem solving. It also looks at the varied and exciting sorts of jobs that make use of the mathematics powering your magic. This list is not exhaustive, but it provides a solid idea of what fellow graduates have gone on to do and what potential careers an applied mathematics degree can offer. Employment in occupations related to STEM (science, technology, engineering, and mathematics) is projected to grow to more than 9 million between 2012 and 2022. How mathematics earns its keep is through practical applications. A collection of 101 short essays written by people who work in many different fields serves as a demonstration of the enormous number of ways mathematics is used in the modern world. Review sheets for Basic Mathematics (Math 010) give a summary of concepts needed to be successful in mathematics; the sheets list the key concepts that are taught in the specified math course. Bond covenants and other security features of revenue bonds. No career counselor should be without this valuable resource. Below is a sample list of some future choices to explore following studies in applied mathematics. Understand the origin of banking and how it has evolved. According to the Mathematical Association of America, math professions are becoming increasingly attractive. An undergraduate mathematics education can be an entryway to many rewarding and engaging career opportunities. Career in Math project, Annville-Cleona School District. How have careers and career decisions changed since 1996? The class concludes with an overview of the order of operations and grouping symbols.
The class Math Fundamentals covers basic arithmetic operations, including addition, subtraction, multiplication, and division. With 114 completely new profiles since the third edition, the careers featured within accurately reflect current trends in the job market. The sheets present concepts in the order they are taught. Our world rests on mathematical foundations, and mathematics is unavoidably embedded in our global culture. Most courses that can be used to satisfy the requirements for the departmental major have Calculus I and II as prerequisites, and at least Calculus III as a corequisite. Applications of Mathematics publishes original research papers of high scientific level that are directed towards the use of mathematics in different branches of science. More detailed information can be found in the informative book 101 Careers in Mathematics, Andrew Sterrett (editor), an MAA publication (1996), which I recommend to anyone interested in knowing what opportunities a degree in mathematics can open up. The Wall Street Journal article "Doing the Math to Find the Good Jobs," by Sarah E. Needleman, rates the mathematician at the very first spot on a ranking of the best jobs in the U.S. Since 2008 this mathematics lecture has been offered for the master's courses in computer science, mechatronics, and electrical engineering. Most math careers go far beyond just crunching numbers. This second edition contains updates on the career paths of individuals profiled in the first edition, along with many new profiles. Careers in Mathematics, the Department of Mathematics. ISBN 9780883857045, 101 Careers in Mathematics. Chris Fisher, Denis Hanson, Judith MacDonald, and J. Math 101 is a first-level course in the fulfillment of the mathematics requirement for graduation at the University of Kansas. This book addresses this question with 125 career profiles written by people with degrees and backgrounds in mathematics.
The brochures present case studies drawn from leading industries nationwide to illustrate the advanced mathematics knowledge and skills involved. The demand for mathematics experts has grown exponentially in a number of careers, and so has the interest in these jobs. Find ISBN 9780883857045, 101 Careers in Mathematics by Sterrett, at over 30 bookstores. Harley Weston, coauthors of editions 1 through 7 of Mathematics 101. The focus for Year 11 students is confirming and managing their career action plan. Whether they are using math to solve business problems or helping an individual make investments that will fund their retirement nest egg, students who love math can use their degrees in a number of ways after graduation. Table A1 shows entry-level Army MOSs and the ASVAB line scores required to qualify for the jobs. Specifically identify what math classes are required; "none" is not acceptable. What other tests or classes are also required? In this book, 101 individuals ranging from lawyers and doctors to software engineers talk about the importance of mathematics in their careers. The Association for Women in Mathematics maintains a career resources page; see also the Bureau of Labor Statistics occupational outlook for mathematicians and math occupations more generally. The journal is published by the Institute of Mathematics, Czech Academy of Sciences, and distributed by Springer. This third edition of the immensely popular 101 Careers in Mathematics contains updates on the career paths of individuals profiled in the first and second editions. PRS401C Tutorial Letter 101/2014, Mathematics Teaching (PRS401C), year module, Department of Early Childhood. New kinds of careers in mathematics-related fields have developed in the last decade, in mathematical finance, data science, and biotechnology. The only reason you don't always realize just how strongly your life is a...
US News ranks the 100 best jobs in America by scoring seven factors such as salary, work-life balance, long-term growth, and stress level. It is proposed that upon the completion of this program, students will earn 12 applied math credits that can be used for graduation. Introduction to the 8th WNCP edition: the goal of this Introduction to Finite Mathematics I text is, as it has been with previous editions, to provide a textbook for a course in mathematics concepts and skills at a level suitable for mathematics teachers in elementary (grades K-8) schools in Canada. This third edition of the immensely popular 101 Careers in Mathematics contains updates on the career paths of individuals profiled in the first and second editions. Dec 30, 2014: Buy 101 Careers in Mathematics (Classroom Resource Materials), 3rd revised edition, Andrew Sterrett (editor). We emphasize to our students that learning mathematics is synonymous with doing mathematics, in the sense of solving problems and making conjectures. Delve into mathematical models and concepts, limit values, or engineering mathematics, and find the answers to all your questions. Students and Careers, Mathematical Association of America. Now check the NDA 2020 mathematics syllabus details below. Jan 03, 2003: This second edition of the immensely popular 101 Careers in Mathematics contains updates on the career paths of individuals profiled in the first edition. By studying this group of graduates, gathering both qualitative and quantitative data through surveys and interviews, I have examined the effectiveness of action research. All that is required of the learner is a computer, a connection to the Internet, a calculator or spreadsheet, the skills to use these tools, and a willingness to learn. This list is not exhaustive, but it provides a solid idea of what fellow graduates have gone on to do and what potential careers an applied mathematics degree can offer.
Employment in occupations related to STEM (science, technology, engineering, and mathematics) is projected to grow to more than 9 million between 2012 and 2022. The Careers in Mathematics project will be spread across a two-week period. Discuss the essential elements of electronic banking and funds transfers. They need to ensure that it reflects their current personal profile, including skills, abilities, attitudes, and academic performance. Mar 23, 2020: A great benefit of studying mathematics in college is the variety of career paths it provides. There are 25 new entries in this new edition that bring the total number of profiles to 146. Cebulla is a mathematics education doctoral student at the University of Iowa. They are interesting and challenging, and they pay well. Also see the AMS-SIAM mathematical careers bulletin board and the INFORMS student union career center. Mathematics at Work series: Following up on the work of ADP, Achieve has produced a series of Mathematics at Work brochures to examine how higher-level mathematics is used in today's workplaces. The Association for Women in Mathematics maintains a site about mathematical careers. The authors of the essays in this volume describe a wide variety of careers. Most math careers go far beyond just crunching numbers. Rate covenants: under a rate covenant, the issuer pledges that rates will be set at a level sufficient to... Ian Jacques, Mathematics for Economics and Business, seventh edition: contents, preface (xiii), guided tour. We will have netbooks in the classroom to allow for research and design. Explain the role of banks in the creation of money. Applied math content from the curriculum was aligned to the 2007 Mississippi Math Framework revised academic benchmarks.
Army enlisted jobs: the Army calls its enlisted jobs military occupational specialties (MOSs), and about 150 such specialties exist for entry-level recruits. Everyday low prices and free delivery on eligible orders. NDA Maths Syllabus 2020 PDF download with shortcut formulas. This course takes you through an overview of the wonderful world of business mathematics. After a repetition of basic linear algebra, computer algebra, and calculus, we will treat numerical calculus, statistics, and function approximation, which are the most important basic mathematics topics for engineers. This second edition of the immediately popular 101 Careers in Mathematics contains updates on the career paths of individuals profiled in the first edition, along with many new profiles. Cambridge Core, Recreational Mathematics: 101 Careers in Mathematics, edited by Andrew Sterrett. Buy 101 Careers in Mathematics (Classroom Resource Materials), 3rd revised edition, Andrew Sterrett (editor). NDA Question Paper Mathematics is an objective-type question paper; you should mark one answer from four alternatives, and for every wrong answer, one-third of the marks assigned to that question will be deducted as a penalty. Before teaching, she worked as a research chemical engineer. The ten best resources for maths teachers to inspire their students: all teachers have a role in helping students to prepare for their next steps in learning and work, whether it is in further and higher education, apprenticeships, or employment. The Mathematical Sciences Career Information site is a joint project of the AMS, MAA, and SIAM. It offers specific information about nonacademic employment. What can you do with a math degree other than teaching?
She received her Bachelor of Science degree in mathematics and chemical engineering. Mathematics, statistics, economics, and computer science teachers alike have the ability to inspire college students and get them excited about math careers. Studying mathematics seriously prepares you for almost any career, not just high school or college teaching or pure mathematics research. All the tricks in this book are self-working, which means you don't need to know any clever sleight of hand, like dealing cards from the bottom of a deck. This second edition of the immensely popular 101 Careers in Mathematics contains updates on the career paths of individuals profiled in the first edition. That's an increase of about 1 million jobs over 2012 employment levels. Here are some links to check out which support this theme with actual data. The degrees earned by the authors profiled here are a good mix of bachelor's, master's, and PhDs. The course has only high school mathematics as a prerequisite. Mathematics books for free; math questions and answers. Jan 03, 2003: 101 Careers in Mathematics, by Andrew Sterrett; Mathematical Association of America (MAA) edition; paperback; in English; 2nd edition. Additionally, it introduces the concept of negative numbers and integers. The course is designed for a person of any age, anywhere in the world. The "101 careers" of the title is best regarded as meaning lots of careers. The following quotes are taken from 101 Careers in Mathematics, published by the Mathematical Association of America. Students will use technology (Internet and email) to research mathematics in careers in order to increase their awareness, appreciation, and application of mathematics in the real world.
Whats been the effect of a couple of minor recessions and one big one. The book covers the mechanics of searching for a job e. These teachers create a curriculum, as well as assignments and tests, designed to educate students on the class topic and challenge them to get the most out of the course. Pdf mathematics is based on deductive reasoning though mans first experience with mathematics was of an inductive nature. Intro to tomorrows jobs bureau of labor statistics. Understanding the basics after reading this chapter, you will be able to. Students will be presenting the infographics to the class as the culmination of this activity. Though the union of mathematics and cryptology is old, it really came to the fore in connection with the powerful encrypting methods used during the second world war and their. For those who have a head for figures, pursuing a job related to mathematics is a choice that can add up to a rewarding and lucrative career. Department of mathematics and statistics university of regina acknowledgements this text would not be possible without the contribution of my former colleagues j.
{"url":"https://giotrenquartai.web.app/617.html","timestamp":"2024-11-10T04:55:08Z","content_type":"text/html","content_length":"19440","record_id":"<urn:uuid:0158f23d-8725-45d3-8087-9743457ef09e>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00080.warc.gz"}
A Short Review on Computational Hydraulics in the Context of Water Resources Engineering

1. Introduction

Hydraulics is a subfield of fluid mechanics that studies the behavior of water at rest and in motion. The term is frequently used together with hydrodynamics to refer to the forces acting between water and its boundaries and their effects on the resulting flow patterns. To measure and predict flow behavior, it is critical to apply the physical conservation laws, which are discussed in detail in the subsequent sections. Early hydrodynamicists attempted to solve numerous fluid-motion problems strictly theoretically, assuming an ideal (frictionless, non-viscous) fluid; their results were nevertheless effective in specific scenarios and conditions. Hydraulics, on the other hand, developed mathematical formulas to handle real-world fluid-flow problems using experimental data from numerous laboratory experiments and field observations. Ludwig Prandtl's essential idea of the boundary layer united hydrodynamics and hydraulics. Additionally, the illustrious research of Reynolds, Froude, Prandtl, and von Karman persuades us that studying fluids requires a combination of theory and experiment. The evolution of hydraulics can be traced all the way back to 200 BCE [1] (Figure 1). Hydraulics originated around the time of Archimedes, who established the now-famous laws of floating bodies. Following Galileo (1564-1642), Castelli (1577-1644), Torricelli (1608-1647), and Guglielmini (1655-1710) developed mathematical models for barometers, flow from containers, steady flow, and resistance in rivers. When Newton (1642-1727) published his famed laws of motion, they expanded our quantitative understanding of fluid resistance in terms of velocity gradient and drag on spheres.
Following Newton, Daniel Bernoulli, Leonhard Euler, Clairaut, d'Alembert, Lagrange (1736-1813), Laplace (1749-1827), and Gerstner (1756-1832) contributed significant concepts on fluid flow and surface waves to hydrodynamics. Additionally, de Pitot invented a tube for measuring flow speed; Chezy devised a formula for open-channel resistance; Borda conducted orifice experiments; Bossut installed a towing tank; and Venturi experimented with flow in varying cross-sections, all of which enriched the field of hydraulics [1]. Figure 1. Contributors in hydrodynamics and hydraulics (source: Google Images). Coulomb (1736-1806) conducted flow-resistance studies, as did the Weber brothers, Ernst (1795-1878) and Wilhelm (1804-1891), all of whom made significant contributions to the field of hydraulics. Buron (1790-1873), Fourneyron (1802-1867), Coriolis (1792-1843), Francis (1815-1892), and Russell (1808-1882) likewise made significant contributions to engineering. Hagen (1797-1889), Poiseuille (1799-1869), and Weisbach (1806-1871) contributed significantly to pipe flow, whereas Saint-Venant (1797-1886) contributed to open-channel flow. Dupuit (1804-1866), Bresse (1822-1883), Bazin (1829-1917), Darcy (1803-1858), and Manning (1816-1897) conducted research on both open-channel hydraulics and pipe flow. William Froude (1810-1879) and Robert Froude (1846-1924) in particular provided a valuable criterion for classifying flow, based on their ship-model tests. Additionally, Osborne Reynolds performed a celebrated streamline experiment to distinguish laminar from turbulent flow.
Navier (1785-1836), Cauchy (1789-1857), Poisson (1781-1840), Saint-Venant (1797-1886), Boussinesq (1842-1929), Stokes (1819-1903), Lord Rayleigh (1842-1919), Lamb (1849-1934), Helmholtz (1821-1894), and Kirchhoff (1824-1887) all made significant contributions to the development of theoretical and applied hydrodynamics during the nineteenth century. By then, Euler's equation of motion for an ideal (non-viscous) fluid had advanced significantly; however, it does not account for certain important observed results, such as the pressure drop along a pipe. As a result, practicing engineers created their own empirical hydraulics. Computational hydraulics is one of several domains of science in which computer technologies enable an intermediate mode of operation between pure theory and experiment. The discipline is more a synthesis of hydraulics and hydrodynamics than an independent development. The objective of computational hydraulics is to use computers to simulate various physical processes involving seas, estuaries, rivers, canals, and reservoirs. This article explores fundamental hydraulic processes and their quantification, focusing on water-resource applications such as open channels, pipes, groundwater, and coastal waves [1].

2. Hydraulics of Open Channels

Open-channel flows deal with the flow of water in channels open to the atmosphere, where the pressure at the water surface is atmospheric and the pressure at each section is proportional to the depth of water. The pressure head is the ratio of pressure to the specific weight of water ( $P/\gamma$ ). The elevation head (Z) is the height of the section under consideration above a datum, and the velocity head ( ${V}^{2}/2g$ ) is attributed to the average velocity of flow in that vertical section. Hence, the total head can be expressed by Equation (1).
$\mathcal{H}=Z+\frac{P}{\gamma }+\frac{{V}^{2}}{2g}$ (1)

In an open channel, water flow is driven primarily by the head gradient and gravity. Open channels are used mainly for irrigation, industrial, and domestic water supply [2].

2.1. Conservation Laws

Mass conservation, momentum conservation, and energy conservation are the main conservation laws used for open channels. Conservation of mass is commonly stated for a control volume: the net mass flux into the control volume due to inflow and outflow equals the rate of change of mass stored within it. This results in the classical continuity equation balancing the inflow, the outflow, and the storage change in the control volume. Since the fluid considered is water, which is regarded as incompressible, variations in density can be disregarded [2]. Under conservation of momentum, the rate of change of momentum in the control volume equals the net force acting on it. Since the water under consideration is flowing, external forces act upon it; this is ultimately an application of Newton's second law of motion [2]. Energy conservation states that energy can be neither created nor destroyed; it only changes form. In open channels, the energy is primarily potential and kinetic: potential energy is due to the elevation of the water parcel, while kinetic energy is due to its motion. In open-channel flow the total energy between any two sections is conserved, which leads to the classical Bernoulli equation. When applied between two sections, this equation must account for the energy loss between them, caused by bed-shear resistance and other effects [2].

2.2.
Types of Open Channel Flows

The flow in an open channel is classified as subcritical, supercritical, or critical depending on the Froude number ( ${F}_{r}$ ), which can be written as ${F}_{r}=V/\sqrt{gD}$ (5), where $D=A/T$ is the hydraulic depth. Types of open-channel flow can be further categorized on the basis of time and space parameters (see Figure 2(a)). Flow is said to be steady when the discharge does not change with time along the course of the channel; when the discharge varies over time, the flow is said to be unsteady. Figure 2. (a) Types of open channel flows; (b) Specific energy curve for a given discharge. When both the depth and discharge are the same at any two sections of the channel, the flow is said to be uniform. Furthermore, flow is said to be gradually varied whenever the depth changes gradually along the channel; whenever the flow depth changes rapidly along the channel, the flow is termed rapidly varied. Whenever the flow depth varies due to a change in discharge along the channel, the flow is called spatially varied [2]. Steady uniform flow (kinematic wave), steady non-uniform flow (diffusion wave), and unsteady non-uniform flow (dynamic wave) are the possible flow types under this classification.

2.3. Specific Energy and Specific Force

Specific energy is defined as the energy possessed by a section of water due to its depth and the velocity with which it is flowing; the specific energy E is given by $E=y+\frac{{V}^{2}}{2g}$, where y is the depth of flow at that section and V is the average velocity of flow. Specific energy is minimum at the critical condition (see point C in Figure 2(b)), and the corresponding depth is termed the critical depth ${y}_{c}$. This ${y}_{c}$ can be determined for a specific section under steady flow [2]. Specific force is defined as the sum of the momentum of the flow passing through the channel section per unit time per unit weight of water and the force per unit weight of water [2].
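The Froude-number classification described above can be sketched in code. This is a minimal illustration (Python is used here only as a sketch language; it assumes a wide rectangular channel, so the hydraulic depth equals the flow depth, and the value of g and the tolerance are illustrative choices):

```python
import math

def froude_number(V, y, g=9.81):
    """Froude number Fr = V / sqrt(g*y) for a wide rectangular channel,
    where the hydraulic depth D = A/T reduces to the flow depth y."""
    return V / math.sqrt(g * y)

def classify_flow(V, y, g=9.81, tol=1e-6):
    """Classify open-channel flow by its Froude number."""
    Fr = froude_number(V, y, g)
    if abs(Fr - 1.0) < tol:
        return "critical"
    return "subcritical" if Fr < 1.0 else "supercritical"
```

For example, slow deep flow (V = 1 m/s, y = 2 m) is subcritical, while fast shallow flow (V = 5 m/s, y = 0.5 m) is supercritical.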
The specific forces of two sections are equal provided that the external forces and the weight effect of water in the reach between the two sections can be ignored. The term specific force is very useful for quantifying rapidly varied flow, such as a hydraulic jump [2]. At the critical state of flow the specific force is a minimum for the given discharge, and likewise the flow is critical when the specific energy is a minimum. The flow must pass through critical conditions whenever it changes from subcritical to supercritical or vice versa. In subcritical flow the depth is higher and the velocity lower; in supercritical flow the depth is lower and the velocity higher. Flow over a free overfall is critical. Writing the specific energy as $E=y+\frac{{Q}^{2}}{2g{A}^{2}}$, the condition for a minimum is $\frac{\text{d}E}{\text{d}y}=0$. Since $\text{d}A=T\text{d}y$, where T is the width of the channel at the water surface, applying $\frac{\text{d}E}{\text{d}y}=0$ results in the critical-flow condition $\frac{{Q}^{2}{T}_{c}}{g{A}_{c}^{3}}=1$ (i.e., ${F}_{r}=1$). For a rectangular channel $\frac{{A}_{c}}{{T}_{c}}={y}_{c}$, and the derivation gives ${y}_{c}={\left({q}^{2}/g\right)}^{1/3}$, where q is the discharge per unit width. The same notion applies to trapezoidal and other cross-sections [2]. The critical flow state describes a unique relationship between depth and discharge that is very useful in the design of flow-measurement structures.

2.4. Uniform Flows

This is one of the most important principles of open-channel flow. The most important uniform-flow equation is Manning's equation, stated as:

$V=\frac{1}{n}{R}^{2/3}{S}^{1/2}$ (11)

where R = the hydraulic radius = A/p, p = the wetted perimeter (a function of the depth y and the section geometry), ${S}_{0}$ = the bed slope (equal, in uniform flow, to the energy slope ${S}_{f}$), and n = Manning's dimensional empirical roughness coefficient. The uniform-flow concept is used in most open-channel design.
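For a rectangular channel, the critical-flow condition reduces to the standard result ${y}_{c}={\left({q}^{2}/g\right)}^{1/3}$, with q the discharge per unit width. A minimal sketch (the rectangular section and function names are assumptions for illustration):

```python
def critical_depth_rectangular(Q, b, g=9.81):
    """Critical depth y_c = (q^2/g)**(1/3) for a rectangular channel,
    where q = Q/b is the discharge per unit width (SI units)."""
    q = Q / b
    return (q * q / g) ** (1.0 / 3.0)
```

At this depth the Froude number is exactly 1, which can serve as a quick sanity check of the formula.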
The uniform-flow condition implies that there is no acceleration of the flow, the weight component of the flow being balanced by the bed-shear resistance. In terms of discharge, Manning's Equation (11) is given by:

$Q=\frac{1}{n}A{R}^{2/3}{S}^{1/2}$ (12)

This is a nonlinear equation in y, the depth of flow, for which most of the computations will be made. The derivation of the uniform-flow equation is given below, where $W\mathrm{sin}\theta$ = the weight component of the fluid mass in the direction of flow, ${\tau }_{0}$ = the bed shear stress, and $p\Delta x$ = the wetted surface area of the channel reach. The force balance can be written as:

$W\mathrm{sin}\theta -{\tau }_{0}p\Delta x=0\to \gamma A\Delta x\mathrm{sin}\theta -{\tau }_{0}p\Delta x=0\to {\tau }_{0}=\gamma \frac{A}{p}\mathrm{sin}\theta$ (13)

Now A/p is the hydraulic radius R, and $\mathrm{sin}\theta$ is the slope of the channel ${S}_{o}$. The shear stress can be expressed as:

${\tau }_{0}=\rho {C}_{f}\frac{{V}^{2}}{2}$ (14)

where ${C}_{f}$ is a resistance coefficient, V is the mean velocity, and $\rho$ is the mass density. Therefore Equation (14) can be written as:

$\rho {C}_{f}\frac{{V}^{2}}{2}=\gamma R{S}_{o}\to V=\sqrt{\frac{2g}{{C}_{f}}}\sqrt{R{S}_{o}}\to V=\mathcal{C}\sqrt{R{S}_{o}}$ (15)

where $\mathcal{C}$ is Chezy's constant. For Manning's equation, $\mathcal{C}=\frac{1}{n}{R}^{1/6}$ (16).

2.5. Gradually Varied Flow

Flow is said to be gradually varied whenever the depth of flow changes gradually. The governing equation for gradually varied flow is given by:

$\frac{\text{d}y}{\text{d}x}=\frac{{S}_{o}-{S}_{f}}{1-{F}_{r}^{2}}$ (17)

where the variation of depth y with channel distance x is a function of the bed slope ${S}_{o}$, the friction slope ${S}_{f}$, and the flow Froude number ${F}_{r}$. This is a nonlinear equation, with the depth varying as a nonlinear function of distance (Figure 3). Figure 3. (a) Steady uniform flow in an open channel; (b) Total head at a channel section.
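Because Manning's Equation (12) is nonlinear in the depth y, the normal depth is usually found iteratively. A minimal sketch using bisection for a rectangular section (SI units; the section shape, solver bounds, and function names are illustrative assumptions):

```python
def manning_Q(y, b, n, S0):
    """Discharge from Manning's equation (12) for a rectangular section (SI)."""
    A = b * y
    R = A / (b + 2.0 * y)          # hydraulic radius = area / wetted perimeter
    return (1.0 / n) * A * R ** (2.0 / 3.0) * S0 ** 0.5

def normal_depth(Q, b, n, S0, y_lo=1e-6, y_hi=50.0, tol=1e-8):
    """Solve the nonlinear Manning equation for normal depth by bisection
    (valid because discharge increases monotonically with depth here)."""
    for _ in range(200):
        y_mid = 0.5 * (y_lo + y_hi)
        if manning_Q(y_mid, b, n, S0) < Q:
            y_lo = y_mid
        else:
            y_hi = y_mid
        if y_hi - y_lo < tol:
            break
    return 0.5 * (y_lo + y_hi)
```

A library root-finder would serve equally well; bisection is shown only because it is robust and easy to verify.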
The gradually-varied-flow equation can be derived from the conservation of energy between two sections of a reach of length $\Delta x$:

${y}_{1}+\frac{{V}_{1}^{2}}{2g}+{S}_{o}\Delta x={y}_{2}+\frac{{V}_{2}^{2}}{2g}+{S}_{f}\Delta x$ (18)

Now, let $\Delta y={y}_{2}-{y}_{1}$ and $\frac{{V}_{2}^{2}}{2g}-\frac{{V}_{1}^{2}}{2g}=\frac{\text{d}}{\text{d}x}\left(\frac{{V}^{2}}{2g}\right)\Delta x$. Then the above equation becomes:

$\Delta y={S}_{o}\Delta x-{S}_{f}\Delta x-\frac{\text{d}}{\text{d}x}\left(\frac{{V}^{2}}{2g}\right)\Delta x$ (19)

Dividing through by $\Delta x$ and taking the limit as $\Delta x$ approaches zero gives us:

$\frac{\text{d}y}{\text{d}x}={S}_{o}-{S}_{f}-\frac{\text{d}}{\text{d}x}\left(\frac{{V}^{2}}{2g}\right)$ (20)

After simplification, which can be carried out in terms of the Froude number, the general differential equation can be written in the forms of Equations (21) to (23), the last of which is $\frac{\text{d}y}{\text{d}x}=\frac{{S}_{o}-{S}_{f}}{1-{F}_{r}^{2}}$. Numerical integration of the gradually-varied-flow equation gives the water-surface profile along the channel. Depending on where the depth of flow lies relative to the normal depth and the critical depth, and on the bed slope compared with the friction slope, different types of profiles are formed, such as M (mild), C (critical), and S (steep) profiles. M (mild): if the slope is so small that the normal (uniform-flow) depth is greater than the critical depth for the given discharge, the slope of the channel is mild. C (critical): if the normal depth equals the critical depth, the slope is called critical, denoted by C. S (steep): if the channel slope is so steep that the normal depth produced is less than critical, the channel is steep, and the water-surface profile is designated S (see [1] [3]).

2.6. Rapidly Varied Flow

This flow has very pronounced curvature of the streamlines, and its pressure distribution cannot be assumed to be hydrostatic. The rapid variation in flow regime often takes place over a short span.
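The numerical integration of the gradually-varied-flow equation mentioned above can be sketched with a simple forward-Euler step for a rectangular channel, with the friction slope taken from Manning's equation. This is only a rough illustration (a production computation would use the direct-step or standard-step method, and the step must stay away from the singularity at Fr = 1):

```python
def gvf_profile(y0, x_end, dx, Q, b, n, S0, g=9.81):
    """Euler integration of dy/dx = (S0 - Sf) / (1 - Fr^2) for a
    rectangular channel; Sf comes from Manning's equation (SI units)."""
    x, y = 0.0, y0
    profile = [(x, y)]
    while x < x_end:
        A = b * y
        V = Q / A
        R = A / (b + 2.0 * y)
        Sf = (n * V) ** 2 / R ** (4.0 / 3.0)   # Manning friction slope
        Fr2 = V * V / (g * y)                  # Froude number squared
        y += dx * (S0 - Sf) / (1.0 - Fr2)
        x += dx
        profile.append((x, y))
    return profile
```

Starting from a depth above the normal depth on a mild slope, the computed backwater (M1-type) profile rises gradually in the downstream direction, as expected.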
When rapidly varied flow occurs at a sudden-transition structure, the physical characteristics of the flow are fixed mainly by the boundary geometry of the structure as well as by the state of the flow. Channel expansions and contractions, sharp-crested weirs, and broad-crested weirs are examples. The specific force before and after the transition can be considered the same, i.e., ${F}_{1}={F}_{2}$ [2].

2.7. Unsteady Flows

When the flow conditions vary with respect to time, the flow is called unsteady. Some terminology: a wave is defined as a temporal or spatial variation of flow depth and rate of discharge; the wave length is the distance between two adjacent wave crests or troughs; the amplitude is the height between the maximum water level and the still-water level [4]. The wave celerity ( $\mathcal{C}$ ) is the velocity of a wave relative to the fluid in which it travels, the fluid itself flowing with velocity V. The absolute wave velocity ( ${V}_{w}$ ) is the velocity with respect to a fixed reference:

${V}_{w}=V\pm \mathcal{C}$ (24)

The plus sign indicates that the wave is traveling in the flow direction, and vice versa. For shallow-water waves $\mathcal{C}=\sqrt{g{y}_{0}}$, where ${y}_{0}$ = the undisturbed flow depth. Unsteady flows occur for reasons such as: surges in power canals or tunnels; surges in upstream or downstream channels produced by the starting or stopping of pumps and the opening and closing of control gates; waves generated in navigation channels by navigation locks; flood waves in streams, rivers, and drainage channels due to rainstorms and snowmelt; and tides in estuaries, bays, and inlets. Unsteady flow in open channels commonly involves translatory waves. A translatory wave is a gravity wave that propagates in an open channel and results in appreciable displacement of the water particles in a direction parallel to the flow. For purposes of analysis, unsteady flow is classified into two types, namely gradually varied and rapidly varied unsteady flow.
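Setting the specific forces equal across a hydraulic jump in a rectangular channel yields the classical sequent-depth (Belanger) relation. A minimal sketch (the rectangular section is an assumption; the momentum check below uses the specific force per unit width, $M={q}^{2}/gy+{y}^{2}/2$):

```python
import math

def sequent_depth(y1, Fr1):
    """Downstream (sequent) depth of a hydraulic jump in a rectangular
    channel, from equality of specific force:
    y2 = (y1/2) * (sqrt(1 + 8*Fr1**2) - 1)."""
    return 0.5 * y1 * (math.sqrt(1.0 + 8.0 * Fr1 ** 2) - 1.0)
```

At Fr1 = 1 the relation returns the same depth (no jump), and for supercritical approach flow it returns the deeper subcritical depth that carries the same specific force.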
In gradually varied unsteady flow the curvature of the wave profile is mild and the change in depth is gradual, whereas in rapidly varied unsteady flow the curvature of the wave profile is very large, so the surface profile may become virtually discontinuous. The continuity equation for unsteady flow [5] in an open channel becomes:

${D}_{f}\frac{\partial V}{\partial x}+V\frac{\partial y}{\partial x}+\frac{\partial y}{\partial t}=0$ (25)

For a rectangular channel of infinite width, Equation (25) may be written as:

$\frac{\partial q}{\partial x}+\frac{\partial y}{\partial t}=0$ (26)

When the channel is fed laterally with a supplementary discharge of ${q}^{\prime }$ per unit length, for instance into an area that is being flooded over a dike, the equation becomes:

$\frac{\partial q}{\partial x}+\frac{\partial y}{\partial t}+{q}^{\prime }=0$ (27)

The general dynamic equation for gradually varied unsteady flow is given by (see [5] [6]):

$\frac{\partial y}{\partial x}+\frac{\alpha V}{g}\frac{\partial V}{\partial x}+\frac{1}{g}\frac{\partial V}{\partial t}=0$ (28)

3. Hydraulics of Pipe Flows

Pipe flows are driven mainly by the pressure difference between two sections. The total head here also consists of the pressure head, elevation head, and velocity head, and the principles of continuity, energy, and momentum apply to this type of flow as well. For example, when designing a pipe, we use the continuity and energy equations to obtain the appropriate pipe diameter; then, applying the momentum equation [5] [6] for a given discharge, we get the forces that act on bends. The key factors in the design and operation of a pipeline are the head losses, the pressures and stresses acting on the pipe material, and the discharge. Head loss for a given discharge relates to flow efficiency; i.e., an optimum pipe size will yield the least overall cost of installation and operation for the desired discharge.
Choosing a small pipe results in low initial costs, but subsequent costs may be excessively high due to the energy cost of significant head losses. The hydraulic aspect of the problem requires applying the one-dimensional steady-flow form of the energy equation:

$\frac{{P}_{1}}{\gamma }+\frac{{\alpha }_{1}{V}_{1}^{2}}{2g}+{Z}_{1}+{h}_{p}=\frac{{P}_{2}}{\gamma }+\frac{{\alpha }_{2}{V}_{2}^{2}}{2g}+{Z}_{2}+{h}_{t}+{h}_{L}$ (29)

where $\frac{P}{\gamma }$ = pressure head, $\frac{\alpha {V}^{2}}{2g}$ = velocity head, Z = elevation head, ${h}_{p}$ = head supplied by a pump, ${h}_{t}$ = head supplied to a turbine, and ${h}_{L}$ = head loss between sections 1 and 2. The kinetic-energy correction factor $\alpha$ is defined as:

$\alpha =\frac{\int {u}^{3}\text{d}A}{{V}^{3}A}$ (30)

where u = the velocity at any point in the section. $\alpha$ has a minimum value of unity when the velocity is uniform across the section, and values greater than unity depending on the degree of velocity variation across the section. For laminar flow in a pipe, the velocity distribution is parabolic across the section, and $\alpha$ has a value of 2.0. However, if the flow is turbulent, as is the usual case for water flow through large conduits, the velocity is fairly uniform over most of the conduit section, and $\alpha$ is near unity (typically $1.04<\alpha <1.06$). Therefore, for ease of application in hydraulic engineering, $\alpha$ is usually assumed to be unity in pipe flow, and the velocity head is then simply $\frac{{V}^{2}}{2g}$. The head supplied by a pump is directly related to the power supplied to the flow, ${P}_{p}=Q\gamma {h}_{p}$; similarly, if head is supplied to a turbine, the power supplied to the turbine is ${P}_{t}=Q\gamma {h}_{t}$.
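As a numerical illustration of Equation (29) with $\alpha = 1$ and no turbine term, the required pump head and the corresponding power can be computed directly (a minimal sketch; the function names and the SI value of $\gamma$ for water are assumptions for illustration):

```python
def pump_head(P1, P2, V1, V2, Z1, Z2, h_L, gamma=9810.0, g=9.81):
    """Head a pump must supply, from the energy equation (29) with
    alpha = 1 and h_t = 0 (SI units: Pa, m/s, m)."""
    return (P2 - P1) / gamma + (V2 ** 2 - V1 ** 2) / (2.0 * g) + (Z2 - Z1) + h_L

def pump_power(Q, h_p, gamma=9810.0):
    """Power delivered to the flow: P_p = Q * gamma * h_p (watts)."""
    return Q * gamma * h_p
```

For example, lifting water 10 m against 2 m of head loss requires 12 m of pump head; at Q = 0.05 m³/s this corresponds to roughly 5.9 kW delivered to the flow.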
The head-loss term ${h}_{L}$ accounts for the conversion of mechanical energy to internal energy (heat); because this internal energy is not readily converted back to useful mechanical energy, it is called a loss. Head loss results from viscous resistance to flow (friction) at the conduit wall or from the viscous dissipation of turbulence, which usually occurs with separated flow, such as in bends, fittings, or outlet works. The rate of head loss depends on the roughness element size in addition to the velocity and pipe diameter, and on whether the pipe is hydraulically smooth, rough, or somewhere in between. In a water-distribution system, head loss is also caused by bends, valves, and changes in pipe diameter. Head loss for steady flow through a straight pipe can be derived as follows, where ${A}_{w}$ is the wall (wetted) area and ${A}_{r}$ the cross-sectional area:

${\tau }_{0}{A}_{w}=\Delta P{A}_{r}$ (31)

$\Delta P=4\mathcal{L}{\tau }_{0}/D$ (32)

${\tau }_{0}=f\rho {V}^{2}/8$ (33)

$h=\frac{\Delta P}{\gamma }=f\frac{\mathcal{L}}{D}\frac{{V}^{2}}{2g}$ (34)

This is known as the Darcy-Weisbach equation; $h/\mathcal{L}=S$ is the slope of the hydraulic and energy grade lines for a pipe of constant diameter. Head loss in laminar flow can be calculated using the Hagen-Poiseuille equation, which gives:

$S=\frac{32V\mu }{{D}^{2}\rho g}$ (35)

Combining the above equations with the Darcy-Weisbach equation provides:

$f=\frac{64\mu }{\rho VD}$ (36)

which can be written in terms of the Reynolds number as $f=64/{R}_{e}$ (37). This relation is valid for laminar flow, ${R}_{e}<2100$. In turbulent flow, the friction factor is a function of both the Reynolds number and the pipe roughness. As the roughness size or the velocity increases, the flow becomes wholly rough and f depends only on the relative roughness. Where graphical determination of the friction factor is acceptable, a Moody diagram may be used.
This diagram gives the friction factor over a wide range of Reynolds numbers for laminar flow and for smooth, transitional, and rough turbulent flow. The quantities shown in the Moody diagram (see details in [2]) are dimensionless, so they can be used with any system of units. Minor losses are caused by valves, bends, and changes in pipe diameter. They are usually smaller than the friction losses in straight sections of pipe and are often ignored for practical purposes; however, minor losses are significant in valves and fittings, which create turbulence in excess of that produced in a straight pipe. A minor-loss coefficient K may be used to give the head loss as a function of the velocity head, ${h}_{L}=K\frac{{V}^{2}}{2g}$. Alternatively, a flow coefficient ${C}_{v}$, defined as the flow that will pass through the valve at a pressure drop of 1 psi, may be specified; given the flow coefficient, the head loss can be calculated, and the flow coefficient can be related to the minor-loss coefficient. Major losses are due to friction between the moving fluid and the inside walls of the duct. The Darcy-Weisbach formula is generally used to calculate major losses in pipes. This method is generally considered more accurate than the Hazen-Williams method; additionally, the Darcy-Weisbach method is valid for any liquid or gas.
The Moody friction factor f can be calculated as follows (Equation (41)). For laminar flow:

$f=\frac{64}{{R}_{e}}$ for ${R}_{e}\le 2100$, with ${R}_{e}=\frac{VD}{\nu }$

and for turbulent flow (the explicit Swamee-Jain approximation):

$f=\frac{1.325}{{\left[\mathrm{ln}\left(\frac{e}{3.7D}+\frac{5.74}{{R}_{e}^{0.9}}\right)\right]}^{2}}$ for $5000\le {R}_{e}\le {10}^{8}$ and ${10}^{-6}\le \frac{e}{D}\le {10}^{-2}$ (41)

Major loss in pipes can also be calculated using the Hazen-Williams friction-loss equation:

$V=k{C}_{HW}{R}^{0.63}{S}^{0.54}$, where $S=\frac{{h}_{f}}{\mathcal{L}}$ and $R=\frac{D}{4}$ for a circular pipe (42)

Hazen-Williams is valid only for water at ordinary temperatures (40˚ - 75˚F). The Hazen-Williams method is very popular, especially among civil engineers, since its friction coefficient ${C}_{HW}$ is not a function of velocity or pipe diameter, and it is simpler than Darcy-Weisbach when solving for flow rate, velocity, or diameter. When the flow conditions change from one steady state to another, the intermediate-stage flow is referred to as transient flow. This occurs due to design or operating errors or equipment malfunction.
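The two branches of Equation (41), together with the Darcy-Weisbach head loss of Equation (34), can be sketched directly. This is a minimal illustration (the single laminar/turbulent cutoff at Re = 2100 glosses over the transition range 2100 < Re < 5000, where neither formula strictly applies):

```python
import math

def swamee_jain_f(Re, rel_rough):
    """Explicit Swamee-Jain approximation to the Moody friction factor,
    valid roughly for 5e3 <= Re <= 1e8 and 1e-6 <= e/D <= 1e-2."""
    return 1.325 / math.log(rel_rough / 3.7 + 5.74 / Re ** 0.9) ** 2

def friction_factor(Re, rel_rough=0.0):
    """Laminar f = 64/Re below Re = 2100, else Swamee-Jain (Eq. 41)."""
    if Re <= 2100:
        return 64.0 / Re
    return swamee_jain_f(Re, rel_rough)

def darcy_weisbach_head_loss(f, L, D, V, g=9.81):
    """h_f = f * (L/D) * V^2 / (2g), Eq. (34) (SI units)."""
    return f * (L / D) * V ** 2 / (2.0 * g)
```

For Re = 10^5 and e/D = 0.001 the Swamee-Jain value is about 0.022, close to the value read from a Moody diagram.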
These transient-state pressures can cause great damage to the network system [4] [7]. A pressure rise in a closed conduit is caused by an instantaneous change in flow velocity; if the flow velocity at a point varies with time, the flow is unsteady. The terms fluid transients and hydraulic transients are both used in practice [4]. Consider a pipe of length L in which water flows from a constant-level upstream reservoir to a valve at the downstream end. Assume the valve is instantaneously closed at time $t={t}_{0}$, moving from the fully open position to the half-open position. This reduces the flow velocity through the valve, thereby increasing the pressure at the valve. The increased pressure produces a pressure wave that travels back and forth in the pipeline until it is dissipated by friction and the flow conditions become steady again; call this time ${t}_{1}$. The flow regimes can then be categorized as: 1) steady flow for $t<{t}_{0}$ ; 2) transient flow for ${t}_{0}<t<{t}_{1}$ ; and 3) steady flow for $t>{t}_{1}$. Transient-state pressures are sometimes reduced to the vapor pressure of the liquid, which separates the liquid column at that section; this is referred to as liquid-column separation. If the flow conditions repeat after a fixed time interval, the flow is called periodic flow, and the interval at which the conditions repeat is called the period. The analysis of transient-state conditions in closed conduits may be classified into two categories: the lumped-system approach and the distributed-system approach. In the lumped-system approach the conduit walls are assumed rigid and the liquid in the conduit is assumed incompressible, so that it behaves like a rigid mass; in other words, the flow variables are functions of time only. In the distributed-system approach the liquid is assumed slightly compressible.
Therefore the flow velocity varies along the length of the conduit in addition to varying in time. The one-dimensional form of the momentum equation for a control volume that is fixed in space and does not change shape may be written as:

$\sum \mathcal{F}=\frac{\text{d}}{\text{d}t}\int \rho AV\text{d}x+{\left(\rho A{V}^{2}\right)}_{out}-{\left(\rho A{V}^{2}\right)}_{in}$ (43)

If the liquid is considered incompressible and the pipe rigid, then at any instant the velocity along the pipe is the same, i.e., ${\left(\rho A{V}^{2}\right)}_{out}={\left(\rho A{V}^{2}\right)}_{in}$. Substituting all the forces acting on the control volume, we get:

$\mathcal{P}A+\gamma A\mathcal{L}\mathrm{sin}\alpha -{\tau }_{0}\pi D\mathcal{L}=\frac{\text{d}}{\text{d}t}\left(\rho AV\mathcal{L}\right)$ (44)

where $\mathcal{P}=\gamma \left(h-\frac{{V}^{2}}{2g}\right)$, $\alpha$ = the pipe slope, D = the pipe diameter, $\mathcal{L}$ = the pipe length, $\gamma$ = the specific weight of the fluid, and ${\tau }_{0}$ = the shear stress at the pipe wall. Replacing the frictional force by $\gamma {h}_{f}A$, writing ${H}_{0}=h+\mathcal{L}\mathrm{sin}\alpha$, and taking ${h}_{f}$ from the Darcy-Weisbach friction equation, the resulting equation yields:

$\frac{\text{d}V}{\text{d}t}=\frac{g}{\mathcal{L}}\left({H}_{0}-\frac{f\mathcal{L}}{D}\frac{{V}^{2}}{2g}\right)$ (45)

When the flow is fully established, $\frac{\text{d}V}{\text{d}t}=0$, and the final velocity ${V}_{0}$ will be:

${V}_{0}=\sqrt{\frac{2gD{H}_{0}}{f\mathcal{L}}}$ (46)

We use the above relationship to get the time for the flow to become established:

$t=\frac{\mathcal{L}{V}_{0}}{2g{H}_{0}}\mathrm{ln}\frac{{V}_{0}+V}{{V}_{0}-V}$ (47)

If the flow changes are rapid, it is important to take fluid compressibility into account [4]. Changes in the system are not instantaneous, because pressure waves travel back and forth in the piping system. Here the pipe walls are taken as rigid and the liquid as slightly compressible [2] [4]. Assume that the flow velocity at the downstream end is changed from V to $V+\Delta V$, thereby changing the pressure from P to $P+\Delta P$. The change in pressure produces a pressure wave that propagates in the upstream direction.
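The flow-establishment relations, together with the classical Joukowsky water-hammer formula $\Delta P=\rho a\Delta V$ (the standard linearization obtained by dropping higher-order terms in the momentum balance across the wave), can be sketched as follows. This assumes a constant friction factor and neglects minor losses; function names are illustrative:

```python
import math

def final_velocity(H0, f, L, D, g=9.81):
    """Steady velocity after valve opening, from dV/dt = 0 in the lumped
    momentum balance: V0 = sqrt(2*g*D*H0 / (f*L)). Constant f assumed."""
    return math.sqrt(2.0 * g * D * H0 / (f * L))

def establishment_time(V, V0, H0, L, g=9.81):
    """Time for the velocity to reach V (< V0), from integrating the
    lumped momentum equation: t = L*V0/(2*g*H0) * ln((V0+V)/(V0-V))."""
    return L * V0 / (2.0 * g * H0) * math.log((V0 + V) / (V0 - V))

def joukowsky_rise(rho, a, dV):
    """Instantaneous pressure rise for a sudden velocity change dV in a
    pipe with acoustic wave speed a: dP = rho * a * dV (Joukowsky)."""
    return rho * a * dV
```

Note the magnitude the Joukowsky formula predicts: for water (ρ ≈ 1000 kg/m³) with a wave speed of about 1000 m/s, each 1 m/s of sudden velocity change produces roughly 10⁶ Pa (about 10 bar) of pressure rise, which is why valve closures must be gradual.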
By assuming that the velocity reference frame travels with the pressure wave, the unsteady flow condition can be converted into a steady one. Using the momentum equation with a control-volume approach to solve for $\Delta P$ (see details in [2] [4]), the now-steady momentum equation yields:

$PA-\left(P+\Delta P\right)A=\left(V+a+\Delta V\right)\left(\rho +\Delta \rho \right)\left(V+a+\Delta V\right)A-\left(V+a\right)\rho \left(V+a\right)A$ (48)

4. Hydraulics of Ground Water Flow

Velocity, discharge, and head are the essential quantities used to characterize the flow of groundwater; they are governed by Darcy's law together with metrics related to the physical characteristics of the soil. The specific discharge is defined as the volume of water flowing through a unit area of soil per unit time, which has the unit of velocity. The specific discharge, usually denoted by q, is defined as:

$q=\frac{Q}{A}$ (49)

where Q is the flow rate of water through a cross-sectional area A. The average velocity of the fluid at a point of the porous medium is called the seepage velocity, represented by v and defined as:

$v=\frac{q}{{n}_{e}}$ (50)

where ${n}_{e}$ is the effective porosity of the soil, defined as the ratio of the volume of continuous pore spaces (open to groundwater flow) in a soil sample to the total volume of the sample. For groundwater flow, the total head h is defined as the sum of the pressure head and the elevation head z:

$h=\frac{P}{\rho g}+z={h}_{p}+z$ (51)

where P is the pressure at any point and $\rho$ is the density of water at the prevailing temperature. If the streamlines are not perpendicular to the vertical piezometer, then ${h}_{p}$ may include the contribution of the vertical component of the velocity head; for most groundwater flows, however, the velocity head constitutes a very small part of the total head and is usually neglected.
The important relationship is that the specific discharge at a certain point is proportional to the hydraulic gradient at that point, and the discharge occurs in the direction of decreasing head, which is known as Darcy’s law: ${q}_{x}=-K\frac{\text{d}h}{\text{d}x}$(52) where q[x] is the specific discharge along the x direction, $\frac{\text{d}h}{\text{d}x}$ is the gradient of head causing the flow along the x direction, and K is known as the hydraulic conductivity. The hydraulic conductivity can be further expressed as: $K=\frac{k\rho g}{\mu }$(53) where k is the intrinsic permeability of the medium and $\mu$ is the dynamic viscosity of the fluid. If the hydraulic conductivity K is independent of position within a geologic formation, the formation is homogeneous. In this case, K is a constant. On the other hand, if K is dependent on position, the formation is called heterogeneous. If the hydraulic conductivity K is independent of the direction of measurement at a point in the geologic formation, the formation is isotropic at that point. On the other hand, if K varies with the direction of measurement at a point, the formation is anisotropic at that point. In an aquifer with layered heterogeneity, flow may occur parallel to the layers, across the layers, or both. To simplify the flow calculations, it is possible to use estimates of equivalent K. If the layers are horizontal, then the equivalent K for a flow parallel to the layers, K[x], and that for a flow across the layers, K[z], are given by the following: ${K}_{x}=\underset{i=1}{\overset{n}{\sum }}\frac{{K}_{i}{D}_{i}}{\mathcal{D}}$(54) ${K}_{z}=\frac{\mathcal{D}}{{\sum }_{i=1}^{n}\frac{{D}_{i}}{{K}_{i}}}$(55) where $\mathcal{D}={\sum }_{i=1}^{n}{D}_{i}$ ; ${D}_{i}$ and ${K}_{i}$ are the thickness and hydraulic conductivity of the i^th homogeneous layer. Darcy’s law in this form is for a homogeneous and isotropic medium.
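Equations (54) and (55) can be implemented directly; the arithmetic (thickness-weighted) mean governs flow along the layers and the harmonic mean governs flow across them. The layer thicknesses and conductivities below are hypothetical.

```python
def equivalent_K(thicknesses, conductivities):
    """Equivalent hydraulic conductivities of a horizontally layered aquifer:
    Kx for flow parallel to the layers (Eq. (54), arithmetic mean) and
    Kz for flow across the layers (Eq. (55), harmonic mean)."""
    D = sum(thicknesses)  # total thickness
    Kx = sum(Ki * Di for Ki, Di in zip(conductivities, thicknesses)) / D
    Kz = D / sum(Di / Ki for Ki, Di in zip(conductivities, thicknesses))
    return Kx, Kz

# Two hypothetical layers: 5 m with K = 10 m/day over 10 m with K = 1 m/day
Kx, Kz = equivalent_K([5.0, 10.0], [10.0, 1.0])
print(Kx, Kz)  # Kx exceeds Kz: the low-K layer throttles cross-layer flow
```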
Assuming that the principal axes of K are aligned with the coordinate axes, the governing equation for a three-dimensional anisotropic confined aquifer can be written as: $\frac{\partial }{\partial x}\left({K}_{x}\frac{\partial h}{\partial x}\right)+\frac{\partial }{\partial y}\left({K}_{y}\frac{\partial h}{\partial y}\right)+\frac{\partial }{\partial z}\left({K}_{z}\frac{\partial h}{\partial z}\right)={S}_{s}\frac{\partial h}{\partial t}+\mathcal{R}$(56) Here, $\mathcal{R}$ is a source or sink term. If the medium is homogeneous and isotropic, ${K}_{x}={K}_{y}={K}_{z}=K$. In addition, if the aquifer is horizontal and has a constant thickness b, then $\mathcal{T}=Kb$ and $S=b{S}_{S}$, and in two dimensions the equation becomes $\frac{{\partial }^{2}h}{\partial {x}^{2}}+\frac{{\partial }^{2}h}{\partial {y}^{2}}=\frac{S}{\mathcal{T}}\frac{\partial h}{\partial t}$(57) where $\mathcal{T}$ is called the transmissivity of the aquifer, and S is called the storage coefficient. The above equation is called the diffusion equation. For the steady-state condition this equation becomes the well-known Laplace equation: $\frac{{\partial }^{2}h}{\partial {x}^{2}}+\frac{{\partial }^{2}h}{\partial {y}^{2}}=0$(58) The two-dimensional flow equation for an unconfined aquifer is given by: $\frac{\partial }{\partial x}\left({K}_{x}h\frac{\partial h}{\partial x}\right)+\frac{\partial }{\partial y}\left({K}_{y}h\frac{\partial h}{\partial y}\right)={S}_{y}\frac{\partial h}{\partial t}+\mathcal{R}$(59) Here, ${S}_{y}$ is called the specific yield, which is the quantity of water that can be drained out of a saturated volume of porous medium due to a unit lowering of the water table. The above equation is also known as the Boussinesq equation. It is a nonlinear, second-order partial differential equation but linear in ${h}^{2}$. 5. Water Wave Dynamics Small-amplitude or linear wave theory is the most elementary wave theory, developed by Airy (1845).
With a large range of wave parameters, this theory is easy to apply and offers a reasonable approximation of wave characteristics. Its key assumptions are as follows: homogeneous, ideal and incompressible fluid; irrotational flow; surface tension and the Coriolis effect are neglected; pressure at the free surface is uniform and constant; small amplitude; and the bed is a horizontal, fixed, impermeable boundary. Two-dimensional irrotationality can be stated by Equations (60) and (61): $u=\frac{\partial \varphi }{\partial x}$(60) $w=\frac{\partial \varphi }{\partial z}$(61) where $\varphi$, known as the velocity potential, is a scalar function [8]. In addition, incompressible flow implies that there is another function, termed the stream function $\psi$ (Equations (62) and (63)). $\frac{\partial \varphi }{\partial x}=\frac{\partial \psi }{\partial z}$(62) $\frac{\partial \varphi }{\partial z}=-\frac{\partial \psi }{\partial x}$(63) $\psi$ is orthogonal to the potential function $\varphi$. The lines of constant values of the potential function (i.e., equipotential lines) are mutually perpendicular to the lines of constant values of the stream function. Both $\varphi$ and $\psi$ satisfy the Laplace equation, which governs the flow of ideal water waves (see Equations (64) and (65)). $\frac{{\partial }^{2}\varphi }{\partial {x}^{2}}+\frac{{\partial }^{2}\varphi }{\partial {z}^{2}}=0$(64) $\frac{{\partial }^{2}\psi }{\partial {x}^{2}}+\frac{{\partial }^{2}\psi }{\partial {z}^{2}}=0$(65) The speed at which a wave form propagates is termed the phase velocity or wave celerity, $C=L/T$, where the length L is the horizontal distance between corresponding points on two successive waves and the period T is the time for two successive crests to pass a given point. The most commonly used terms to describe water wave dynamics are depicted in Figure 4 (Figure 4: Definition of elementary terms used to define a progressive sinusoidal wave [8]), where a progressive
wave is represented by the spatial variable x and the temporal variable t, and $\eta$ denotes the displacement of the water surface relative to the SWL (still-water level). As a result, the phase can be defined as $\theta =kx-\omega t$, where k and $\omega$ are the wave number ( $k=\frac{2\pi }{L}$ ) and the angular frequency ( $\omega =\frac{2\pi }{T}$ ), respectively. Consequently, the wave profile can be written as in Equation (66): $\eta =a\mathrm{cos}\left(\frac{2\pi x}{L}-\frac{2\pi t}{T}\right)=\frac{H}{2}\mathrm{cos}\left(kx-\omega t\right)$(66) Therefore, the amplitude of the wave is $a=\frac{H}{2}$, where H is the wave height. Wave motion can often be defined in terms of some dimensionless parameters such as wave steepness ( $\frac{H}{L}$ ), relative depth ( $\frac{d}{L}$ ) and relative height ( $\frac{H}{d}$ ). It is worth pointing out that relative depth and relative height are sometimes quantified as kd and ka, since they differ only by a constant factor of 2π. Water waves are usually classified into the following three categories based on relative depth: shallow water waves ( $0<\frac{d}{L}<\frac{1}{20}$, i.e., $0<kd<\frac{\pi }{10}$ ), transitional water waves ( $\frac{1}{20}<\frac{d}{L}<\frac{1}{2}$, i.e., $\frac{\pi }{10}<kd<\pi$ ) and deep water waves ( $\frac{1}{2}<\frac{d}{L}<\infty$, i.e., $\pi <kd<\infty$ ). In general, the wave phase speed can be expressed as: $\mathcal{C}=\frac{gT}{2\pi }\mathrm{tanh}\left(kd\right)$(67) Therefore, the wave length can be obtained from: $L=\mathcal{C}T=\frac{g{T}^{2}}{2\pi }\mathrm{tanh}\left(kd\right)$(68) In addition to these wave parameters, the local flow velocities and accelerations during the passage of a water wave must often be calculated using Equations (69)-(72).
$u=\frac{H}{2}\frac{gT}{L}\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}\mathrm{cos}\left(kx-\omega t\right)$(69) $w=\frac{H}{2}\frac{gT}{L}\frac{\mathrm{sinh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}\mathrm{sin}\left(kx-\omega t\right)$(70) ${\alpha }_{x}=\frac{\partial u}{\partial t}=\frac{g\pi H}{L}\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}\mathrm{sin}\left(kx-\omega t\right)$(71) ${\alpha }_{z}=\frac{\partial w}{\partial t}=-\frac{g\pi H}{L}\frac{\mathrm{sinh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}\mathrm{cos}\left(kx-\omega t\right)$(72) Water particle displacements from the mean position for shallow/transitional-water (i.e., $\frac{d}{L}<\frac{1}{2}$ ) and deep-water waves (i.e., $\frac{d}{L}>\frac{1}{2}$ ) can be tracked by the following equations: $\xi =-\frac{H}{2}\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{sinh}\left(kd\right)}\mathrm{sin}\left(kx-\omega t\right)$(73) $\zeta =\frac{H}{2}\frac{\mathrm{sinh}\left[k\left(z+d\right)\right]}{\mathrm{sinh}\left(kd\right)}\mathrm{cos}\left(kx-\omega t\right)$(74) Assuming $\mathcal{A}=\frac{H}{2}\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{sinh}\left(kd\right)}$ and $\mathcal{B}=\frac{H}{2}\frac{\mathrm{sinh}\left[k\left(z+d\right)\right]}{\mathrm{sinh}\left(kd\right)}$, we can rewrite Equations (73) and (74) in the form $\frac{{\xi }^{2}}{{\mathcal{A}}^{2}}+\frac{{\zeta }^{2}}{{\mathcal{B}}^{2}}=1$. Therefore, water particle displacements follow elliptical orbits for shallow/transitional-water waves and circular orbits for deep-water waves. In other words, for shallow/transitional water $\mathcal{A}=\frac{H}{2}\frac{L}{2\pi d}$ and $\mathcal{B}=\frac{H}{2}\frac{z+d}{d}$ ; for deep-water waves $\mathcal{A}=\mathcal{B}=\frac{H}{2}\mathrm{exp}\frac{2\pi z}{L}$.
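The orbit semi-axes $\mathcal{A}$ and $\mathcal{B}$ above can be evaluated directly to confirm the elliptical-versus-circular behavior; the wave height, length, depths and elevation below are illustrative values.

```python
import math

def orbit_semi_axes(H, L, d, z):
    """Semi-axes A (horizontal) and B (vertical) of the water-particle
    orbit from Eqs. (73)-(74); z is negative below the still-water level."""
    k = 2.0 * math.pi / L
    A = (H / 2.0) * math.cosh(k * (z + d)) / math.sinh(k * d)
    B = (H / 2.0) * math.sinh(k * (z + d)) / math.sinh(k * d)
    return A, B

# Illustrative: H = 1 m, L = 50 m, depth d = 5 m (shallow/transitional)
A_s, B_s = orbit_semi_axes(1.0, 50.0, 5.0, z=-1.0)
# Deep water: d = 100 m with the same wave length
A_d, B_d = orbit_semi_axes(1.0, 50.0, 100.0, z=-1.0)
print(A_s > B_s, abs(A_d - B_d) < 1e-9)  # elliptical vs. circular orbits
```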
Water pressure under waves comprises three main components: a dynamic component due to acceleration, a static component of pressure, and atmospheric pressure. Mathematically, this is expressed using Equation (75): ${P}^{\prime }=\rho g\frac{H}{2}\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}\mathrm{cos}\left(kx-\omega t\right)-\rho gz+{P}_{a}$(75) Therefore, the relative pressure will be $P={P}^{\prime }-{P}_{a}$. If the pressure response factor is denoted by ${K}_{z}=\frac{\mathrm{cosh}\left[k\left(z+d\right)\right]}{\mathrm{cosh}\left(kd\right)}$ then Equation (75) becomes ${P}^{\prime }=\rho g\left(\eta {K}_{z}-z\right)$. Besides the pressure force, it is possible to divide water wave energy into the potential ( ${E}_{p}=\frac{1}{16}\rho g{H}^{2}L$ ) and kinetic ( ${E}_{k}=\frac{1}{16}\rho g{H}^{2}L$ ) forms of energy. Therefore, the total energy per wave length and unit width can be written as $E=\frac{1}{8}\rho g{H}^{2}L$. Furthermore, the rate of energy transfer by a water wave is known as the energy flux, which can be quantified using Equation (76). $\stackrel{¯}{P}=\frac{1}{8}\rho g{H}^{2}\frac{1}{2}\left[1+\frac{2kd}{\mathrm{sinh}\left(2kd\right)}\right]\frac{L}{T}$(76) Equation (76) can be written simply in the form $\stackrel{¯}{P}=En\mathcal{C}=E{C}_{g}$ (with E here taken per unit surface area, $E=\frac{1}{8}\rho g{H}^{2}$), where ${C}_{g}$ is commonly known as the group velocity. Therefore, the group velocity can be quantified using Equation (77): ${C}_{g}=n\mathcal{C}=\frac{1}{2}\left[1+\frac{2kd}{\mathrm{sinh}\left(2kd\right)}\right]\mathcal{C}$(77) Coastal hydraulics usually considers problems near the shoreline, and the key processes that can affect a wave as it propagates from deep into shallow water include shoaling, refraction, reflection, diffraction, breaking and damping. The transformation of the wave form due to interaction with bathymetry is known as wave shoaling.
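Equation (68) is implicit in L (since k = 2π/L), so evaluating the celerity, the ratio n and the group velocity requires solving it numerically. The sketch below uses bisection; the wave height, period and depth are illustrative values, not from the source.

```python
import math

def wave_length(T, d, g=9.81, tol=1e-10):
    """Solve L = (g T^2 / 2 pi) tanh(2 pi d / L)  (Eq. (68)) by bisection."""
    L0 = g * T * T / (2.0 * math.pi)   # deep-water wave length
    f = lambda L: L - L0 * math.tanh(2.0 * math.pi * d / L)
    lo, hi = 1e-6, L0                  # f(lo) < 0 <= f(hi): root is bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def energy_flux(H, T, d, rho=1000.0, g=9.81):
    """Group velocity and energy flux per unit crest width (Eq. (76))."""
    L = wave_length(T, d)
    k = 2.0 * math.pi / L
    n = 0.5 * (1.0 + 2.0 * k * d / math.sinh(2.0 * k * d))
    C = L / T                          # celerity, Eq. (67)
    Cg = n * C                         # group velocity
    E = rho * g * H * H / 8.0          # energy per unit surface area
    return L, n, Cg, E * Cg

# Illustrative: 2 m high, 8 s waves in 10 m of water
L, n, Cg, P = energy_flux(H=2.0, T=8.0, d=10.0)
print(L, n, Cg, P)  # n lies between 1/2 (deep water) and 1 (shallow water)
```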
Wave refraction is the change in the direction of wave propagation due to differences in wave velocity along the crest, generally illustrated by lines drawn perpendicular to the wave crest in the direction of wave propagation, known as wave rays. The angle of the wave along a ray follows Snell’s law. Wave diffraction is the bending of wave crests (changes in direction) due to along-crest gradients in wave height. The point at which the wave form becomes unstable and breaks is known as the breaking point, and the phenomenon is known as wave breaking. Wave breaking occurs when water particles at the crest travel much faster/farther than water particles in the trough. Wave breaking may be classified into four types: spilling, plunging, collapsing, and surging [8]. 6. Differential Equations for Water Movement Water movement is the key process in Water Resources Engineering. Most mathematical models describing water movement consist of differential equations derived from the fundamental principles of the conservation of mass, energy and momentum. The continuity equation for one-dimensional unsteady gradually-varied flow, the Belanger equation, and the unsteady flow equation are some examples of such differential equations. The order of a differential equation is the order of its highest derivative, and the power of that highest-order derivative is called the degree of the differential equation. In general, the differential equations for water movement are second-order partial differential equations. The conservation laws in physics can be described by a PDE of the general form (Equation (78)): $a\frac{{\partial }^{2}C}{\partial {x}^{2}}+b\frac{{\partial }^{2}C}{\partial x\partial y}+c\frac{{\partial }^{2}C}{\partial {y}^{2}}+d\frac{\partial C}{\partial x}+e\frac{\partial C}{\partial y}+fC+g=0$(78) where a, b, c, d, e, f and g may be constants or functions of x and y or of C. If a, b, c, d, e, f and g are constants or functions of x and y only, the equation is linear.
If $\left({b}^{2}-4ac\right)>0$, the above equation is hyperbolic; for example, the wave equation is a hyperbolic equation. If $\left({b}^{2}-4ac\right)=0$, the above equation is parabolic; for example, the diffusion equation is a parabolic equation. If $\left({b}^{2}-4ac\right)<0$, the above equation is elliptic; the Laplace equation is an elliptic equation. Besides the general form of the PDE, the conservation law can be seen as an advection-diffusion process, which is discussed in the following section. 7. Modeling Advection-Diffusion Processes The advection-diffusion process can be expressed by an equation known as the advection-diffusion equation, which comes from the conservation of contaminant mass. Advection refers to the transport of contaminants by the moving water, where contaminants travel at the same velocity as the mass of water in which they are dissolved. Diffusion refers to the spreading of the contaminants in all directions due to turbulence and the non-uniform velocity distribution. Here, molecular diffusion is much smaller than turbulent diffusion. In addition to being advected and diffused, the amount of a contaminant dissolved in water may increase or decrease in time due to chemical reactions with other agents present in the environment. When such reactions are absent, the contaminant process is called a conservative process, meaning that the total mass of the dissolved substance remains constant through the transport process. When such reaction processes are present, the contaminant process is called non-conservative, indicating that the mass of the contaminant is growing or decaying at a certain rate. For a one-dimensional flow with velocity u, diffusion coefficient ${D}_{f}$ and first-order decay rate r, these processes combine into: $\frac{\partial C}{\partial t}=-u\frac{\partial C}{\partial x}+{D}_{f}\frac{{\partial }^{2}C}{\partial {x}^{2}}-rC$(79) The wave equation, diffusion equation and Laplace equation are also examples related to the advection-diffusion process. Analytical solutions for advection-diffusion-reaction processes can be found only for simplified problems.
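The discriminant test for second-order PDEs is easy to mechanize; a minimal sketch, with coefficient triples chosen to represent the wave, diffusion and Laplace equations:

```python
def classify_pde(a, b, c):
    """Classify a second-order PDE a*C_xx + b*C_xy + c*C_yy + ... = 0
    by the sign of the discriminant b^2 - 4ac."""
    disc = b * b - 4.0 * a * c
    if disc > 0:
        return "hyperbolic"
    if disc == 0:
        return "parabolic"
    return "elliptic"

print(classify_pde(1, 0, -1))  # wave equation (C_tt - C_xx = 0) -> hyperbolic
print(classify_pde(1, 0, 0))   # diffusion equation (no C_yy)    -> parabolic
print(classify_pde(1, 0, 1))   # Laplace equation                -> elliptic
```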
For complicated boundary or initial conditions, one needs numerical methods to solve the problem. 8. Transport of Particles in Water Movement In addition to water movement, the transport of sediment, salt and pollutants leads to differential equations. The advection-diffusion process is a common process in the transport of sediment, salt and pollutants. Advection refers to the transport of contaminants by the moving water, where contaminants travel at the same velocity as the mass of water in which they are dissolved. Diffusion refers to the spreading of the contaminants in all directions due to turbulence and the non-uniform velocity distribution. In open channel flow, molecular diffusion is much lower than turbulent diffusion. The general differential form of the conservation of contaminant mass equation can be expressed as: $\frac{\partial C}{\partial t}+\frac{\partial {q}_{x}}{\partial x}+\frac{\partial {q}_{y}}{\partial y}+\frac{\partial {q}_{z}}{\partial z}=s$(80) where C is the mass concentration per control volume. In addition, Fick’s law defines the fluxes ( ${q}_{x},{q}_{y}$ and ${q}_{z}$ ) due to an advection and a diffusion process, which can be written as: ${q}_{x}=uC-{D}_{x}\frac{\partial C}{\partial x}$(81) ${q}_{y}=vC-{D}_{y}\frac{\partial C}{\partial y}$(82) ${q}_{z}=wC-{D}_{z}\frac{\partial C}{\partial z}$(83) Apart from general contaminant transport, the transport of some dissolved gases, along with their reaction rates, can also be part of hydraulics, which is very important for the supply and treatment of water. 9. Hydraulics Used in Water Supply and Treatment The concentration of a gas (such as O[2]) that can be present in solution is governed by: 1) the solubility of the gas as defined by Henry’s Law; 2) temperature; 3) the presence of impurities (such as salinity); and 4) the partial pressure of the gas in the atmosphere.
The saturation concentration of a gas dissolved in a liquid is a function of the partial pressure of the gas. Henry’s law states that ${P}_{g}=H\frac{{x}_{g}}{{P}_{T}}$, where P[g] and x[g] are the mole fractions of the gas in air and in the liquid (water), respectively, P[T] is the total pressure and H is Henry’s constant, which depends on the type of gas. For example, dissolved oxygen (DO) is measured in standard solution units such as mmol/L, mg/L, mL/L, or ppt. O[2] saturation is calculated as the percent of DO relative to a theoretical maximum concentration given the temperature, pressure, and salinity of the water. Well-aerated water will usually be 100% saturated. In general, lower temperatures, lower salinity, and higher atmospheric pressures lead to higher values of dissolved O[2]. Higher barometric pressures lead to higher values of saturated DO. The correction for barometric pressure may be written as $D{O}^{\prime }=D{O}_{0}\frac{P-u}{760-u}$, where P is the barometric pressure and u is the vapor pressure in mm (Hg). Microorganisms in water digest organic material as food in the presence of O[2], which leads to a quantity commonly known as the biochemical oxygen demand (BOD). The BOD of a water body depends on the microbial population dynamics, which include a lag phase, a constant growth phase, a stationary phase, and a decay phase. According to the popular mathematical model for the growth of BOD (i.e., the exponential model, where the BOD grows asymptotically to the so-called ultimate BOD of the water), BOD can be written as $BO{D}_{t}=BO{D}_{u}\left(1-{\text{e}}^{-kt}\right)$, where k is the reaction rate constant (day^−^1). The rate constant at temperature T is given by ${k}_{T}={k}_{20}{\left(1.047\right)}^{T-20}$.
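The exponential BOD model and the temperature correction for k can be sketched as follows; the ultimate BOD, rate constant and temperature below are illustrative sample values, not from the source.

```python
import math

def bod_at(t, bod_u, k):
    """Exponential BOD model: BOD_t = BOD_u * (1 - e^(-k t))."""
    return bod_u * (1.0 - math.exp(-k * t))

def k_at_temperature(k20, T):
    """Rate constant corrected from 20 deg C: k_T = k_20 * 1.047^(T - 20)."""
    return k20 * 1.047 ** (T - 20.0)

# Illustrative: ultimate BOD of 300 mg/L, k20 = 0.23 /day, stream at 25 deg C
k25 = k_at_temperature(0.23, 25.0)
print(bod_at(5.0, 300.0, k25))  # 5-day BOD at 25 deg C, below the ultimate BOD
```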
When a stream of untreated or partially treated sewage is discharged into a large body of water, the wastewater is diluted, and the ultimate BOD (i.e., ${L}_{o}$ ), DO and temperature ( ${T}_{o}$ ) of the river-wastewater mixture immediately after mixing are given by $\left({Q}_{w}{L}_{w}+{Q}_{r}{L}_{r}\right)/Q$, $\left({Q}_{w}D{O}_{w}+{Q}_{r}D{O}_{r}\right)/Q$ and $\left({Q}_{w}{T}_{w}+{Q}_{r}{T}_{r}\right)/Q$ respectively, where $Q={Q}_{w}+{Q}_{r}$, ${Q}_{w}$ = wastewater flow rate and ${Q}_{r}$ = river flow rate. The initial oxygen deficit after mixing is given by ${D}_{0}=D{O}_{sat}-D{O}_{0}$, where $D{O}_{0}$ is the dissolved oxygen immediately after mixing. The Streeter-Phelps equation (see Equation (84)) describes the DO “sag curve” obtained as a result of the simultaneous deoxygenation and reoxygenation that occur when wastewater is discharged into a receiving stream. According to the Streeter-Phelps model [5], the oxygen deficit after time t ( $t=0$ at the instant of mixing) is given by: $D=\frac{{k}_{d}{L}_{o}}{{k}_{r}-{k}_{d}}\left({\text{e}}^{-{k}_{d}t}-{\text{e}}^{-{k}_{r}t}\right)+{D}_{0}{\text{e}}^{-{k}_{r}t}$(84) where k[d] and k[r] are the deoxygenation (day^−^1) and reoxygenation (day^−^1) rates, respectively. The time at which the dissolved oxygen is critical is given by: ${t}_{c}=\frac{1}{{k}_{r}-{k}_{d}}\mathrm{ln}\left[\frac{{k}_{r}}{{k}_{d}}\left(1-\frac{{D}_{0}\left({k}_{r}-{k}_{d}\right)}{{k}_{d}{L}_{o}}\right)\right]$(85) At this time, the critical (maximum) O[2] deficit is given by: ${D}_{c}=\frac{{k}_{d}}{{k}_{r}}{L}_{o}{\text{e}}^{-{k}_{d}{t}_{c}}$(86) The distance from the point of mixing to the critical location is given by ${x}_{c}=v{t}_{c}$, where v = average stream velocity. In the treatment of wastewater, operations in which some transformation is effected through the use of chemical reactions are termed chemical unit operations. The rate at which a chemical reaction progresses depends on the nature of the reaction, and the hydraulics used to describe that nature is called reactor hydraulics. Chemical reactions are usually classified as zero-order, first-order, second-order and saturation reactions. Zero-order reactions may be expressed as $\frac{\text{d}C}{\text{d}t}=-k$, leading to ${C}_{t}={C}_{0}-kt$.
Similarly, a first-order reaction follows $\frac{\text{d}C}{\text{d}t}=-kC$, leading to ${C}_{t}={C}_{0}{\text{e}}^{-kt}$, and a second-order reaction follows $\frac{\text{d}C}{\text{d}t}=-k{C}^{2}$, leading to $\frac{1}{{C}_{t}}=\frac{1}{{C}_{0}}+kt$. A reaction is classified as a saturation reaction if the rate of reaction saturates as the reaction progresses. This may be expressed as $\frac{\text{d}C}{\text{d}t}=-\frac{kC}{a+C}$, leading to $a\mathrm{ln}\frac{{C}_{0}}{{C}_{t}}+{C}_{0}-{C}_{t}=kt$. The required hydraulic detention time for various orders of reaction is also depicted in Figure 5, which is very useful for water treatment reactor design. Figure 5. (a) Dissolved oxygen sag curve (Streeter-Phelps); (b) Hydraulic detention time for various orders of reaction. There are several further computational hydraulics applications that address water quality concerns [9]. 10. Hydraulics of Sediment Transport The sediment transport function is very crucial for predicting complex river [10] hydrodynamics and morphodynamics. For decades, various methods have been used to establish sediment transport functions or formulas. They often differ drastically from each other and from observations in the field. These formulas have been used to solve engineering and environmental problems. The sediment transport mechanics for cohesive and non-cohesive materials are different. Most sediment transport functions are proposed under the assumption of non-cohesive sediment. In this section, we will review the basic concepts and approaches used in the derivation of incipient motion criteria and sediment transport functions. Due to the stochastic nature of sediment movement, incipient motion is critical for the study of sediment transport, channel degradation and stable channel design. In other words, it is difficult to define precisely at what flow condition a sediment particle will begin to move; hence it depends more or less on the definition of incipient motion.
Significant progress has been made on the study of incipient motion, both theoretically and experimentally. Of all the approaches, the emphasis here is on the shear stress approach and the velocity approach. A sediment particle is at a state of incipient motion when one of the following conditions is satisfied: ${F}_{L}={W}_{S}$, ${F}_{D}={F}_{R}$ or ${M}_{O}={M}_{R}$, where ${M}_{O}$ is the overturning moment due to F[D] and F[L], and ${M}_{R}$ is the resisting moment due to W[S] (see Figure 6). One of the most prominent and commonly used incipient motion criteria is the Shields diagram (1936), based on the shear stress approach. Its central premise is that the shear stress $\tau$, the difference in density between sediment ( ${\rho }_{S}$ ) and water ( ${\rho }_{w}$ ), the diameter of the particle (d[s]), the kinematic viscosity ( $\nu$ ) and the gravitational acceleration can be grouped into two dimensionless quantities that provide a seminal plot for determining incipient motion (see Figure 6). $\frac{{d}_{s}{\left(\frac{{\tau }_{c}}{{\rho }_{w}}\right)}^{1/2}}{\nu }=\frac{{d}_{s}{U}_{*}}{\nu }$(87) $\frac{{\tau }_{c}}{{d}_{s}\left({\rho }_{S}-{\rho }_{w}\right)g}=\frac{{\tau }_{c}}{{d}_{s}\gamma \left[\left(\frac{{\rho }_{S}}{{\rho }_{w}}\right)-1\right]}$(88) Figure 6. Sediment transport modes and forces acting on a single spherical sediment particle along with the Shields and Hjulstrom diagrams [11]. where ${\tau }_{c}$ = critical shear stress at initial motion and ${U}_{*}$ = shear velocity. Rouse (1939), White (1940), Brooks (1955), Liu (1958), Yang (1973), Govers (1987), and Yalin and Karahan (1979) provided useful modifications of the Shields diagram for practical use. While Lane (1953) developed stable channel design curves for trapezoidal channels with different typical side slopes, more recently USBR (1987) synthesized stable channel design criteria based on the critical shear stress required to move sediment particles in channels under different flow and sediment conditions [11].
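The two dimensionless groups of Equations (87) and (88) can be computed as follows before reading incipient motion off the Shields diagram. The sediment density (quartz), water properties and critical shear stress below are illustrative assumptions.

```python
def shields_parameters(tau_c, d_s, rho_s=2650.0, rho_w=1000.0,
                       nu=1.0e-6, g=9.81):
    """Boundary Reynolds number U* d_s / nu (Eq. (87)) and dimensionless
    critical shear stress tau_c / (d_s (rho_s - rho_w) g) (Eq. (88)).
    SI units: tau_c in N/m^2, d_s in m."""
    u_star = (tau_c / rho_w) ** 0.5                 # shear velocity
    re_star = u_star * d_s / nu                     # boundary Reynolds number
    theta_c = tau_c / (d_s * (rho_s - rho_w) * g)   # Shields parameter
    return re_star, theta_c

# Illustrative: 1 mm quartz sand with a critical shear stress of 0.5 N/m^2
re_star, theta = shields_parameters(tau_c=0.5, d_s=0.001)
print(re_star, theta)
```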
On the other hand, Fortier and Scobey (1926) carried out an extensive field survey of the maximum permissible mean velocity values in canals of different materials. Hjulstrom (1935) carried out a detailed study of the movement of uniform materials at the bottom of channels, which provides a convenient diagram (see Figure 6) based on the velocity approach. The relationship between sediment size and average flow velocity for erosion, transport and sedimentation can be illustrated by the Hjulstrom curve. Yang (1973) and Vanoni (1977) further improved the velocity approach, which Govers (1987) and Talapatra and Ghosh (1983) tested experimentally [11]. Yang used theoretical fluid mechanics to calculate the critical velocity by providing equations for F[L], F[D], F[R], and W[S], which involve the fall velocity of the sediment particles. Sediment particle fall velocity is one of the crucial parameters used in most sediment transport functions or formulas. Different methods have been developed for the computation of sediment particle fall velocity. Rubey (1933) introduced the fall velocity formula ${\omega }_{S}=F\sqrt{{d}_{s}g\left(G-1\right)}$, where F is a function of particle diameter, the specific gravity of the sediment, the kinematic viscosity of water and g; for particles greater than 1 mm, F can be taken as 0.79 [11]. Yang and Simoes (2002) use a shape factor $SF=\frac{c}{\sqrt{ab}}$ for natural sand, where a, b and c are the lengths of the longest, the intermediate, and the shortest mutually perpendicular axes of the particle, respectively. In water treatment, fall velocity is commonly calculated based on the type of particle settling, which can be classified into four types: discrete particle settling, flocculent settling, hindered or zone settling, and compression settling. In addition, discrete particle fall velocity depends on whether the flow around the particle is in the laminar, transition or turbulent region, as defined in an earlier section.
Regime, regression, probabilistic, and deterministic approaches are the basic approaches used in the derivation of sediment transport functions or formulas. The concept of “regime” is similar to the concepts of “dynamic equilibrium” and “hydraulic geometry.” Various collections of regime equations have been formulated by investigators such as Blench (1969), Kennedy (1895) and Lacey (1929). Lacey’s (1929) regime equation describing the relationships among channel slope S, water discharge Q, and silt factor ${f}_{S}$ for sediment transport can be written as $S=0.0005423\frac{{f}_{S}^{5/3}}{{Q}^{1/6}}$. Leopold and Maddock’s (1953) hydraulic geometry relationships are commonly known as $W=a{Q}^{b}$, $y=c{Q}^{f}$ and $V=k{Q}^{m}$, where W = channel width, y = channel depth, V = average flow velocity, Q = water discharge and a, b, c, f, k, m are local constants. Yang et al. (1981) applied the unit stream power theory for sediment transport, with the hydraulic geometry relationship between Q and S as $S=i{Q}^{j}$, where i and j are constants. Shen and Hung (1972) and Karim and Kennedy (1990) proposed sediment transport functions adopting a regression approach, considering flow velocity, sediment discharge, bed-form geometry, and friction factor. The regression approach may provide fairly accurate results because sediment transport is such a complex phenomenon that no single hydraulic parameter or combination of parameters can be found to describe the sediment transport rate under all conditions. Einstein (1950) was a pioneer in sediment transport studies focused on a probabilistic approach, under the assumptions that the beginning and ceasing of sediment motion can be expressed in terms of probability and that the movement of bedload is a series of steps followed by rest periods. In addition, Einstein used a hiding correction factor and a lifting correction factor to best fit the theoretical result to the observed experimental data.
Although Einstein’s bedload function is not common in engineering applications due to its complex computational procedures, it has been used as a theoretical foundation for the formulation of other transport functions. For example, Colby and Hembree (1955) used modified Einstein methods for the computation of total bed-material load. On the other hand, the deterministic approach assumes the existence of a one-to-one relationship between independent and dependent variables. Commonly used independent variables are water discharge, average flow velocity, shear stress, and energy or water surface slope. More recently, the use of stream power and unit stream power has gained increasing attention, as they are important parameters for calculating sediment concentration. Bagnold (1966) introduced the stream power concept based on classical physics, while Engelund and Hansen (1972) and Ackers and White (1973) later used the concept as the theoretical basis for developing their sediment transport functions. Yang (1972) defines unit stream power as the rate of energy per unit weight of water available for transporting water and sediment in an open channel; for a reach of length x and total drop y, it can be written as $\frac{\text{d}y}{\text{d}t}=\frac{\text{d}x}{\text{d}t}\frac{\text{d}y}{\text{d}x}=VS$, where V = average flow velocity and S = energy or water surface slope. Besides that, Velikanov (1954) derived his transport function from the gravitational power theory, and Pacheco-Ceballos (1989) derived a sediment transport function based on the power balance between the total power available and the total power expenditure in a stream. This principle is also applicable to understanding the evolution of the natural channel network [12] [13] [14] [15] [16]. This section has comprehensively reviewed the basic approaches and theories used in the determination of non-cohesive sediment concentration.
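The regime and hydraulic-geometry relations of this section are simple power laws and can be sketched numerically. The coefficients below are hypothetical; note that continuity Q = WyV forces the exponents to satisfy b + f + m = 1 and the coefficients to satisfy ack = 1, which the example respects.

```python
def lacey_slope(Q, f_s):
    """Lacey regime slope: S = 0.0005423 * f_s^(5/3) / Q^(1/6)."""
    return 0.0005423 * f_s ** (5.0 / 3.0) / Q ** (1.0 / 6.0)

def hydraulic_geometry(Q, a, b, c, f, k, m):
    """Leopold-Maddock power laws: W = a Q^b, y = c Q^f, V = k Q^m."""
    return a * Q ** b, c * Q ** f, k * Q ** m

# Hypothetical coefficients chosen to satisfy the continuity constraints
W, y, V = hydraulic_geometry(100.0, a=4.0, b=0.5, c=0.5, f=0.4, k=0.5, m=0.1)
print(W * y * V)            # recovers Q = 100 because b+f+m = 1 and ack = 1
print(lacey_slope(100.0, 1.0))
```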
In the following section we will discuss some basic finite difference numerical techniques for solving the governing equations. Finite element and finite volume are two other numerical techniques, which will be covered in subsequent sections. 11. Finite Difference Method In finite difference methods (FDM), partial derivatives are replaced with finite difference approximations, which results in a set of algebraic equations. These algebraic equations can be solved explicitly or implicitly. FDM uses a structured grid and thus requires grid transformation in the multidimensional flow case. A typical finite difference grid is shown in Figure 7. The finite difference method is popular because of its simplicity in formulation and implementation. Based on different ways of discretization, different numerical schemes are developed. Some of them are shown below: In a forward time difference, any partial time derivative of a function C can be expressed as Equation (89): $\frac{\partial C}{\partial t}=\frac{{C}_{j}^{n+1}-{C}_{j}^{n}}{\Delta t}$(89) In a forward space difference, any partial space derivative of a function C can be expressed as Equation (90): $\frac{\partial C}{\partial x}=\frac{{C}_{j+1}^{n}-{C}_{j}^{n}}{\Delta x}$(90) In a backward space difference, any partial space derivative of a function C can be expressed as Equation (91): $\frac{\partial C}{\partial x}=\frac{{C}_{j}^{n}-{C}_{j-1}^{n}}{\Delta x}$(91) In a centered space difference, any partial space derivative of a function C can be expressed as Equation (92): $\frac{\partial C}{\partial x}=\frac{{C}_{j+1}^{n}-{C}_{j-1}^{n}}{2\Delta x}$(92) where $\Delta t$ and $\Delta x$ are the time and space discretizations, respectively. When space discretizations are done at the current time level, the scheme is known as an explicit scheme. For example, Equations (90) to (92) are examples of explicit space-discretized schemes.
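The forward time difference (Eq. (89)) and the backward space difference (Eq. (91)) combine into the classic explicit upwind scheme for pure advection, ∂C/∂t + u ∂C/∂x = 0. The grid, Courant number and initial pulse below are illustrative assumptions.

```python
def advect_upwind(C, u, dx, dt, n_steps):
    """Explicit FTBS (upwind) scheme for dC/dt + u dC/dx = 0 with u > 0 and
    periodic boundaries: C_j^{n+1} = C_j^n - (u dt/dx)(C_j^n - C_{j-1}^n)."""
    cr = u * dt / dx                     # Courant number, must satisfy cr <= 1
    assert 0 < cr <= 1, "unstable: reduce dt"
    n = len(C)
    for _ in range(n_steps):
        # C[j-1] wraps to the last cell at j = 0, giving periodic boundaries
        C = [C[j] - cr * (C[j] - C[j - 1]) for j in range(n)]
    return C

# Illustrative: a square pulse advected with u = 1 m/s, dx = 1 m, dt = 0.5 s
C0 = [1.0 if 10 <= j < 20 else 0.0 for j in range(100)]
C = advect_upwind(C0, u=1.0, dx=1.0, dt=0.5, n_steps=40)
print(sum(C))  # periodic upwind conserves total mass (here 10.0)
```

The scheme is only first-order accurate, so the pulse spreads (numerical diffusion) even though the total mass is conserved exactly.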
On the other hand, when the space discretization is written at the next time level, or using both the current and next time levels, the resulting scheme is known as an implicit scheme. For example, if Equation (91) is discretized as follows: $\frac{\partial C}{\partial x}=\frac{{C}_{j}^{n+1}-{C}_{j-1}^{n+1}}{\Delta x}$(93) $\frac{\partial C}{\partial x}=\frac{1}{2}\left(\frac{{C}_{j}^{n}-{C}_{j-1}^{n}}{\Delta x}+\frac{{C}_{j}^{n+1}-{C}_{j-1}^{n+1}}{\Delta x}\right)$(94) then both forms generate implicit schemes. An explicit scheme does not generate a system of equations and can therefore be solved directly: the unknown at any node can be calculated without solving for the other nodal values. On the contrary, an implicit scheme generates a system of equations, so the unknown at any node cannot be solved independently. Another popular implicit scheme is the box finite difference scheme, also known as the four-point implicit scheme. According to this scheme, the time derivative can be expressed as: $\frac{\partial C}{\partial t}=\frac{1}{2}\left(\frac{{C}_{j+1}^{n+1}-{C}_{j+1}^{n}}{\Delta t}+\frac{{C}_{j}^{n+1}-{C}_{j}^{n}}{\Delta t}\right)$(95) the space derivative can be expressed as: $\frac{\partial C}{\partial x}=\frac{1}{2}\left(\frac{{C}_{j+1}^{n+1}-{C}_{j}^{n+1}}{\Delta x}+\frac{{C}_{j+1}^{n}-{C}_{j}^{n}}{\Delta x}\right)$(96) and C itself is taken as an average of the values at the four surrounding grid points (Equation (97)). To improve accuracy, further schemes such as the Crank-Nicolson scheme, the Lax-Wendroff scheme and the MacCormack scheme were proposed to solve advection-diffusion processes. Prior to solving the advection-diffusion equation, or any governing equation, the initial condition of the system has to be specified. Solution of the governing equations also requires boundary conditions. The most common types of boundary conditions are the Dirichlet, Neumann, and Cauchy boundary conditions. 
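To make concrete why an implicit scheme couples the unknowns, the sketch below takes one backward-Euler step of a 1D diffusion equation: the next-time-level values appear on the left-hand side, producing a tridiagonal system, solved here with the Thomas algorithm. All names and values are illustrative, not from the paper:

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def implicit_diffusion_step(C, r):
    """One backward-Euler step of C_t = D C_xx with r = D*dt/dx^2 and
    fixed (Dirichlet) end values; every interior node couples to its
    neighbours at the NEW time level, hence the linear system."""
    n = len(C)
    a = [-r] * n
    b = [1 + 2 * r] * n
    c = [-r] * n
    a[0] = a[-1] = 0.0           # boundary rows are identity rows:
    c[0] = c[-1] = 0.0           # the end values simply stay fixed
    b[0] = b[-1] = 1.0
    return thomas_solve(a, b, c, list(C))
```

A uniform profile is unchanged by the step, and an isolated spike is smoothed out, which is the expected diffusive behavior.
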
The Dirichlet condition occurs when a portion of the boundary is held at a prescribed concentration (i.e., C specified); the Neumann condition occurs when a portion of the boundary has a specified flux crossing the boundary normally (i.e., $\frac{\partial C}{\partial n}$ specified); and the Cauchy condition is a mixed boundary condition representing a combination of the Dirichlet and Neumann conditions on the boundary curve. It is important to mention that numerical methods reformulate the governing equations in an approximate manner: continuous operations are replaced by discrete operations, which causes truncation error. In addition, the finite size of the discretized solution domain (i.e., the cells) causes round-off error (Figure 7). A model discretization and numerical solution should be consistent (i.e., correct), convergent (i.e., as $\Delta x\to 0$, $\epsilon \to 0$), and stable (i.e., the error should not grow). 12. Finite Element Method Another numerical technique used to obtain an approximate solution of a governing differential equation is the finite element method (FEM). It is relatively new to the field of computational hydraulics and is supported by rigorous mathematical theory. FEM is based on the variational form of the PDE, which is derived using the method of weighted residuals. In this method, under a given boundary condition, the PDE (i.e., the strong form) is used and residuals are determined for the assumed solution, because the assumed solution is not exact. The residual is then forced to zero at various points or over intervals using a weighted residual method. Three weighted residual methods are commonly used in FEM: collocation, subdomain, and Galerkin's method. Among them, Galerkin's method is the most popular: the integral of the residual R, multiplied by a weighting function w, is set to zero. The weighting functions are chosen to have the same form as each part of the approximate solution. 
The residual $R\left(u\right)$ for a given PDE equals 0 for the true solution; for an approximate solution ${u}_{h}\approx u$, the non-zero residual measures the accuracy. Here, we require that the weighted residual equal zero: Figure 7. (a) Schematic 1D space and time discretization; (b) Computational error as a function of $\Delta x$. ${\int }_{\Omega }\text{ }wR\left(u\right)\text{d}x=0;\forall w\in W$(98) and ${u}_{h}$, a linear combination of basis functions, can be expressed as ${u}_{h}\left(x,t\right)=\underset{j}{\sum }\text{ }{u}_{j}\left(t\right){\varphi }_{j}\left(x\right)$(99) This eventually transforms the governing equation into a system of linear equations ( $Ax=b$ ), which can be solved using matrix operations. The FEM can be explained in three phases: preprocessing, processing/solution and post-processing. In the preprocessing phase, the model geometry is discretized, or subdivided, into a finite number of smaller pieces; each piece is called an element, and the intersection points are called nodes. The process of dividing the geometry is called meshing, which is one of the key steps of the FEM. After that, one element is considered at a time and a shape function is assumed that depicts the physical behavior of the element. The stiffness matrix of each element is then developed, and the element matrices are assembled into a global matrix system for the entire geometry. The required boundary and initial conditions are applied to the global matrix system. In the solution phase, the primary unknowns are computed from the global system of linear equations. Finally, in the post-processing phase, other derived variables are determined from the nodal values of the primary unknowns. For example, SLIM (the Second-generation Louvain-la-Neuve Ice-ocean Model) is a hydrodynamic numerical model based on the finite element method (FEM). 
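The assembly-and-solve workflow just described can be sketched end to end for the simplest possible case: a 1D Poisson problem $-u''=1$ on $[0,1]$ with $u(0)=u(1)=0$, linear "hat" basis functions, and Galerkin weighting. This is a generic textbook sketch, not the paper's code or SLIM:

```python
def fem_poisson_1d(n_elems):
    """Galerkin FEM for -u'' = 1 on [0,1], u(0)=u(1)=0, linear elements.
    Returns nodal values at the interior nodes x = h, 2h, ..."""
    h = 1.0 / n_elems
    n = n_elems - 1                          # interior (free) nodes
    K = [[0.0] * n for _ in range(n)]        # global stiffness matrix
    F = [0.0] * n                            # global load vector
    for e in range(n_elems):                 # element e spans nodes e, e+1
        ke = [[1 / h, -1 / h], [-1 / h, 1 / h]]   # element stiffness
        fe = [h / 2, h / 2]                       # element load for f = 1
        for a_ in range(2):
            A = e + a_ - 1                   # map to interior index
            if not 0 <= A < n:
                continue                     # skip Dirichlet boundary nodes
            F[A] += fe[a_]
            for b_ in range(2):
                B = e + b_ - 1
                if 0 <= B < n:
                    K[A][B] += ke[a_][b_]
    # Solve K u = F by Gaussian elimination (K is SPD, no pivoting needed).
    for i in range(n):
        for j in range(i + 1, n):
            m = K[j][i] / K[i][i]
            for k in range(i, n):
                K[j][k] -= m * K[i][k]
            F[j] -= m * F[i]
    u = [0.0] * n
    for i in range(n - 1, -1, -1):
        u[i] = (F[i] - sum(K[i][j] * u[j] for j in range(i + 1, n))) / K[i][i]
    return u
```

For this 1D problem the linear-element Galerkin solution is exact at the nodes ($u(x)=x(1-x)/2$), which makes the sketch easy to verify.
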
A special advantage of the finite element method is that it can be used with unstructured grids. The computational grid can be refined arbitrarily in areas of interest, focusing the computational power where it is needed, so the model needs no nested grids. A single model is able to resolve both large-scale features, such as the open sea, and small-scale phenomena in rivers, coasts and estuaries. SLIM consists of a 1D river model, a 2D depth-averaged model and a 3D model. The 1D river model consists of a linear river with variable width and cross-section; in the 2D model the domain is divided into triangular elements, allowing accurate representation of complex geometry; and the 3D model uses triangular prismatic elements formed by extending the 2D mesh vertically. 13. Finite Volume Method Beyond computational hydraulics, the majority of computational fluid dynamics (CFD) codes are based on the finite volume method (FVM). This method is based on the integral form of the PDE, where the governing equation is integrated over a control volume. For example, the integral form of the 1D advection-diffusion conservation law can be written as: ${\int }_{v}\frac{\partial C}{\partial t}\text{d}v=-{\int }_{v}\text{ }u\frac{\partial C}{\partial x}\text{d}v+{\int }_{v}\text{ }{D}_{f}\frac{{\partial }^{2}C}{\partial {x}^{2}}\text{d}v$(100) The integral form is based on a control volume v, which can be cell-centered or vertex-centered, as shown in Figure 8 ((a) cell-centered and (b) vertex-centered finite volume mesh, along with (c) the midpoint approximation rule). Typically, the spatial model domain is divided into discrete control volumes. An assumption is then made on how the value of the integral changes within the control volume v; usually the average value is taken at the center of the control volume. 
${C}_{i}\left(t\right)=\frac{1}{|{V}_{i}|}{\int }_{{V}_{i}}\text{ }C\left(x,t\right)\text{d}x$(101) A suitable time integration scheme is selected to solve the integral in Equation (101); the midpoint rule, the trapezoidal rule and Simpson's rule are the most common approximations of integrals. In this method, all fluxes are usually converted to surface integrals. This is an exact procedure because the volumes are defined as polygons or polyhedra. 14. Concluding Remarks Hydraulics is a prerequisite for those interested in a career in water resources engineering. The fundamental principles of hydraulics, the computational aspects of hydraulic system analysis and design, and various engineering applications of the concepts are discussed in this manuscript. This manuscript can assist in developing an appropriate learning environment for comprehending and conducting pipe and open channel flow research. The author wishes to express his gratitude to the editor and anonymous reviewers for their constructive suggestions and comments, which substantially improved the manuscript’s presentation and
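The cell-average update of Equation (101) can be illustrated for the advection part of Equation (100): each cell average changes by the net flux through its faces. The first-order upwind flux and the periodic domain below are illustrative choices, not from the paper:

```python
def fv_upwind_step(C, u, dx, dt):
    """One explicit finite volume step for dC_i/dt = -(F_out - F_in)/dx
    on a periodic 1D domain, with first-order upwind face fluxes
    (assumes u > 0, so the flux leaving cell i is u*C[i])."""
    n = len(C)
    flux = [u * C[i] for i in range(n)]      # flux through face i+1/2
    # Python's negative indexing (flux[-1]) supplies the periodic wrap.
    return [C[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]

C0 = [0.0, 0.0, 1.0, 0.0, 0.0]
C1 = fv_upwind_step(C0, u=1.0, dx=1.0, dt=0.5)   # Courant number 0.5
```

Because every face flux leaves one cell and enters its neighbour, the total amount of C is conserved exactly, which is the defining property of the finite volume method.
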
Model checking - Wikipedia Elevator control software can be model-checked to verify both safety properties, like "The cabin never moves with its door open",^[1] and liveness properties, like "Whenever the n^th floor's call button is pressed, the cabin will eventually stop at the n^th floor and open the door". In computer science, model checking or property checking is a method for checking whether a finite-state model of a system meets a given specification (also known as correctness). This is typically associated with hardware or software systems, where the specification contains liveness requirements (such as avoidance of livelock) as well as safety requirements (such as avoidance of states representing a system crash). In order to solve such a problem algorithmically, both the model of the system and its specification are formulated in some precise mathematical language. To this end, the problem is formulated as a task in logic, namely to check whether a structure satisfies a given logical formula. This general concept applies to many kinds of logic and many kinds of structures. A simple model-checking problem consists of verifying whether a formula in the propositional logic is satisfied by a given structure. Property checking is used for verification when two descriptions are not equivalent. During refinement, the specification is complemented with details that are unnecessary in the higher-level specification. There is no need to verify the newly introduced properties against the original specification since this is not possible. Therefore, the strict bi-directional equivalence check is relaxed to a one-way property check. The implementation or design is regarded as a model of the system, whereas the specifications are properties that the model must satisfy.^[2] An important class of model-checking methods has been developed for checking models of hardware and software designs where the specification is given by a temporal logic formula. 
Pioneering work in temporal logic specification was done by Amir Pnueli, who received the 1996 Turing Award for "seminal work introducing temporal logic into computing science".^[3] Model checking began with the pioneering work of E. M. Clarke and E. A. Emerson^[4]^[5]^[6] and of J. P. Queille and J. Sifakis.^[7] Clarke, Emerson, and Sifakis shared the 2007 Turing Award for their seminal work founding and developing the field of model checking.^[8]^[9] Model checking is most often applied to hardware designs. For software, because of undecidability (see computability theory) the approach cannot be fully algorithmic, apply to all systems, and always give an answer; in the general case, it may fail to prove or disprove a given property. In embedded-systems hardware, it is possible to validate a specification delivered, e.g., by means of UML activity diagrams^[10] or control-interpreted Petri nets.^[11] The structure is usually given as a source code description in an industrial hardware description language or a special-purpose language. Such a program corresponds to a finite state machine (FSM), i.e., a directed graph consisting of nodes (or vertices) and edges. A set of atomic propositions is associated with each node, typically stating which memory elements are one. The nodes represent states of a system, the edges represent possible transitions that may alter the state, while the atomic propositions represent the basic properties that hold at a point of execution.^[12] Formally, the problem can be stated as follows: given a desired property, expressed as a temporal logic formula ${\displaystyle p}$, and a structure ${\displaystyle M}$ with initial state ${\displaystyle s}$, decide if ${\displaystyle M,s\models p}$. If ${\displaystyle M}$ is finite, as it is in hardware, model checking reduces to a graph search. 
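That reduction can be sketched in a few lines: checking a safety property ("no bad state is reachable") on a finite structure is a reachability search over its transition graph. The toy elevator FSM and its state names below are invented for illustration, not taken from the article:

```python
from collections import deque

def violates_safety(initial, transitions, bad_states):
    """Breadth-first search over an FSM given as an adjacency dict;
    returns True iff some state in bad_states is reachable."""
    seen, queue = {initial}, deque([initial])
    while queue:
        s = queue.popleft()
        if s in bad_states:
            return True
        for t in transitions.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return False

# Toy model: the unsafe state "moving_open" (cabin moves with door open)
# is never reachable, so the safety property holds.
fsm = {
    "idle_closed": ["moving_closed", "idle_open"],
    "idle_open": ["idle_closed"],
    "moving_closed": ["idle_closed"],
}
safe = not violates_safety("idle_closed", fsm, {"moving_open"})
```

Real model checkers avoid enumerating states one by one, which is exactly what the symbolic techniques in the next section address.
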
Symbolic model checking Instead of enumerating reachable states one at a time, the state space can sometimes be traversed more efficiently by considering large numbers of states at a single step. When such state-space traversal is based on representations of a set of states and transition relations as logical formulas, binary decision diagrams (BDD) or other related data structures, the model-checking method is symbolic. Historically, the first symbolic methods used BDDs. After the success of propositional satisfiability in solving the planning problem in artificial intelligence (see satplan) in 1996, the same approach was generalized to model checking for linear temporal logic (LTL): the planning problem corresponds to model checking for safety properties. This method is known as bounded model checking.^[13] The success of Boolean satisfiability solvers in bounded model checking led to the widespread use of satisfiability solvers in symbolic model checking.^[14] One example of such a system requirement: Between the time an elevator is called at a floor and the time it opens its doors at that floor, the elevator can arrive at that floor at most twice. 
The authors of "Patterns in Property Specification for Finite-State Verification" translate this requirement into the following LTL formula:^[15] {\displaystyle {\begin{aligned}\Box {\Big (}({\texttt {call}}\land \Diamond {\texttt {open}})\to &{\big (}(\lnot {\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (({\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor ((\lnot {\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (({\texttt {atfloor}}\land \lnot {\texttt {open}})~{\mathcal {U}}\\&({\texttt {open}}\lor (\lnot {\texttt {atfloor}}~{\mathcal {U}}~{\texttt {open}})))))))){\big )}{\Big )}\end{aligned}}} Here, ${\displaystyle \Box }$ should be read as "always", ${\displaystyle \Diamond }$ as "eventually", ${\displaystyle {\mathcal {U}}}$ as "until" and the other symbols are standard logical symbols, ${\displaystyle \lor }$ for "or", ${\displaystyle \land }$ for "and" and ${\displaystyle \lnot }$ for "not". Model-checking tools face a combinatorial blow up of the state-space, commonly known as the state explosion problem, that must be addressed to solve most real-world problems. There are several approaches to combat this problem. 1. Symbolic algorithms avoid ever explicitly constructing the graph for the finite state machines (FSM); instead, they represent the graph implicitly using a formula in quantified propositional logic. The use of binary decision diagrams (BDDs) was made popular by the work of Ken McMillan,^[16] as well as of Olivier Coudert and Jean-Christophe Madre,^[17] and the development of open-source BDD manipulation libraries such as CUDD^[18] and BuDDy.^[19] 2. Bounded model-checking algorithms unroll the FSM for a fixed number of steps, ${\displaystyle k}$, and check whether a property violation can occur in ${\displaystyle k}$ or fewer steps. This typically involves encoding the restricted model as an instance of SAT. 
The process can be repeated with larger and larger values of ${\displaystyle k}$ until all possible violations have been ruled out (cf. iterative deepening depth-first search). 3. Abstraction attempts to prove properties of a system by first simplifying it. The simplified system usually does not satisfy exactly the same properties as the original one, so a process of refinement may be necessary. Generally, one requires the abstraction to be sound (the properties proved on the abstraction are true of the original system); however, sometimes the abstraction is not complete (not all true properties of the original system are true of the abstraction). An example of abstraction is to ignore the values of non-Boolean variables and to only consider Boolean variables and the control flow of the program; such an abstraction, though it may appear coarse, may, in fact, be sufficient to prove e.g. properties of mutual exclusion. 4. Counterexample-guided abstraction refinement (CEGAR) begins checking with a coarse (i.e. imprecise) abstraction and iteratively refines it. When a violation (i.e. counterexample) is found, the tool analyzes it for feasibility (i.e., is the violation genuine or the result of an incomplete abstraction?). If the violation is feasible, it is reported to the user. If it is not, the proof of infeasibility is used to refine the abstraction and checking begins again.^[20] Model-checking tools were initially developed to reason about the logical correctness of discrete state systems, but have since been extended to deal with real-time and limited forms of hybrid systems. Model checking is also studied in the field of computational complexity theory. Specifically, a first-order logical formula is fixed without free variables and the following decision problem is considered: given a finite interpretation, for instance one described as a relational database, decide whether the interpretation is a model of the formula. This problem is in the circuit class AC^0. 
It is tractable when imposing some restrictions on the input structure: for instance, requiring that it has treewidth bounded by a constant (which more generally implies the tractability of model checking for monadic second-order logic), bounding the degree of every domain element, and more general conditions such as bounded expansion, locally bounded expansion, and nowhere-dense structures.^[21] These results have been extended to the task of enumerating all solutions to a first-order formula with free variables. Here is a list of significant model-checking tools:
Can you explain the Multiplier system for Athlete Items? One of the unique aspects of Athlete Items is their Multipliers. Each Athlete Item has a Multiplier that ranges in value from 1.0x to 1.5x, growing in 0.1x increments (1.0x, 1.1x, 1.2x, 1.3x, 1.4x, 1.5x). • Multiplier Scoring Example: A running back (RB) scored 10 fantasy points. Below is the scoring chart for that specific Item based on each Multiplier Level - Point Totals are equal to Points times Multiplier (Points x Multiplier): □ 10 points (1.0x Multiplier) = 10 points total □ 10 points (1.1x Multiplier) = 11 points total □ 10 points (1.2x Multiplier) = 12 points total □ 10 points (1.3x Multiplier) = 13 points total □ 10 points (1.4x Multiplier) = 14 points total □ 10 points (1.5x Multiplier) = 15 points total Users can also Boost the Multiplier of a specific Athlete Item. If a User applies a Boost to an Item, it will increase the Multiplier of that Item by one level. Users can attain Boosts by leveling up their Franchise Pass, or through Packs themselves. You can find out how many Boosts you have by clicking on Inventory, and clicking on Boosts in the top menu. Boosts can only be applied to an Item with the same Multiplier as the Boost itself. For example, if a User has a 1.1x Boost, that means they can only apply it to a 1.1x Item. That would Boost the Multiplier of that specific Item up one level to 1.2x. Once a Boost is applied, the Item is permanently Boosted thereafter. Below are the Boost Items with their corresponding leveling system: • Boost Leveling System: □ 1.0x Boost –> Boosts a 1.0x Multiplier to a 1.1x Multiplier □ 1.1x Boost –> Boosts a 1.1x Multiplier to a 1.2x Multiplier □ 1.2x Boost –> Boosts a 1.2x Multiplier to a 1.3x Multiplier □ 1.3x Boost –> Boosts a 1.3x Multiplier to a 1.4x Multiplier □ 1.4x Boost –> Boosts a 1.4x Multiplier to a 1.5x Multiplier
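The scoring chart and boost ladder above reduce to two small rules, sketched here in Python (function names are illustrative, not part of the product):

```python
MULTIPLIER_LEVELS = [1.0, 1.1, 1.2, 1.3, 1.4, 1.5]

def total_points(points, multiplier):
    """Point Total = Points x Multiplier, per the scoring chart."""
    return points * multiplier

def apply_boost(multiplier):
    """A Boost raises the Multiplier one 0.1x level; 1.5x is the cap."""
    i = MULTIPLIER_LEVELS.index(multiplier)
    return MULTIPLIER_LEVELS[min(i + 1, len(MULTIPLIER_LEVELS) - 1)]

score = total_points(10, 1.3)        # 13 points (up to float rounding)
boosted = apply_boost(1.1)           # a 1.1x Boost on a 1.1x Item -> 1.2x
```
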
Mismatch Loss, Etc. Click here to go to our VSWR page Click here to learn about our S-parameter utility spreadsheet that you can download On this page we will discuss mismatch loss, as well as "loss factor" (as described in a recent trade journal article), and a new concept, "efficiency factor". Mismatch loss Updated February 2022... the mismatch formula below was incorrect, for many years. Previously we had (1-Γ^2), when it should have been (1-|Γ|^2). Thanks to alert reader Tom. Note that the absolute value of Γ is typically assigned to the Greek letter rho (ρ), which leads us to a simpler, alternative formula. Mismatch loss is the ratio of power delivered to power available, and is a simple function of reflection coefficient. The formula for mismatch loss is simply: mismatch loss=(1-|Γ|^2) mismatch loss=1-ρ^2 By the way, here is a page that summarizes Greek letter usage in Microwave Engineering. We've done the heavy lifting for you on mismatch loss, and already put it into a calculator, check it out! Loss factor This concept is described in a recent Microwave Journal article entitled "Automation and Real-time Verification of Passive Component S-parameter Measurements Using Loss Factor Calculations", by J. Capwell, T. Weller, D. Markell and L. Dunleavy of Modelithics Inc. You can google to it if you like, but to get the full text you might have to join the Microwave Journal web site, which is annoying. You can always read the google "cached" version of the article, that's what we did. They present the concept of "loss factor" as something that will help you determine if an S-parameter measurement of a passive device is good. 
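The mismatch loss formula above is a one-liner in code; expressed in dB it is 10·log10(1 − |Γ|²). This sketch (our own, not the site's calculator) checks a familiar data point, |Γ| = 1/3 (a 2:1 VSWR):

```python
import math

def mismatch_loss_db(gamma_mag):
    """Mismatch loss in dB from |Gamma| (i.e. rho): 10*log10(1 - rho^2)."""
    return 10 * math.log10(1 - gamma_mag ** 2)

ml = mismatch_loss_db(1 / 3)   # 2:1 VSWR -> about half a dB of mismatch loss
```
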
Quoting the article: "The forward and reverse loss factors are calculated from passive component S-parameters as Forward Loss Factor (FLF) = 1 - |S11|^2 - |S21|^2 Reverse Loss Factor (RLF) = 1 - |S22|^2 - |S12|^2 By these definitions, the loss factors are seen to equal the difference between a normalized input power and the power that is reflected and transmitted to the input and output ports, respectively. (Power loss can occur due to conductor, dielectric and radiation loss mechanisms.) For reciprocal devices S21 = S12, the differences between the forward and reverse loss factor occur due to differences in |S11| and |S22|. For ideal lossless components, the magnitudes of S11 and S22 are equal, but they can deviate from one another whenever loss is present. For passive components such as capacitors, inductors and resistors (and diodes), the electrical behavior is most often symmetrical (S11 = S22); thus, the difference in the forward and reverse loss should be negligible. Significant deviations in the forward and reverse loss can be observed when a component begins to radiate, a commonly observed phenomenon, particularly for inductors. However, the main objective of this article is to illustrate how real-time monitoring of loss factor behavior becomes a useful tool for detecting measurement inconsistencies when, all things functioning properly, the component should exhibit symmetrical characteristics." We agree that loss factor is one indication of how much power is "disappearing" in a network, through resistive loss or radiation. Efficiency factor This is a Microwaves101 concept, which we believe is a more useful quantity than "loss factor" for evaluating passive parts. We define it as: Forward efficiency factor = |S11|^2 +|S21|^2 Reverse efficiency factor = |S22|^2 +|S12|^2 Why is this better? It all makes sense when converted to decibels. A "perfect" circuit has an efficiency factor of 0 dB. 
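Both quantities follow straight from the definitions above; here is a small sketch computing the forward loss factor and the forward efficiency factor in dB, with an illustrative network that dissipates 20 percent of its incident power:

```python
import math

def forward_loss_factor(s11, s21):
    """FLF = 1 - |S11|^2 - |S21|^2: the fraction of input power that
    disappears (conductor, dielectric, radiation loss)."""
    return 1 - abs(s11) ** 2 - abs(s21) ** 2

def forward_efficiency_db(s11, s21):
    """Efficiency factor |S11|^2 + |S21|^2 in dB; 0 dB means lossless."""
    return 10 * math.log10(abs(s11) ** 2 + abs(s21) ** 2)

# Well-matched network that loses 20% of its power: |S11| = 0, |S21|^2 = 0.8
flf = forward_loss_factor(0.0, math.sqrt(0.8))     # 0.2
eff = forward_efficiency_db(0.0, math.sqrt(0.8))   # roughly -1 dB
```
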
A circuit that loses 20 percent of its power has an efficiency factor of roughly -1 dB, etc. Now you have a measurement of how "lossless" a circuit would be if you were able to perfectly impedance match it. This is quite useful when you are designing low-loss networks such as switches. Of course, Microwaves101 ain't making the big bucks like Modelithics is, so you have to factor that in when you read this page... Check out our S-parameter Utility spreadsheet page; it has an example where mismatch loss, loss factor and efficiency factor are plotted and analyzed.
1. L RC Circuit. An AC circuit has an inductance L = 5 mH, a resistance of... Answer #1 [The posted answer is a scan of handwritten work and is not legible in this transcription.] Similar Homework Help Questions • A coil with an inductance of 8.6 mH is in a circuit with an AC frequency... A coil with an inductance of 8.6 mH is in a circuit with an AC frequency of 6047 Hz. What is the inductive reactance in ohms? Give your answer to 2 sf. • An ac circuit with a 60 μF capacitor in series with a coil of resistance 18Ω... An ac circuit with a 60 μF capacitor in series with a coil of resistance 18Ω and inductance 180mH connected to a 100V, 100 Hz supply is shown below. Calculate inductive reactance capacitive reactance circuit impedance and phase angle θ circuit current I phasor voltages VR, VL, VC and VS resonance circuit frequency construct a fully labeled voltage phasor diagram • A series AC circuit contains a resistor, an inductor of 200 mH, a capacitor of 4.30... A series AC circuit contains a resistor, an inductor of 200 mH, a capacitor of 4.30 µF, and a source with ΔVmax = 240 V operating at 50.0 Hz. The maximum current in the circuit is 180 mA. (a) Calculate the inductive reactance. Ω (b) Calculate the capacitive reactance. Ω (c) Calculate the impedance. kΩ (d) Calculate the resistance in the circuit. kΩ (e) Calculate the phase angle between the current and the source voltage. ° • An RLC circuit has a resistance of 12.7 Ω, an inductance of 15.9 mH, and a... An RLC circuit has a resistance of 12.7 Ω, an inductance of 15.9 mH, and a capacitance of 353.0 μF. By what factor does the impedance of this circuit change when the frequency at which it is driven changes from 60 Hz to 120 Hz? Does the impedance increase or decrease?
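All of these questions use the same three series-RLC relations: X_L = 2πfL, X_C = 1/(2πfC), and Z = sqrt(R² + (X_L − X_C)²). A quick sketch, applied to the component values in the last question above:

```python
import math

def inductive_reactance(f, L):
    """X_L = 2*pi*f*L (ohms)."""
    return 2 * math.pi * f * L

def capacitive_reactance(f, C):
    """X_C = 1 / (2*pi*f*C) (ohms)."""
    return 1 / (2 * math.pi * f * C)

def series_impedance(R, XL, XC):
    """|Z| = sqrt(R^2 + (X_L - X_C)^2) for a series RLC circuit."""
    return math.sqrt(R ** 2 + (XL - XC) ** 2)

# R = 12.7 ohm, L = 15.9 mH, C = 353.0 uF at f = 60 Hz:
XL = inductive_reactance(60, 15.9e-3)      # about 6.0 ohm
XC = capacitive_reactance(60, 353e-6)      # about 7.5 ohm
Z = series_impedance(12.7, XL, XC)         # close to R, since XL ~ XC
```
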
Research Guides: Mathematics: Geometry Geometry for Dummies by Hit the geometry wall? Get up and running with this no-nonsense guide! Does the thought of geometry make you jittery? You're not alone. Fortunately, this down-to-earth guide helps you approach it from a new angle, making it easier than ever to conquer your fears and score your highest in geometry. From getting started with geometry basics to making friends with lines and angles, you'll be proving triangles congruent, calculating circumference, using formulas, and serving up pi in no time. Geometry is a subject full of mathematical richness and beauty. But it's a subject that bewilders many students because it's so unlike the math they've done before--it requires the use of deductive logic in formal proofs. If you're having a hard time wrapping your mind around what that even means, you've come to the right place! Inside, you'll find out how a proof's chain of logic works and even discover some secrets for getting past rough spots along the way. You don't have to be a math genius to grasp geometry, and this book helps you get un-stumped in a hurry! Find out how to decode complex geometry proofs Learn to reason deductively and inductively Make sense of angles, arcs, area, and more Improve your chances of scoring higher in your geometry class There's no reason to let your nerves get jangled over geometry--your understanding will take new shape with the help of Geometry For Dummies. Call Number: QA459 .R8882 2016
How to Calculate Profit and Loss (P&Ls) for Your Trading Strategies (Hint: it’s not as easy as you think) Quantitative Trading Toolbox | Python for Financial Analysis Quantitative Trading Like a Pro: Essential Python Course Calculating Profit and Loss (P&L) for your trading strategy can be surprisingly tricky at times. Traders are often surprised when their calculations of realised and unrealised P&Ls don’t match up. Today let’s see how it can be done by going through a Moving Average (MA) Crossover strategy. To start with, we create an arbitrary price series from normally distributed random numbers. Of course, you could also use market data from your favourite instrument; it doesn’t matter so much here, as we are focussed on calculating strategy P&L curves. Backtesting for the MA Crossover Strategy In the next step we create a backtest. In this backtest, we loop through our price data, calculate two moving averages, and whenever the two moving averages cross we change from short to long and vice versa
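The workflow described can be sketched end to end. This is our own minimal rendering under stated assumptions, not the article's code: the window lengths, seed, and the convention that P&L accrues from the position held over each price change (set from crossovers known *before* that change, to avoid look-ahead) are all illustrative choices:

```python
import random

# 1. Arbitrary price series from normally distributed increments.
random.seed(1)
prices = [100.0]
for _ in range(499):
    prices.append(prices[-1] + random.gauss(0, 1))

def sma(series, n, i):
    """Simple moving average of the n values ending at index i."""
    return sum(series[i - n + 1:i + 1]) / n

# 2. MA crossover backtest: long when fast MA > slow MA, else short.
fast, slow = 5, 20
position, pnl = 0, [0.0]
for i in range(slow, len(prices)):
    # P&L from the position carried into this bar...
    pnl.append(pnl[-1] + position * (prices[i] - prices[i - 1]))
    # ...then update the position from the MAs observable at bar i.
    position = 1 if sma(prices, fast, i) > sma(prices, slow, i) else -1
```

The subtlety the article's title hints at lives in that ordering: accrue first, then reposition. Swapping the two lines silently introduces look-ahead bias.
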
Inequalities Investigation | Math = Love Inequalities Investigation This inequalities investigation was inspired by an activity from Discovering Algebra called “Toe the Line.” The goal of this activity is to remind students that we need to flip the inequality symbol whenever we multiply or divide both sides of the inequality by a negative number.
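The rule the activity targets can be demonstrated with one concrete case:

```python
# Multiplying both sides of an inequality by a negative number
# reverses its direction: from a < b we get -3a > -3b.
a, b = 2, 5
assert a < b            # 2 < 5
assert -3 * a > -3 * b  # -6 > -15: the symbol has flipped
flipped = (-3 * a > -3 * b)
```
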
Binary Search in Pseudocode - PseudoEditor Writing a Binary Search in Pseudocode Binary search is a commonly used algorithm in computer science that involves searching for a specific value in a sorted list or array. It is an efficient way to search for a value as it can quickly narrow down the search space by dividing it in half with each comparison. However, implementing binary search can be challenging, especially for beginners. One way to make the implementation of binary search easier is to use pseudocode. Pseudocode is a high-level description of a program that uses a mixture of natural language and programming language syntax to describe the steps involved in a particular algorithm. By using pseudocode, we can focus on the logic of the algorithm without worrying about the specifics of the programming language. In the case of binary search, pseudocode can help us to understand the steps involved in the algorithm and how to implement them in code. We follow the pseudocode standard set by the main computer science exam board in the UK, AQA, allowing a universal set of functions and operators to be used. This makes it much easier for programmers to understand each other's pseudocode! Understanding Binary Search • Binary search is a search algorithm that is used to find a specific value in a sorted list or array. • It is a very efficient algorithm that can quickly find the target value with a small number of comparisons. To perform a binary search, we first need to have a sorted list or array. We start by comparing the target value with the middle element of the list. If the target value is equal to the middle element, we return the index of the middle element. If the target value is greater than the middle element, we search the right half of the list. Otherwise, we search the left half of the list. We repeat this process until we find the target value or determine that it is not in the list. 
A Pseudocode Binary Search The pseudocode for binary search can be written as follows: First of all, we need to create a data set to search. This can be done by just declaring an array and a subroutine in our pseudocode. Explaining the Pseudocode In this pseudocode, we initialize the left and right indices to the first and last elements of the list, respectively. We then enter a loop that continues until the left index is greater than the right index. Inside the loop, we calculate the middle index and compare the middle element to the target value. If the middle element is equal to the target value, we return the middle index. Otherwise, we update the left or right index based on whether the target value is greater or less than the middle element. If we exit the loop without finding the target value, we return -1. Overall, binary search is a powerful algorithm that can quickly find a specific value in a sorted list or array. With the pseudocode provided, you should be able to implement binary search in your own programs. Variables Used We need to use an array or list of sorted elements, a target value we want to find in the array, and two pointers to keep track of the left and right boundaries of our search range. Main Algorithm Now that we have our variables defined, let's take a look at the pseudocode for the binary search algorithm.
• Set the left pointer to the first element of the array and the right pointer to the last element.
• While the left pointer is less than or equal to the right pointer:
• Calculate the middle index as the average of the left and right pointers.
• If the middle element is equal to the target value, return its index.
• If the middle element is less than the target value, set the left pointer to the middle index + 1.
• If the middle element is greater than the target value, set the right pointer to the middle index - 1.
• If the target value is not found in the array, return -1.
Try our pseudocode editor.
Give our pseudocode editor a go today for free - with a built-in compiler, tools to convert pseudocode to code, and project saving, PseudoEditor makes writing pseudocode easier than ever! In summary, the binary search algorithm works by repeatedly dividing the search range in half until the target value is found or the search range is empty. This makes it a very efficient algorithm for finding elements in a sorted array.
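The numbered steps above translate almost line-for-line into Python (the page’s own pseudocode listing is not reproduced here, so this is one possible rendering):

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    left = 0
    right = len(items) - 1
    while left <= right:
        middle = (left + right) // 2   # average of the two pointers
        if items[middle] == target:
            return middle              # found: return its index
        elif items[middle] < target:
            left = middle + 1          # search the right half
        else:
            right = middle - 1         # search the left half
    return -1                          # target not in the list

print(binary_search([2, 5, 8, 12, 16, 23, 38], 23))  # → 5
print(binary_search([2, 5, 8, 12, 16, 23, 38], 7))   # → -1
```

Each iteration halves the search range, which is why binary search takes at most about log2(n) comparisons on a list of n elements.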
{"url":"https://pseudoeditor.com/guides/binary-search","timestamp":"2024-11-12T09:00:26Z","content_type":"text/html","content_length":"83450","record_id":"<urn:uuid:fbab9dc0-aca5-4f07-a2e5-0eb50feff203>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00813.warc.gz"}
New NSF award to support 'rich and intricate discoveries' in mathematics | Department of Mathematics Professor of mathematics Holly Swisher was recently awarded a three-year $211.5K National Science Foundation grant to investigate some of the field’s most fundamental questions in number theory pertaining to modular and automorphic forms, which play a crucial role in various branches of mathematics and mathematical physics. These include combinatorics, algebra, analysis, arithmetic geometry, number theory and string theory. An expert in number theory and combinatorics, Swisher focuses her research on the mathematical areas of partition theory, modular forms, mock modular forms and hypergeometric series as well as the interrelationships among them. An esoteric branch of classical mathematics, modular forms are complex-valued functions on the upper half of the complex plane that are highly symmetric. A complex plane is a way to geometrically represent complex numbers as points on a plane on the Cartesian coordinate system where the x-axis represents the real part and the y-axis represents the imaginary part of each number. “One of the beautiful things about number theory is that seemingly simple questions, when deeply investigated, can blossom into rich and intricate discoveries." Due to their special property of symmetry or congruence, modular forms can be found in fundamental proofs and theorems in other branches of mathematics throughout the twentieth century. Major research problems deeply intertwined with modular forms include the proof of Fermat’s Last Theorem, the Langlands program, the Taniyama-Shimura conjecture (now the modularity theorem) and open questions in superstring theory. Swisher will pursue research in the field of modular or automorphic forms in two primary ways — projects related to combinatorial generating functions and projects related to hypergeometric functions.
At the foundation of her inquiry lies the central and intriguing role played by modular forms in many major problems in number theory over the last century. “One of the beautiful things about number theory is that seemingly simple questions, when deeply investigated, can blossom into rich and intricate discoveries,” said Swisher. She will explore the relationships between several types of modular forms such as quantum modular forms, harmonic Maass forms and mock modular forms, which were first theorized by Ramanujan in 1920. Swisher explores this project through the mathematics of combinatorial functions, as a testing ground for the theory of modular forms. This strand of research will pay particular attention to combinatorial functions related to the theory of partition of numbers (a branch of number theory) to better understand modularity of combinatorial functions. According to Swisher, historically, examples arising from combinatorial generating functions have been a rich source of varied types of modularity behavior, and determining a general theory for the modularity of combinatorial generating functions would be a significant piece of the puzzle. Swisher, who leads the Research Experiences for Undergraduates (REU) site in mathematics at Oregon State, has co-authored several articles proving important results on combinatorial objects and modular properties with her REU students. Swisher and her students will also engage in research on classical hypergeometric functions, with respect to the larger modular and automorphic forms landscape, which have been of great importance to many areas of science, including mathematics, engineering and physics. Swisher is a member of one of the most ambitious mathematical collaborations in recent times, the L-functions and Modular Forms Database.
She is part of a team of more than 70 mathematicians from 12 different countries who are working to create a massive mathematical database which catalogs objects of central importance in number theory and maps out the intricate connections between them.
{"url":"https://math.oregonstate.edu/impact/2021/08/new-nsf-award-to-support-rich-and-intricate-discoveries-in-mathematics","timestamp":"2024-11-04T21:35:37Z","content_type":"text/html","content_length":"67046","record_id":"<urn:uuid:7ab0b604-845a-496d-8c91-7fa2a3413cac>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00563.warc.gz"}
Subtracting hours and minutes with negative times On Feb 19, 7:47 pm, Nitroman3000 < > wrote: > in A1 I have the time I have to work every day: 08:24 / HH:mm > In B1 I have the time I worked on a given day e.g.: 08:30 > A1=08:24 > B1=08:30 > C1= "=IF(B1<A1;A1-B1;B1-A1)=00:06" that is "Today I worked 6 minutes > more". > But if I work less than 8:24 on a day, then the formula gives me an > incorrect result: > A2=08:24 > B2=08:05 > C2= "=IF(B2<A2;A2-B2;B2-A2)=00:19" that is today I worked 19 minutes > less. I expect "-00:19" minutes, a negative time, indicating that I > worked less. > What am I doing wrong? Nothing. The IF expression is doing what you told it to do (but perhaps you do not understand): it always returns non-negative time -- and for good reason. Generally, you should compute =B2-A2. Thus, if actual time worked (B2) is more than expected time worked (A2), you get a positive result indicating that you worked more. Likewise, you get a negative result indicating that you worked less. The problem is: Excel is not happy with negative values using a time format (Time or Custom [h]:mm). It displays "####" in that case. To work around that, you always want the result to be non-negative -- which is what your IF expression does. You only need some mechanism for distinguishing "worked more" and "worked less". Exactly what to do depends on your requirements. One way: display the difference as text with a leading minus sign when it is negative, and you might want to set the Horizontal Alignment format to Right. The only problem: you will not be able to include that "negative time" (i.e. negative time __text__) in sums and other arithmetic. If you are okay with that, fine. If not, let us know your needs, and we might be able to offer something that meets your needs.
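The workaround described in the reply — compute a non-negative duration and prepend a sign marker as text — can be illustrated outside Excel. This Python sketch is only an analogue of the idea; it is not the poster’s formula, which was omitted from the thread as archived here:

```python
from datetime import timedelta

def signed_hhmm(expected, worked):
    """Format worked - expected as text like '0:06' or '-0:19':
    an always-nonnegative h:mm duration with an explicit sign marker."""
    diff = worked - expected
    sign = "-" if diff < timedelta(0) else ""
    total_minutes = abs(diff) // timedelta(minutes=1)  # floor-divide to whole minutes
    return f"{sign}{total_minutes // 60}:{total_minutes % 60:02d}"

# The two cases from the thread: 8:30 vs 8:24, and 8:05 vs 8:24.
print(signed_hhmm(timedelta(hours=8, minutes=24), timedelta(hours=8, minutes=30)))  # → 0:06
print(signed_hhmm(timedelta(hours=8, minutes=24), timedelta(hours=8, minutes=5)))   # → -0:19
```

As the reply notes, the price of this trick is that the signed result is text, so it cannot feed directly into sums or other arithmetic.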
{"url":"https://groups.google.com/g/microsoft.public.excel/c/oIjAmHeX65g","timestamp":"2024-11-04T16:54:30Z","content_type":"text/html","content_length":"731901","record_id":"<urn:uuid:85bdb2cb-a47c-461a-bb34-46ee2ca13b80>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00405.warc.gz"}
Dynamic Representations - Core concept 4.2: Graphical representations Core concept 4.2 Graphical representations 4.2.1.1 Describe and plot coordinates, including non-integer values, in all four quadrants 4.2.1.2 Solve a range of problems involving coordinates 4.2.1.3* Know that a set of coordinates, constructed according to a mathematical rule, can be represented algebraically and graphically 4.2.1.4 Understand that a graphical representation shows all of the points (within a range) that satisfy a relationship secmm-cc-theme-4-key-idea-32 (1).pptx 4.2.1.3 Know that a set of coordinates, constructed according to a mathematical rule, can be represented algebraically and graphically ●Identify an additive relationship from a set of coordinates. ●Understand how many points are required to determine a linear relationship. ●Identify a multiplicative relationship from a set of coordinates. ●Visualise the relationship between sets of coordinates. ●Identify a two-step relationship from a set of coordinates. ●Solve problems where there is more than one answer and there are elements of experimentation, investigation, checking, reasoning, proof, etc. Example 4 Change the coordinates and then move the sliders to make the line go through all the coordinates. 4.2.2.1 Recognise that linear relationships have particular algebraic and graphical features as a result of the constant rate of change 4.2.2.2 Understand that there are two key elements to any linear relationship: rate of change and intercept point 4.2.2.3* That writing linear equations in the form y = mx + c helps to reveal the structure 4.2.2.4 Solve a range of problems involving graphical and algebraic aspects of linear relationships secmm-cc-theme-4-key-idea-33 (1).pptx 4.2.2.3 That writing linear equations in the form y = mx + c helps to reveal the structure ●The value of the constant term is the y-intercept when the equation is in the form y = mx + c. ●Equations with a y-intercept of zero pass through the origin. 
●The value of the coefficient of x is the gradient, when the equation is in the form y = mx + c. ●Identify the gradient and the y-intercept from equations in various forms. Which of the following points lie on the line 2x + y = 7: (1,5) (5,1) (3,1) (4,1) (5,3) (5,-3)? Can you explain why or why not? Can you show this with a calculation as well as a drawing? The graphs above allow you to make connections between the bar model (algebra tile), graph, coordinates and table. The graphs below clearly show the gradient as steps. A similar activity can be done with Cuisenaire rods and squared paper. 4.2.3.1 Understand that different types of equation give rise to different graph shapes, identifying quadratics in particular 4.2.3.2 Read and interpret points from a graph to solve problems 4.2.3.3 Model real-life situations graphically 4.2.3.4 Recognise that the point of intersection of two linear graphs satisfies both relationships and hence represents the solution to both those equations 4.2.3.3 Model real life situations graphically ●Understand and interpret the gradient in context. ●Understand and interpret the intercept in context. ●Understand and interpret a graph in context. 4.2.3.4 Recognise that the point of intersection of two linear graphs satisfies both relationships and hence represents the solution to both those equations ●Identify regions on the plane ●Understand that the point of intersection is a solution to both equations
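The point-checking question above ("which points lie on 2x + y = 7?") can indeed be answered with a calculation as well as a drawing: substitute each coordinate pair and test whether the equation holds. A few lines of Python make the substitution explicit:

```python
# Check which of the listed points satisfy 2x + y = 7 by direct substitution.
points = [(1, 5), (5, 1), (3, 1), (4, 1), (5, 3), (5, -3)]
on_line = [(x, y) for (x, y) in points if 2 * x + y == 7]
print(on_line)  # → [(1, 5), (3, 1), (5, -3)]
```

For example, (1, 5) lies on the line because 2(1) + 5 = 7, while (5, 1) does not because 2(5) + 1 = 11.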
{"url":"https://www.enigmadynamicrepresentations.com/ks3-mathematics-core-concepts-visual-representations/sequences-and-graphs/core-concept-4-2-graphical-representations","timestamp":"2024-11-08T05:43:24Z","content_type":"text/html","content_length":"251095","record_id":"<urn:uuid:fe1894be-3d34-48bd-a2b4-f0a6a30f3f60>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00732.warc.gz"}
Process Capability Index (Cpk) Calculator - GEGCalculators
1. How do you calculate the process capability index (Cpk)?
• Cpk is calculated using the formula: Cpk = min((USL – μ) / (3σ), (μ – LSL) / (3σ)), where USL is the upper specification limit, μ is the process mean, σ is the process standard deviation, and LSL is the lower specification limit.
2. What does a 1.33 process capability mean?
• A Cpk of 1.33 indicates that the process is capable of producing products that meet the specification limits with a certain level of reliability. It suggests that the process has a moderate level of capability.
3. How do you calculate process capability index in Excel?
• In Excel, you can calculate Cpk using the NORMINV and STDEV functions, or you can use specialized statistical add-ins or software for process capability analysis.
4. What is a 1.33 Cpk process capability?
• A Cpk of 1.33 implies that the process is capable of producing products that meet the specification limits, with the nearest specification limit 1.33 × 3σ (four standard deviations) away from the mean.
5. How do you calculate CP and Cpk?
• CP (Process Capability) is calculated as CP = (USL – LSL) / (6σ), while Cpk (Process Capability Index) is calculated as explained in question 1.
6. How do you manually calculate Cpk?
• You can manually calculate Cpk using the formula mentioned in question 1, where you need the process mean (μ), process standard deviation (σ), and specification limits (USL and LSL).
7. Why do we calculate Cpk?
• Cpk is calculated to assess and quantify the capability of a process to produce products within specified tolerance limits. It helps identify whether a process is capable of meeting customer requirements.
8. How many samples do I need to calculate Cpk?
• The number of samples required for Cpk calculation can vary, but a larger sample size (typically 30 or more data points) provides a more reliable estimate of process capability.
9. How many ppm is 1.33 Cpk?
• Estimation: A Cpk of 1.33 corresponds to roughly 63 parts per million (ppm) defects outside the specification limits, assuming a centred normal distribution. (The often-quoted figure of 2,700 ppm corresponds to Cpk = 1.0.)
10. What does a Cpk of 0.5 mean?
– A Cpk of 0.5 suggests that the process is not capable of consistently producing products within the specified limits. It indicates a low level of process capability.
11. What is the minimum CP value for an acceptable process capability?
– Estimation: An acceptable CP value may be considered greater than 1, indicating that the process spread is less than the specification limits.
12. What is a good CP value?
– A CP value greater than 1 is generally considered good, indicating that the process can produce products within the specification limits.
13. What does a CP of 2 mean?
– A CP of 2 suggests that the process spread is half the width of the specification limits. It indicates a high level of process capability.
14. What is CP 1 process capability?
– A CP of 1 indicates that the process spread is equal to the width of the specification limits, which means the process is just capable of producing products within the limits.
15. What is a 1.33 Cpk confidence level?
– A Cpk of 1.33 does not directly represent a confidence level. Confidence levels are typically associated with confidence intervals in statistical analysis.
16. What sigma level is 1.33 Cpk?
– Estimation: A Cpk of 1.33 corresponds to approximately 4 sigma, since the distance from the mean to the nearest specification limit is 3 × 1.33 ≈ 4 standard deviations.
17. Why is CP always greater than Cpk?
– CP is a measure of potential process capability, while Cpk accounts for the process mean relative to the specification limits. CP assumes the process mean is perfectly centered, while Cpk considers any deviation.
18. How to calculate Cpk online?
– There are online calculators and software tools available that can calculate Cpk. You can input your data and specification limits to get the Cpk value.
19. What is the normal range for Cpk?
– The normal range for Cpk depends on the industry and product specifications but is typically considered acceptable when Cpk is greater than 1.
20. How do you calculate CP and CV?
– CP (Process Capability) and CV (Coefficient of Variation) are different measures. CP is calculated using specification limits and process variability, while CV is calculated as the ratio of the standard deviation to the mean.
21. How do I create a Cpk chart in Excel?
– You can create a Cpk chart in Excel by organizing your data, calculating Cpk values, and using Excel’s charting capabilities to visualize the results.
22. What does a Cpk of 1.67 mean?
– A Cpk of 1.67 suggests that the process is highly capable and can consistently produce products within the specification limits with a substantial margin.
23. How do you find the sigma value of a Cpk?
– The sigma value corresponding to a Cpk can be estimated using statistical tables or software, as it depends on the specific Cpk value and the normal distribution.
24. What is the minimum data required for Cpk calculation?
– To calculate Cpk, you typically need data consisting of measurements, the process mean (μ), process standard deviation (σ), and specification limits (USL and LSL).
25. What does a Cpk of 0.67 mean?
– A Cpk of 0.67 indicates that the process is not capable of consistently producing products within the specified limits. It suggests a relatively low level of process capability.
26. What is a bad Cpk value?
– A Cpk value less than 1 is generally considered bad, as it indicates that the process is not meeting the specification limits reliably.
27. What is the formula for Cpk PPM?
– Estimation: The formula for calculating Cpk PPM (parts per million defects) would depend on the specific Cpk value, but it generally involves estimating the number of defects outside the specification limits.
28. What does a Cpk of 0.7 mean?
– A Cpk of 0.7 suggests that the process is not capable of consistently producing products within the specified limits.
It indicates a low level of process capability.
29. How to calculate capability?
– Capability is calculated using various indices such as CP and Cpk, which assess a process’s ability to produce products within specification limits based on process variation and mean values.
30. How high is too high for Cpk?
– There isn’t a defined upper limit for Cpk, but extremely high Cpk values (e.g., well above 2) may indicate over-control or excessive precision that could lead to increased costs.
31. What is a good and bad Cpk?
– A good Cpk is typically greater than 1, indicating a capable process. A bad Cpk is less than 1, suggesting an incapable process.
32. What is the difference between CP and Cpk?
– CP (Process Capability) measures the potential capability of a process, assuming it’s centered perfectly. Cpk (Process Capability Index) accounts for the process mean’s position relative to specification limits.
33. Can CP be greater than Cpk?
– Yes, CP is always greater than or equal to Cpk; the two are equal only when the process mean is perfectly centered between the specification limits.
34. Can Cpk be negative?
– Yes, Cpk is negative when the process mean falls outside the specification limits, indicating a process that is not capable of meeting the specifications.
35. Should CP be high or low?
– CP should be high, preferably greater than 1, to indicate a capable process.
36. What does a low CP value mean?
– A low CP value suggests that the process spread is wide compared to the specification limits, indicating lower process capability.
37. What is CP, as a capability index, measured for?
– CP (Process Capability) is measured to assess the potential capability of a process to produce products within specification limits.
38. What does a CP of 1.50 mean?
– A CP of 1.50 suggests that the specification width is 1.5 times the process spread (equivalently, the spread is two-thirds of the specification width), indicating good process capability.
39. How do I check my machine capability?
– Machine capability can be assessed by collecting data on machine performance, measuring variations, and calculating CP and Cpk values to evaluate its capability.
40. What does the process capability index tell you?
– The process capability index (Cpk) provides insights into a process’s ability to produce products within specified tolerance limits. It quantifies the process’s capability to meet customer requirements.
41. What is the CP process capability ratio?
– The CP process capability ratio measures the capability of a process to produce products within specification limits. It is calculated as CP = (USL – LSL) / (6σ).
42. What is process capability in simple words?
– Process capability, in simple terms, refers to the ability of a manufacturing process to consistently produce products that meet specified quality requirements without exceeding tolerance limits.
43. How do you read a process capability report?
– A process capability report typically includes CP and Cpk values, which indicate the process’s capability. Higher values suggest better capability, while lower values may indicate the need for process improvement.
44. What is the best measure for process capability?
– The best measure for process capability depends on the specific context. CP and Cpk are commonly used, with Cpk providing a more robust assessment of capability.
45. What is the difference between Cpk and CMK?
– CMK (Machine Capability Index) measures the short-term capability of a single machine from a dedicated study, while Cpk measures the capability of the overall process over time.
46. What does Cpk 1.66 mean?
– A Cpk of 1.66 suggests that the process is highly capable of consistently producing products within the specification limits with a significant margin.
47. How do you calculate CM and CMK?
– CM (Machine Capability) and CMK are calculated similarly to CP and Cpk, but they are based on dedicated machine-study data rather than the process data.
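The CP and Cpk formulas quoted in questions 1 and 5 are short enough to implement directly. The specification limits and process parameters below are made-up example values, chosen only to show how an off-center mean lowers Cpk while leaving CP unchanged:

```python
def cp(usl, lsl, sigma):
    """CP = (USL - LSL) / (6*sigma): potential capability, assumes a centered mean."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mu, sigma):
    """Cpk = min((USL - mu) / (3*sigma), (mu - LSL) / (3*sigma))."""
    return min((usl - mu) / (3 * sigma), (mu - lsl) / (3 * sigma))

# A centered process (mean midway between limits): CP and Cpk agree.
print(cp(10, 4, 1))      # → 1.0
print(cpk(10, 4, 7, 1))  # → 1.0

# Shift the mean off-center: Cpk drops while CP is unchanged.
print(cpk(10, 4, 8, 1))  # → 0.6666666666666666
```

Pushing the mean past a specification limit (say mu = 12 here) makes Cpk negative, matching the answer to question 34.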
{"url":"https://gegcalculators.com/process-capability-index-cpk-calculator/","timestamp":"2024-11-15T04:48:36Z","content_type":"text/html","content_length":"175671","record_id":"<urn:uuid:09f0a529-5e5b-448e-86fe-d6ac5b41ecb0>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00204.warc.gz"}
Daily Crime Log, Fire Log & Incidents Officers are investigating a malicious mischief incident that occurred at the Sigma Nu fraternity. Students reported that unknown members of another fraternity threw items at their fraternity house. • Reported: September 13, 2019 at 4:30 am • Occurred: September 13, 2019 at 12:30 pm • Location: Sigma Nu Fraternity • Disposition: Open
{"url":"https://incidents.utulsa.edu/daily-crime-log/1212-19/","timestamp":"2024-11-10T08:34:58Z","content_type":"text/html","content_length":"58456","record_id":"<urn:uuid:3f263de9-614a-4cf0-9c69-78a4331dd4b7>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00496.warc.gz"}
revengc: An R package to reverse engineer decoupled and censored data
Decoupled (e.g. separate averages) and censored (e.g. > 100 species) variables are continually reported by many well-established organizations (e.g. World Health Organization (WHO), Centers for Disease Control and Prevention (CDC), World Bank, and various national censuses). The challenge therefore is to infer what the original data could have been given summarized information. We present an R package that reverse engineers decoupled and/or censored data with two main functions. The cnbinom.pars function estimates the average and dispersion parameter of a censored univariate frequency table. The rec function reverse engineers summarized data into an uncensored bivariate table of probabilities. It is highly recommended for a user to read the vignettes for more information about the methodology of both functions.
Getting Started
You can install the latest development version from GitHub, or the latest release version from CRAN.
cnbinom.pars() has the following format, with a description of the argument directly below:
cnbinom.pars(censoredtable)
• censoredtable - A frequency table (censored and/or uncensored). A data.frame and matrix are acceptable classes. See Table format section below.
rec() has the following format, with a description of each argument below:
rec(X, Y, Xlowerbound, Xupperbound, Ylowerbound, Yupperbound, seed.matrix, seed.estimation.method)
• X - Argument can be an average, a univariate frequency table, or a censored contingency table. The average value should be a numeric class while a data.frame or matrix are acceptable table classes. Y defaults to NULL if X argument is a censored contingency table. See Table format section below.
• Y - Same description as X but this argument is for the Y variable. X defaults to NULL if Y argument is a censored contingency table.
• Xlowerbound - A numeric class value to represent the left bound for X (row in contingency table).
The value must strictly be a non-negative integer and cannot be greater than the lowest category/ average value provided for X (e.g. the lower bound cannot be 6 if a table has ‘< 5’ as a X or row category). • Xupperbound - A numeric class value to represent the right bound for X (row in contingency table). The value must strictly be a non-negative integer and cannot be less than the highest category/ average value provided for X (e.g. the upper bound cannot be 90 if a table has ‘> 100’ as a X or row category). • Ylowerbound - Same description as Xlowerbound but this argument is for Y (column in contingency table). • Yupperbound - Same description as Xupperbound but this argument is for Y (column in contingency table). • seed.matrix - An initial probability matrix to be updated. If decoupled variables is provided the default is a Xlowerbound:Xupperbound by Ylowerbound:Yupperbound matrix with interior cells of 1, which are then converted to probabilities. If a censored contingency table is provided the default is the seedmatrix()$Probabilities output. • seed.estimation.method - A character string indicating which method is used for updating the seed.matrix. The choices are: “ipfp”, “ml”, “chi2”, or “lsq”. Default is “ipfp”. Table format The univariate frequency table, which can be a data.frame or matrix class, must have two columns and n number of rows. The categories must be in the first column with the frequencies or probabilities in the second column. Row names should never be placed in this table (the default row names should always be 1:n). Column names can be any character string. The only symbols accepted for censored data are listed below. Note, less than or equal to (<= and LE) is not equivalent to less than (< and L) and greater than or equal to (>=, +, and GE) is not equivalent to greater than (> and G). Also, calculations use closed intervals. 
• left censoring: <, L, <=, LE
• interval censoring: - or I (symbol has to be placed in the middle of the two category values)
• right censoring: >, >=, +, G, GE
• uncensored: no symbol (only provide category value)
The formatted example below is made with the following code.
univariatetable<-cbind(as.character(c("<=6", "7-12", "13-19", "20+")), c(11800,57100,14800,3900))
<=6 11800
7-12 57100
13-19 14800
20+ 3900
The contingency table has restrictions. The censored symbols should follow the requirements listed above. The table’s class can be a data.frame or a matrix. The column names should be the Y category values. Row names should never be placed in this table, the default should always be 1:n. The first column should be the X category values. The inside of the table are X * Y cross tabulations, which are either nonnegative frequencies or probabilities if seed.estimation.method is “ipfp” or strictly positive when method is “ml”, “lsq” or “chi2”. The row and column marginal totals corresponding to their X and Y category values need to be placed in this table. The top left, top right, and bottom left corners of the table should be NA or blank. The bottom right corner can be a total cross tabulation sum value, NA, or blank. The formatted example below is made with the following code.
contingencytable<-matrix(c(18, 13, 7, 19, 8, 5, 8, 12, 10), nrow = 3, ncol = 3)
rowmarginal<-apply(contingencytable, 1, sum)
contingencytable<-cbind(contingencytable, rowmarginal)
colmarginal<-apply(contingencytable, 2, sum)
contingencytable<-rbind(contingencytable, colmarginal)
contingencytable<-data.frame(c("<5", "5I9", "G9", NA), contingencytable)
colnames(contingencytable)<-c(NA,"<=19","20-30",">=31", NA)
NA <=19 20-30 >=31 NA
<5 18 19 8 45
5I9 13 8 12 33
G9 7 5 10 22
NA 38 32 30 100
Examples of Applying functions to Census Data
A Nepal Living Standards Survey [1] provides a censored table and average for urban household size. Using the censored table, the cnbinom.pars function calculates a close approximation to the provided average household size (4.4 people).
# revengc has the Nepal household table preloaded as univariatetable.csv
cnbinom.pars(censoredtable = univariatetable.csv)

In 2010, the Population Census Data - Statistics Indonesia provided over 60 censored contingency tables containing Floor Area of Dwelling Unit (square meter) by Household Member Size. The tables are separated by province, urban, and rural. Here we use the household size by area contingency table for Indonesia's rural Aceh Province to show the multiple coding steps and functions implemented inside rec. This allows the user to see a methodology workflow in code form. The final uncensored household size by area estimated probability table, which implemented the “ipfp” method and default seed matrix, has rows ranging from 1 (Xlowerbound) to 15 (Xupperbound) people and columns ranging from 10 (Ylowerbound) to 310 (Yupperbound) square meters.

# data = Indonesia's rural Aceh Province censored contingency table
# preloaded as 'contingencytable.csv'

# provided upper and lower bound values for table
# X=row and Y=column
Xlowerbound<-1
Xupperbound<-15
Ylowerbound<-10
Yupperbound<-310

# table of row marginals provides average and dispersion for X
# table of column marginals provides average and dispersion for Y
# (x and y below denote the cnbinom.pars outputs for those marginal tables)

# create uncensored row and column ranges
rowrange<-Xlowerbound:Xupperbound
colrange<-Ylowerbound:Yupperbound

# new uncensored row marginal table = truncated negative binomial distribution
uncensored.row.margin<-dtrunc(rowrange, mu=x$Average, size = x$Dispersion,
  a = Xlowerbound-1, b = Xupperbound, spec = "nbinom")

# new uncensored column margin table = truncated negative binomial distribution
uncensored.column.margin<-dtrunc(colrange, mu=y$Average, size = y$Dispersion,
  a = Ylowerbound-1, b = Yupperbound, spec = "nbinom")

# sum of truncated distributions equal 1
# margins need to be equal for mipfp

# create seed of probabilities (rec default)
seed.output<-seedmatrix(contingencytable.csv, Xlowerbound, Xupperbound,
  Ylowerbound, Yupperbound)$Probabilities

# run mipfp
# store the new margins in a list
tgt.data<-list(uncensored.row.margin, uncensored.column.margin)
# list of dimensions of each marginal constraint
tgt.list<-list(1, 2)
# calling the estimation function
## seed has to be in array format for mipfp package
## ipfp is the selected seed.estimation.method
final1<-Estimate(array(seed.output, dim=c(length(Xlowerbound:Xupperbound),
  length(Ylowerbound:Yupperbound))), tgt.list, tgt.data, method="ipfp")$x.hat
# filling in names of updated seed
final1<-data.frame(final1)
row.names(final1)<-Xlowerbound:Xupperbound
names(final1)<-Ylowerbound:Yupperbound
# reweight estimates to known censored interior cells
final1<-reweight.contingencytable(observed.table = contingencytable.csv,
  estimated.table = final1)
# final result is probabilities

# rec function outputs the same table
# default of rec seed.estimation.method is ipfp
# default of rec seed.matrix is the output of the seedmatrix() function
final2<-rec(X= contingencytable.csv,
  Xlowerbound = 1, Xupperbound = 15,
  Ylowerbound = 10, Yupperbound = 310)

# check that both data.frame results have same values
all(final1 == final2$Probabilities)

[1] National Planning Commissions Secretariat, Government of Nepal. (2011). Nepal Living Standards Survey. Retrieved from: http://siteresources.worldbank.org/INTLSMS/Resources/3358986-1181743055198/
[2] Population Census Data - Statistics Indonesia. (2010). Household by Floor Area of Dwelling Unit and Households Member Size. Retrieved from: http://sp2010.bps.go.id/index.php/site/tabel?wid=
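The “ipfp” update used above is iterative proportional fitting: rescale the seed's rows to match the uncensored row margin, then its columns to match the column margin, and repeat until the margins agree. A minimal pure-Python sketch of the idea (this is not the revengc/mipfp implementation; the toy matrix and names are mine):

```python
def ipf(seed, row_targets, col_targets, iters=100):
    """Iterative proportional fitting: alternately rescale rows and columns
    of the seed matrix until its margins match the target margins."""
    m = [row[:] for row in seed]
    for _ in range(iters):
        for i, rt in enumerate(row_targets):            # match row margins
            s = sum(m[i])
            m[i] = [v * rt / s for v in m[i]]
        for j, ct in enumerate(col_targets):            # match column margins
            s = sum(m[i][j] for i in range(len(m)))
            for i in range(len(m)):
                m[i][j] *= ct / s
    return m

# A uniform 2x2 seed updated toward margins that each sum to 1,
# mirroring the probability margins used in the workflow above.
seed = [[1, 1], [1, 1]]
out = ipf(seed, [0.4, 0.6], [0.3, 0.7])
print(round(sum(out[0]), 6), round(sum(map(sum, out)), 6))  # 0.4 1.0
```

As in mipfp, the row and column targets must sum to the same total (here 1, since both are probability distributions), otherwise the alternation cannot converge.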
Tip: K-means clustering in SAS - comparing PROC FASTCLUS and PROC HPCLUS

SAS has many PROCs for performing different types of clustering tasks, but this tip will focus on using the well-known k-means algorithm (http://en.wikipedia.org/wiki/K-means_clustering) as implemented by the FASTCLUS procedure in SAS/STAT® and the HPCLUS procedure in SAS® Enterprise Miner™. The k-means algorithm assigns clusters to observations in a way that minimizes the distance between observations and their assigned cluster centroids. This is done in an iterative approach by reassigning cluster membership and cluster centroids until the solution reaches a local optimum. While both procedures implement standard k-means, PROC FASTCLUS achieves fast convergence through non-random initialization, while PROC HPCLUS enables clustering of large data sets through multithreaded and distributed computing. The two procedures also differ in a few implementation details, as outlined below.

Variables measured on different scales (such as age and income) should be standardized prior to clustering, so that the solution is not driven by variables measured on larger scales. PROC FASTCLUS does not have a standardization option, and you should use methods such as PROC STDIZE to standardize input data prior to running FASTCLUS. In contrast, PROC HPCLUS provides a STANDARDIZE option with range and z-score standardization.

PROC FASTCLUS selects the first k complete (no missing values) observations that are RADIUS (default set to 0) apart from each other as initial cluster centroids. As such, the ordering of observations can affect initial centroid selection, and FASTCLUS is not recommended for data sets with fewer than 100 observations. You can use the REPLACE=RANDOM option to select pseudo-random centroids instead. In Enterprise Miner 13.2, PROC HPCLUS provides only random initialization, using pseudo-random indices to select k complete observations as initial cluster centroids.
You can specify the seed for the random number generator using the SEED= option, which can provide consistency across runs that use the same number of nodes and threads.

Clustering Nominal variables

PROC FASTCLUS does not support the clustering of nominal variables. In Enterprise Miner 13.2, PROC HPCLUS uses the k-modes algorithm (http://shapeofdata.wordpress.com/2014/03/04/k-modes/), where cluster centroids are replaced by the modes of the nominal variables, for clustering nominal data sets. The DISTANCENOM option offers several choices for determining the distance between two nominal values, which can be simple matching distance or based on the occurrence frequency of the nominal value. Prior to Enterprise Miner 13.2, PROC HPCLUS did not support clustering of nominal variables. Neither procedure currently supports the clustering of data with both numeric and nominal variables. The DISTANCE procedure can be used to obtain distance matrices of nominal variables for use in other clustering procedures such as CLUSTER or MODECLUS, as detailed in http://support.sas.com/kb/22/

Missing values

PROC FASTCLUS deals with observations with missing values by scaling the distance obtained from all non-missing variables. By default, PROC HPCLUS ignores observations with missing values. You can also select to impute missing values by using the IMPUTE option, which sets missing values to the means of numeric variables or the modes of nominal variables.

Determining the Number of Clusters k

PROC FASTCLUS considers k values less than or equal to the MAXCLUSTERS option, and it reports results for only a single k value, which is generally k=MAXCLUSTERS if MAXCLUSTERS is reasonably small. If the appropriate number of clusters is not known beforehand, you should try different values of MAXCLUSTERS and make a decision based on the output results.
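Stepping back to the algorithm both procedures implement: k-means iterates between assigning each observation to its nearest centroid and recomputing the centroids, stopping at a local optimum. A toy pure-Python sketch of that loop (random initialization, as HPCLUS uses; the data and names are mine, not SAS code):

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Plain Lloyd's k-means on 2-D points: assign each point to its
    nearest centroid, recompute centroids, repeat until stable."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)  # random initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # index of the nearest centroid (squared Euclidean distance)
            j = min(range(k), key=lambda i: (p[0] - centroids[i][0]) ** 2
                                          + (p[1] - centroids[i][1]) ** 2)
            clusters[j].append(p)
        new = [(sum(x for x, _ in c) / len(c), sum(y for _, y in c) / len(c))
               if c else centroids[i]        # keep old centroid if cluster empty
               for i, c in enumerate(clusters)]
        if new == centroids:                 # local optimum reached
            break
        centroids = new
    return centroids

pts = [(0, 0), (0, 1), (10, 10), (10, 11)]
print(sorted(kmeans(pts, 2)))  # [(0.0, 0.5), (10.0, 10.5)]
```

Because the result is only a local optimum, the initial centroids matter, which is exactly why FASTCLUS's well-separated seeds and HPCLUS's SEED= option are worth controlling.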
PROC FASTCLUS outputs several statistics that can be used to determine the best value of k, the interpretation of which will be discussed in an upcoming tip. For numeric variables, PROC HPCLUS provides the convenient NOC=ABC option to auto-select the number of clusters k based on the aligned box criterion (ABC). For each k value from MINCLUSTERS (default to 2) to MAXCLUSTERS, ABC compares the within-cluster dispersion of the results to that of a simulated reference distribution, and selects a value of k where the within-cluster dispersions of the data results and the reference distribution differ greatly. The interpretation of the ABC value will also be discussed in further detail in an upcoming tip.

Scalability and Speed

PROC FASTCLUS has been used for enterprise-scale problems for many years. It is a highly efficient but single-threaded procedure that decreases execution time by locating non-random cluster seeds. PROC HPCLUS is one of many High-Performance Procedures in SAS Enterprise Miner 13.2. It is a multithreaded, distributed implementation of k-means that makes full use of multi-core processors and distributed computing environments. For example, PROC HPCLUS completed the extremely memory- and computation-intensive task of assigning approximately 100 million observations with 13 variables to 1000 clusters in 46 minutes while executing on a 24-node Teradata appliance.

As an example, we will cluster the pixel values from handwritten digits taken from the MNIST database. Previous research has shown that clusters in the MNIST database are overlapping and non-spherical. To create easily interpretable results for this example, we will use 500 observations each from digits {0, 1, 2, 3, 4}. Note that standardization is not necessary since variables are measured in the same units. Here are three ways to cluster our data: 1) using purely FASTCLUS, 2) using purely HPCLUS, and 3) using a combination of both.

1) Using purely FASTCLUS.
If we don't know the number of expected clusters beforehand, we have to run PROC FASTCLUS multiple times with different k values. The macro below looks at k=3 to 8.

/* run fastclus for k from 3 to 8 */
%macro doFASTCLUS;
  %do k= 3 %to 8;
    proc fastclus data= digits out= fcOut
      maxiter= 100
      converge= 0    /* run to complete convergence */
      radius= 100    /* look for initial centroids that are far apart */
      maxclusters= &k;
    run;
  %end;
%mend doFASTCLUS;
%doFASTCLUS;

The summary output for each k includes four different statistics for determining the compactness and separation of the clustering results. The summary results for k=5 are shown below. Look for a future tip that discusses how to estimate the number of clusters using output statistics such as the Cubic Clustering Criterion and Pseudo F Statistic.

2) Using purely HPCLUS

In HPCLUS, we can use the NOC= ABC option to auto-select the best k value between 3 and 8.

proc hpclus data= digits maxclusters= 8 maxiter= 100
  seed= 54321    /* set seed for pseudo-random number generator */
  NOC= ABC(B= 1 minclusters= 3 align= PCA);    /* select best k between 3 and 8 using ABC */
  score out= OutScore;
  input pixel:;    /* input variables */
  ods output ABCStats= ABC;    /* save ABC criterion values for plotting */
run;

HPCLUS reports the best number of clusters in its output, along with the ABC gap values for each k value. The interpretation of ABC values will be discussed in detail in a future tip as well. We can view a plot of the ABC values for each k using the following code:

/* view ABC (used to determine best k) */
proc sgplot data= ABC;
  scatter x= K y= Gap / markerattrs= (color= 'STPK' symbol= 'circleFilled');
  xaxis grid integer values= (3 to 8 by 1);
  yaxis label= 'ABC Value';
run;

The slight peak at k=5 indicates that the best estimate for the number of clusters is 5, which is expected for our data set of handwritten digits between 0-4.

3) Combining FASTCLUS and HPCLUS.
Alternatively, we can combine the strengths of FASTCLUS and HPCLUS by first using HPCLUS to estimate the number of clusters k, then using FASTCLUS to obtain well-separated clusters through its non-random initialization.

/* run HPCLUS to auto-select best k value */
proc hpclus data= digits maxclusters= 8 maxiter= 100 seed= 54321
  NOC= ABC(B= 1 minclusters= 3 align= PCA);    /* select best k between 3 and 8 using ABC */
  input pixel:;
  ods output ABCResults= k;    /* save k value selected by ABC */
run;

/* get selected k into a macro var */
data _null_;
  set k;
  call symput('k', strip(k));
run;
%put k= &k.;

/* now run FASTCLUS to find k nice clusters from non-random seeds */
proc fastclus data= digits out= fcOut maxiter= 100 converge= 0
  radius= 100    /* look for initial centroids that are far apart */
  maxclusters= &k;    /* k found by hpclus */
run;

PROC FASTCLUS is suitable for small to medium-sized data. It provides non-random initialization, which often leads to more well-separated clusters, and it gives several statistics indicating the goodness of the clustering results. If you do not know what the number of clusters k should be beforehand, you have to run FASTCLUS with different values of k to manually determine the best k. HPCLUS is intended for running computationally demanding clustering tasks on distributed systems. It can automatically select the best k value within a specified range, but only supports random initialization.

Useful links

See SAS/STAT 9.2 User's Guide Introduction to Cluster Procedures (https://support.sas.com/documentation/cdl/en/statugclustering/61759/PDF/default/statugclustering.pdf) for information on the many different clustering procedures in SAS. This document covers procedures from SAS/STAT, including CLUSTER, FASTCLUS, MODECLUS, VARCLUS, and TREE.
See Usage Note 22542: Clustering binary, ordinal, or nominal data (http://support.sas.com/kb/22/542.html).

Video tutorial

The tutorial below by SAS' @CatTruxillo walks you through two ways to do k-means clustering in SAS Visual Statistics and SAS Studio.

Besides PROC FASTCLUS, described above, there are other ways to perform k-means clustering in SAS: you can write a program in PROC KCLUS, PROC CAS, Python, or R. You can point and click in SAS Visual Statistics, Enterprise Guide, Enterprise Miner, JMP, Model Studio, and SAS Studio.
Quantitative Dimension Economics Track

1. ECON BC1007 Math Methods for Economics is required to satisfy the Calculus requirement for the Economics major. However, two semesters of calculus, MATH UN1101 Calculus I followed by MATH UN1201 Calculus III,* may be substituted for Math Methods. (If the student has received AP credit for or has placed out of Calculus I, then only Calculus III need be taken.) APMA E2000 - Multivariable Calculus for Engineers and Applied Scientists is an acceptable substitute for Calculus III. Also acceptable is a faster-paced sequence for advanced students -- MATH UN1207-8 Honors Calculus. (Whichever sequence you choose, the courses count toward the college-wide distribution requirements.) A grade of C- or better is required to fulfill the major requirement. Learn more about it with the Calculus AP credit and Math sequence table.

2. Two semesters of statistics and econometrics are required. Most students take Statistics for Economics (ECON BC2411), but STAT W1101 (formerly STAT W1111), STAT W1201 (formerly STAT W1211), PSYC BC1101, [SIEO W3600 or] STAT GU4001 (formerly SIEO W4150) are also acceptable to meet the requirement. Most students take Econometrics (ECON BC3018), but Introduction to Econometrics (ECON UN3412) is also acceptable.

Students whose interest lies in the mathematical or quantitative dimension of the major are encouraged to take additional courses. More advanced mathematics courses that are particularly relevant to economics include: Linear Algebra (MATH UN2010), Analysis and Optimization (MATH UN2500), Ordinary Differential Equations (MATH UN3027), Dynamical Systems (MATH UN3030), and Introduction to Modern Analysis (MATH GU4061).

* (The Mathematics department has changed its Calculus offerings since 2003 so that Calculus II and III can be taken in any order, provided the student receives a final grade of B or better in Calculus I. And since Calculus III has more topics directly relevant to Economics, we require III instead of II.)
Reading: Calculating Opportunity Cost

It makes intuitive sense that Charlie can buy only a limited number of bus tickets and burgers with a limited budget. Also, the more burgers he buys, the fewer bus tickets he can buy. With a simple example like this, it isn't too hard to determine what he can do with his very small budget, but when budgets and constraints are more complex, it's important to know how to solve equations that demonstrate budget constraints and opportunity cost.

Very simply, when Charlie is spending his full budget on burgers and tickets, his budget is equal to the total amount that he spends on burgers plus the total amount that he spends on bus tickets. For example, if Charlie buys four bus tickets and four burgers with his $10 budget (point B on the graph below), the equation would be

[latex]\$10=\left(\$2\times4\right)+\left(\$0.50\times4\right)\\[/latex]

You can see this on the graph of Charlie's budget constraint, Figure 1, below. If we want to answer the question "How many burgers and bus tickets can Charlie buy?" then we need to use the budget constraint equation.

Step 1. The equation for any budget constraint is the following:

[latex]\text{Budget }={P}_{1}\times{Q}_{1}+{P}_{2}\times{Q}_{2}+\dots+{P}_{n}\times{Q}_{n}\\[/latex]

where P and Q are the price and respective quantity of any number, n, of items purchased and Budget is the amount of income one has to spend.

Step 2. Apply the budget constraint equation to the scenario. In Charlie's case, this works out to be

[latex]\begin{array}{l}\text{Budget}={P}_{1}\times{Q}_{1}+{P}_{2}\times{Q}_{2}\\\text{Budget}=\$10\\\,\,\,\,\,\,\,\,\,\,\,\,{P}_{1}=\$2\left(\text{the price of a burger}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{Q}_{1}=\text{quantity of burgers}\left(\text{variable}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{P}_{2}=\$0.50\left(\text{the price of a bus ticket}\right)\\\,\,\,\,\,\,\,\,\,\,\,\,{Q}_{2}=\text{quantity of tickets}\left(\text{variable}\right)\end{array}[/latex]

For Charlie, this is

[latex]\$10=\$2\times{Q}_{1}+\$0.50\times{Q}_{2}\\[/latex]

Step 3. Simplify the equation.
At this point we need to decide whether to solve for [latex]{Q}_{1}[/latex] or [latex]{Q}_{2}[/latex]. Remember, [latex]{Q}_{1}=\text{quantity of burgers}[/latex]. So, in this equation [latex]{Q}_{1}[/latex] represents the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week. [latex]{Q}_{2}=\text{quantity of tickets}[/latex]. So, [latex]{Q}_{2}[/latex] represents the number of bus tickets Charlie can buy depending on how many burgers he wants to purchase in a given week. We are going to solve for [latex]{Q}_{1}[/latex].

[latex]\begin{array}{l}\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,10=2{Q}_{1}+0.50{Q}_{2}\\\,\,\,\,\,\,\,\,\,\,\,\,-2{Q}_{1}=-10+0.50{Q}_{2}\,\,\,\,\,\,\,\,\,\text{Subtract }2{Q}_{1}\text{ and }10\text{ from both sides}\\\left(2\right)\left(-2{Q}_{1}\right)=\left(2\right)\left(-10\right)+\left(2\right)0.50{Q}_{2}\,\,\,\,\,\,\,\,\,\text{Clear decimal by multiplying everything by 2}\\\,\,\,\,\,\,\,\,\,\,\,\,-4{Q}_{1}=-20+{Q}_{2}\\\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,{Q}_{1}=5-\frac{1}{4}{Q}_{2}\,\,\,\,\,\,\,\,\,\text{Divide both sides by }-4\end{array}\\[/latex]

Step 4. Use the equation. Now we have an equation that helps us calculate the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week. For example, say he wants 8 bus tickets in a given week. [latex]{Q}_{2}[/latex] represents the number of bus tickets Charlie buys, so we plug in 8 for [latex]{Q}_{2}[/latex], which gives us

[latex]{Q}_{1}=5-\frac{1}{4}\left(8\right)=5-2=3\\[/latex]

This means Charlie can buy 3 burgers that week (point C on the graph, above). Let's try one more. Say Charlie has a week when he walks everywhere he goes so that he can splurge on burgers. He buys 0 bus tickets that week. [latex]{Q}_{2}[/latex] represents the number of bus tickets Charlie buys, so we plug in 0 for [latex]{Q}_{2}[/latex], giving us

[latex]{Q}_{1}=5-\frac{1}{4}\left(0\right)=5\\[/latex]

So, if Charlie doesn't ride the bus, he can buy 5 burgers that week (point A on the graph). If you plug other numbers of bus tickets into the equation, you get the results shown in Table 1, below, which are the points on Charlie's budget constraint.

Table 1.
Point   Quantity of Burgers (at $2)   Quantity of Bus Tickets (at 50 cents)
A       5                             0
B       4                             4
C       3                             8
D       2                             12
E       1                             16
F       0                             20

Step 5. Graph the results. If we plot each point on a graph, we can see a line that shows us the number of burgers Charlie can buy depending on how many bus tickets he wants to purchase in a given week.

We can make two important observations about this graph. First, the slope of the line is negative (the line slopes downward). Remember in the last module when we discussed graphing, we noted that when X and Y have a negative, or inverse, relationship, X and Y move in opposite directions—that is, as one rises, the other falls. This means that the only way to get more of one good is to give up some of the other. Second, the slope is defined as the change in the number of burgers (shown on the vertical axis) Charlie can buy for every incremental change in the number of tickets (shown on the horizontal axis) he buys. If he buys one less burger, he can buy four more bus tickets. The slope of a budget constraint always shows the opportunity cost of the good that is on the horizontal axis. If Charlie has to give up lots of burgers to buy just one bus ticket, then the slope will be steeper, because the opportunity cost is greater.

Let's look at this in action and see it on a graph. What if we change the price of the burger to $1? We will keep the price of bus tickets at 50 cents. Now, instead of buying 4 more tickets for every burger he gives up, Charlie can only buy 2 tickets for every burger he gives up. Figure 3, below, shows Charlie's new budget constraint (and the change in slope).

Self Check: The Cost of Choices

Answer the question(s) below to see how well you understand the topics covered in the previous section. This short quiz does not count toward your grade in the class, and you can retake it an unlimited number of times. You'll have more success on the Self Check if you've completed the two Readings in this section.
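The table's arithmetic follows directly from the rearranged constraint Q1 = 5 - (1/4)Q2, or equivalently Q1 = (Budget - P2*Q2)/P1. A short Python check (just a sketch; the variable names are mine):

```python
# Recompute Table 1: burgers Charlie can afford at each ticket count,
# from Q1 = (budget - P2*Q2) / P1 with P1 = $2 and P2 = $0.50.
budget, p_burger, p_ticket = 10.0, 2.0, 0.50

table = [(int((budget - p_ticket * q2) / p_burger), q2)
         for q2 in range(0, 21, 4)]
print(table)  # [(5, 0), (4, 4), (3, 8), (2, 12), (1, 16), (0, 20)]

# Slope of the constraint = opportunity cost of a ticket, in burgers:
slope = -p_ticket / p_burger
print(slope)  # -0.25
```

Note the slope is just the price ratio: halving the burger price to $1 would change it to -0.50, the steeper trade-off described for Figure 3.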
Use this quiz to check your understanding and decide whether to (1) study the previous section further or (2) move on to the next section.
3819 -- Coverage

Time Limit: 1000MS    Memory Limit: 65536K
Total Submissions: 492    Accepted: 106

A cell phone user is travelling along a line segment with end points having integer coordinates. In order for the user to have cell phone coverage, it must be within the transmission radius of some transmission tower. As the user travels along the path, cell phone coverage may be gained (or lost) as the user moves inside the radius of some tower (or outside of the radii of all towers). Given the location of up to 100 towers and their transmission radii, you are to compute the percentage of cell phone coverage the user has along the specified path. The (x,y) coordinates are integers between -100 and 100, inclusive, and the tower radii are integers between 1 and 100, inclusive.

Your program will be given a sequence of configurations, one per line, of the form:

N C0X C0Y C1X C1Y T1X T1Y T1R T2X T2Y T2R ...

Here, N is the number of towers, (C0X,C0Y) is the start of path of the cell phone user, (C1X,C1Y) is the end of the path, (TkX,TkY) is the position of the kth tower, and TkR is its transmission radius. The start and end points of the paths are distinct. The last problem is terminated by the line

For each configuration, output one line containing the percentage of coverage the cell phone has, rounded to two decimal places.

Sample Input

Sample Output

2009 Rocky Mountain
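One standard approach to this problem: parameterize the path as c0 + t*(c1 - c0) with t in [0, 1], solve a quadratic per tower to find where the segment enters and leaves that tower's disk, then merge the resulting intervals and report the covered fraction. A Python sketch of that geometry (not a judge-tested solution; input/output handling is omitted):

```python
import math

def coverage_percent(c0, c1, towers):
    """Percentage of the segment c0->c1 inside the union of tower disks.

    towers: list of (x, y, r).  Each disk is projected onto the segment's
    parameter t in [0, 1]; the intervals are then merged and summed.
    """
    (x0, y0), (x1, y1) = c0, c1
    dx, dy = x1 - x0, y1 - y0
    intervals = []
    for (tx, ty, r) in towers:
        # Solve |c0 + t*d - T|^2 = r^2, a quadratic a*t^2 + b*t + c = 0.
        fx, fy = x0 - tx, y0 - ty
        a = dx * dx + dy * dy          # nonzero: endpoints are distinct
        b = 2 * (fx * dx + fy * dy)
        c = fx * fx + fy * fy - r * r
        disc = b * b - 4 * a * c
        if disc < 0:
            continue                   # segment's line misses this disk
        sq = math.sqrt(disc)
        t_lo = max(0.0, (-b - sq) / (2 * a))
        t_hi = min(1.0, (-b + sq) / (2 * a))
        if t_lo < t_hi:
            intervals.append((t_lo, t_hi))
    # Merge overlapping intervals and total the covered fraction.
    intervals.sort()
    covered, cur_lo, cur_hi = 0.0, None, None
    for lo, hi in intervals:
        if cur_hi is None or lo > cur_hi:
            if cur_hi is not None:
                covered += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    if cur_hi is not None:
        covered += cur_hi - cur_lo
    return 100.0 * covered

# e.g. one tower at the midpoint whose radius spans the whole path:
print(f"{coverage_percent((0, 0), (10, 0), [(5, 0, 6)]):.2f}")  # 100.00
```

With the stated bounds (up to 100 towers, coordinates within ±100) this O(N log N) per configuration is far more than fast enough for the 1000 ms limit.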
AVERAGEIFS - Excel docs, syntax and examples

The AVERAGEIFS function calculates the average of a range of values based on multiple criteria. It allows you to specify one or more sets of criteria and determine the average of cells that meet all of the specified conditions.

Example explanation

Cell D4 uses the AVERAGEIFS function to calculate the average score of students in math class who scored above 50.

=AVERAGEIFS(average_range, criteria_range1, criteria1, [criteria_range2, criteria2, ...])

average_range - The range of cells to average based on the specified criteria.
criteria_range1 - The range of cells that the first criterion will be applied to.
criteria1 - The criterion or condition to be met in the first criteria_range.
criteria_range2 - (Optional) Additional ranges to which further criteria will be applied.
criteria2 - (Optional) Additional criteria to be met in the corresponding criteria_range.
... - Additional pairs of criteria_range and criteria can be added as needed.

About AVERAGEIFS 🔗

When you need to determine the average of a range of data while considering specific conditions, the AVERAGEIFS function in Excel comes to the rescue. It lets you tailor the calculation by applying multiple criteria, producing a more granular and targeted result. This is particularly valuable when you want to analyze subsets of your data based on different characteristics or attributes. With its flexibility and versatility, AVERAGEIFS is an indispensable tool in your Excel arsenal, delivering averages that align with your specific requirements.
Examples 🔗

Suppose you have a dataset containing sales information for different products, and you want to calculate the average sales amount for a particular product category (criteria1) during a specific month (criteria2). You can use the AVERAGEIFS formula as follows:

=AVERAGEIFS(sales_amount_range, product_category_range, "Electronics", month_range, "January")

This will provide the average sales amount for the Electronics category in the month of January based on the specified criteria.

In another scenario, you have data for student scores across various subjects, and you wish to determine the average score for a specific grade level (criteria1) in a particular exam type (criteria2). You can employ the AVERAGEIFS function in the following manner:

=AVERAGEIFS(score_range, grade_level_range, "10th Grade", exam_type_range, "Midterm")

This will yield the average score for the 10th grade students specifically in the Midterm exam.

The AVERAGEIFS function requires that all the specified criteria are met for a cell to be included in the average calculation. Make sure to supply valid ranges and criteria that accurately capture the data subsets you intend to average. Additionally, ensure proper alignment of the criteria ranges and criteria to ensure accurate computation of the average.

Questions 🔗

What happens if not all of the criteria specified in AVERAGEIFS are met for a particular cell?
If any of the specified criteria are not met for a cell, that cell's value will be excluded from the average calculation. AVERAGEIFS only includes cells that meet all of the specified conditions.

Can I specify multiple sets of criteria in the AVERAGEIFS function?
Yes, AVERAGEIFS allows you to specify multiple sets of criteria by providing additional pairs of criteria_range and criteria. This enables you to further refine the subsets of data for which you want to calculate the average.

What is the difference between AVERAGEIFS and AVERAGEIF?
While AVERAGEIFS allows for the application of multiple criteria to determine the average, AVERAGEIF focuses on a single criterion. AVERAGEIF calculates the average based on a single condition, whereas AVERAGEIFS enables the use of multiple conditions to refine the average calculation.
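For readers outside Excel, the AVERAGEIFS semantics (average only the values whose row passes every criterion) can be mimicked in a few lines of Python. This is a sketch, not Excel's implementation; the helper name and sample data are mine:

```python
def averageifs(values, *criteria):
    """Tiny sketch of Excel's AVERAGEIFS in pure Python.

    criteria: pairs of (range, predicate); a value is averaged only
    when every predicate holds for the same row index.
    """
    keep = [v for i, v in enumerate(values)
            if all(pred(rng[i]) for rng, pred in criteria)]
    if not keep:
        # Excel returns #DIV/0! when no cells meet all criteria
        raise ZeroDivisionError("no cells met all criteria")
    return sum(keep) / len(keep)

scores  = [45, 70, 90, 80]
classes = ["math", "math", "math", "art"]

# Analogue of AVERAGEIFS(scores, classes, "math", scores, ">50"):
print(averageifs(scores,
                 (classes, lambda c: c == "math"),
                 (scores, lambda s: s > 50)))  # 80.0
```

As in Excel, the ranges must be aligned row for row; here that simply means the lists share indices.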
30 Best Calculus tutors - Wiingy Find the Best Calculus Tutors Are you looking for the best Calculus tutors? Our experienced Calculus tutors will help you solve complex Calculus problems with step-by-step solutions. Our one-on-one private Calculus tutoring lessons start at $28/hr. Our online Calculus tutors will help you understand Calculus concepts and provide personalized lessons, homework help, and test prep at an affordable price. What sets Wiingy apart Expert verified tutors Free Trial Lesson No subscriptions Sign up with 1 lesson Transparent refunds No questions asked Starting at $28/hr Affordable 1-on-1 Learning Top Calculus tutors available online 2003 Calculus tutors available Responds in 4 min Star Tutor Calculus Tutor 13+ years experience Master calculus concepts with the help of a tutor who has a Bachelor's degree and 13 years of experience. Gain a thorough understanding of calculus through expert instruction and tailored lessons. Responds in 5 min Star Tutor Calculus Tutor 5+ years experience Develop your calculus skills with personalized guidance from a tutor who has a Bachelor's degree and 5 years of experience. Ideal for students seeking practical and effective calculus support. Responds in 4 min Star Tutor Calculus Tutor 4+ years experience Outstanding Calculus instructor with 4+ years of experience, possessing a strong understanding of mathematical concepts. Holds a Master's Degree in Mathematics. Responds in 5 min Star Tutor Calculus Tutor 4+ years experience Qualified Calculus tutor with a Bachelor's degree in Physics and 4+ years of experience. Provides interactive 1-on-1 concept clearing lessons, homework help, and test prep to school and college students. Responds in 5 min Star Tutor Calculus Tutor 4+ years experience Calculus tutor, boasting 4+ years of proven experience, Excels in thorough exam preparation and fostering critical analysis skills. Additionally, hold a Bachelor's Degree in Mathematics. 
Responds in 11 min Star Tutor Calculus Tutor 3+ years experience Discover expert Calculus tutoring from a professional with a Master’s in Mathematics and 3 years of teaching experience. Offering clear, engaging lessons designed to simplify concepts and enhance understanding for academic success in Calculus Responds in 3 min Star Tutor Calculus Tutor 12+ years experience Master Calculus with specialized guidance from a tutor holding a Master’s degree and 12 years of experience. Gain a deep understanding of complex calculus concepts and techniques. Responds in 4 min Star Tutor Calculus Tutor 12+ years experience Achieve success in Calculus with personalized instruction from a tutor holding a Bachelor’s degree and 12 years of experience. Strengthen your math skills and boost your academic performance. Responds in 12 min Star Tutor Calculus Tutor 2+ years experience Calculus Tutor offers 2 years of expertise, delivering personalized instruction and comprehensive assistance. With a profound grasp of calculus concepts, students at both high school and university levels receive tailored guidance to excel in subject Responds in 8 min Star Tutor Calculus Tutor 4+ years experience Strengthen your Calculus understanding with personalized instruction from a tutor holding a Bachelor’s degree and 4 years of experience. Achieve your learning goals in advanced Calculus. Responds in 13 min Star Tutor Calculus Tutor 13+ years experience Expert Calculus instructor with 13+ years of tutoring experience for high school to university students, assisting with assignments and test preparation. Holds a bachelor's degree. Responds in 11 min Student Favourite Calculus Tutor 10+ years experience Qualified Calculus tutor with a Master's Degree in Mathematics and 10+ years of tutoring experience with high school and college students. 
Provides assistance with exam preparation and mock tests. Responds in 26 min Student Favourite Calculus Tutor 1+ years experience Top-rated Calculus Tutor offers 1+ years of expertise, delivering personalized instruction. With a profound grasp of calculus concepts, students at both high school and university levels receive tailored guidance to excel in the subject. Responds in 5 min Star Tutor Calculus Tutor 15+ years experience Experienced Calculus tutor with a Master's in Mathematics and 15 years of expertise. Offers engaging lessons, homework assistance, and exam preparation for high school and college students. Responds in 7 min Star Tutor Calculus Tutor 8+ years experience Enhance your understanding of Calculus with tailored support from a tutor holding a Master’s degree and 8 years of experience. Excel in Calculus through personalized instruction and practice. Responds in 37 min Student Favourite Calculus Tutor 4+ years experience Calculus tutor with 4+ years of high school and university tutoring experience, providing personalized sessions and expert guidance on optimal practices. Possesses a Bachelor's degree. Calculus Tutor 2+ years experience Highly skilled Calculus tutor for high school and university students with 2+ years of experience. Offers personalized sessions, homework assistance, and test preparation. Holds a Master's Degree in Responds in 9 min Star Tutor Calculus Tutor 11+ years experience Achieve excellence in calculus with dedicated support from a tutor holding a Master’s degree and 11 years of experience. Build a strong mathematical foundation for future success. Responds in 2 min Star Tutor Calculus Tutor 14+ years experience Gain mastery in calculus with the assistance of a tutor who has a Master's degree and 14 years of experience. Receive expert advice and techniques for tackling complex calculus problems. 
Responds in 40 min Student Favourite Calculus Tutor 8+ years experience Master advanced calculus concepts with the help of a tutor who has a Master's degree and 8 years of experience. Develop a deep understanding of calculus topics through expert instruction. Responds in 10 min Star Tutor Calculus Tutor 15+ years experience An experienced Calculus Tutor with 15 years of expertise provides personalized instruction and comprehensive assistance. Holds a bachelor's degree in applied mathematics. Responds in 27 min Student Favourite Calculus Tutor 14+ years experience Proficient Calculus tutor with a Master's degree in the field, renowned for tutoring high school to university students. Specializing in online sessions, provides comprehensive support to students seeking to excel in their studies. Responds in 3 min Student Favourite Calculus Tutor 4+ years experience Succeed in Calculus with expert tutoring from a Master’s degree holder with 4 years of experience. Build a strong foundation in calculus concepts and enhance your analytical skills. Responds in 27 min Student Favourite Calculus Tutor 3+ years experience Calculus specialist with an M.Sc in Math and 3 years of experience tutoring students in the subject, here to guide you through academic and learning hurdles with the use of structured online classes and fun activities. Responds in 19 min Student Favourite Calculus Tutor 3+ years experience Top-notch Calculus Tutor with a Master’s in Mathematics and 3 years of experience. Offers engaging lessons, homework assistance, and test preparation for high school and college students. Responds in 4 min Star Tutor Calculus Tutor 3+ years experience Excel in Calculus with expert guidance from a tutor holding a Master’s degree and 3 years of experience. Master derivatives, integrals, and complex calculus concepts. Responds in 49 min Student Favourite Calculus Tutor 2+ years experience Highly skilled Calculus tutor, holds a Master's degree in Mathematics. 
With 2+ years of experience in tutoring, Specialize in providing comprehensive lessons and exam preparation for high school and university students Responds in 4 min Star Tutor Calculus Tutor 8+ years experience A proficient Calculus tutor with 8+ years of teaching experience. Offers personalized online tutoring, interactive lessons, Holds a PhD degree in Mathematics. Responds in 5 min Star Tutor Calculus Tutor 5+ years experience Top-rated Calculus tutor with 5+ years of experience. Assists in assignment help and homework for students. Offers interactive 1-on-1 lessons for students in the US and CA. Calculus Tutor 5+ years experience Expert in Calculus tutoring with 5+ years of experience. Provides interactive 1-on-1 concept clearing lessons, homework assistance, and test preparation to students from grade 10 to University level. Calculus topics you can learn • Limits intro • Estimating limits from graphs • Estimating limits from tables • Formal definition of limits (epsilon-delta) • Properties of limits • Limits by direct substitution • Limits using algebraic manipulation • Strategy in finding limits • Squeeze theorem • Types of discontinuities • Continuity at a point • Continuity over an interval • Removing discontinuities • Infinite limits • Limits at infinity • Intermediate value theorem Try our affordable private lessons risk-free • Our free trial lets you experience a real session with an expert tutor. • We find the perfect tutor for you based on your learning needs. • Sign up for as few or as many lessons as you want. No minimum commitment or subscriptions. In case you are not satisfied with the tutor after your first session, let us know, and we will replace the tutor for free under our Perfect Match Guarantee program. 
Calculus skills & concepts to know for better grades Here are the important topics for Calculus: Limits and Continuity • Definition of a limit • Evaluating limits • Properties of limits • Continuity • Definition of the derivative • Basic differentiation rules • The chain rule • Applications of differentiation, such as finding extrema, increasing/decreasing intervals, and concavity • Definition of the definite integral • The Fundamental Theorem of Calculus • Indefinite integrals • Integration techniques, such as u-substitution, integration by parts, and partial fractions • Applications of integration, such as finding areas, volumes, and arc lengths Transcendental Functions • Exponential and logarithmic functions • Trigonometric functions • Inverse trigonometric functions • Applications of transcendental functions Infinite Series • Definition of an infinite series • Tests for convergence • Taylor series and Maclaurin series • Applications of infinite series Why Wiingy is the best site for online Calculus homework help and test prep? If you are struggling with Calculus and are considering a tutoring service, Wiingy has the best online tutoring program for Calculus. Here are some of the key benefits of using Wiingy for online math homework help and test prep: Best Calculus teachers Wiingy’s award-winning math tutors are experts in their field, with years of experience teaching and helping students succeed. They are passionate about math and committed to helping students reach their full potential. Find an amazing tutor for calculus on Wiingy right away! 24/7 Calculus help With Wiingy, you can get math help whenever you need it, 24 hours a day, 7 days a week. As opposed to in-person tutoring, our tutors are available online so you can get the help you need when you need it most. Better Calculus grades Our math tutoring program is designed to help students improve their grades and succeed in the class. 
Our tutors will work with you to identify your strengths and weaknesses and develop a personalized plan to help you reach your goals. Interactive and flexible sessions Our math tutoring sessions are interactive and flexible, so you can learn at your own pace and in a way that works best for you. You can ask questions, get feedback on your work, and get help with any specific topics, whether it’s key concepts or advanced concepts. Calculus worksheets and other resources In addition to tutoring sessions, Wiingy also provides students access to various math formula sheets and worksheets. Wiingy also offers a math exam guide. These free resources can help you to learn new concepts, practice your skills, and prepare for the math exam. Progress tracking Our private online math tutoring platform provides parents and students with progress-tracking tools and reports. This will help them track the student’s progress and identify areas where they need additional help. Find Calculus tutors at a location near you Essential information about your Calculus Average lesson cost: $28/hr Free trial offered: Yes Tutors available: 1,000+ Average tutor rating: 4.8/5 Lesson format: One-on-One Online
{"url":"https://wiingy.com/tutoring/subject/calculus-tutors/","timestamp":"2024-11-05T12:26:05Z","content_type":"text/html","content_length":"497146","record_id":"<urn:uuid:80ede78e-9e87-4e05-bbc3-871b6b2acfb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00630.warc.gz"}
Physics: Find the electric field at any point on the y-axis, and find the electric field as a function of the radial distance. 5.) A square of paper measuring a on a side carries a total charge +q which is uniformly distributed over its surface. The square lies in the x-y plane with its center at the origin and its sides parallel to the coordinate axes. Find the electric field at any point on the y-axis with y > a. Show that this square looks like a point charge for the case y >> a. 9.) The electric charge of the proton is not concentrated at a point but rather distributed over a volume. According to experimental investigations at the Stanford Linear Accelerator, the charge distribution of the proton can be approximated by an exponential function ρ(r) = ρ₀ e^(−r/b) (the prefactor is elided in the source; call it ρ₀), where r is the radial position inside the proton and b is a constant equal to 0.23 × 10^(−15) m. Find the electric field as a function of the radial distance. What is the magnitude of the electric field at r = 1 × 10^(−15) m? Compare the electric field strength you find to that of a point charge of magnitude e. At what distances r do these two differ by 10% or more?
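For problem 9, a hedged numerical sketch, under the assumption that the elided prefactor ρ₀ is fixed by requiring the proton's total charge to equal e (which gives ρ₀ = e / (8πb³)): Gauss's law then yields the enclosed charge in closed form, Q(r) = e[1 − e^(−r/b)(1 + r/b + r²/2b²)], and the field E(r) = kQ(r)/r².

```python
import math

K = 8.9875517873681764e9    # Coulomb constant, N*m^2/C^2
E_CHARGE = 1.602176634e-19  # elementary charge, C
B = 0.23e-15                # decay length b, m

def enclosed_charge(r):
    """Charge inside radius r for rho(r) = rho0*exp(-r/b), normalized to total charge e."""
    x = r / B
    return E_CHARGE * (1.0 - math.exp(-x) * (1.0 + x + 0.5 * x * x))

def field_distributed(r):
    """Field of the exponential distribution, via Gauss's law."""
    return K * enclosed_charge(r) / r**2

def field_point(r):
    """Field of a point charge e at the same distance, for comparison."""
    return K * E_CHARGE / r**2

r = 1e-15
print(field_distributed(r))                   # ~1.16e21 V/m
print(field_distributed(r) / field_point(r))  # ~0.81, i.e. ~19% below the point charge
```

Scanning r shows the two fields differ by 10% or more only inside roughly r ≲ 5.3b; well outside the proton they agree, as expected.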
{"url":"https://www.yourdissertation.co.uk/physics-find-the-electric-field-on-any-point-on-a-y-axis-and-find-the-electric-field-as-a-function-of-the-radial-distance-1/","timestamp":"2024-11-03T04:15:05Z","content_type":"text/html","content_length":"25255","record_id":"<urn:uuid:8b93f2c8-683c-4e08-8e31-86b462c8e2ec>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00058.warc.gz"}
Example 1. Simplify: (sin β + i cos β)^5 (cos α + i sin α)^4 ... | Filo Question asked by Filo student. Video solutions (1): learn from a 1-to-1 discussion with Filo tutors. Duration: 5 mins. Uploaded on: 9/10/2023. Question text (translated from Hindi): Example 1. Simplify: (sin β + i cos β)^5 (cos α + i sin α)^4. Updated on: Sep 10, 2023. Topic: Complex Number and Binomial Theorem. Subject: Mathematics. Class: Class 12. Answer type: Video solution: 1. Upvotes: 116. Avg. video duration: 5 min.
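The flattened exponents in the scraped title appear to be powers 5 and 4. Assuming the intended problem is (sin β + i cos β)^5 (cos α + i sin α)^4, writing sin β + i cos β = cis(π/2 − β) and applying De Moivre's theorem gives cis(π/2 + 4α − 5β) = −sin(4α − 5β) + i cos(4α − 5β). A quick numeric sanity check of that simplification (a sketch, not the tutor's video solution):

```python
import math

def lhs(a, b):
    """The original expression, evaluated directly with complex arithmetic."""
    return (math.sin(b) + 1j * math.cos(b))**5 * (math.cos(a) + 1j * math.sin(a))**4

def rhs(a, b):
    """The claimed simplified form: -sin(4a - 5b) + i*cos(4a - 5b)."""
    t = 4 * a - 5 * b
    return -math.sin(t) + 1j * math.cos(t)

a, b = 0.7, 0.3
print(abs(lhs(a, b) - rhs(a, b)) < 1e-9)  # True
```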
{"url":"https://askfilo.com/user-question-answers-mathematics/daahrnn-1-srl-kiijie-35353138373138","timestamp":"2024-11-14T14:06:09Z","content_type":"text/html","content_length":"404125","record_id":"<urn:uuid:81a99d1f-0cba-4996-a3ad-d285367fac66>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00430.warc.gz"}
Territorial Army Bharti 2022 @jointerritorialarmy.gov.in Territorial Army Bharti 2022: The Territorial Army invites applications for Army Officer Recruitment 2022; apply online at jointerritorialarmy.gov.in. More information on education qualification, age limit, selection process, how to apply, important dates, and other processes is given below. Also read the official advertisement before applying. Details: Territorial Army Bharti 2022 • Organization Name – Territorial Indian Army • Official Web Site – www.jointerritorialarmy.gov.in • Post Name – Territorial Army • Job Location – All over India • Application Mode – Online • Application Start Date – 1st July, 2022 • Application Last Date – 31st July, 2022 Territorial Army Bharti 2022 Applications are invited from gainfully employed young citizens for an opportunity of donning the uniform and serving the nation as Territorial Army Officers, based on the concept of enabling motivated young citizens to serve in a military environment without having to sacrifice their primary professions. You can serve the nation in two capacities – as a civilian and as a soldier. No other option allows you such an expanse of experiences. Territorial Army Bharti 2022 Conditions of Eligibility: Education Qualification, Age Limits & Salary/Pay Scale • Please check the official notification. • The standard of the paper in Elementary Mathematics will be of Matriculation level. The standard of papers in other subjects will approximately be such as may be expected of a graduate of an Indian university. Paper – I. Reasoning and Elementary Mathematics. (a) Part – 1. Reasoning. The question paper will be designed to test the candidate's ability to complete sequences, making logical conclusions based on simple patterns of numbers, statements, figures, letters, etc., as may be expected of a rational thinking person without any special study of the subject. 
(b) Part – 2. Elementary Mathematics. (i) Arithmetic. Number System – natural numbers, integers, rational and real numbers. Fundamental operations – addition, subtraction, multiplication, division, square roots, decimal fractions. (ii) Unitary Method. Time and distance, time and work, percentages, application to simple and compound interest, profit and loss, ratio and proportion, variation. (iii) Elementary Number Theory. Division algorithm, prime and composite numbers. Tests of divisibility by 2, 3, 4, 5, 9 & 11. Multiples and factors, factorization theorem, HCF and LCM. Euclidean algorithm, logarithms to base 10, laws of logarithms, use of logarithmic tables. (iv) Algebra. Basic operations, simple factors, remainder theorem, HCF, LCM, theory of polynomials, solutions of quadratic equations, relations between their roots and coefficients (only real roots to be considered). Simultaneous linear equations in two unknowns – analytical and graphical solutions. Simultaneous linear equations in two variables and their solutions. Practical problems leading to two simultaneous linear equations or inequations in two variables, or quadratic equations in one variable, and their solutions. Set language and set notation, rational expressions and conditional identities, laws of indices. (v) Trigonometry. sin x, cos x, tan x when 0° < x < 90°. Values of sin x, cos x and tan x for x = 0°, 30°, 45°, 60° & 90°. Simple trigonometric identities. Use of trigonometric tables. Simple cases of heights and distances. (vi) Geometry. Lines and angles, plane and plane figures, theorems on: • Properties of angles at a point. • Parallel lines. • Sides and angles of a triangle. • Congruency of triangles. • Similar triangles. • Concurrence of medians and altitudes. • Properties of angles, sides and diagonals of a parallelogram, rectangle and square. • Circles and their properties, including tangents and normals. • Loci. (vii) Mensuration. 
Areas of squares, rectangles, parallelograms, triangles and circles. Areas of figures which can be split up into these figures (Field Book). Surface area and volume of cuboids, lateral surface area and volume of right circular cylinders. Surface area and volume of spheres. (viii) Statistics. Collection and tabulation of statistical data, graphical representation – frequency polygons, bar charts, pie charts, etc. Measures of central tendency. Paper – II. General Knowledge and English. Part – 1. General Knowledge. General knowledge including knowledge of current events and such matters of everyday observation and experience in their scientific aspects as may be expected of an educated person who has not made a special study of any scientific subject. The paper will also include questions on the history of India and geography of a nature which candidates should be able to answer without special study. Part – 2. English. The question paper will be designed to test the candidates’ understanding of English and workmanlike use of words. Questions in English are from synonyms, antonyms, reading comprehension, para jumbles, error spotting, jumbled sentences, sentence correction and fill in the blanks. Territorial Army Bharti 2022 Application Fees: Candidates are required to pay a fee of Rs 200/- (Rupees two hundred only). Territorial Army Bharti 2022 Selection Procedure • Candidates whose application forms are found correct will be called for screening (a written exam followed by an interview, only if the written exam is passed) by a Preliminary Interview Board (PIB) at the respective Territorial Army Group Headquarters. • Successful candidates will further undergo tests at a Service Selection Board (SSB) and Medical Board for final selection. • Vacancies for male and female candidates will be determined as per organisational requirement. How To Apply for Territorial Army Bharti 2022? • Candidates are required to apply online by using the website www.jointerritorialarmy.gov.in. 
Territorial Army Bharti 2022 Important Dates 1. Online Application start date – 1st July, 2022 2. Online Application last date – 31st July, 2022 3. WRITTEN EXAMINATION DATE – 25 SEP 2022 Territorial Army Bharti 2022 Important Jobs-Links Official Notification territorial army Recruitment 2022 – View Here Apply Online – Click Here
{"url":"https://dgondwana.in/territorial-army-bharti-2022/","timestamp":"2024-11-11T00:40:28Z","content_type":"text/html","content_length":"207591","record_id":"<urn:uuid:e85ec4f7-8dd1-4619-92bc-b0399d7d036b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00041.warc.gz"}
How to Make Math Lessons Fun and Easy Learn how to make math lessons fun and easy for students. Discover creative strategies to reduce math anxiety and boost engagement through games, real-life examples, and personalized tutoring. Many students suffer from math anxiety, a condition in which students are unable or unwilling to understand math and have low self-confidence. It is important to tackle math anxiety early on so that it does not persist into adulthood. Positive reinforcement and other methods, such as investing in math tutoring, are key to preventing math anxiety. It's also important to make math fun. Students will associate a subject with fun and engagement, and their confidence levels will rise when they discover they understand it. Many young students tend to have negative feelings towards math lessons. Whenever it's time to learn some mathematics, it is common to hear, "It's boring" or even "I hate math." However, students and families that learn how to make math easy and fun can make lessons more exciting and dispel any negative thoughts/feelings associated with math. Suppose you want to achieve this feat. In that case, you can begin by understanding why math is boring. Afterward, you can devise tailor-made solutions that will help you make math fun, enjoyable, and engaging. Here is a complete guide to assist you on this educational journey. Why Learners Find Math Lessons Boring and Unenjoyable Math is associated with fear, confusion, and boredom, but why is this the case? Here are the three leading reasons why math is boring to some students: Students consider math to be too abstract The most common answer to why students find math classes boring is that they consider these classes irrelevant and too abstract. Learners may see no concrete/physical aspect of learning and applying math. As a result, these students may think it is boring because it doesn't relate to their everyday experiences. 
While simple math concepts like subtraction and counting make sense in the real world, students often struggle to understand higher mathematics processes such as trigonometry or algebra. The approach is what matters, however. One teacher proved that math can be made more accessible to learners and more connected to the real world by telling stories. Students tend to struggle when learning math Students who have difficulty understanding math concepts can quickly become disjointed or discouraged when learning math. When this struggle occurs for an extended period, the students will associate math with boredom. Some students lack interest in mathematics Different students have different interests. Some love science and math-related studies, whereas others prefer languages, arts, and philosophy. Suppose a student has more interest in subjects other than math. In that case, they may end up focusing more on what they love and becoming uninterested in learning mathematics. How to Make Math Fun and Easy in 10 Straightforward Ways When it comes to teaching math and making the sessions less dull and miserable, you'll need to think out of the box and develop tailor-made solutions to help your students learn to love math. Here are ten creative ways to make math concepts fun and easy to grasp. Find ways to diversify your math study sessions Some students are not comfortable enough to ask questions during math classes. However, students who are more introverted may benefit from learning in a group setting where they can discuss problems. Other students learn differently from their peers. Therefore, it is important to provide diverse lesson plans for students (especially when teaching math). Students grasp concepts differently, and it would be best to use various tactics to help your students and ensure they can easily understand math concepts. For instance, you can include flashcards and worksheets when learning mathematics. 
Using this combination, you can bridge learning gaps and make math fun and exciting. Add math games into your study sessions Math games help to create excitement and fun for your students when they're studying while also alleviating the boredom associated with repetitive math lessons. Nowadays, you can find online or physical games to improve engagement in math sessions. For example, your students can try out Simon Says "Geometry" to learn more about types of lines and angles. This game makes learning fun while also providing some physical exercise that breaks the monotony of learning math for your students and improving their math skills. Use reward-based learning Rewarding or incentivizing students as they study math can help them develop a better attitude towards this subject. Rewards help motivate students to try harder when learning new concepts and doing practice tests. As a parent, you can motivate your child to learn math by offering them a fun activity or extra screen time after they finish studying. You can also reward yourself with snacks or food between or after study sessions if you are a student. Get a professional tutor Some students study best through doing, while others learn better by watching. Each student is unique, and classrooms cannot cater to them all. Some may struggle while others succeed. If a student is struggling with a subject, individual attention is crucial to their success. Many classrooms lack the ability to meet with students one-on-one to help them understand the curriculum. Students can get one-on-one attention from a math tutor to help them understand math problems and ask any questions that they may have. If a struggling student feels embarrassed asking questions, it could be because he or she seems to be the only one who is not understanding. They can ask as many questions as they want without being embarrassed before their classmates or the whole class. The tutoring process is not rushed. 
Students will be given explanations that fit their learning style and won't feel pressured. Hiring a professional tutor can help make your math lessons more fun. Tutors know how to create customized math lessons that ensure you gain valuable mathematical knowledge while also enjoying the study sessions. Nowadays, you can find the best local tutors by signing up with Erudite Tuition. Incorporate technology into your math studies You can use modern technological tools like the CK-12 app to make math fun for students. The CK-12 app offers various workbooks, practice tests, and quizzes that interactively teach students math. You can also download other apps such as the Geometry Pad to learn math and geometry in a fun and enthralling way. Make math lessons a group/family affair Conducting group studies with your friends or family members, such as siblings, can help you gain interest and confidence in learning math. Group studies give you new insights into math while also enabling you to learn more quickly and fill in learning gaps. This way, math studies will be more enjoyable and easier to understand. Use real-life objects to reduce the abstract nature of math You can use various real-life items to develop physical learning tools for different math concepts. Since math can be boring due to its abstract nature, you can use pebbles, buttons, and base ten blocks to make math fun, concrete, and lively for your students. Shapes such as spheres and play money can also make concepts such as geometry, addition, subtraction, division, and multiplication easier for your students. Integrate other subjects Mixing your math lessons with other subjects you love, such as arts, science, or social studies, can help bring relevance and excitement to your studies. You will also learn about the bigger picture of how things work and develop a more profound love for math. Teachers can use certain techniques to integrate arts into math in many different ways. 
For example, they could ask students to draw lines and shapes to understand geometry, paint a wall clock to understand time, or even learn ratios by mixing paint. Use personal interests to make math fun Incorporating students' interests in math problems and equations is one of the best ways to get them excited about math. Talking probability? Choose their favourite team. Teaching money? Ask students to calculate how much money they will need each week to buy the latest toy or piece of technology they want. Making math word problems more interesting by including the names of your students can help. Make it funny if you are unsure. Students are more likely to recall a math concept with a humorous acronym or word problem about polka-dotted elephants. So let's have some fun in math class! You can pique your interest in math by incorporating personal interests into your study lessons or practice tests. For instance, you can use your favourite sports team to see their chances of winning a season or a competition when learning about probability. Also, adding your name to math word problems can make lessons interesting and enjoyable. Take a musical approach Music and songs help break the writing and reading monotony associated with math lessons and solving math problems. Nowadays, you can find numerous math songs teaching about common denominators, mean, median, range, distance formula, and other math concepts. Sometimes, all it takes to make math interesting and to get students excited about learning is some creativity. Many students find that the math lessons offered by the curriculum are boring. Therefore, it is essential to develop custom strategies and teachings that make mathematics more fun and enjoyable for the student. Erudite Tuition offers flexible and engaging tutoring for students who want to improve their math and other subjects. Our experienced tutors create personalised lessons that cater to each student's learning style, making math both fun and accessible. 
If you wish to learn math in a competent yet enjoyable manner, kindly consider getting in touch with Erudite Tuition today. Let us help make math a subject to look forward to, not dread.
{"url":"https://eruditetuition.com.au/blog/tutoring-services/make-math-lessons-fun-and-easy/","timestamp":"2024-11-05T10:06:15Z","content_type":"text/html","content_length":"54679","record_id":"<urn:uuid:2225947a-587d-44bd-a700-b576daf0cc95>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00866.warc.gz"}
Are All Perfect Numbers Even?/Progress While it is not known whether there exist any odd perfect numbers, several important facts have been established. It had been established by $1986$ that an odd perfect number, if one were to exist, would have over $200$ digits. By $1997$ that lower bound had been raised to $300$ digits. By $2012$ that lower bound had been raised again to $1500$ digits. By Euler's result on their form, an odd perfect number $n$ would have to be of the form: $n = p^a q^b r^c \cdots$ where: $p$ is a prime number of the form $4 k + 1$ for some $k \in \Z_{>0}$ $a$ is also of the form $4 k + 1$ for some $k \in \Z_{\ge 0}$ $q, r, \ldots$ are odd primes, and $b, c, \ldots$ are all even. An odd perfect number has:
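These structural results cannot be verified by brute force, but the defining property $\map \sigma n = 2 n$ can be checked for small $n$. A minimal stdlib Python sketch (an empirical illustration, not a proof) that lists the perfect numbers below a small bound and confirms that every one found is even:

```python
def divisor_sum(n):
    """Sum of all divisors of n, including n itself (sigma function)."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting a square-root divisor
                total += n // d
        d += 1
    return total

def perfect_numbers(limit):
    """All n < limit with sigma(n) = 2n."""
    return [n for n in range(2, limit) if divisor_sum(n) == 2 * n]

print(perfect_numbers(10000))  # -> [6, 28, 496, 8128], all even
```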
{"url":"https://proofwiki.org/wiki/Are_All_Perfect_Numbers_Even%3F/Progress","timestamp":"2024-11-07T22:21:48Z","content_type":"text/html","content_length":"45047","record_id":"<urn:uuid:67f07b38-0e6d-464f-9c44-b9683dcd15f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00400.warc.gz"}
Cricket Batting Percentage Calculation 15 Jul 2024 Popularity: ⭐⭐⭐ Batting Percentage Calculator This calculator provides the calculation of batting percentage for cricket players. Calculation Example: Batting percentage is a statistic used in cricket to measure the average number of runs scored by a batsman per innings. It is calculated by dividing the total number of runs scored by the batsman by the number of innings in which he was dismissed. Related Questions Q: What is the importance of batting percentage in cricket? A: Batting percentage is an important statistic in cricket as it provides an indication of a batsman’s ability to score runs consistently. Q: How is batting percentage used to compare different batsmen? A: Batting percentage is often used to compare different batsmen and to assess their overall performance. Calculation Expression Batting Frequency: The number of innings in which the batsman was dismissed is given by BF = M - NO. Batting Percentage: The batting percentage is given by BP = (R / BF) * 100. Calculated values Considering these as variable values: NO = 10.0, R = 1000.0, M = 100.0, the calculated value is: Batting Percentage = 1111.111
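The page's expressions are easy to reproduce directly. A minimal sketch of BF = M − NO and BP = (R / BF) × 100 (note that R / BF alone is the conventional cricket batting average; the ×100 follows this page's definition):

```python
def batting_percentage(runs, innings, not_outs):
    """BP = (R / BF) * 100, where BF = M - NO is the number of dismissals."""
    bf = innings - not_outs
    if bf <= 0:
        raise ValueError("batsman was never dismissed; the average is undefined")
    return (runs / bf) * 100.0

# The page's worked example: R = 1000, M = 100, NO = 10
print(round(batting_percentage(1000, 100, 10), 3))  # 1111.111
```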
Drum Waves When you hit a drum with your hand, the drum's membrane vibrates, which then causes the air around it to vibrate, and it is this vibrating air that we perceive as the sound of the drum. In the visualization above, you can see some of the different ways that a drum can vibrate. Each bit of the drum's surface pulls on the bits around it. What this means mathematically is that the surface of the drum satisfies the wave equation: \[\frac{\partial^2 u}{\partial t^2} = c^2 \left( \frac{\partial ^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y ^2} \right)\] Here, \(u\) represents the height of the drum's surface above its resting height, and \(c\) determines the speed at which waves propagate along the drum's surface, which depends on the membrane's material and tension. The fact that the drum is attached to the frame around its edge means that solutions have to satisfy \(u=0\) at the edge of the drum. To solve this equation for a circular drum as depicted above, we use much the same techniques that were used when determining the spherical harmonics. First, we convert the equation into a coordinate system that matches the symmetry of our system, namely polar coordinates. Next we apply separation of variables, which means that we assume that the solution is a product of functions that depend only on time, radius, and angle respectively. Then we solve the resulting equations and determine that the possible vibrations have the following form: \[\cos (c \lambda_{mn} t) J_m(\lambda_{mn}r) \sin(m\theta),\] where the cosine can be replaced with sine and the sine with a cosine (independently). Here \(J_m\), \(m=0,1,2,\ldots\) is the infinite family of Bessel functions, and \(\lambda_{mn} = z_{mn} / a\), where \(z_{mn}\) is the \(n\)th zero of \(J_m\) and \(a\) is the radius of the drum. Each Bessel function has infinitely many zeros. Because the wave equation is linear, sums of multiples of solutions are also solutions.
The solutions above form a basis, and so any initial displacement and motion of the drum can be closely approximated by a combination of these basic solutions. Notice that each basic function has a different vibrational frequency in time, \(t\). These frequencies correspond to the different notes that the drum is capable of producing. The notes that a drum produces change depending on how it is struck. Now, we see that this is because different ways of striking the drum activate different basis functions. When we look at single basis functions, the situation is pretty simple. As \(m\) increases, the number of waves increases by one as you go around the disk. As \(n\) increases, the number of waves increases by one as you go from the center to the edge of the disk. Overall, the basis function waves are very symmetric and easy to understand. Things become richer and more complicated as we combine multiple basis functions together. In the second visualization, below, we see the combination of two basis functions. The balance parameter indicates how much the combination leans toward one basis function versus the other. A balance parameter of 0.5 indicates an even balance between the two. When creating this visualization I wasn't sure if there were any Javascript packages available to compute the required Bessel functions. I was very fortunate to find this open source implementation of the Bessel functions. It was easy to use, fast, and allowed for a smooth high-resolution rendering of the drum vibrations. I also needed to determine several zeros of the Bessel functions to create the visualizations. I got the zeros offline using SciPy and then hard-coded them into the Typescript code.
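If SciPy is not available, the zeros \(z_{mn}\) can be computed from scratch. The sketch below is my own (not the article's TypeScript): it evaluates \(J_m\) by its power series and bisects for a sign change. The bracket \([2, 3]\) for the first zero of \(J_0\) is an assumption, though a well-known safe one.

```python
import math

def bessel_j(m, x, terms=40):
    """J_m(x) via its power series (plenty of terms for small x)."""
    return sum(
        (-1) ** k / (math.factorial(k) * math.factorial(k + m)) * (x / 2) ** (2 * k + m)
        for k in range(terms)
    )

def bessel_zero(m, lo, hi, tol=1e-12):
    """Bisect for a zero of J_m inside [lo, hi]; assumes J_m changes sign there."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bessel_j(m, lo) * bessel_j(m, mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# z_01, the first zero of J_0, lies between 2 and 3.
z01 = bessel_zero(0, 2.0, 3.0)
```

Here z01 comes out near 2.404826; dividing by the drum radius \(a\) then gives \(\lambda_{01}\), exactly as in the text.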
Times table space challenge - Multiplication by URBrainy.com

Times table space challenge

Great set of 'space' pages to test how quickly the two, five and ten times tables are known. 4 pages
Comments on "Questions?: Quadratics Revisited: The Falling Object Model"

David Cox (2011-09-10): Dan did a pretty good job of explaining the process here: http://blog.mrmeyer.com/?p=8553

Unnamed commenter (2011-09-09): I know this is an older post, but I just ran across it. I'd love to make my own version of your "strobed" picture. Is it easy to explain what you did? I'll follow along with your explanation as best I can.

Frank Noschese (2011-02-11): Instead of the video+overhead, you could also give the kids a handout of the time lapse photo you made, have them measure directly on the handout and then use the applet with sliders to fit the curve.

David Cox (2011-02-11): I think I can do the same thing in GeoGebra. I can define variables a, b and c with sliders and let the kids mess with the regression themselves. The main reason I made the applet was to see how close I could get to the falling object model by analyzing a still picture. I definitely want my kids to do most of the lifting. I'm just still trying to figure out how to best do …

David Cox (2011-02-11): ...and that's why I put this out here before putting it in front of my kids. Thanks, Frank.

David Cox (2011-02-11): [This comment has been removed by the author.]

Frank Noschese (2011-02-11): So students are moving the white circle to the pictures of the ball (which you made), getting the height (which the computer gives them), and graphing them with a program to find the parabola (which the computer calculates). Seems like the kids aren't doing much thinking, just pointing and clicking. These are middle school kids, right? Do they know how the computer calculates the height? Do they know how the computer fits the parabola? Here's what I might do. Give each kid an overhead transparency and marker to put onto the computer screen to mark the position of the ball every frame (or every 2, 5, n frames or whatever). Given the initial height of the ball, have them use proportions to determine the heights of the ball on their transparency. Then go to Data Flyer (http://www.shodor.org/interactivate/activities/DataFlyer/) to input the points and use the manual curve fit sliders to fit the function to their data. They will have a better feel for how each coefficient manipulates the curve. Repeat video analysis for ball tossed straight up. Repeat for ball tossed between two people. What's the same/different about the equations for all three vertical motions?

David Cox (2011-02-07): @Dan My kids will most likely have a go at it. I think I'm going to show the video, add timecode and cut it off before the ball hits the ground. @Frank Place the white circle around each strobe and record the distance from the ground. Elapsed time between strobes is 1/6 second. Enter the ordered pairs into the second applet and use the fitpoly function to do the quadratic regression.

Frank Noschese (2011-02-07): I can't figure out how to use the applet. Help?

Dan (2011-02-03): Pretty nifty. It is intuitive to me on how to use the applets together, will it be to 7th/8th graders? Probably? I like it though. Are you going to show the video first? Or let them have at the applets?
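The quadratic regression that the applet's fitpoly function performs can be sketched without any applet. The code below is illustrative (not the applet's source), run on hypothetical free-fall data sampled at the 1/6-second strobe interval mentioned in the comments:

```python
def fit_quadratic(ts, hs):
    """Least-squares fit of h ≈ a*t^2 + b*t + c via the 3x3 normal equations."""
    s = [sum(t ** k for t in ts) for k in range(5)]          # power sums of t
    A = [[s[4], s[3], s[2]],
         [s[3], s[2], s[1]],
         [s[2], s[1], s[0]]]
    rhs = [sum(h * t ** 2 for t, h in zip(ts, hs)),
           sum(h * t for t, h in zip(ts, hs)),
           sum(hs)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda row: abs(A[row][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for row in range(col + 1, 3):
            f = A[row][col] / A[col][col]
            for c in range(col, 3):
                A[row][c] -= f * A[col][c]
            rhs[row] -= f * rhs[col]
    coeffs = [0.0, 0.0, 0.0]
    for row in (2, 1, 0):
        coeffs[row] = (rhs[row] - sum(A[row][c] * coeffs[c]
                                      for c in range(row + 1, 3))) / A[row][row]
    return coeffs  # [a, b, c]

# Hypothetical strobe data: drop from 2 m, coefficient g/2 = 4.9, frames 1/6 s apart.
ts = [k / 6 for k in range(7)]
hs = [2.0 - 4.9 * t * t for t in ts]
a, b, c = fit_quadratic(ts, hs)
```

Since the synthetic data is exactly quadratic, the fit recovers a = -4.9, b = 0, c = 2 to machine precision.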
Fundamentals of Model Theory

by William Weiss, Cherie D'Mello

Publisher: University of Toronto, 1997
Number of pages: 64

This book provides an introduction to Model Theory which can be used as a text for a reading course or a summer project at the senior undergraduate or graduate level. It is also a primer which will give someone a self-contained overview of the subject, before diving into one of the more encyclopedic standard graduate texts.

Download or read it online for free here: Download link

Similar books

Hack, Hack, Who's There? A Gentle Introduction to Model Theory (David Reid, Smashwords): The skeleton of this book is a science fiction story without the usual mangling of physics. The flesh is composed of non-technical mainstream explanations and examples of the field of mathematics which deals with meaning, called Model Theory.

Model-Theoretic Logics (J. Barwise, S. Feferman, Springer): The subject matter of this book constitutes a merging of several directions in general model theory: cardinality quantifiers; infinitary languages; and, finally, that on generalized quantifiers and abstract characterizations of first-order logic.

Model Theory, Algebra and Geometry (D. Haskell, A. Pillay, C. Steinhorn, Cambridge University Press): The book gives the necessary background for the model theory and the mathematics behind the applications. Aimed at graduate students and researchers, it contains surveys by leading experts covering the whole spectrum of contemporary model theory.

Model Theory (C. Ward Henson, University of South Carolina): The purpose of this text is to give a thorough introduction to the methods of model theory for first order logic. Model theory is the branch of logic that deals with mathematical structures and the formal languages they interpret.
Numerical Methods Lab 1 - Finite Difference - Duplicate

Lab 1 - Finite Difference

A Python code was created to model a problem and find its solution. The finite difference method (FDM) is implemented in the code, which contains methods to discretise the problem; a three-point centred scheme is used, after which the Neumann and Dirichlet boundary conditions are outlined. Approximations were found and compared to analytical solutions, with the error displayed on a log-log chart. It is found that the centred approximation is more accurate than the backwards approximation. This confirms the theory, which states that the centred approximation should be more accurate, as the order of truncation for a centred scheme is two while for a backwards scheme it is one. This will be referred to later in section 1.3.

The objective is to find the approximate solution to the problem by using the Finite Difference Method (FDM) and assess its accuracy by comparing it to the analytical solution, which is also calculated with Python. The context is therefore to verify whether the approximate method is accurate enough to justify not using the analytical solution, which is calculated manually and thus takes more time.

Three methods of approximating derivatives were used: forward, backward and centred approximation. These are outlined below.

An algorithm was written in Python using NumPy for the mathematical functions. The following steps were used to discretise the problem: the three-point centred approximation was used to discretise the differential equation everywhere except at the extremities. The boundary conditions had to be taken into account in order to have a unique solution: Dirichlet is the lower boundary condition at the beginning of the domain and Neumann is the upper boundary condition at the end of the domain. The analytical solution was calculated to test the error in the approximate solution.
The analytical solution was obtained by integrating the ODE twice and applying the boundary conditions. Once the analytical solutions for the three source terms were found and input into the Python code, they were compared to their respective approximate results. The method of comparison was a log-log graph representing the linear evolution of the error against the inverse of the grid spacing.

Figure 1: One-dimensional representation

Consider the following strong form, defined for x ∈ ]0, L[, with L = 1. The problem parameters K, L and qn are equal to 1, while the source term r(x) corresponds to three options.

1.2 Discretisation of the ODE

The first step of the FDM is to discretise the differential equation; a 3-point centred approximation is used. The general formula of the first derivative of a function u(x) at a general point x is:

In using this numerical approximation, some error must be accounted for. This is known as truncation error. Truncation error arises when certain numerical methods are used, and that was the case when approximating this ODE. All error beyond the second order was regarded as outside the domain of interest in this project. The truncation error for the centred approximation is of 2nd order. The error was also calculated for Neumann boundary conditions in both centred and backwards boundary schemes; the orders of truncation were two and one respectively.

Displayed below are the graphs that represent the results of the methods outlined in the introductory 1-D Problem section. Firstly, the approximate numerical solutions are shown for each of the three source terms, followed by the analytical algebraic solutions; finally both solutions are compared via log-log graphs.

2.1. Approximate Solutions

Below, in Figures 2, 3 and 4, the backwards and centred approximations of the ODE 1-D problem for each of the source terms are shown.
The curves were plotted for the number of points n = [10, 20, 40, 80, 160] respectively.

Figure 2: Approximate solution for the source term r(x) = 1
Figure 3: Approximate solution for the source term r(x) = sin(π*x/L)
Figure 4: Approximate solution for the source term r(x) = sin(π*x/2L)

For each of the three source terms, ten curves are plotted: five centred approximations and five backwards approximations. Each of the five curves had a different number of sampling points; as mentioned above, the range was 10, 20, 40, 80, 160. It was noted that for each of the three source terms, the lowest curve was produced by the ten-point sample, and the gradient of the curves increased as the number of sample points increased.

The problem given is shown below. Analytical solutions were found by manually integrating the expression and substituting in each of the source terms and boundary conditions. Figure 5 shows the result of these solutions being evaluated and plotted through Python.

Figure 5: Algebraic solutions for the three different source terms

2.3. Error between analytical and approximate solutions

To determine the accuracy of the approximation and to compare the backward and centred approximations, a logarithmic graph was plotted, taking the absolute value of the distance between the algebraic curve and the approximate curve as the ordinate and the inverse of the distance between sampling points as the abscissa. The results for the three source terms are shown below.

Figure 6: Centred and backwards error, r(x) = 1 (log-log graph)
Figure 7: Centred and backwards error, r(x) = sin(π*x/L) (log-log graph)
Figure 8: Centred and backwards error, r(x) = sin(π*x/2L) (log-log graph)

The first source term was deemed the most accurate, as its centred-approximation error was significantly lower than any other error presented; however, it also gave the most substantial difference between the centred and backwards approximations.
The accuracy of the numerical methods was found to be dependent on the order of truncation of the approximation as well as the number of points (n) considered. For this report, a range of values n = 10, 20, 40, 80, 160 was selected. The accuracy of the approximations increased as the value of n increased. For source term u1, it was found that there was no difference between the centred and backwards approximations. Truncation error theory states that the centred approximation should be more accurate than the backwards approximation, because the centred approximation is second order and the backwards approximation is first order. This is confirmed in this project, as for all three source terms the centred approximation error was less than or equal to that of the backwards approximation.
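The report's strong form and source code are not reproduced in the text above, so the following is a hedged reconstruction: assuming the standard model problem -K u''(x) = r(x) on ]0, L[ with Dirichlet condition u(0) = 0 and Neumann condition K u'(L) = qn, a centred interior scheme closed with a backwards (first-order) Neumann difference can be sketched as:

```python
def solve_fdm(n, K=1.0, L=1.0, qn=1.0, r=lambda x: 1.0):
    """Centred-interior FDM for -K u'' = r(x), with u(0) = 0 (Dirichlet)
    and K u'(L) = qn imposed by a backwards difference (Neumann).
    The strong form is an assumption; the lab's exact problem is not shown."""
    h = L / (n - 1)
    xs = [i * h for i in range(n)]
    lo = [0.0] * n; di = [0.0] * n; up = [0.0] * n; rhs = [0.0] * n
    di[0] = 1.0                          # Dirichlet row: u(0) = 0
    for i in range(1, n - 1):            # -K (u[i-1] - 2 u[i] + u[i+1]) / h^2 = r(x_i)
        lo[i] = -K / h ** 2
        di[i] = 2.0 * K / h ** 2
        up[i] = -K / h ** 2
        rhs[i] = r(xs[i])
    lo[-1] = -K / h                      # Neumann row: K (u[n-1] - u[n-2]) / h = qn
    di[-1] = K / h
    rhs[-1] = qn
    # Thomas algorithm for the tridiagonal system.
    for i in range(1, n):
        f = lo[i] / di[i - 1]
        di[i] -= f * up[i - 1]
        rhs[i] -= f * rhs[i - 1]
    u = [0.0] * n
    u[-1] = rhs[-1] / di[-1]
    for i in range(n - 2, -1, -1):
        u[i] = (rhs[i] - up[i] * u[i + 1]) / di[i]
    return xs, u

# For r(x) = 1 with K = qn = 1, integrating twice gives u(x) = -x^2/2 + 2x.
xs, u = solve_fdm(41)
err = max(abs(ui - (-x * x / 2 + 2 * x)) for x, ui in zip(xs, u))
xs2, u2 = solve_fdm(81)
err2 = max(abs(ui - (-x * x / 2 + 2 * x)) for x, ui in zip(xs2, u2))
```

Under these assumptions, for r(x) = 1 the error is dominated by the first-order Neumann closure (roughly h/2 at n = 41) and shrinks as the grid is refined, which is consistent with the report's observation that the boundary scheme's order of truncation controls accuracy.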
ACO Seminar A family of sets is called union-free if there are no three distinct sets in the family such that the union of two of the sets is equal to the third set. An old problem of Moser asks: how large a union-free subfamily does every family of M sets have? In this talk I will prove that every family of M sets contains a union-free subfamily of size at least ⌊√(4M+1)⌋ - 1 and that this bound is tight for all M. This solves Moser's problem and proves a conjecture of Erdős and Shelah from 1972. I will also discuss some related results and open problems. Joint work with Jacob Fox and Benny Sudakov.
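The bound is easy to sanity-check by brute force on a toy example; the family below is mine, not from the talk.

```python
import math
from itertools import combinations

def is_union_free(sub):
    """True if no three distinct sets A, B, C in sub satisfy A | B == C."""
    members = set(sub)
    for a, b in combinations(sub, 2):
        u = a | b
        if u != a and u != b and u in members:
            return False
    return True

def largest_union_free(family):
    """Brute-force the size of the largest union-free subfamily (tiny inputs only)."""
    sets = list(family)
    for size in range(len(sets), 0, -1):
        if any(is_union_free(sub) for sub in combinations(sets, size)):
            return size
    return 0

# A toy family with M = 6 sets; the talk's bound guarantees a union-free
# subfamily of size floor(sqrt(4*6 + 1)) - 1 = 4.
family = [frozenset(s) for s in ({1}, {2}, {3}, {1, 2}, {1, 3}, {1, 2, 3})]
bound = math.isqrt(4 * len(family) + 1) - 1
best = largest_union_free(family)
```

For this particular family the largest union-free subfamily has size exactly 4 (for example {2}, {3}, {1,2}, {1,3}), matching the guaranteed lower bound for M = 6.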
Bresenham Line Algorithm: A Powerful Tool for Efficient Line Drawing | Saturn Cloud Blog

Bresenham Line Algorithm: A Powerful Tool for Efficient Line Drawing

As a data scientist or software engineer, you often encounter scenarios where you need to draw lines or perform line-related operations efficiently. One such algorithm that has stood the test of time is the Bresenham Line Algorithm. In this article, we will explore the inner workings of the Bresenham Line Algorithm, its applications, and how you can implement it to optimize line drawing in your projects.

What is the Bresenham Line Algorithm?

The Bresenham Line Algorithm is an efficient method for drawing lines on a grid-based display or calculating line-related operations. It was developed by Jack E. Bresenham in 1962 and has since become a fundamental algorithm in computer graphics and image processing. The primary advantage of the Bresenham Line Algorithm is its ability to determine the points of a line that should be plotted to achieve the most accurate and efficient line representation. Unlike other line-drawing algorithms, such as the Digital Differential Analyzer (DDA) algorithm, which uses floating-point calculations, the Bresenham algorithm operates solely on integer arithmetic, making it faster and more suitable for implementation on resource-constrained systems.

How Does the Bresenham Line Algorithm Work?

The Bresenham Line Algorithm uses the concept of integer-based error accumulation to calculate the most appropriate pixels to plot for a given line. Here's a step-by-step breakdown of how the algorithm works:

1. Accept the coordinates of the start and end points of the line: (x1, y1) and (x2, y2), respectively.
2. Determine the differences between the x and y coordinates: dx = x2 - x1 and dy = y2 - y1.
3. Calculate the decision parameter, also known as the error term: d = 2 * dy - dx.
4. Initialize two variables: x = x1 and y = y1, which represent the current coordinates being plotted.
5. Begin plotting the line by setting the initial pixel at (x, y).
6. Iterate over the x-axis from x1 to x2:
   □ If d > 0, increment y by 1 and update d using the formula: d = d + 2 * (dy - dx).
   □ If d ≤ 0, update d using the formula: d = d + 2 * dy.
   □ Increment x by 1.
   □ Plot the pixel at (x, y).
7. Repeat step 6 until x reaches x2.

By iteratively updating the decision parameter and selectively incrementing the y-coordinate, the Bresenham Line Algorithm efficiently determines the points to plot, resulting in a smooth and accurate line representation.

Applications of the Bresenham Line Algorithm

The Bresenham Line Algorithm finds applications in various domains, including computer graphics, image processing, and computational geometry. Here are a few use cases where this algorithm is particularly useful:

Line Drawing

The primary application of the Bresenham Line Algorithm is in drawing lines on a digital display or canvas. Its efficiency makes it suitable for real-time rendering in applications such as computer-aided design (CAD), video games, and computer simulations.

Rasterization

In computer graphics, rasterization is the process of converting vector graphics into raster images or bitmaps. The Bresenham Line Algorithm plays a crucial role in rasterization by determining the pixels to plot for each line segment, resulting in a smooth and accurate image representation.

Line Clipping

Line clipping is the process of determining which portions of a line lie within a specific region of interest. The Bresenham Line Algorithm can be extended to handle line clipping efficiently, making it an essential component of algorithms like the Cohen-Sutherland line clipping algorithm.

Implementing the Bresenham Line Algorithm

To implement the Bresenham Line Algorithm, you can use any programming language of your choice.
Here's a Python implementation to get you started:

    def bresenham_line(x1, y1, x2, y2):
        # Work in a frame where the line is shallow (|slope| <= 1).
        dx = abs(x2 - x1)
        dy = abs(y2 - y1)
        steep = dy > dx
        if steep:
            # Swap x and y so we iterate over the longer axis.
            x1, y1 = y1, x1
            x2, y2 = y2, x2
        if x1 > x2:
            # Always draw left to right.
            x1, x2 = x2, x1
            y1, y2 = y2, y1
        dx = abs(x2 - x1)
        dy = abs(y2 - y1)
        error = dx // 2
        y = y1
        ystep = 1 if y1 < y2 else -1
        points = []
        for x in range(x1, x2 + 1):
            coord = (y, x) if steep else (x, y)
            points.append(coord)  # record the pixel
            error -= dy
            if error < 0:
                y += ystep
                error += dx
        return points

For example, bresenham_line(0, 0, 5, 3) returns [(0, 0), (1, 1), (2, 1), (3, 2), (4, 2), (5, 3)]. Feel free to adapt this code to your specific requirements or translate it into your preferred programming language.

The Bresenham Line Algorithm is a powerful tool for efficient line drawing and line-related operations. Its ability to determine the most accurate pixels to plot using integer arithmetic makes it a popular choice in computer graphics, image processing, and computational geometry. By understanding the inner workings of the Bresenham Line Algorithm and its applications, you can leverage its efficiency to optimize line drawing in your projects. So go ahead, implement the Bresenham Line Algorithm, and explore the possibilities it offers for smooth and accurate line representations.
Rediscovering Trigonometry Part 4: More Useful Formulas Click here to see part one of this four week series. Click here to see part two of this four week series. Click here to see part three of this four week series. Now that we have discovered some useful trigonometric identities, we can continue to build on them and create many more. There are an infinite number of trigonometric identities out there (not all of them have been created of course), but we will stick to two in this post: the product-to-sum formulas and the sum-to-product formulas. Take the four angle addition/subtraction formulas we discovered in our first week. I will use A and B as our letters rather than alpha and beta. 1. sin(A + B) = sinAcosB + cosAsinB 2. sin(A – B) = sinAcosB – cosAsinB 3. cos(A + B) = cosAcosB – sinAsinB 4. cos(A – B) = cosAcosB + sinAsinB These can be messed with very easily to create some new formulas. For instance, adding together the first two formulas gives: sin(A + B) + sin(A – B) = sinAcosB + cosAsinB + sinAcosB – cosAsinB 2sinAcosB = sin(A + B) + sin(A – B) sinAcosB = [sin(A + B) + sin(A – B)]/2 We now have a new identity. This can be now be used to solve a whole new range of problems and generate a whole new range of identities. The same steps can be done by subtracting the second equation from the first, adding the third and fourth together, and subtracting the fourth from the third. This creates the four product-to-sum identities. These four formulas can be rewritten in a way that converts the sum into a product. Let's rewrite the variables as the following: a + b = A a – b = B Making this change, we can then perform some operations to get a whole new set of formulas. These are called the sum-to-product identities. These can then be built upon to generate whole new sets of formulas as well. Though the actual mathematics here might be a bit complicated, the idea is simple. 
Mathematics is always continuing to be developed, and this can be done through building upon previous ideas to form new ideas that help solve new problems. Trigonometry is a great place to see this sort of thing happen.
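The product-to-sum identity derived above is easy to spot-check numerically. The sketch below (angles in radians, chosen arbitrarily) measures the largest deviation over a small grid:

```python
import math

def product_to_sum(a, b):
    """sin A cos B = [sin(A + B) + sin(A - B)] / 2, the identity derived above."""
    return (math.sin(a + b) + math.sin(a - b)) / 2

# Largest deviation between sin(A)*cos(B) and the product-to-sum form
# over a small grid of test angles.
max_diff = max(
    abs(math.sin(a) * math.cos(b) - product_to_sum(a, b))
    for a in (0.3, 1.1, 2.7)
    for b in (-0.5, 0.4, 1.9)
)
```

An exact identity should agree to floating-point roundoff, so max_diff comes out on the order of 1e-16.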
Lesson 9: You Ate the Whole Thing

Warm-up: Number Talk: What's the Sum? (10 minutes)

The purpose of this Number Talk is to elicit strategies and understandings students have for adding multiples of 5 to multiples of 5. These understandings help students develop fluency and will be helpful later in this unit when students learn to tell time to the nearest 5 minutes.

• Display one expression.
• “Give me a signal when you have an answer and can explain how you got it.”
• 1 minute: quiet think time
• Record answers and strategies.
• Keep expressions and work displayed.
• Repeat with each expression.

Student Facing

Find the value of each expression mentally.
• \(20 + 10 + 10 + 5\)
• \(30 + 25\)
• \(35 + 15\)
• \(15 + 25 + 15\)

Activity Synthesis

• “What patterns did you notice working with these numbers?”
• “How could the second problem help you think about the last one?” (I know that \(15 + 15 = 30\), so it was the same problem.)

Activity 1: Pizza to Share (15 minutes)

The purpose of this activity is for students to learn that when you partition a shape into 2, 3, or 4 equal pieces, the whole shape can be named as 2 halves, 3 thirds, or 4 fourths respectively. This activity uses the context of pizza to intentionally elicit “whole” from students to describe the situation. They will continue to deepen their understanding of a whole as a mathematical term during their study of fractions in grade 3. Students observe regularity in repeated reasoning (MP8) when they see that however many equal pieces the whole pizza is cut into, that number of pieces makes the whole.

Students begin the activity by looking at the problem displayed, rather than in their books. At the end of the launch, students open their books and work on the problem. This activity uses MLR5 Co-craft Questions. Advances: writing, reading, representing.

MLR5 Co-Craft Questions

• Display only the problem stem and image for Clare’s pizza, without revealing the questions.
• “Clare’s friends were going to share a pizza.
The image shows how they cut the pizza.”
• “Write a list of mathematical questions that could be asked about this situation.” (How many friends does she have? How much pizza can each friend get? Can they slice the pizza a different way? What would be a fair way to share the pizza with her friends?)
• 2 minutes: independent work time
• 2–3 minutes: partner discussion
• Invite several students to share one question with the class. Record responses.
• “What do these questions have in common? How are they different?”
• Reveal the additional context and questions for Clare’s pizza (students open books), and invite additional connections.
• “Why do you believe her friends are upset?” (The pizza is cut into thirds, so 3 slices would be the whole pizza. 3 thirds is the same as the whole thing.)
• “Now you and your partner will discuss what happened in each problem when students share a pizza.”
• 10 minutes: partner work time

Student Facing

Clare’s friends were going to share a pizza. The image shows how they cut the pizza.

1. Clare ate 3 slices and her friends got upset with her.
   1. Why are her friends upset?
   2. How many thirds did Clare eat?
   3. How much of the pizza was left?

   1. Priya will eat ________________________ of the pizza.
   2. Together they will eat ________________________ of the pizza.

   1. Each girl will eat ______________________ of the pizza.
   2. Together they will eat ________________________ of the pizza.

   1. How much pizza will each child eat? ________________________
   2. How much pizza will they eat in all?
________________________ Advancing Student Thinking If students disagree that 2 halves, 3 thirds, or 4 fourths is the same as the whole pizza, consider asking: • “If Jada ate this piece (one half) and Mai ate this piece (one half), how much of the pizza is left?” • “How could you show how much of the pizza each student ate?” Activity Synthesis • Invite students to share how much of the pizza each group of friends ate using halves, thirds, fourths, and quarters. • Record responses. • “Each group had the same-size pizza. Of all the students, which group of students will have the largest slices? Why do you think that is?” (Jada and Mai only have to share the pizza with 2 people, so they have the largest pieces.) • “What do you notice about the size of the slices and the number of students?” (The slices get smaller if there are more students.) • Share and record responses. Activity 2: Equal Shares of the Pie (20 minutes) The purpose of this activity is for students to recognize and describe pieces of circles using the words half of, a third of, and a quarter of. Students match shapes partitioned into halves and quarters to stories and partition shapes into quarters and halves based on directions. Students can continue to use one fourth when describing a piece, but encourage the use of a quarter as a way to describe the same piece. Representation: Access for Perception. To support access for students with color blindness, label the colors on visual displays and student tasks. Provide access to colored pencils, crayons, or markers that have labels to indicate color. Supports accessibility for: Visual-Spatial Processing • Groups of 2 • Give students access to colored pencils. • “You are going to read some stories with a partner about students sharing pies.” • “Then you will partition and color shapes on your own.” • 5 minutes: partner work time • As students work, encourage them to use precise language when talking with their partners. 
• Consider asking: “Is there another way you could say how much of the circle is shaded?" • 10 minutes: independent work time • Monitor for students who accurately shade the circles to share in the synthesis. Student Facing Write the letter of each image next to the matching story. 1. Noah ate most of the pie. He left a quarter of the pie for Diego. __________ 2. Lin gave away a half of her pie and kept a half of the pie for herself. __________ 3. Tyler cut a pie into four equal pieces. He ate a quarter of the pie. __________ 4. Mai sliced the pie to share it equally with Clare and Priya. __________ 1. How much of the pie will they each get? a _________________ 2. How much of the pie will they eat in all? _________________ 5. Now you try. □ Partition the circle into four equal pieces. □ Shade in a quarter of the circle red. □ Shade in the rest of the circle blue. How much of the circle is shaded? _____________________________ □ Partition the circle into 2 equal pieces. □ Shade one half of the circle blue. □ Color the other piece yellow. How much of the circle is yellow? ________________________ How much of the circle is shaded? ________________________ Activity Synthesis • Invite previously selected students to display their partitioned circles. Share at least one example of a circle partitioned into halves and one example of a circle partitioned into fourths. • “How are the circles you partitioned and shaded the same? How are they different?” (They are both circles. They are shaded with different colors. They are partitioned differently. The whole circle is shaded in both. All of the pieces are shaded in both.) • “How much of each circle is shaded?” (4 fourths, 2 halves, the whole circle) Lesson Synthesis “We have learned a lot about composing and decomposing shapes. Sometimes different-size pieces can make up a whole shape. Sometimes the whole shape is made up of equal-size pieces. 
We learned that these equal-size pieces of a whole have special names.” “Each of these shapes has pieces shaded. How would you name each one? Are there any pieces that you are not sure how to name? Explain.” (The first circle shows 2 halves because there are two equal pieces. The first hexagon has some pieces that are not thirds because each piece is a different size. I think the red trapezoid is half because you could use another trapezoid that's the same size to make the whole hexagon, but I'm not sure.) Cool-down: Partition a Circle (5 minutes) Student Facing We have learned a lot about composing and decomposing shapes. Sometimes the pieces make up a whole shape, but all of the pieces are not the same size. Sometimes the whole is partitioned into equal pieces and they have special names. We practiced partitioning shapes into halves, thirds, and fourths. We learned that halves, thirds, and fourths of the same shape can look different. We learned that we can say the whole shape is 2 halves, 3 thirds, 4 fourths, or 4 quarters. How can you use halves, thirds, fourths, or quarters to describe the pieces of these shapes? How can you use halves, thirds, fourths, or quarters to describe the whole shape?
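The arithmetic behind the lesson synthesis (2 halves, 3 thirds, or 4 fourths each compose one whole, and slices shrink as the number of sharers grows) can be checked with exact fractions. This is a teacher-side sketch, not student-facing material, and is not part of the original lesson:

```python
from fractions import Fraction

# n equal pieces of size 1/n always compose the whole pizza
for n in (2, 3, 4):
    piece = Fraction(1, n)
    assert n * piece == 1          # "n n-ths is the same as the whole thing"

# more sharers means smaller slices
assert Fraction(1, 2) > Fraction(1, 3) > Fraction(1, 4)

# Clare's pizza: after she eats 3 thirds, nothing is left for her friends
left = 1 - 3 * Fraction(1, 3)
assert left == 0
```

Using `Fraction` keeps the arithmetic exact, so "3 thirds equals one whole" holds without floating-point error.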
Nikola Šerman, prof. emeritus
Linear Theory, Transfer Function Concept Digest

10. ELEMENTARY BLOCKS

10.3 First Order Lag

When the pole of the transfer function (10.2) is a real negative number p = −ω₀ and the constant K equals ω₀, the transfer function becomes G(s) = ω₀/(s + ω₀) = 1/(1 + Ts). In control engineering, this elementary block is named first-order lag or PT1. Its only parameter is referred to as the time constant, denoted by T, where T = 1/ω₀.

Fig 10.9 Block presentation of the first-order lag

A first-order lag as an autonomous component is most often made by closing a negative unity feedback around an integrator, as shown in fig. 10.10.

Fig 10.10 First-order lag internal structure

The relationship between the output y(t) and input x(t) in the time domain is defined by the equation T·dy/dt + y(t) = x(t). The first-order lag step response is shown in fig. 10.11.

Fig 10.11 First-order lag step response to a unity step at the input (time constant T = 1 s).

The step response provides information on the shape of the monotonic summands in a complex system's transients (chapter 5). The shape is defined by the influence of the time constant T on the respective exponential function e^(σ₀t) with σ₀ < 0 (for all the converging parts of the system transients). The time period between point A and point B is always equal to T. Point A is any point where a tangent touches the curve; point B is the point where the same tangent intersects the horizontal asymptote. Point A can be located anywhere on the curve.

The frequency response of a first-order lag in the Nyquist diagram is presented in fig. 10.12. It is always a half-circle with unity diameter, regardless of the value of the time constant T.

Fig 10.12 First-order lag frequency response in Nyquist diagram.

The first-order lag frequency response in the Bode diagram is shown in fig. 10.13. In the low-frequency range, where ω ≪ 1/T, the amplitude α[dB] = 0 and the phase φ = 0°. In the high-frequency range, where ω ≫ 1/T, the block resembles an integrator.
The slope of the amplitude-frequency characteristic asymptote equals −20 dB/decade, and the phase shift approaches −90° across the high-frequency range. Any variation in T moves the characteristics horizontally; their shape always remains the same.

Fig 10.13 Frequency response of a first-order lag in Bode diagram for T = 1. Red straight lines denote high-frequency asymptotes.
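The A-to-B tangent property described above can be verified numerically. The sketch below (plain Python, no control-engineering library assumed) evaluates the unity step response y(t) = 1 − e^(−t/T) and checks that a tangent drawn at any point of the curve reaches the horizontal asymptote y = 1 exactly T seconds later:

```python
import math

def step_response(t, T=1.0):
    """Unity step response of a first-order lag: y(t) = 1 - exp(-t/T)."""
    return 1.0 - math.exp(-t / T)

def tangent_hits_asymptote(t0, T=1.0):
    """Time at which the tangent drawn at t0 intersects the asymptote y = 1."""
    y0 = step_response(t0, T)
    slope = math.exp(-t0 / T) / T       # dy/dt evaluated at t0
    return t0 + (1.0 - y0) / slope      # solve y0 + slope * (t - t0) = 1

# Regardless of where point A is chosen, B - A equals the time constant T.
for t0 in (0.0, 0.5, 2.0):
    assert abs(tangent_hits_asymptote(t0, T=1.0) - (t0 + 1.0)) < 1e-9
```

Algebraically, 1 − y(t₀) = e^(−t₀/T) and the slope at t₀ is e^(−t₀/T)/T, so their ratio is always T, which is exactly the geometric construction described in the text.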
CBSE 10th Class Mathematics Syllabus

Course Structure

I Term
• Unit I: Number System (11 marks)
• Unit II: Algebra (23 marks)
• Unit III: Geometry (17 marks)
• Unit IV: Trigonometry (22 marks)
• Unit V: Statistics (17 marks)
• Total: 90 marks

II Term
• Unit II: Algebra (23 marks)
• Unit III: Geometry (17 marks)
• Unit IV: Trigonometry (8 marks)
• Unit V: Probability (8 marks)
• Unit VI: Co-ordinate Geometry (11 marks)
• Unit VII: Mensuration (23 marks)
• Total: 90 marks

First Term Course Syllabus

Unit I: Number Systems

1. Real Numbers
• Euclid's division lemma
• Fundamental Theorem of Arithmetic - statements after reviewing work done earlier and after illustrating and motivating through examples
• Proofs of results - irrationality of √2, √3, √5; decimal expansions of rational numbers in terms of terminating/non-terminating recurring decimals

Unit II: Algebra

1. Polynomials
• Zeros of a polynomial
• Relationship between zeros and coefficients of quadratic polynomials
• Statement and simple problems on division algorithm for polynomials with real coefficients

2. Pair of Linear Equations in Two Variables
• Pair of linear equations in two variables and their graphical solution
• Geometric representation of different possibilities of solutions/inconsistency
• Algebraic conditions for number of solutions
• Solution of a pair of linear equations in two variables algebraically - by substitution, by elimination and by cross-multiplication method
• Simple situational problems must be included
• Simple problems on equations reducible to linear equations

Unit III: Geometry

1.
Triangles
• Definitions, examples, counter examples of similar triangles
• (Prove) If a line is drawn parallel to one side of a triangle to intersect the other two sides in distinct points, the other two sides are divided in the same ratio
• (Motivate) If a line divides two sides of a triangle in the same ratio, the line is parallel to the third side
• (Motivate) If in two triangles, the corresponding angles are equal, their corresponding sides are proportional and the triangles are similar
• (Motivate) If the corresponding sides of two triangles are proportional, their corresponding angles are equal and the two triangles are similar
• (Motivate) If one angle of a triangle is equal to one angle of another triangle and the sides including these angles are proportional, the two triangles are similar
• (Motivate) If a perpendicular is drawn from the vertex of the right angle of a right triangle to the hypotenuse, the triangles on each side of the perpendicular are similar to the whole triangle and to each other
• (Prove) The ratio of the areas of two similar triangles is equal to the ratio of the squares on their corresponding sides
• (Prove) In a right triangle, the square on the hypotenuse is equal to the sum of the squares on the other two sides
• (Prove) In a triangle, if the square on one side is equal to the sum of the squares on the other two sides, the angle opposite the first side is a right angle

Unit IV: Trigonometry

1. Introduction to Trigonometry
• Trigonometric ratios of an acute angle of a right-angled triangle
• Proof of their existence (well defined); motivate the ratios, whichever are defined at 0° and 90°
• Values (with proofs) of the trigonometric ratios of 30°, 45° and 60°
• Relationships between the ratios

2. Trigonometric Identities
• Proof and applications of the identity sin²A + cos²A = 1
• Only simple identities to be given
• Trigonometric ratios of complementary angles

Unit V: Statistics and Probability

1.
Statistics
• Mean, median and mode of grouped data (bimodal situation to be avoided)
• Cumulative frequency graph

Second Term Course Syllabus

Unit II: Algebra

3. Quadratic Equations
• Standard form of a quadratic equation ax² + bx + c = 0, (a ≠ 0)
• Solution of the quadratic equations (only real roots) by factorization, by completing the square and by using the quadratic formula
• Relationship between discriminant and nature of roots
• Situational problems based on quadratic equations related to day-to-day activities to be incorporated

4. Arithmetic Progressions

Unit III: Geometry

2. Circles
• Tangents to a circle motivated by chords drawn from points coming closer and closer to the point
• (Prove) The tangent at any point of a circle is perpendicular to the radius through the point of contact
• (Prove) The lengths of tangents drawn from an external point to a circle are equal

3. Constructions
• Division of a line segment in a given ratio (internally)
• Tangent to a circle from a point outside it
• Construction of a triangle similar to a given triangle

Unit IV: Trigonometry

3. Heights and Distances
• Simple and believable problems on heights and distances
• Problems should not involve more than two right triangles
• Angles of elevation/depression should be only 30°, 45°, 60°

Unit V: Statistics and Probability

2. Probability
• Classical definition of probability
• Simple problems on single events (not using set notation)

Unit VI: Coordinate Geometry

1. Lines (In two dimensions)
• Concepts of coordinate geometry, graphs of linear equations
• Distance formula
• Section formula (internal division)
• Area of a triangle

Unit VII: Mensuration

1.
Areas Related to Circles
• Motivate the area of a circle; area of sectors and segments of a circle
• Problems based on areas and perimeter/circumference of the above said plane figures
• In calculating the area of a segment of a circle, problems should be restricted to central angles of 60°, 90° and 120° only
• Plane figures involving triangles, simple quadrilaterals and circles should be taken

2. Surface Areas and Volumes
• Problems on finding surface areas and volumes of combinations of any two of the following −
• Problems involving converting one type of metallic solid into another and other mixed problems. (Problems with combinations of not more than two different solids to be taken.)
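The syllabus item "relationship between discriminant and nature of roots" (under Quadratic Equations, above) can be made concrete with a short sketch. Python is used here purely for illustration; the code is not part of the CBSE syllabus:

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 (a != 0), via the discriminant."""
    d = b * b - 4 * a * c          # the discriminant decides the nature of roots
    if d < 0:
        return []                  # d < 0: no real roots
    if d == 0:
        return [-b / (2 * a)]      # d = 0: one repeated real root
    r = math.sqrt(d)               # d > 0: two distinct real roots
    return [(-b - r) / (2 * a), (-b + r) / (2 * a)]

# x^2 - 5x + 6 = 0 factorizes as (x - 2)(x - 3), so the roots are 2 and 3.
assert solve_quadratic(1, -5, 6) == [2.0, 3.0]
```

The three branches correspond exactly to the three cases of the discriminant that the syllabus asks students to distinguish.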
Assign the diagonal elements of a matrix

SAS/IML programmers know that the VECDIAG function can be used to extract the diagonal elements of a matrix. For example, the following statements extract the diagonal of a 3 x 3 matrix:

proc iml;
m = {1 2 3, 4 5 6, 7 8 9};
v = vecdiag(m); /* v = {1,5,9} */

But did you know that you can also assign the diagonal elements without using a loop? Because SAS/IML matrices are stored in row-major order, the elements on the diagonal of an n x n matrix have the indices 1, n+2, 2n+3, ..., n²; that is, they start at 1 and are separated by a stride of n+1. In other words, the following statements assign the diagonal elements of a matrix:

start SetDiag(A, v);
   diagIdx = do(1, nrow(A)*ncol(A), ncol(A)+1);
   A[diagIdx] = v; /* set diagonal elements */
finish;

run SetDiag(m, 3:1); /* set diagonal elements to {3,2,1} */
run SetDiag(m, .); /* set diagonal elements to missing */

I've used this trick in several past blog posts (for example, in my post about how to compute the log-determinant of a matrix), but recently I saw some SAS/IML code that used loops to set the diagonal elements, as follows:

do i = 1 to ncol(m);
   m[i,i] = v[i]; /* set diagonal elements (not efficient) */
end;

As I have said before, to increase efficiency, replace loops over matrix elements with an equivalent vector operation. Assigning the diagonal elements of a matrix by using this trick is another example of how to avoid looping in the SAS/IML language.
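The same row-major trick carries over to other languages. As an aside that is not part of the SAS post, here is the equivalent in plain Python, treating a square matrix as a flat row-major list: a slice with stride n+1 picks out exactly the diagonal cells, so the diagonal can be assigned without an explicit loop:

```python
def set_diag(flat, n, values):
    """Assign the diagonal of an n x n matrix stored row-major in `flat`."""
    # 0-based indices 0, n+1, 2(n+1), ... are the diagonal positions
    flat[::n + 1] = values

m = [1, 2, 3,
     4, 5, 6,
     7, 8, 9]
set_diag(m, 3, [3, 2, 1])
assert m == [3, 2, 3,
             4, 2, 6,
             7, 8, 1]
```

The only difference from the IML version is that Python indexing is 0-based, so the diagonal starts at index 0 rather than 1.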
Guowei Lv

The title of this episode is “Side effects”, but in my opinion it is really about how to move side effects out of a function and make the function composable.

Side effects in the body of the function

If there are side effects in the body of a function, we can move them into the return value of the function and let the caller deal with them.

func computeWithEffect(_ x: Int) -> Int {
    let computation = x * x + 1
    print("Computed \(computation)")
    return computation
}

We can move the stuff to print into the return type:

func computeAndPrint(_ x: Int) -> (Int, [String]) {
    let computation = x * x + 1
    return (computation, ["Computed \(computation)"])
}

This way we can do:

let (computation, logs) = computeAndPrint(2)
logs.forEach { print($0) }

But there is one problem: we cannot compose computeAndPrint with itself, because of the extra String array in the return type. To solve this problem, we can invent a new operator >=>.

precedencegroup EffectfulComposition {
    associativity: left
    higherThan: ForwardApplication
    lowerThan: ForwardComposition
}

infix operator >=>: EffectfulComposition

func >=> <A, B, C>(
    _ f: @escaping (A) -> (B, [String]),
    _ g: @escaping (B) -> (C, [String])
) -> ((A) -> (C, [String])) {
    return { a in
        let (b, logs) = f(a)
        let (c, moreLogs) = g(b)
        return (c, logs + moreLogs)
    }
}

Well, this operator is implemented here just to handle the [String] in the return type, but it can be a general operator too. E.g. we can do:

func >=> <A, B, C>(
    _ f: @escaping (A) -> B?,
    _ g: @escaping (B) -> C? 
) -> ((A) -> C?) {
    return { a in
        guard let b = f(a) else { return nil }
        return g(b)
    }
}

So I would say the fish operator >=> can be used whenever two functions cannot be directly composed because of their return types. It has not really much to do with side effects per se; it is more of a tool for solving function-composition problems.
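For readers who don't write Swift, the same idea can be sketched in Python (my own illustration, not from the original post): a function returning a (value, logs) pair can be composed with another such function by threading the value through and concatenating the logs.

```python
def fish(f, g):
    """Compose f: A -> (B, logs) with g: B -> (C, logs), merging the logs."""
    def composed(a):
        b, logs = f(a)
        c, more_logs = g(b)
        return c, logs + more_logs
    return composed

def compute_and_print(x):
    """Python analogue of computeAndPrint: return the result plus its log."""
    y = x * x + 1
    return y, [f"Computed {y}"]

value, logs = fish(compute_and_print, compute_and_print)(2)
assert value == 26                            # 2*2+1 = 5, then 5*5+1 = 26
assert logs == ["Computed 5", "Computed 26"]
```

Since Python has no custom operators, `fish(f, g)` plays the role of `f >=> g`; the shape of the composition is otherwise identical.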
High School Math Finland - Behavior of polynomial function

Behaviour of a polynomial function

The behaviour of a polynomial function can be studied using its derivative. Since the derivative at a certain point is the slope of the tangent drawn at that point, much can already be deduced from the sign of the derivative. Where the graph of the function increases, all the tangents are ascending lines, so the slope is positive: the derivative is positive where the function increases. Where the graph of the function decreases, all the tangents are descending lines, so the slope is negative: the derivative is negative where the function decreases. At points where the graph of the function changes direction, the tangent is parallel to the x-axis and its slope is 0: the function changes direction at the zero points of the derivative.

Below is a graph of the function f and the tangents drawn onto it. At points where the function changes direction, the derivative is zero (these tangents are drawn in red). Before the first change of direction, the function is increasing and the tangents drawn there are ascending lines; the figure shows a green tangent at x = −1. Between the direction-change points (the zero points of the derivative) the values of the function decrease, and the tangents drawn in this area are descending lines; a blue tangent is drawn at x = 0.5.

We can study the behaviour of a function by examining its derivative.

Example 1

When is the function f increasing and when is it decreasing? We find the derivative and examine its signs. The zero point of the derivative is x = 1, and the graph of the derivative is an ascending line. Thus, the derivative has negative values when x < 1 and positive values when x > 1. From this we can deduce the behaviour of the function: it decreases when x ≤ 1 and increases when x ≥ 1 (the point where the derivative is zero is included in both intervals). Below is a graph of the function f in blue and a graph of the derivative f′ in green.
The extreme values are the values of the function that are reached at the zero points of the derivative. These points are called extreme points.

Example 2

Find the extreme points and extreme values of the function f. We find the derivative and then find its zeros. The zeros of the derivative are x = −1 and x = 2; these are the extreme points. The graph of the derivative is an upward-opening parabola, so we make a table of signs: below the signs of the derivative we write the behaviour chart of the function. When the derivative is positive the function increases, and when the derivative is negative the function decreases. At the first zero point of the derivative, x = −1, the function turns from increasing to decreasing, so this is a local maximum point. At the second zero point of the derivative, x = 2, there is a local minimum point.

Extreme values of a function

Example 3

Find the extreme values of the function g. We find the derivative and then find its zeros. The derivative has only one zero point, x = 2. The graph of the derivative is an upward-opening parabola, so it takes no negative values anywhere. From the behaviour chart we see that after the zero point of the derivative the function continues to increase. The function has no extreme values at all; x = 2 is a saddle point.

A continuous function attains its maximum and minimum values on a closed interval either at the endpoints of the interval or at the zero points of the derivative that lie inside the interval.

Example 4

Find the maximum and minimum value of the function f on the interval [2, 4]. The zeros of the derivative are x = 1 and x = 3; of these, only x = 3 belongs to the interval [2, 4]. We calculate the values of the function at the endpoints of the interval and at the zero point of the derivative inside it. At x = 4 the function attains the maximum value on the interval, 3, and at x = 3 it attains its minimum value, −1. Below is a graph of the function.
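The procedure of Example 4 can be sketched in code. The function itself is shown on the page only as an image, so the cubic below is a hypothetical stand-in chosen to match the stated data (derivative zeros at x = 1 and x = 3, f(3) = −1, f(4) = 3). The logic is the one stated above: on a closed interval, a continuous function attains its extrema either at the endpoints or at zeros of the derivative inside the interval.

```python
def f(x):
    # hypothetical cubic consistent with Example 4: f'(x) = 3(x - 1)(x - 3)
    return x**3 - 6*x**2 + 9*x - 1

# candidates on [2, 4]: the endpoints plus the interior derivative zero x = 3
# (the other zero, x = 1, lies outside the interval and is ignored)
candidates = [2, 3, 4]
values = [f(x) for x in candidates]   # [1, -1, 3]

assert max(values) == 3     # maximum attained at x = 4
assert min(values) == -1    # minimum attained at x = 3
```

Evaluating only three candidate points is enough precisely because the derivative's sign cannot change anywhere else inside the interval.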
Chern-Simons form Thanks, I hadn’t been aware of that. I added a remark on the fundamental work of Gelfand and Smirnov. some details on formulas and computations for Chern-Simons forms seems I created a stub for Weil homomorphism, but am running a bit out of steam… added a little bit of the $\infty$-Lie algebroid-description to Chern-Simons form. am starting curvature characteristic form and Chern-Simons form. But still working… One student in Zagreb had done a diploma (under Svrtan’s guidance) on the Gelfand-Smirnov approach few years ago, but then switched to the subject of automorphic forms/representation theory so we already forgot about most of the subject. It is part of a general picture of “formal geometry”. I have the feeling that Gosha Sharygin was telling me last year some new related “combinatorial” ideas on Gel’fand-Smirnov but I may mix it up. added a section (with some subsections) titled In oo-Chern-Weil theory to Chern-Simons form. • Andreas Čap, Keegan J. Flood, Thomas Mettler. Flat extensions and the Chern-Simons 3-form (2024). (arXiv:2409.12811). diff, v22, current
Discussion: Epidemiologic Principles - Help @Savvy Essay Writers

Epidemiologic Principles Worksheet Guidelines and Grading Rubric

Purpose

The purpose of this assignment is to help you to begin to understand and apply the important counts, ratios, and statistics presented in healthcare and epidemiological research. Remember to use the list of formulas presented prior to the problems and to carefully consider the purpose of each calculation and how it is interpreted. Also, do not forget the importance of the proper denominator, such as per 100, per 1,000, per 10,000, etc.

Course Outcomes

This assignment provides documentation of students' ability to meet the following course outcomes:

(CO #1) Distinguish the roles and relationships between epidemiology and biostatistics in the prevention of disease and the improvement of health. (PO#1)
(CO #2) Assimilate principles of epidemiology with scientific data necessary for epidemiologic intervention and draw appropriate inferences from epidemiologic data. (PO#1)
(CO #4) Utilize appropriate public health terminology and definitions of epidemiology necessary for intra-/interprofessional collaboration in advanced nursing practice. (PO#8)

Points

This assignment is worth 50 points.

Due Date

Submit your completed worksheet to the Dropbox by Sunday 11:59 p.m. MT of Week 3 as directed.

Requirements

1. Complete the Risk Calculation Worksheet located in Doc Sharing.
2. For each question, identify the correct answer.
3. Submit the worksheet to the Dropbox by 11:59 p.m. MT Sunday of Week 3.

Preparing the Worksheet

The following are best practices for preparing this assignment:

1. Prior to completing this worksheet, review the lessons, reading and course text up to this point. Also review the tables of calculations.
2. Each question is worth 5 points.
3. There is only one right answer for each of the 10 problems.
4. You will upload the completed worksheet to the Dropbox.

Epidemiological Formulas and Statistics

• Incidence (exposed): incidence of new cases of disease in persons who were exposed. Formula: (number exposed with disease) / (total number exposed)
• Incidence (unexposed): incidence of new cases of disease in persons who were not exposed. Formula: (number unexposed with disease) / (total number unexposed)
• Incidence of Disease: a measure of risk; the total number in a population with a disease divided by the total number of the population. Formula: (number with the disease) / (total population)
• Relative Risk: risk of disease in one group versus another, i.e., the risk of developing a disease after exposure; a value of 1 means no added risk. Formula: R(exposed) / R(unexposed) = [(# exposed with disease) / (total exposed)] ÷ [(# unexposed with disease) / (total unexposed)]
• Odds Ratio: a measure of exposure and disease outcome commonly used in case-control studies. Formula: [R(exposed) / (1 - R(exposed))] ÷ [R(unexposed) / (1 - R(unexposed))]
• Prevalence: the number of cases of a disease at a given time regardless of when each began (new and old cases). Formula: (persons with the disease / total population) × 1,000
• Attributable Risk: the difference in disease between those exposed and unexposed, calculated from prospective data; directly attributed to the exposure (if the exposure were gone, the disease would be gone). Formula: R(exposed) - R(unexposed)
• Crude Birth Rate: the number of live births per 1,000 people in the population. Formula: (# of births / estimated midyear population) × 1,000
• Crude Death Rate: the number of deaths per 1,000 people in the population. Formula: (# of deaths / estimated midyear population) × 1,000
• Fetal Death Rate: the number of fetal deaths (20 weeks or more gestation) per 1,000 live births. Formula: (# of fetal deaths / (# of live births + fetal deaths)) × 1,000
• Annual Mortality Rate: usually expressed for a specific disease, or for all causes, per 1,000 people for a year. Formula: (# of deaths from all causes (or a specific disease) / midyear population) × 1,000
• Cause-Specific Mortality Rate: deaths from a specific disease per 1,000 people for a year. Formula: (# of deaths from a specific disease / midyear population) × 1,000
• Case Fatality Rate: the percentage of individuals who have a specific disease and die within a specified time after diagnosis. Formula: (# of persons dying from a disease after diagnosis or within a set period / # of persons with the disease) × 100

Answer the following questions. Remember that there is only one right answer for each item.

1. The population in the city of Springfield, Missouri, in March, 2014, was 200,000. The number of new cases of HIV was 28 between January 1 and June 30, 2014. The number of current HIV cases was 130 between January 1 and June 30, 2014. The incidence rate of HIV cases for this 6-month period was
A. 7 per 100,000 of the population.
B. 14 per 100,000 of the population.
C. 28 per 100,000 of the population.
D. 85 per 100,000 of the population.

2. The prevalence rate of HIV cases in Springfield, Missouri, as of June 30, 2014, was
A. 14 per 100,000 of the population.
B. 28 per 100,000 of the population.
C. 79 per 100,000 of the population.
D. 130 per 100,000 of the population.

3. In a North African country with a population of 5 million people, 50,000 deaths occurred during 2014. These deaths included 5,000 people from malaria out of 10,000 persons who had malaria. What was the total annual mortality rate for 2014 for this country? (Please show your work.)

4. What was the cause-specific mortality rate from malaria? (Please show your work.)

5. What was the case-fatality percentage from malaria?

6.
Fill in and total the 2 x 2 table (with row and column totals) for the following disease parameters:
· Total number of people with lung cancer in a given population = 120
· Total number of people with lung cancer who smoked = 90
· Total number of people with lung cancer who did not smoke = 30
· Total number of people who smoked = 150
· Total number of people in the population = 350

Fill in the missing parameters based on the above information.

                  YES LUNG CANCER | NO LUNG CANCER | TOTALS
YES SMOKING       |               |                |
NO SMOKING        |               |                |
TOTALS            |               |                |

7. From Question 6, what is the total number of people with no lung cancer?
8. From Question 6, what is the total number of people who smoked but did not have lung cancer?
9. Set up the problem for relative risk based on the table in #6.
10. Calculate the relative risk.
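As a worked sketch (the numbers come from the problems above; the derived answers are mine and are not part of the original worksheet), the mortality questions and the smoking table can be computed directly from the listed formulas:

```python
# North African country example (questions 3-5)
population = 5_000_000
all_deaths = 50_000
malaria_deaths = 5_000
malaria_cases = 10_000

annual_mortality_rate = all_deaths * 1_000 / population      # deaths per 1,000
cause_specific_rate = malaria_deaths * 1_000 / population    # per 1,000
case_fatality_pct = malaria_deaths * 100 / malaria_cases     # percent
assert annual_mortality_rate == 10.0
assert cause_specific_rate == 1.0
assert case_fatality_pct == 50.0

# Smoking / lung-cancer table (questions 6-10): derive the missing cells
total_people = 350
smokers, smokers_with_cancer = 150, 90
with_cancer, nonsmokers_with_cancer = 120, 30

nonsmokers = total_people - smokers                   # 200
smokers_no_cancer = smokers - smokers_with_cancer     # 60 (question 8)
no_cancer = total_people - with_cancer                # 230 (question 7)

relative_risk = (smokers_with_cancer / smokers) / (nonsmokers_with_cancer / nonsmokers)
assert abs(relative_risk - 4.0) < 1e-12               # 0.6 / 0.15 = 4.0
```

A relative risk of 4 means smokers in this population were four times as likely to have lung cancer as non-smokers.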
Free The Theory Of X Calculator | Online The Theory Of X Calculator

The Theory Of X Calculator

Now, What Exactly Is A “The Theory Of X Calculator”?

A theory-of-x calculator is a tool that helps you find the value of a certain variable in a quadratic equation by completing the square.

The Theory Of X Calculator with steps

This tool is very simple to use. Follow these steps:

☛ Step 1: Enter the complete equation/value in the input box, i.e., across “Provide Required Input Value:”
☛ Step 2: Click “Enter Solve Button for Final Output”.
☛ Step 3: After that, a window will appear with the final output.

Formula for the theory of x

There is no definitive formula for the theory of x, as it is a broad and complex topic. However, some key concepts in the theory of x include the idea that people are motivated by a need to feel competent and that they will work to achieve goals that are important to them.

The Theory Of X Definition

The theory of x is a theory that states that the value of x is directly proportional to the amount of x that is present.

Example of the theory of x based on the formula

The theory of x is given by the formula x = (1/2)(A + B), where A and B are the two values being averaged. For example, if the two values being averaged are A = 1 and B = 2, then x = (1/2)(A + B) = (1/2)(1 + 2) = 1.5.
Quick start guide#

The graph_tool module provides a Graph class and several algorithms that operate on it. The internals of this class, and of most algorithms, are written in C++ for performance, using the Boost Graph Library. The module must of course be imported before it can be used. The package is subdivided into several sub-modules. To import everything from all of them, one can do:

>>> from graph_tool.all import *

In the following, it will always be assumed that the previous line was run.

Creating graphs#

An empty graph can be created by instantiating a Graph class:

>>> g = Graph()

By default, newly created graphs are always directed. To construct undirected graphs, one must pass a value to the directed parameter:

>>> ug = Graph(directed=False)

A graph can always be switched on-the-fly from directed to undirected (and vice versa) with the set_directed() method. The "directedness" of the graph can be queried with the is_directed() method:

>>> ug = Graph()
>>> ug.set_directed(False)
>>> assert ug.is_directed() == False

Once a graph is created, it can be populated with vertices and edges. A vertex can be added with the add_vertex() method, which returns an instance of a Vertex class, also called a vertex descriptor. For instance, the following code creates two vertices, and returns vertex descriptors stored in the variables v1 and v2.

>>> v1 = g.add_vertex()
>>> v2 = g.add_vertex()

Edges can be added in an analogous manner, by calling the add_edge() method, which returns an edge descriptor (an instance of the Edge class):

>>> e = g.add_edge(v1, v2)

The above code creates a directed edge from v1 to v2. A graph can also be created by providing another graph, in which case the entire graph (and its internal property maps, see Property maps) is copied:

>>> g2 = Graph(g)  # g2 is a copy of g

Above, g2 is a "deep" copy of g, i.e. any modification of g2 will not affect g.

Graph visualization in graph-tool can be interactive!
When the output parameter of graph_draw() is omitted, instead of saving to a file, the function opens an interactive window. From there, the user can zoom in or out, rotate the graph, and select and move individual nodes or node selections. See GraphWidget() for documentation on the interactive interface. If you are using a Jupyter notebook, the graphs are drawn inline if output is omitted. If an interactive window is desired instead, the option inline = False should be passed.

We can visualize the graph we created so far with the graph_draw() function.

>>> graph_draw(g, vertex_text=g.vertex_index, output="two-nodes.svg")

We can add attributes to the nodes and edges of our graph via property maps. For example, suppose we want to add an edge weight and node color to our graph. We first have to create two PropertyMap objects:

>>> eweight = g.new_ep("double")  # creates an EdgePropertyMap of type double
>>> vcolor = g.new_vp("string")   # creates a VertexPropertyMap of type string

And now we set their values for each vertex and edge:

>>> eweight[e] = 25.3
>>> vcolor[v1] = "#1c71d8"
>>> vcolor[v2] = "#2ec27e"

Property maps can then be used in many graph-tool functions to set node and edge properties, for example:

>>> graph_draw(g, vertex_text=g.vertex_index, vertex_fill_color=vcolor,
...            edge_pen_width=eweight, output="two-nodes-color.svg")

Property maps are discussed in more detail in the section Property maps below.

Adding many edges and vertices at once#

The vertex values passed to the constructor need to be integers per default, but arbitrary objects can be passed as well if the option hashed = True is passed. In this case, the mapping of vertex descriptors to vertex ids is obtained via an internal VertexPropertyMap called "ids". E.g. in the example above we have

>>> print(g.vp.ids[0])

See Property maps below for more details. It is also possible to add many edges and vertices at once when the graph is created.
For example, it is possible to construct graphs directly from a list of edges, e.g.

>>> g = Graph([('foo', 'bar'), ('gnu', 'gnat')], hashed=True)

which is just a convenience shortcut to creating an empty graph and calling add_edge_list() afterward, as we will discuss below. Edge properties can also be initialized together with the edges by using tuples (source, target, property_1, property_2, ...), e.g.

>>> g = Graph([('foo', 'bar', .5, 1), ('gnu', 'gnat', .78, 2)], hashed=True,
...           eprops=[('weight', 'double'), ('time', 'int')])

The eprops parameter lists the name and value types of the properties, which are used to create internal property maps with the values encountered (see Property maps below for more details). It is possible also to pass an adjacency list to construct a graph, which is a dictionary of out-neighbors for every vertex key:

>>> g = Graph({0: [2, 3], 1: [4], 3: [4, 5], 6: []})

We can also easily construct graphs from adjacency matrices. They need only to be converted to a sparse scipy matrix (i.e. a subclass of scipy.sparse.sparray or scipy.sparse.spmatrix) and passed to the constructor, e.g.:

>>> m = np.array([[0, 1, 0],
...               [0, 0, 1],
...               [0, 1, 0]])
>>> g = Graph(scipy.sparse.lil_matrix(m))

The nonzero entries of the matrix are stored as an edge property map named "weight" (see Property maps below for more details), e.g.

>>> m = np.array([[0, 1.2, 0],
...               [0, 0, 10],
...               [0, 7, 0]])
>>> g = Graph(scipy.sparse.lil_matrix(m))
>>> print(g.ep.weight.a)
[ 1.2 10.  7. ]

For undirected graphs (i.e. the option directed = False is given) only the upper triangular portion of the passed matrix will be considered, and the remaining entries will be ignored. We can also add many edges at once after the graph has been created using the add_edge_list() method.
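As an aside, the matrix-to-edge-list conversion just described can be sketched in plain Python. This is my own illustration of the idea, not graph-tool internals; the function name matrix_to_edges is invented:

```python
# Pure-Python sketch: turn a dense adjacency matrix into a weighted edge
# list, as the Graph constructor does with a sparse scipy matrix.  For
# undirected graphs only the upper triangle is used, mirroring the
# documented behaviour above.
def matrix_to_edges(m, directed=True):
    n = len(m)
    edges = []
    for i in range(n):
        start = 0 if directed else i  # skip the lower triangle if undirected
        for j in range(start, n):
            w = m[i][j]
            if w != 0:
                edges.append((i, j, w))
    return edges

m = [[0, 1.2, 0],
     [0, 0, 10],
     [0, 7, 0]]
print(matrix_to_edges(m))                  # [(0, 1, 1.2), (1, 2, 10), (2, 1, 7)]
print(matrix_to_edges(m, directed=False))  # [(0, 1, 1.2), (1, 2, 10)]
```

In the undirected case the entry m[2][1] = 7 is ignored, exactly as the documentation states for the lower triangle.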
It accepts any iterable of (source, target) pairs, and automatically adds any new vertex seen:

>>> g.add_edge_list([(0, 1), (2, 3)])

As above, if hashed = True is passed, the function add_edge_list() returns a VertexPropertyMap object that maps vertex descriptors to their id values in the list. See Property maps below. The vertex values passed to add_edge_list() need to be integers per default, but arbitrary objects can be passed as well if the option hashed = True is passed, e.g. for string values:

>>> g.add_edge_list([('foo', 'bar'), ('gnu', 'gnat')], hashed=True,
...                 hash_type="string")

or for arbitrary (hashable) Python objects:

>>> g.add_edge_list([((2, 3), 'foo'), (3, 42.3)], hashed=True,
...                 hash_type="object")

Manipulating graphs#

With vertex and edge descriptors at hand, one can examine and manipulate the graph in an arbitrary manner. For instance, in order to obtain the out-degree of a vertex, we can simply call the out_degree() method:

>>> g = Graph()
>>> v1 = g.add_vertex()
>>> v2 = g.add_vertex()
>>> e = g.add_edge(v1, v2)
>>> print(v1.out_degree())

For undirected graphs, the "out-degree" is a synonym for degree, and in this case the in-degree of a vertex is always zero. Analogously, we could have used the in_degree() method to query the in-degree. Edge descriptors have two useful methods, source() and target(), which return the source and target vertex of an edge, respectively.

>>> print(e.source(), e.target())

We can also directly convert an edge to a tuple of vertices, to the same effect:

>>> u, v = e
>>> print(u, v)

The add_vertex() method also accepts an optional parameter which specifies the number of additional vertices to create. If this value is greater than 1, it returns an iterator over the added vertex descriptors:

>>> vlist = g.add_vertex(10)
>>> print(len(list(vlist)))

Each vertex in a graph has a unique index, which is *always* between \(0\) and \(N-1\), where \(N\) is the number of vertices.
This index can be obtained by using the vertex_index attribute of the graph (which is a property map, see Property maps), or by converting the vertex descriptor to an int.

>>> v = g.add_vertex()
>>> print(g.vertex_index[v])
>>> print(int(v))

Edges and vertices can also be removed at any time with the remove_vertex() and remove_edge() methods,

>>> g.remove_edge(e)    # e no longer exists
>>> g.remove_vertex(v2) # the second vertex is also gone

When removing vertices, it is important to keep in mind some performance considerations: Because of the contiguous indexing, removing a vertex with an index smaller than \(N-1\) will invalidate either the last (fast == True) or all (fast == False) descriptors pointing to vertices with higher index. As a consequence, if more than one vertex is to be removed at a given time, they should always be removed in decreasing index order:

# 'vs' is a list of vertex descriptors
vs = sorted(vs)
vs = reversed(vs)
for v in vs:
    g.remove_vertex(v)

Alternatively (and preferably), a list (or any iterable) may be passed directly as the vertex parameter of the remove_vertex() function, and the above is performed internally (in C++). Note that property map values (see Property maps) are unaffected by the index changes due to vertex removal, as they are modified accordingly by the library. Removing a vertex is typically an \(O(N)\) operation. The vertices are internally stored in a STL vector, so removing an element somewhere in the middle of the list requires the shifting of the rest of the list. Thus, fast \(O(1)\) removals are only possible if one can guarantee that only vertices in the end of the list are removed (the ones last added to the graph), or if the relative vertex ordering is invalidated. The latter behavior can be achieved by passing the option fast = True, to remove_vertex(), which causes the vertex being deleted to be 'swapped' with the last vertex (i.e. with the largest index), which, in turn, will inherit the index of the vertex being deleted.
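The fast = True behaviour described above is the classic "swap with last" trick. A small pure-Python illustration of the idea (not graph-tool's actual implementation):

```python
# Pure-Python illustration of O(1) "swap with last" removal, the idea
# behind remove_vertex(..., fast=True): the removed slot is filled by the
# last element, which inherits the removed index; relative ordering of
# the remaining elements is not preserved.
def remove_fast(items, i):
    items[i] = items[-1]  # last element inherits index i
    items.pop()           # O(1) removal from the end

vs = ['a', 'b', 'c', 'd', 'e']
remove_fast(vs, 1)  # remove 'b'
print(vs)  # ['a', 'e', 'c', 'd'] -- 'e' now has index 1
```

This is why, with fast = True, only the descriptor of the last vertex is invalidated, at the cost of invalidating the relative vertex ordering.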
Removing an edge is an \(O(k_{s} + k_{t})\) operation, where \(k_{s}\) is the out-degree of the source vertex, and \(k_{t}\) is the in-degree of the target vertex. This can be made faster by setting set_fast_edge_removal() to True, in which case it becomes \(O(1)\), at the expense of additional data of size \(O(E)\). No edge descriptors are ever invalidated after edge removal, with the exception of the edge itself that is being removed. Since vertices are uniquely identifiable by their indices, there is no need to keep the vertex descriptor lying around to access them at a later point. If we know its index, we can obtain the descriptor of a vertex with a given index using the vertex() method, which takes an index, and returns a vertex descriptor. Edges cannot be directly obtained by their index, but if the source and target vertices of a given edge are known, it can be retrieved with the edge() method

>>> g.add_edge(g.vertex(2), g.vertex(3))
>>> e = g.edge(2, 3)

Another way to obtain edge or vertex descriptors is to iterate through them, as described in the section Iterating over vertices and edges. This is in fact the most useful way of obtaining vertex and edge descriptors. Like vertices, edges also have unique indices, which are given by the edge_index property:

>>> e = g.add_edge(g.vertex(0), g.vertex(1))
>>> print(g.edge_index[e])

Differently from vertices, edge indices do not necessarily conform to any specific range. If no edges are ever removed, the indices will be in the range \([0, E-1]\), where \(E\) is the number of edges, and edges added earlier have lower indices. However, if an edge is removed, its index will be "vacant", and the remaining indices will be left unmodified, and thus will not all lie in the range \([0, E-1]\). If a new edge is added, it will reuse old indices, in increasing order.

Iterating over vertices and edges#

Algorithms must often iterate through vertices, edges, out-edges of a vertex, etc.
The Graph and Vertex classes provide different types of iterators for doing so. The iterators always point to edge or vertex descriptors.

Iterating over all vertices or edges#

In order to iterate through all the vertices or edges of a graph, the vertices() and edges() methods should be used:

for v in g.vertices():
    print(v)
for e in g.edges():
    print(e)

The code above will print the vertices and edges of the graph in the order they are found.

Iterating over the neighborhood of a vertex#

You should never remove vertex or edge descriptors when iterating over them, since this invalidates the iterators. If you plan to remove vertices or edges during iteration, you must first store them somewhere (such as in a list) and remove them only after no iterator is being used. Removal during iteration will cause bad things to happen.

The out- and in-edges of a vertex, as well as the out- and in-neighbors, can be iterated through with the out_edges(), in_edges(), out_neighbors() and in_neighbors() methods, respectively.

for v in g.vertices():
    for e in v.out_edges():
        print(e)
    for w in v.out_neighbors():
        print(w)
    # the edge and neighbors order always match
    for e, w in zip(v.out_edges(), v.out_neighbors()):
        assert e.target() == w

The code above will print the out-edges and out-neighbors of all vertices in the graph.

Property maps#

Property maps are a way of associating additional information to the vertices, edges, or to the graph itself. There are thus three types of property maps: vertex, edge, and graph. They are handled by the classes VertexPropertyMap, EdgePropertyMap, and GraphPropertyMap.
Each created property map has an associated value type, which must be chosen from the predefined set:

Type name              Alias
bool                   uint8_t
int16_t                short
int32_t                int
int64_t                long, long long
double                 float
long double
vector<bool>           vector<uint8_t>
vector<int16_t>        vector<short>
vector<int32_t>        vector<int>
vector<int64_t>        vector<long>, vector<long long>
vector<double>         vector<float>
vector<long double>
python::object         object

New property maps can be created for a given graph by calling one of the methods new_vertex_property() (alias new_vp()), new_edge_property() (alias new_ep()), or new_graph_property() (alias new_gp()), for each map type. The values are then accessed by vertex or edge descriptors, or the graph itself, as such:

from numpy.random import randint

g = Graph()
g.add_vertex(100)

# insert some random links
for s, t in zip(randint(0, 100, 100), randint(0, 100, 100)):
    g.add_edge(g.vertex(s), g.vertex(t))

vprop = g.new_vertex_property("double")  # Double-precision floating point
v = g.vertex(10)
vprop[v] = 3.1416

vprop2 = g.new_vertex_property("vector<int>")  # Vector of ints
v = g.vertex(40)
vprop2[v] = [1, 3, 42, 54]

eprop = g.new_edge_property("object")  # Arbitrary Python object.
e = next(g.edges())
eprop[e] = {"foo": "bar", "gnu": 42}  # In this case, a dict.

gprop = g.new_graph_property("bool")  # Boolean
gprop[g] = True

It is possible also to access vertex property maps directly by vertex indices:

>>> print(vprop[10])

The following lines are equivalent:

eprop[(30, 40)]
eprop[g.edge(30, 40)]

Which means that indexing via (source, target) pairs is slower than via edge descriptors, since the function edge() needs to be called first. And likewise we can access edge descriptors via (source, target) pairs:

>>> g.add_edge(30, 40)
>>> eprop[(30, 40)] = "gnat"

We can also iterate through the property map values directly, i.e.
>>> print(list(vprop)[:10])
[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

Property maps with scalar value types can also be accessed as a numpy.ndarray, with the get_array() method, or the a attribute, e.g.,

from numpy.random import random

# this assigns random values to the vertex properties
vprop.get_array()[:] = random(g.num_vertices())

# or more conveniently (this is equivalent to the above)
vprop.a = random(g.num_vertices())

Array interface for filtered graphs

For filtered graphs (see Graph filtering below), it's possible to get arrays that only point to the nodes and edges that are not filtered out via the fa and ma attributes instead.

We usually want to apply transformations to the values of property maps. This can be achieved via iteration (see Iterating over vertices and edges), but since this is such a common operation, there's a more convenient way to do it via the transform() method (or its shorter alias t()), which takes a function and returns a copy of the property map with the function applied to its values:

# Vertex property map with random values in the range [-.5, .5]
rand = g.new_vp("double", vals=random(g.num_vertices()) - .5)

# The following returns a copy of `rand` but containing only the absolute values
m = rand.t(abs)

Transformations are particularly useful to pass temporary properties to functions, e.g.

erand = g.new_ep("double", vals=random(g.num_edges()) - .5)
pos = sfdp_layout(g, eweight=erand.t(abs))

Internal property maps#

Any created property map can be made "internal" to the corresponding graph. This means that it will be copied and saved to a file together with the graph. Properties are internalized by including them in the graph's dictionary-like attributes vertex_properties, edge_properties or graph_properties (or their aliases, vp, ep or gp, respectively).
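As an aside, the property-map idea, including transform(), can be illustrated without graph-tool installed. The following is a rough dict-based stand-in of my own; graph-tool's real property maps are typed C++ arrays, not dictionaries:

```python
# Minimal pure-Python analogue of a vertex property map (a sketch, not
# graph-tool's implementation): values keyed by vertex index, with a
# transform() that returns a new map with a function applied, in the
# spirit of PropertyMap.transform()/t().
class SimplePropertyMap:
    def __init__(self, values=None):
        self.values = dict(values or {})

    def __getitem__(self, v):
        return self.values.get(v, 0.0)  # unset vertices default to 0.0

    def __setitem__(self, v, x):
        self.values[v] = x

    def transform(self, f):
        # Return a *copy* with f applied; the original is left untouched.
        return SimplePropertyMap({v: f(x) for v, x in self.values.items()})

rand = SimplePropertyMap({0: -0.3, 1: 0.4, 2: -0.1})
m = rand.transform(abs)
print(m[0], m[1], m[2])  # 0.3 0.4 0.1
print(rand[0])           # -0.3 (original unchanged)
```

Note how transform() returns a copy, matching the documented behaviour of t() above.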
When inserted in the graph, the property maps must have a unique name (among those of the same type):

>>> eprop = g.new_edge_property("string")
>>> g.ep["some name"] = eprop
>>> g.list_properties()
some name      (edge)    (type: string)

Internal graph property maps behave slightly differently. Instead of returning the property map object, the value itself is returned from the dictionaries:

>>> gprop = g.new_graph_property("int")
>>> g.gp["foo"] = gprop   # this sets the actual property map
>>> g.gp["foo"] = 42      # this sets its value
>>> print(g.gp["foo"])
>>> del g.gp["foo"]       # the property map entry is deleted from the dictionary

For convenience, the internal property maps can also be accessed via attributes:

>>> vprop = g.new_vertex_property("double")
>>> g.vp.foo = vprop  # equivalent to g.vp["foo"] = vprop
>>> v = g.vertex(0)
>>> g.vp.foo[v] = 3.14
>>> print(g.vp.foo[v])

Graph I/O#

Graphs can be saved and loaded in four formats: graphml, dot, gml and a custom binary format gt (see The gt file format). The binary format gt and the text-based graphml are the preferred formats, since they are by far the most complete. Both formats are equally complete, but the gt format is faster and requires less storage. The dot and gml formats are fully supported, but since they contain no precise type information, all properties are read as strings (or also as double, in the case of gml), and must be converted by hand to the desired type. Therefore you should always use either gt or graphml, since they implement an exact bit-for-bit representation of all supported Property maps types, except when interfacing with other software, or existing data, which uses dot or gml. Graph classes can also be pickled with the pickle module.

A graph can be saved or loaded to a file with the save and load methods, which take either a file name or a file-like object. A graph can also be loaded from disc with the load_graph() function, as

g = Graph()
# ... fill the graph ...
g.save("my_graph.gt.gz")
g2 = load_graph("my_graph.gt.gz")
# g and g2 should be identical copies of each other

Graph filtering#

It is important to emphasize that the filtering functionality does not add any performance overhead when the graph is not being filtered. In this case, the algorithms run just as fast as if the filtering functionality didn't exist.

One of the unique features of graph-tool is the "on-the-fly" filtering of edges and/or vertices. Filtering means the temporary masking of vertices/edges, which are in fact not really removed, and can be easily recovered. There are two different ways to enable graph filtering: via graph views or in-place filtering, which are covered in the following.

Graph views#

It is often desired to work with filtered and unfiltered graphs simultaneously, or to temporarily create a filtered version of a graph for some specific task. For these purposes, graph-tool provides a GraphView class, which represents a filtered "view" of a graph, and behaves as an independent graph object, which shares the underlying data with the original graph. Graph views are constructed by instantiating a GraphView class, and passing a graph object which is supposed to be filtered, together with the desired filter parameters. For example, to create a directed view of the undirected graph g above, one could do:

>>> ug = GraphView(g, directed=True)
>>> ug.is_directed()

Graph views also provide a direct and convenient approach to vertex/edge filtering. Let us consider the facebook friendship graph we used before and its betweenness centrality values:

>>> g = collection.ns["ego_social/facebook_combined"]
>>> vb, eb = betweenness(g)

Let us suppose we would like to see how the graph would look if some of the edges with higher betweenness values were removed. We can do this by creating a GraphView object and passing the efilt parameter:

>>> u = GraphView(g, efilt=eb.fa < 1e-6)

GraphView objects behave exactly like regular Graph objects. In fact, GraphView is a subclass of Graph.
The only difference is that a GraphView object shares its internal data with its parent Graph class. Therefore, if the original Graph object is modified, this modification will be reflected immediately in the GraphView object, and vice versa. Since GraphView is a class derived from Graph, its instances are accepted as regular graphs by every function of the library. Graph views are "first class citizens" in graph-tool. If we visualize the graph we can see it has now been broken up into many components:

>>> graph_draw(u, pos=g.vp._pos, output="facebook-filtered.pdf")

Note however that no copy of the original graph was made, and no edge has in fact been removed. If we inspect the original graph g in the example above, it will be intact. In the example above, we passed a boolean array as the efilt, but we could also have passed a boolean property map, or a function that takes an edge as its single parameter and returns True if the edge should be kept and False otherwise. For instance, the above could be equivalently achieved as:

>>> u = GraphView(g, efilt=lambda e: eb[e] < 1e-6)

Note however that this would be slower, since it would involve one function call per edge in the graph. Vertices can also be filtered in an entirely analogous fashion using the vfilt parameter.

Composing graph views#

Since graph views behave like regular graphs, one can just as easily create graph views of graph views. This provides a convenient way of composing filters. For instance, suppose we want to isolate the minimum spanning tree of all vertices of the graph above which have a degree larger than four:

>>> g, pos = triangulation(random((500, 2)) * 4, type="delaunay")
>>> u = GraphView(g, vfilt=lambda v: v.out_degree() > 4)
>>> tree = min_spanning_tree(u)
>>> u = GraphView(u, efilt=tree)

The resulting graph view can be used and visualized as normal:

>>> bv, be = betweenness(u)
>>> be.a /= be.a.max() / 5
>>> graph_draw(u, pos=pos, vertex_fill_color=bv, ...
edge_pen_width=be, output="mst-view.svg")

In-place graph filtering#

It is also possible to filter graphs "in-place", i.e. without creating an additional object. To achieve this, vertices or edges which are to be filtered should be marked with a PropertyMap with value type bool, and then set with the set_vertex_filter() or set_edge_filter() methods. By default, vertices or edges with value "1" are kept in the graph, and those with value "0" are filtered out. This behaviour can be modified with the inverted parameter of the respective functions. All manipulation functions and algorithms will work as if the marked edges or vertices were removed from the graph, with minimum overhead. For example, to reproduce the same example as before for the facebook graph we could have done:

>>> g = collection.ns["ego_social/facebook_combined"]
>>> vb, eb = betweenness(g)
>>> mask = g.new_ep("bool", vals=eb.fa < 1e-5)
>>> g.set_edge_filter(mask)

The mask property map has a bool type, with value 1 if the edge is kept, and 0 otherwise. Everything should work transparently on the filtered graph, simply as if the masked edges were removed. The original graph can be recovered by setting the edge filter to None. Everything works in an analogous fashion with vertex filtering. Additionally, the graph can also have its edges reversed with the set_reversed() method. This is also an \(O(1)\) operation, which does not really modify the graph. As mentioned previously, the directedness of the graph can also be changed "on-the-fly" with the set_directed() method.

Advanced iteration#

Faster iteration over vertices and edges without descriptors#

The mode of iteration considered above is convenient, but requires the creation of vertex and edge descriptor objects, which incurs a performance overhead.
A faster approach involves the use of the methods iter_vertices(), iter_edges(), iter_out_edges(), iter_in_edges(), iter_all_edges(), iter_out_neighbors(), iter_in_neighbors(), iter_all_neighbors(), which return vertex indices and pairs thereof, instead of descriptor objects, to specify vertices and edges, respectively. For example, for the graph:

g = Graph([(0, 1), (2, 3), (2, 4)])

we have

for v in g.iter_vertices():
    print(v)
for e in g.iter_edges():
    print(e)

which yields

[0, 1]
[2, 3]
[2, 4]

and likewise for the iteration over the neighborhood of a vertex:

for v in g.iter_vertices():
    for e in g.iter_out_edges(v):
        print(e)
    for w in g.iter_out_neighbors(v):
        print(w)

Even faster, "loopless" iteration over vertices and edges using arrays#

While more convenient, looping over the graph as described in the previous sections is not quite the most efficient approach to operate on graphs. This is because the loops are performed in pure Python, thus undermining the main feature of the library, which is the offloading of loops from Python to C++. Following the numpy philosophy, graph_tool also provides an array-based interface that avoids loops in Python. This is done with the get_vertices(), get_edges(), get_out_edges(), get_in_edges(), get_all_edges(), get_out_neighbors(), get_in_neighbors(), get_all_neighbors(), get_out_degrees(), get_in_degrees() and get_total_degrees() methods, which return numpy.ndarray instances instead of iterators. For example, using this interface we can get the out-degree of each node via:

print(g.get_out_degrees(g.get_vertices()))

or the sum of the product of the in- and out-degrees of the endpoints of each edge with:

edges = g.get_edges()
in_degs = g.get_in_degrees(g.get_vertices())
out_degs = g.get_out_degrees(g.get_vertices())
print((out_degs[edges[:,0]] * in_degs[edges[:,1]]).sum())
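To see what the numpy expression above computes, here is a slow plain-Python reference version with a toy edge list standing in for g.get_edges() (my own illustration, not library code):

```python
# Plain-Python reference for the numpy expression above: for each edge
# (s, t), multiply the out-degree of s by the in-degree of t, and sum.
edges = [(0, 1), (2, 3), (2, 4), (1, 2)]  # toy directed edge list

out_deg = {}
in_deg = {}
for s, t in edges:
    out_deg[s] = out_deg.get(s, 0) + 1
    in_deg[t] = in_deg.get(t, 0) + 1

total = sum(out_deg.get(s, 0) * in_deg.get(t, 0) for s, t in edges)
print(total)  # 6
```

The fancy-indexing version out_degs[edges[:,0]] * in_degs[edges[:,1]] performs exactly this per-edge lookup and product, but vectorized in C.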
Mathematical Modeling: Bridging Theory and Reality

published: 05 April 2024

Mathematical modeling is the process of using mathematical concepts and techniques to describe, analyze, and predict real-world phenomena. From the behavior of physical systems to the dynamics of social networks, mathematical models provide a powerful framework for understanding the complexities of the world around us.

The Art of Abstraction

At the heart of mathematical modeling lies the art of abstraction, the process of simplifying complex systems into mathematical representations. By identifying key variables, relationships, and assumptions, mathematicians construct models that capture the essential features of real-world phenomena. One of the key challenges in mathematical modeling is striking a balance between simplicity and accuracy. While overly simplistic models may fail to capture important aspects of reality, overly complex models may be unwieldy and difficult to analyze. Finding the right level of abstraction is essential for creating effective models.

From Differential Equations to Agent-Based Models

Mathematical models come in various forms, from differential equations to agent-based models. Differential equations describe the rate of change of variables over time and are used to model dynamic systems such as population growth, chemical reactions, and fluid flow. Agent-based models, on the other hand, simulate the behavior of individual agents within a system and their interactions with one another. These models are particularly useful for studying complex systems with emergent behavior, such as traffic flow, financial markets, and the spread of infectious diseases.

Validation and Verification

Once a mathematical model has been constructed, it must be validated and verified to ensure its accuracy and reliability.
Validation involves comparing the predictions of the model with real-world data, while verification involves checking the correctness of the model's implementation. Model validation and verification are ongoing processes that require careful scrutiny and refinement. As new data becomes available and our understanding of the underlying system improves, models may need to be updated and revised to maintain their relevance and predictive power. Applications Across Disciplines Mathematical modeling has applications across diverse fields, from physics and engineering to economics and biology. In physics, models are used to describe the behavior of particles, fields, and forces, enabling physicists to explore the nature of the universe. In engineering, mathematical models are used to design and optimize systems, predict performance, and identify potential failure modes. From designing bridges and buildings to developing new technologies, engineers rely on mathematical modeling to guide their decisions and innovations. Challenges and Opportunities While mathematical modeling offers many benefits, it also poses challenges and limitations. Models are simplifications of reality and may not capture all relevant factors or interactions. Additionally, uncertainty and variability in real-world data can introduce errors and inaccuracies into models. Despite these challenges, mathematical modeling continues to be a powerful tool for understanding and navigating the complexities of the world. As technology advances and our computational capabilities grow, the opportunities for mathematical modeling to address pressing societal challenges and drive scientific discovery are greater than ever. Mathematical modeling is a cornerstone of modern science and engineering, bridging the gap between theory and reality. 
By providing a quantitative framework for understanding the behavior of complex systems, mathematical models empower researchers, engineers, and policymakers to make informed decisions and address real-world challenges. As we continue to push the boundaries of mathematical modeling, let us embrace the opportunities for innovation and discovery that lie ahead. For in the realm of mathematical modeling, the possibilities are endless, and the potential for impact is limitless.
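As a concrete instance of the differential-equation models mentioned above, logistic population growth dN/dt = rN(1 - N/K) can be integrated numerically in a few lines. The parameter values here are illustrative choices of mine, not taken from the text:

```python
# Euler integration of the logistic growth model dN/dt = r*N*(1 - N/K).
# r, K, N0 and the step size are illustrative, hypothetical values.
r, K = 0.5, 1000.0   # growth rate and carrying capacity
N, dt = 10.0, 0.01   # initial population and time step

for _ in range(int(50 / dt)):  # integrate to t = 50
    N += dt * r * N * (1 - N / K)

print(round(N))  # 1000 -- the population saturates at the carrying capacity
```

Even this tiny model exhibits the trade-off discussed above: it captures saturation toward a carrying capacity while ignoring age structure, seasonality, and stochastic effects.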
The functions $H_{0}, H_{1}, \ldots$ are generated by the Rodrigues formula: $H_{n}(x)=(-1)^{n} e^{x^{2}} \frac{d^{n}}{d x^{n}} e^{-x^{2}}$ (a) Show that $H_{n}$ is a polynomial of degree $n$, and that the $H_{n}$ are orthogonal with respect to the scalar product $(f, g)=\int_{-\infty}^{\infty} f(x) g(x) e^{-x^{2}} d x$ (b) By induction or otherwise, prove that the $H_{n}$ satisfy the three-term recurrence relation $H_{n+1}(x)=2 x H_{n}(x)-2 n H_{n-1}(x) .$ [Hint: you may need to prove the equality $H_{n}^{\prime}(x)=2 n H_{n-1}(x)$ as well.]
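Not part of the question, but the recurrence in (b) can be sanity-checked numerically: if $P_{n} e^{-x^{2}}$ denotes the $n$-th derivative of $e^{-x^{2}}$, then $P_{n+1}=P_{n}^{\prime}-2 x P_{n}$ and $H_{n}=(-1)^{n} P_{n}$ by the Rodrigues formula, which can be compared against $2 x H_{n}-2 n H_{n-1}$. A sketch with polynomials as coefficient lists:

```python
# Numerical sanity check (not a proof) of H_{n+1} = 2x H_n - 2n H_{n-1}.
# Polynomials are coefficient lists: p[k] is the coefficient of x^k.
def deriv(p):
    return [k * p[k] for k in range(1, len(p))]

def minus_2x(p):  # multiply a polynomial by -2x
    return [0.0] + [-2.0 * c for c in p]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0.0] * (n - len(p))
    q = q + [0.0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(p, s):
    return [s * c for c in p]

# d^n/dx^n e^{-x^2} = P_n(x) e^{-x^2} with P_0 = 1 and P_{n+1} = P_n' - 2x P_n;
# then H_n = (-1)^n P_n by the Rodrigues formula.
def hermite(n):
    p = [1.0]
    for _ in range(n):
        p = add(deriv(p), minus_2x(p))
    return scale(p, (-1.0) ** n)

H = [hermite(n) for n in range(6)]
for n in range(1, 5):
    lhs = H[n + 1]
    rhs = add(scale(minus_2x(H[n]), -1.0),  # 2x H_n
              scale(H[n - 1], -2.0 * n))    # -2n H_{n-1}
    assert lhs == rhs, (n, lhs, rhs)
print(H[2], H[3])  # H_2 = 4x^2 - 2, H_3 = 8x^3 - 12x
```

This only verifies the identity for small n; the exam asks for a proof by induction, for which the hint $H_{n}^{\prime}(x)=2 n H_{n-1}(x)$ is the key step.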
{"url":"https://questions.tripos.org/part-ib/2003-49/","timestamp":"2024-11-09T23:52:55Z","content_type":"text/html","content_length":"27237","record_id":"<urn:uuid:2a8aabe2-1cda-4fa5-8f7e-6bcf67cf6547>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00781.warc.gz"}
In topology and related fields of mathematics, a sequential space is a topological space whose topology can be completely characterized by its convergent/divergent sequences. They can be thought of as spaces that satisfy a very weak axiom of countability, and all first-countable spaces (notably metric spaces) are sequential. In any topological space ${\displaystyle (X,\tau ),}$ if a convergent sequence is contained in a closed set ${\displaystyle C,}$ then the limit of that sequence must be contained in ${\displaystyle C}$ as well. Sets with this property are known as sequentially closed. Sequential spaces are precisely those topological spaces for which sequentially closed sets are in fact closed. (These definitions can also be rephrased in terms of sequentially open sets; see below.) Said differently, any topology can be described in terms of nets (also known as Moore–Smith sequences), but those sequences may be "too long" (indexed by too large an ordinal) to compress into a sequence. Sequential spaces are those topological spaces for which nets of countable length (i.e., sequences) suffice to describe the topology. Any topology can be refined (that is, made finer) to a sequential topology, called the sequential coreflection of ${\displaystyle X.}$ The related concepts of Fréchet–Urysohn spaces, T-sequential spaces, and ${\displaystyle N}$-sequential spaces are also defined in terms of how a space's topology interacts with sequences, but have subtly different properties. Sequential spaces and ${\displaystyle N}$-sequential spaces were introduced by S. P. Franklin.^[1] Although spaces satisfying such properties had implicitly been studied for several years, the first formal definition is due to S. P. Franklin in 1965. 
Franklin wanted to determine "the classes of topological spaces that can be specified completely by the knowledge of their convergent sequences", and began by investigating the first-countable spaces, for which it was already known that sequences sufficed. Franklin then arrived at the modern definition by abstracting the necessary properties of first-countable spaces.

Preliminary definitions

Let ${\displaystyle X}$ be a set and let ${\displaystyle x_{\bullet }=\left(x_{i}\right)_{i=1}^{\infty }}$ be a sequence in ${\displaystyle X}$ ; that is, a family of elements of ${\displaystyle X}$ , indexed by the natural numbers. In this article, ${\displaystyle x_{\bullet }\subseteq S}$ means that each element in the sequence ${\displaystyle x_{\bullet }}$ is an element of ${\displaystyle S,}$ and, if ${\displaystyle f:X\to Y}$ is a map, then ${\displaystyle f\left(x_{\bullet }\right)=\left(f\left(x_{i}\right)\right)_{i=1}^{\infty }.}$ For any index ${\displaystyle i,}$ the tail of ${\displaystyle x_{\bullet }}$ starting at ${\displaystyle i}$ is the sequence ${\displaystyle x_{\geq i}=(x_{i},x_{i+1},x_{i+2},\ldots ){\text{.}}}$ A sequence ${\displaystyle x_{\bullet }}$ is eventually in ${\displaystyle S}$ if some tail of ${\displaystyle x_{\bullet }}$ satisfies ${\displaystyle x_{\geq i}\subseteq S.}$ Let ${\displaystyle \tau }$ be a topology on ${\displaystyle X}$ and ${\displaystyle x_{\bullet }}$ a sequence therein. 
The sequence ${\displaystyle x_{\bullet }}$ converges to a point ${\displaystyle x\in X,}$ written ${\displaystyle x_{\bullet }{\overset {\tau }{\to }}x}$ (when context allows, ${\displaystyle x_{\bullet }\to x}$ ), if, for every neighborhood ${\displaystyle U\in \tau }$ of ${\displaystyle x,}$ eventually ${\displaystyle x_{\bullet }}$ is in ${\displaystyle U.}$ ${\displaystyle x}$ is then called a limit point of ${\displaystyle x_{\bullet }.}$ A function ${\displaystyle f:X\to Y}$ between topological spaces is sequentially continuous if ${\displaystyle x_{\bullet }\to x}$ implies ${\displaystyle f(x_{\bullet })\to f(x).}$

Sequential closure/interior

Let ${\displaystyle (X,\tau )}$ be a topological space and let ${\displaystyle S\subseteq X}$ be a subset. The topological closure (resp. topological interior) of ${\displaystyle S}$ in ${\displaystyle (X,\tau )}$ is denoted by ${\displaystyle \operatorname {cl} _{X}S}$ (resp. ${\displaystyle \operatorname {int} _{X}S}$ ). The sequential closure of ${\displaystyle S}$ in ${\displaystyle (X,\tau )}$ is the set ${\displaystyle \operatorname {scl} (S)=\left\{x\in X:{\text{there exists a sequence }}s_{\bullet }\subseteq S{\text{ such that }}s_{\bullet }\to x\right\}}$ which defines a map, the sequential closure operator, on the power set of ${\displaystyle X.}$ If necessary for clarity, this set may also be written ${\displaystyle \operatorname {scl} _{X}(S)}$ or ${\displaystyle \operatorname {scl} _{(X,\tau )}(S).}$ It is always the case that ${\displaystyle \operatorname {scl} _{X}S\subseteq \operatorname {cl} _{X}S,}$ but the reverse may fail. The sequential interior of ${\displaystyle S}$ in ${\displaystyle (X,\tau )}$ is the set ${\displaystyle \operatorname {sint} (S)=\{s\in S:{\text{whenever }}x_{\bullet }\subseteq X{\text{ and }}x_{\bullet }\to s,{\text{ then }}x_{\bullet }{\text{ is eventually in }}S\}}$ (the topological space again indicated with a subscript if necessary). 
Sequential closure and interior satisfy many of the nice properties of topological closure and interior: for all subsets ${\displaystyle R,S\subseteq X,}$

• ${\displaystyle \operatorname {scl} _{X}(X\setminus S)=X\setminus \operatorname {sint} _{X}(S)}$ and ${\displaystyle \operatorname {sint} _{X}(X\setminus S)=X\setminus \operatorname {scl} _{X}(S)}$ ;

Fix ${\displaystyle x\in \operatorname {sint} (X\setminus S).}$ If ${\displaystyle x\in \operatorname {scl} (S),}$ then there exists ${\displaystyle s_{\bullet }\subseteq S}$ with ${\displaystyle s_{\bullet }\to x.}$ But by the definition of sequential interior, eventually ${\displaystyle s_{\bullet }}$ is in ${\displaystyle X\setminus S,}$ contradicting ${\displaystyle s_{\bullet }\subseteq S.}$ Conversely, suppose ${\displaystyle x\notin \operatorname {sint} (X\setminus S)}$ ; then there exists a sequence ${\displaystyle s_{\bullet }\subseteq X}$ with ${\displaystyle s_{\bullet }\to x}$ that is not eventually in ${\displaystyle X\setminus S.}$ By passing to the subsequence of elements not in ${\displaystyle X\setminus S,}$ we may assume that ${\displaystyle s_{\bullet }\subseteq S.}$ But then ${\displaystyle x\in \operatorname {scl} (S).}$

• ${\displaystyle \operatorname {scl} (\emptyset )=\emptyset }$ and ${\displaystyle \operatorname {sint} (\emptyset )=\emptyset }$ ;
• ${\textstyle \operatorname {sint} (S)\subseteq S\subseteq \operatorname {scl} (S)}$ ;
• ${\displaystyle \operatorname {scl} (R\cup S)=\operatorname {scl} (R)\cup \operatorname {scl} (S)}$ ; and
• ${\textstyle \operatorname {scl} (S)\subseteq \operatorname {scl} (\operatorname {scl} (S)).}$

That is, sequential closure is a preclosure operator. Unlike topological closure, sequential closure is not idempotent: the last containment may be strict. Thus sequential closure is not a (Kuratowski) closure operator. 
Sequentially closed and open sets

A set ${\displaystyle S}$ is sequentially closed if ${\displaystyle S=\operatorname {scl} (S)}$ ; equivalently, for all ${\displaystyle s_{\bullet }\subseteq S}$ and ${\displaystyle x\in X}$ such that ${\displaystyle s_{\bullet }{\overset {\tau }{\to }}x,}$ we must have ${\displaystyle x\in S.}$ ^[note 1]

A set ${\displaystyle S}$ is defined to be sequentially open if its complement is sequentially closed. Equivalent conditions include:
• ${\displaystyle S=\operatorname {sint} (S)}$ or
• For all ${\displaystyle x_{\bullet }\subseteq X}$ and ${\displaystyle s\in S}$ such that ${\displaystyle x_{\bullet }{\overset {\tau }{\to }}s,}$ eventually ${\displaystyle x_{\bullet }}$ is in ${\displaystyle S}$ (that is, there exists some integer ${\displaystyle i}$ such that the tail ${\displaystyle x_{\geq i}\subseteq S}$ ).

A set ${\displaystyle S}$ is a sequential neighborhood of a point ${\displaystyle x\in X}$ if it contains ${\displaystyle x}$ in its sequential interior; sequential neighborhoods need not be sequentially open (see § T- and N-sequential spaces below). It is possible for a subset of ${\displaystyle X}$ to be sequentially open but not open. Similarly, it is possible for there to exist a sequentially closed subset that is not closed.

Sequential spaces and coreflection

As discussed above, sequential closure is not in general idempotent, and so not the closure operator of a topology. 
One can obtain an idempotent sequential closure via transfinite iteration: for a successor ordinal ${\displaystyle \alpha +1,}$ define (as usual) ${\displaystyle (\operatorname {scl} )^{\alpha +1}(S)=\operatorname {scl} ((\operatorname {scl} )^{\alpha }(S))}$ and, for a limit ordinal ${\displaystyle \alpha ,}$ define ${\displaystyle (\operatorname {scl} )^{\alpha }(S)=\bigcup _{\beta <\alpha }{(\operatorname {scl} )^{\beta }(S)}{\text{.}}}$ This process gives an ordinal-indexed increasing sequence of sets; as it turns out, that sequence always stabilizes by index ${\displaystyle \omega _{1}}$ (the first uncountable ordinal). Conversely, the sequential order of ${\displaystyle X}$ is the minimal ordinal at which, for any choice of ${\displaystyle S,}$ the above sequence will stabilize.^[2]

The transfinite sequential closure of ${\displaystyle S}$ is the terminal set in the above sequence: ${\displaystyle (\operatorname {scl} )^{\omega _{1}}(S).}$ The operator ${\displaystyle (\operatorname {scl} )^{\omega _{1}}}$ is idempotent and thus a closure operator. In particular, it defines a topology, the sequential coreflection. In the sequential coreflection, every sequentially-closed set is closed (and every sequentially-open set is open).^[3]

Sequential spaces

A topological space ${\displaystyle (X,\tau )}$ is sequential if it satisfies any of the following equivalent conditions:
• ${\displaystyle \tau }$ is its own sequential coreflection.^[4]
• Every sequentially open subset of ${\displaystyle X}$ is open.
• Every sequentially closed subset of ${\displaystyle X}$ is closed. 
• For any subset ${\displaystyle S\subseteq X}$ that is not closed in ${\displaystyle X,}$ there exists some^[note 2] ${\displaystyle x\in \operatorname {cl} (S)\setminus S}$ and a sequence in ${\displaystyle S}$ that converges to ${\displaystyle x.}$ ^[5]
• (Universal Property) For every topological space ${\displaystyle Y,}$ a map ${\displaystyle f:X\to Y}$ is continuous if and only if it is sequentially continuous (if ${\displaystyle x_{\bullet }\to x}$ then ${\displaystyle f\left(x_{\bullet }\right)\to f(x)}$ ).^[6]
• ${\displaystyle X}$ is the quotient of a first-countable space.
• ${\displaystyle X}$ is the quotient of a metric space.

By taking ${\displaystyle Y=X}$ and ${\displaystyle f}$ to be the identity map on ${\displaystyle X}$ in the universal property, it follows that the class of sequential spaces consists precisely of those spaces whose topological structure is determined by convergent sequences. If two topologies agree on convergent sequences, then they necessarily have the same sequential coreflection. Moreover, a function from ${\displaystyle Y}$ is sequentially continuous if and only if it is continuous on the sequential coreflection (that is, when pre-composed with ${\displaystyle f}$ ).

T- and N-sequential spaces

A T-sequential space is a topological space with sequential order 1, which is equivalent to any of the following conditions:^[1]
• The sequential closure (or interior) of every subset of ${\displaystyle X}$ is sequentially closed (resp. open).
• ${\displaystyle \operatorname {scl} }$ or ${\displaystyle \operatorname {sint} }$ are idempotent. 
• ${\textstyle \operatorname {scl} (S)=\bigcap _{{\text{sequentially closed }}C\supseteq S}{C}}$ or ${\textstyle \operatorname {sint} (S)=\bigcup _{{\text{sequentially open }}U\subseteq S}{U}}$
• Any sequential neighborhood of ${\displaystyle x\in X}$ can be shrunk to a sequentially-open set that contains ${\displaystyle x}$ ; formally, sequentially-open neighborhoods are a neighborhood basis for the sequential neighborhoods.
• For any ${\displaystyle x\in X}$ and any sequential neighborhood ${\displaystyle N}$ of ${\displaystyle x,}$ there exists a sequential neighborhood ${\displaystyle M}$ of ${\displaystyle x}$ such that, for every ${\displaystyle m\in M,}$ the set ${\displaystyle N}$ is a sequential neighborhood of ${\displaystyle m.}$

Being a T-sequential space is incomparable with being a sequential space; there are sequential spaces that are not T-sequential and vice-versa. However, a topological space ${\displaystyle (X,\tau )}$ is called ${\displaystyle N}$-sequential (or neighborhood-sequential) if it is both sequential and T-sequential. An equivalent condition is that every sequential neighborhood contains an open (classical) neighborhood.^[1]

Every first-countable space (and thus every metrizable space) is ${\displaystyle N}$-sequential. There exist topological vector spaces that are sequential but not ${\displaystyle N}$-sequential (and thus not T-sequential).^[1]

Fréchet–Urysohn spaces

A topological space ${\displaystyle (X,\tau )}$ is called Fréchet–Urysohn if it satisfies any of the following equivalent conditions:
• ${\displaystyle X}$ is hereditarily sequential; that is, every topological subspace is sequential. 
• For every subset ${\displaystyle S\subseteq X,}$ ${\displaystyle \operatorname {scl} _{X}S=\operatorname {cl} _{X}S.}$
• For any subset ${\displaystyle S\subseteq X}$ that is not closed in ${\displaystyle X}$ and every ${\displaystyle x\in \left(\operatorname {cl} _{X}S\right)\setminus S,}$ there exists a sequence in ${\displaystyle S}$ that converges to ${\displaystyle x.}$

Fréchet–Urysohn spaces are also sometimes said to be "Fréchet," but should be confused with neither Fréchet spaces in functional analysis nor the ${\displaystyle T_{1}}$ condition.

Examples and sufficient conditions

Every CW-complex is sequential, as it can be considered as a quotient of a metric space. The prime spectrum of a commutative Noetherian ring with the Zariski topology is sequential.^[7]

Take the real line ${\displaystyle \mathbb {R} }$ and identify the set ${\displaystyle \mathbb {Z} }$ of integers to a point. As a quotient of a metric space, the result is sequential, but it is not first countable.

Every first-countable space is Fréchet–Urysohn and every Fréchet–Urysohn space is sequential. Thus every metrizable or pseudometrizable space — in particular, every second-countable space, metric space, or discrete space — is sequential.

Let ${\displaystyle {\mathcal {F}}}$ be a set of maps from Fréchet–Urysohn spaces to ${\displaystyle X.}$ Then the final topology that ${\displaystyle {\mathcal {F}}}$ induces on ${\displaystyle X}$ is sequential.

A Hausdorff topological vector space is sequential if and only if there exists no strictly finer topology with the same convergent sequences.^[9]

Spaces that are sequential but not Fréchet–Urysohn

Schwartz space ${\displaystyle {\mathcal {S}}\left(\mathbb {R} ^{n}\right)}$ and the space ${\displaystyle C^{\infty }(U)}$ of smooth functions, as discussed in the article on distributions, are both widely-used sequential spaces.^[10]^[11] More generally, every infinite-dimensional Montel DF-space is sequential but not Fréchet–Urysohn. 
Arens' space is sequential, but not Fréchet–Urysohn.^[12]^[13]

Non-examples (spaces that are not sequential)

The simplest space that is not sequential is the cocountable topology on an uncountable set. Every convergent sequence in such a space is eventually constant; hence every set is sequentially open. But the cocountable topology is not discrete. (One could call the topology "sequentially discrete".)^[14]

Let ${\displaystyle C_{c}^{k}(U)}$ denote the space of ${\displaystyle k}$ -smooth test functions with its canonical topology and let ${\displaystyle {\mathcal {D}}'(U)}$ denote the space of distributions, the strong dual space of ${\displaystyle C_{c}^{\infty }(U)}$ ; neither are sequential (nor even an Ascoli space).^[10]^[11] On the other hand, both ${\displaystyle C_{c}^{\infty }(U)}$ and ${\displaystyle {\mathcal {D}}'(U)}$ are Montel spaces^[15] and, in the dual space of any Montel space, a sequence of continuous linear functionals converges in the strong dual topology if and only if it converges in the weak* topology (that is, converges pointwise).^[10]

Every sequential space has countable tightness and is compactly generated.

If ${\displaystyle f:X\to Y}$ is a continuous open surjection between two Hausdorff sequential spaces then the set ${\displaystyle \{y:{|f^{-1}(y)|=1}\}\subseteq Y}$ of points with unique preimage is closed. (By continuity, so is its preimage in ${\displaystyle X,}$ the set of all points on which ${\displaystyle f}$ is injective.) 
If ${\displaystyle f:X\to Y}$ is a surjective map (not necessarily continuous) onto a Hausdorff sequential space ${\displaystyle Y}$ and ${\displaystyle {\mathcal {B}}}$ a basis for the topology on ${\displaystyle X,}$ then ${\displaystyle f:X\to Y}$ is an open map if and only if, for every ${\displaystyle x\in X,}$ basic neighborhood ${\displaystyle B\in {\mathcal {B}}}$ of ${\displaystyle x,}$ and sequence ${\displaystyle y_{\bullet }=\left(y_{i}\right)_{i=1}^{\infty }\to f(x)}$ in ${\displaystyle Y,}$ there is a subsequence of ${\displaystyle y_{\bullet }}$ that is eventually in ${\displaystyle f(B).}$

Categorical properties

The full subcategory Seq of all sequential spaces is closed under topological sums and quotients in the category Top of topological spaces. The category Seq is not closed under the following operations in Top:
• Continuous images
• Subspaces
• Finite products

Since they are closed under topological sums and quotients, the sequential spaces form a coreflective subcategory of the category of topological spaces. In fact, they are the coreflective hull of metrizable spaces (that is, the smallest class of topological spaces closed under sums and quotients and containing the metrizable spaces).

The subcategory Seq is a Cartesian closed category with respect to its own product (not that of Top). The exponential objects are equipped with the (convergent sequence)-open topology. P.I. Booth and A. Tillotson have shown that Seq is the smallest Cartesian closed subcategory of Top containing the underlying topological spaces of all metric spaces, CW-complexes, and differentiable manifolds and that is closed under colimits, quotients, and other "certain reasonable identities" that Norman Steenrod described as "convenient".^[17]

Every sequential space is compactly generated, and finite products in Seq coincide with those for compactly generated spaces, since products in the category of compactly generated spaces preserve quotients of metric spaces.

See also

1. 
^ You cannot simultaneously apply this "test" to infinitely many subsets (for example, you cannot use something akin to the axiom of choice). Not all sequential spaces are Fréchet–Urysohn, but only in those spaces can the closure of a set ${\displaystyle S}$ be determined without it ever being necessary to consider any set other than ${\displaystyle S.}$
2. ^ A Fréchet–Urysohn space is defined by the analogous condition for all such ${\displaystyle x}$ : For any subset ${\displaystyle S\subseteq X}$ that is not closed in ${\displaystyle X,}$ for any ${\displaystyle x\in \operatorname {cl} _{X}(S)\setminus S,}$ there exists a sequence in ${\displaystyle S}$ that converges to ${\displaystyle x.}$
1. ^ ^a ^b ^c ^d Snipes, Ray (1972). "T-sequential topological spaces" (PDF). Fundamenta Mathematicae. 77 (2): 95–98. doi:10.4064/fm-77-2-95-98. ISSN 0016-2736.
2. ^ Arhangel'skiĭ, A. V.; Franklin, S. P. (1968). "Ordinal invariants for topological spaces". Michigan Math. J. 15 (3): 313–320. doi:10.1307/mmj/1029000034.
3. ^ Baron, S. (October 1968). "The Coreflective Subcategory of Sequential Spaces". Canadian Mathematical Bulletin. 11 (4): 603–604. doi:10.4153/CMB-1968-074-4. ISSN 0008-4395. S2CID 124685527.
4. ^ "Topology of sequentially open sets is sequential?". Mathematics Stack Exchange.
5. ^ Arkhangel'skii, A.V. and Pontryagin L.S., General Topology I, definition 9 p.12
6. ^ Baron, S.; Leader, Solomon (1966). "Solution to Problem #5299". The American Mathematical Monthly. 73 (6): 677–678. doi:10.2307/2314834. ISSN 0002-9890. JSTOR 2314834.
7. ^ "On sequential properties of Noetherian topological spaces" (PDF). 2004. Retrieved 30 Jul 2023.
8. ^ Dudley, R. M., On sequential convergence - Transactions of the American Mathematical Society Vol 112, 1964, pp. 483-507
9. ^ ^a ^b ^c Gabrielyan, Saak (2019). "Topological properties of strict ${\displaystyle (LF)}$ -spaces and strong duals of Montel strict ${\displaystyle (LF)}$ -spaces". Monatshefte für Mathematik. 
189 (1): 91–99. arXiv:1702.07867. doi:10.1007/s00605-018-1223-6. 10. ^ ^a ^b T. Shirai, Sur les Topologies des Espaces de L. Schwartz, Proc. Japan Acad. 35 (1959), 31-36. 11. ^ Engelking 1989, Example 1.6.19 12. ^ Ma, Dan (19 August 2010). "A note about the Arens' space". Retrieved 1 August 2013. 13. ^ math; Sleziak, Martin (Dec 6, 2016). "Example of different topologies with same convergent sequences". Mathematics Stack Exchange. StackOverflow. Retrieved 2022-06-27. 14. ^ "Topological vector space". Encyclopedia of Mathematics. Retrieved September 6, 2020. “It is a Montel space, hence paracompact, and so normal.” • Arkhangel'skii, A.V. and Pontryagin, L.S., General Topology I, Springer-Verlag, New York (1990) ISBN 3-540-18178-4. • Arkhangel'skii, A V (1966). "Mappings and spaces" (PDF). Russian Mathematical Surveys. 21 (4): 115–162. Bibcode:1966RuMaS..21..115A. doi:10.1070/RM1966v021n04ABEH004169. ISSN 0036-0279. S2CID 250900871. Retrieved 10 February 2021. • Akiz, Hürmet Fulya; Koçak, Lokman (2019). "Sequentially Hausdorff and full sequentially Hausdorff spaces". Communications Faculty of Science University of Ankara Series A1Mathematics and Statistics. 68 (2): 1724–1732. doi:10.31801/cfsuasmas.424418. ISSN 1303-5991. Retrieved 10 February 2021. • Boone, James (1973). "A note on mesocompact and sequentially mesocompact spaces". Pacific Journal of Mathematics. 44 (1): 69–74. doi:10.2140/pjm.1973.44.69. ISSN 0030-8730. • Booth, Peter; Tillotson, J. (1980). "Monoidal closed, Cartesian closed and convenient categories of topological spaces". Pacific Journal of Mathematics. 88 (1): 35–53. doi:10.2140/pjm.1980.88.35. ISSN 0030-8730. Retrieved 10 February 2021. • Engelking, R., General Topology, Heldermann, Berlin (1989). Revised and completed edition. • Foged, L. (1985). "A characterization of closed images of metric spaces". Proceedings of the American Mathematical Society. 95 (3): 487–490. doi:10.1090/S0002-9939-1985-0806093-3. ISSN 0002-9939. • Franklin, S. (1965). 
"Spaces in which sequences suffice" (PDF). Fundamenta Mathematicae. 57 (1): 107–115. doi:10.4064/fm-57-1-107-115. ISSN 0016-2736. • Franklin, S. (1967). "Spaces in which sequences suffice II" (PDF). Fundamenta Mathematicae. 61 (1): 51–56. doi:10.4064/fm-61-1-51-56. ISSN 0016-2736. Retrieved 10 February 2021. • Goreham, Anthony, "Sequential Convergence in Topological Spaces", (2016) • Gruenhage, Gary; Michael, Ernest; Tanaka, Yoshio (1984). "Spaces determined by point-countable covers". Pacific Journal of Mathematics. 113 (2): 303–332. doi:10.2140/pjm.1984.113.303. ISSN • Michael, E.A. (1972). "A quintuple quotient quest". General Topology and Its Applications. 2 (2): 91–138. doi:10.1016/0016-660X(72)90040-2. ISSN 0016-660X. • Shou, Lin; Chuan, Liu; Mumin, Dai (1997). "Images on locally separable metric spaces". Acta Mathematica Sinica. 13 (1): 1–8. doi:10.1007/BF02560519. ISSN 1439-8516. S2CID 122383748. • Steenrod, N. E. (1967). "A convenient category of topological spaces". The Michigan Mathematical Journal. 14 (2): 133–152. doi:10.1307/mmj/1028999711. Retrieved 10 February 2021. • Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322. • Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
{"url":"https://www.knowpia.com/knowpedia/Sequential_space","timestamp":"2024-11-14T04:10:03Z","content_type":"text/html","content_length":"373290","record_id":"<urn:uuid:928018d8-a251-44bb-8117-e087165da749>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00571.warc.gz"}
Activity to Introduce Exponential Functions

Janae Castro

If you teach exponential functions, I have the perfect introductory activity for you! You may be familiar with the penny problem. Here’s how it goes…

“Your friend is hiring you to help them with their business for 30 days. They offer you the choice of two different compensations. The first is to take $0.01 on the first day, and then double that amount each subsequent day. The second option is to take $1,000 on the first day, $2,000 on the second, $3,000 on the third, etc. for 30 days. Which option do you choose?”

I present this problem to my students. Some of them have heard the problem before and choose the penny. Some students are skeptical and choose option 2. I then put the students into groups of 2 or 3 and have them make a table of values for each option. I have them do this in the Desmos app, although they can also do it on paper or whiteboards. The benefit of Desmos comes when we analyze the graphs of the table. After the students make tables for the two options, they can then start to analyze what they see. Ask questions like…

• Which one is the better choice?
• How much is the total for option 1? Option 2?
• If you were only working for 10 days, which one is the better choice? 20 days?
• When does option 1 become a better choice?

I teach exponential functions after we’ve spent a LOT of time on linear functions, so I ask my students to name what kind of function they are seeing in option 2 (linear), and create an equation to model the table points. The students come up with y = 1000x. It's a good time to review that functions can be expressed as tables, graphs, and equations! If you’re using Desmos this is also a good time to add this in to see the connected line on the graph (purple line below). Finally, as an extension, I have my students try and come up with an equation that will connect the penny dots! 
This takes a little time and some of them get it, which then launches us into our lessons on the exponential function and what the variables represent. Hope your students enjoy this intro activity as much as mine do! And when you're done, check out this mini project, a real-life application of exponential functions!

"When will I ever use this?" The famous question asked by every Algebra student. Give your students a REAL application of exponential functions with this mini project.

Task: Students will become accountants, learning how compound interest can affect the future value of an investment. They will be given several scenarios to analyze, in order to pick the best retirement account for their client.
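If you'd like to check the numbers before class, the two options can be sketched in a few lines of Python (working in cents keeps the arithmetic exact; the variable names are just illustrative):

```python
# Option 1: a penny doubled each day. Option 2: $1,000 times the day number.
def penny_daily(day):      # amount on a given day, in cents
    return 2 ** (day - 1)

def linear_daily(day):     # amount on a given day, in cents
    return 100_000 * day

penny_total = sum(penny_daily(d) for d in range(1, 31))
linear_total = sum(linear_daily(d) for d in range(1, 31))

print(f"Option 1 total: ${penny_total / 100:,.2f}")   # $10,737,418.23
print(f"Option 2 total: ${linear_total / 100:,.2f}")  # $465,000.00

# First day on which the penny option's *running total* pulls ahead:
running1 = running2 = 0
for d in range(1, 31):
    running1 += penny_daily(d)
    running2 += linear_daily(d)
    if running1 > running2:
        print(f"Option 1 pulls ahead on day {d}")      # day 25
        break
```

The daily penny amount first beats the daily $1,000-per-day amount on day 23, and the running total catches up two days later, which makes for a nice discussion of "better choice for 10 days? 20 days?".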
{"url":"https://www.mathteacherslounge.com/post/activity-to-introduce-exponential-functions","timestamp":"2024-11-07T22:58:01Z","content_type":"text/html","content_length":"1050489","record_id":"<urn:uuid:0bffedbc-8277-4f1e-859a-6a476ebd666b>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00140.warc.gz"}
Fractions Archives | Math Tutor

Decimals and Fractions are like sisters of the same family. If we know the decimal, we can find the fraction, and when we know the fraction, we can find the corresponding decimal as well. In this article, we are going to learn:

How to convert Fraction to Decimal
How to convert Decimal to Fraction […]

Fraction to Decimal Read More »

Multiplying Three Fractions

In this lesson, we are going to learn how to multiply three fractions. We have discussed how to multiply fractions from basics in another lesson. You can visit the following link to view the previous lesson: How to Multiply Fractions

Multiplying Three Fractions

Multiplying three fractions seems a little more complicated than multiplying two fractions. But the […]

Multiplying Three Fractions Read More »
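Both conversions described in these excerpts, plus the three-fraction product, can be illustrated with Python's standard `fractions` module (the specific numbers below are just made-up examples, not from the lessons):

```python
from fractions import Fraction

# Fraction to decimal: divide numerator by denominator
print(float(Fraction(3, 8)))   # 0.375

# Decimal to fraction: Fraction accepts a decimal string and reduces it
print(Fraction("0.25"))        # 1/4

# Multiplying three fractions: multiply straight across, then reduce
product = Fraction(1, 2) * Fraction(3, 4) * Fraction(2, 5)
print(product)                 # 6/40 reduced to 3/20
```

The module reduces automatically, so 1/2 × 3/4 × 2/5 = 6/40 comes back already simplified as 3/20.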
{"url":"https://mathtutory.com/category/fractions/","timestamp":"2024-11-08T21:31:59Z","content_type":"text/html","content_length":"120733","record_id":"<urn:uuid:b38f2da4-226f-404e-9f84-ad6dc429b574>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00286.warc.gz"}
Given: I2 + 3 F2 → 2 IF3

How many moles of I2 are needed to form 3 moles of IF3? How many moles of F2 are needed to form 10 moles of IF3? How many moles of I2 are needed to react with 3.5 moles of F2?
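Each of these questions is a mole-ratio conversion using the coefficients of the balanced equation. A small sketch of that arithmetic (the helper function is illustrative, not part of the original problem):

```python
# Coefficients from the balanced equation I2 + 3 F2 -> 2 IF3
coeff = {"I2": 1, "F2": 3, "IF3": 2}

def moles_needed(given_species, given_moles, wanted_species):
    """Convert moles of one species to the stoichiometrically
    equivalent moles of another via the coefficient ratio."""
    return given_moles * coeff[wanted_species] / coeff[given_species]

print(moles_needed("IF3", 3, "I2"))    # 1.5 mol I2 to form 3 mol IF3
print(moles_needed("IF3", 10, "F2"))   # 15.0 mol F2 to form 10 mol IF3
print(moles_needed("F2", 3.5, "I2"))   # about 1.17 mol I2 for 3.5 mol F2
```

The ratio 3.5 mol F2 × (1 mol I2 / 3 mol F2) does not come out even, so the last answer is usually reported rounded, e.g. 1.17 mol.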
{"url":"https://justaaa.com/chemistry/517850-given-i2-3-f2-2-if3-how-many-moles-of-i2-are","timestamp":"2024-11-04T18:35:15Z","content_type":"text/html","content_length":"38501","record_id":"<urn:uuid:63a7b3b8-4b79-4ca6-bd5d-c3f9299af6cc>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00828.warc.gz"}
AFM: Apparent Wavelength, a non-QM hidden variable (3).

Ned Latham wrote:
> [My apologies: I have bungled this article repeatedly. However, it's
> now solid, and IMO, deserves consideration. This followup replaces
> frequency and wavelength definitions echoing those of wave models with
> definitions based purely on particulate properties.]

I must apologise again. The definition of apparent wavelength below is, IMO, unsatisfactory (so much for solid). In fact a considered ballistic/emission theory must redefine both frequency and wavelength (I should also make it clear that with "emission theory" I mean particulate emission, not wave emission); I now define four coexistent quantities:

Arrival frequency of a stream of particles: the rate at which the stream's particles reach the observer;

Phase frequency of a particle: its spin rate; ie, the number of revolutions it makes per unit time;

Objective wavelength: the distance a particle travels relative to the source while revolving once;

Apparent wavelength[*]: the distance a particle travels relative to the observer while revolving once.

That last is ballistically equivalent to the definition below, but stated in physical terms rather than mathematical. Note that the definition of arrival frequency defines a measurable quantity proportionate to brightness.

> There are numerous experimental tests that purport to confirm Special
> Relativity theory and/or refute what the testers and their reviewers
> call emission theory, but those examining doppler shift all seem to
> depend on the assumption that emission theory predicts wavelength
> constancy even when source and observer are in relative motion.
> That assumption is pretty thin. Any theory predicting so is falsified
> by the very existence of doppler shift in light. There's no ground for
> denying that; therefore no such theory is viable, and there's no reason
> to bother with disproving it. Or even mentioning it, in my view. 
> Proceeding without that assumption however, is much more interesting.
> A considered emission theory must define frequency and wavelength on
> particulate properties:
>
> Frequency : spin rate; ie, the number of revolutions per unit time;
> Wavelength: linear distance travelled during one revolution
>
> but those definitions clearly imply constancy. Neither changes as a
> result of relative motion between source and observer, but the observer
> experiences change: a considered emission theory must account for that.
> What the observer experiences is the objective wavelength altered by
> the change in distance between source and observer while one wavelength
> passes; in other words, changed proportionately to the speed. It is
> given by a quantity which as far as I can tell has never been mooted
> before:
>
> Apparent Wavelength[*]: the quotient of the speed and the frequency.
>
> Given v[1] = v[0] * (c + v) / c, that gives
>
> lambda[1] = (v[0] * (c + v) / c) / f[0]
>           = v[0] / f[0] * (c + v) / c
>           = lambda[0] * (c + v) / c
>
> In other words, a considered emission theory must necessarily define a
> wavelength doppler shift factor of (c + v) / c, matching the change in
> speed. The conventional assumption turns out to be false.
> Doppler shift tests that purport to falsify emission theory are therefore
> invalidly and incorrectly interpreted. Their results should be reexamined,
> specifically to determine whether within the bounds of experimental error
> they are actually consistent with the predictions of just one of SR and
> emission theory.
> As well, the teaching should be amended, and the hidden variable revealed.
> ========
> [*] the predicted wavelength measurement
{"url":"https://groups.google.com/g/sci.physics.particle/c/4aHz-_tdLO0","timestamp":"2024-11-12T20:45:57Z","content_type":"text/html","content_length":"704406","record_id":"<urn:uuid:c76a449c-1446-42cc-a475-0679679c40d3>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00824.warc.gz"}
How do I check if a variable value in a script is a decimal or not?

Just do:

I don’t want to check through the output, I want to check through the script. I’m not sure how that’s really going to help me figure out if there is a decimal point in the variable. Like, I want to find a way to identify an Integer and a Decimal.

You could do something like:

local IsInt = (Number % 1) == 0

This will check whether the remainder after dividing the given number by 1 is zero. If it isn’t, then there’s a decimal component and thus it’s not an integer.

I don’t think there is another way to find the value of a variable only through the script and not through the output. Unless you manually give it a value.

You can also do this:

local Number1 = 4
local Number2 = 3.1415
print(string.find(Number1, "%.")) -- nil
print(string.find(Number2, "%.")) -- 2 2

You could, yes. FYI, you don’t need the == true in the statement; just doing if (string.find(Number, "%.")) then would suffice.

Yeah, it would work with the == true because string.find gives the position of the character, or nil.

Never saw % being used as a math operator, could you explain what it does?

It’s called the modulo operator. Like I said in my reply, it performs a division between two numbers and returns the remainder. For example: dividing 4 by 2 has no remainder because 2 perfectly fits into 4 twice; dividing 5 by 2 has a remainder of 1 because there is no integer multiple of 2 that is equal to 5, so we can fit two 2s into 5 with 1 left over. It’s a bit confusing at first, but just think about it a little bit.
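Both approaches discussed in the thread translate directly outside Luau; a minimal Python sketch (the helper names are illustrative, not from the thread):

```python
def is_integer_valued(n: float) -> bool:
    # Modulo approach: an integer leaves no remainder when divided by 1.
    return n % 1 == 0

def has_decimal_point(n) -> bool:
    # String approach: look for a "." in the printed form of the number.
    return "." in str(n)

print(is_integer_valued(4))       # True
print(is_integer_valued(3.1415))  # False
print(has_decimal_point(4))       # False
print(has_decimal_point(3.1415))  # True
```

Note that the two checks disagree on a value like 4.0 (integer-valued, but printed with a point), which is why the modulo check is usually the safer of the two.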
{"url":"https://devforum.roblox.com/t/how-do-i-check-if-a-variable-value-in-a-script-is-a-decimal-or-not/1996853","timestamp":"2024-11-03T17:32:50Z","content_type":"text/html","content_length":"41035","record_id":"<urn:uuid:70affe29-1932-4909-a066-589741bf8924>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00762.warc.gz"}
Navier-Stokes equations

The Navier-Stokes equations, named after Claude-Louis Navier and George Gabriel Stokes, are a set of equations that describe the motion of fluid substances like liquids and gases. These equations establish that changes in momentum (acceleration) of the particles of a fluid are simply the combined result of changes in pressure and of dissipative viscous forces (similar to friction) acting inside the fluid. These viscous forces originate in molecular interactions and dictate how sticky (viscous) a fluid is. Thus, the Navier-Stokes equations are a dynamical statement of the balance of forces acting at any given region of the fluid. They are one of the most useful sets of equations because they describe the physics of a large number of phenomena of academic and economic interest. They can be used to model weather, ocean currents, water flow in a pipe, the motion of stars inside a galaxy, or the flow around an airfoil (wing). They are used in the design of aircraft and cars, the study of blood flow, the design of power stations, the analysis of the effects of pollution, etc.

The Navier-Stokes equations are differential equations which describe the motion of a fluid. Unlike algebraic equations, they do not seek to establish a relation among the variables of interest (e.g. velocity and pressure); rather, they establish relations among the rates of change or fluxes of these quantities. In mathematical terms these rates correspond to their derivatives. Thus, the Navier-Stokes equations for the simplest case, an ideal fluid with zero viscosity, state that acceleration (the rate of change of velocity) is proportional to the derivative of the internal pressure. This means that solutions of the Navier-Stokes equations for a given physical problem must be sought with the help of calculus. In practical terms, only the simplest cases can be solved in this way and their exact solution is known.
These cases often involve non-turbulent flow in steady state (flow does not change with time) in which the viscosity of the fluid is large or its velocity is small (small Reynolds number). For more complex situations, such as global weather systems like El Niño or lift in a wing, solutions of the Navier-Stokes equations must be found with the help of computers. This is a field of science in its own right, called computational fluid dynamics. Even though turbulence is an everyday experience, it is extremely hard to find solutions for this class of problems. A $1,000,000 prize was offered in May 2000 by the Clay Mathematics Institute to whoever makes substantial progress toward a mathematical theory which will help in the understanding of this phenomenon.

Basic assumptions

Before going into the details of the Navier-Stokes equations, it is first necessary to make several assumptions about the fluid. The first one is that the fluid is continuous. This signifies that it does not contain voids formed, for example, by bubbles of dissolved gases, and that it does not consist of an aggregate of mist-like particles. Another necessary assumption is that all the fields of interest, like pressure, velocity, density, temperature, etc., are differentiable (i.e. no phase transitions). The equations are derived from the basic principles of conservation of mass, momentum, and energy. To that end it is sometimes necessary to consider a finite arbitrary volume, called a control volume, over which these principles can easily be applied. This finite volume is denoted by ${\displaystyle \Omega}$ and its bounding surface by ${\displaystyle \partial\Omega}$. The control volume can remain fixed in space or can move with the fluid. This leads, however, to special considerations, as we shall see next.

The substantive derivative

Main article: substantive derivative.

Changes in properties of a moving fluid can be measured in two different ways.
This will be illustrated through the use of the following example: the measurement of changes in wind velocity in the atmosphere. One can measure such changes with the help of an anemometer in a weather station, or by mounting the anemometer on a weather balloon. Clearly, the anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the fluid. The same situation arises in the measurement of changes in density, temperature, etc. Therefore, when differentiating, one must separate out these two cases. The derivative of a field with respect to a fixed position in space is called the spatial or Eulerian derivative. The derivative following a moving particle is called the substantive or Lagrangian derivative.

The substantive derivative is defined as the operator:

${\displaystyle {\frac {D}{Dt}}(\cdot )={\frac {\partial (\cdot )}{\partial t}}+(\mathbf {v} \cdot \nabla )(\cdot )}$

where ${\displaystyle \mathbf{v} }$ is the velocity of the fluid. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (i.e. the derivative on a fixed reference frame), whereas the second term represents the changes brought about by the moving fluid. This effect is referred to as advection.

Conservation laws

The Navier-Stokes equations are derived from the principles of conservation of mass, momentum, and energy. Additionally, it is necessary to assume a constitutive relation or state law for the fluid.

In its most general form, a conservation law states that the rate of change of an extensive property ${\displaystyle L}$ defined over a control volume must equal what is lost through the boundaries of the volume, carried out by the moving fluid, plus what is created/consumed by sources and sinks inside the control volume.
This is expressed by the following integral equation:

${\displaystyle {\frac {d}{dt}}\int _{\Omega }L\;d\Omega =-\int _{\partial \Omega }L\mathbf {v\cdot n} \;d\partial \Omega +\int _{\Omega }Q\;d\Omega }$

where v is the velocity of the fluid and ${\displaystyle Q}$ represents the sources and sinks in the fluid. If the control volume is fixed in space then this integral equation can be expressed as

${\displaystyle {\frac {d}{dt}}\int _{\Omega }L\;d\Omega =-\int _{\Omega }\nabla \cdot (L\mathbf {v} )\;d\Omega +\int _{\Omega }Q\;d\Omega }$

Note that Green's theorem was used in the derivation of this last equation in order to express the first term on the right-hand side as an integral over the interior of the control volume. Thus:

${\displaystyle {\frac {d}{dt}}\int _{\Omega }L\;d\Omega =\int _{\Omega }{\bigl (}-\nabla \cdot (L\mathbf {v} )+Q{\bigr )}\;d\Omega }$

Since this expression is valid for ${\displaystyle \Omega}$, which is fixed (invariant) in time, it is possible to swap the "${\displaystyle \frac{d}{dt} }$" and "${\displaystyle \int _{\Omega }d\Omega }$" operators. And as the expression is valid for all domains ${\displaystyle \Omega}$, we can additionally drop the integral. Introducing the substantive derivative, we get (for a conserved quantity, i.e. with ${\displaystyle Q=0}$):

${\displaystyle {\frac {D}{Dt}}\mathbf {L} +\left(\nabla \cdot \mathbf {v} \right)\mathbf {L} ={\frac {\partial }{\partial t}}\mathbf {L} +\nabla \cdot \left(\mathbf {v} \mathbf {L} \right)=0}$

For a quantity which isn't space-dependent (so that it doesn't have to be integrated over space), D/Dt gives the right comoving time rate of change.

Equation of continuity

Conservation of mass is written:

${\displaystyle {\frac {\partial \rho }{\partial t}}+\nabla \cdot \left(\rho \mathbf {v} \right)={\frac {D\rho }{Dt}}+\rho \nabla \cdot \mathbf {v} =0}$

where ${\displaystyle \rho}$ is the density of the fluid.
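The continuity equation can be sanity-checked numerically: a density profile advected at constant speed, ρ(x, t) = f(x − ct) with v = c, should satisfy ∂ρ/∂t + ∇·(ρv) = 0. A minimal 1-D finite-difference sketch (the Gaussian profile and the speed are illustrative assumptions, not from the text):

```python
import numpy as np

# 1-D check of continuity: rho(x, t) = f(x - c*t) advected at constant
# speed v = c should satisfy  d(rho)/dt + d(rho*v)/dx = 0.
c = 2.0                          # constant flow speed (illustrative)
x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]
dt = 1e-6

f = lambda s: np.exp(-s**2)      # smooth density profile (illustrative)

# Time derivative at t = 0 via a central difference in time.
drho_dt = (f(x - c * dt) - f(x + c * dt)) / (2.0 * dt)

# Flux divergence d(rho*v)/dx via central differences in space.
dflux_dx = np.gradient(f(x) * c, dx)

# The residual vanishes up to finite-difference error.
residual = drho_dt + dflux_dx
print(np.max(np.abs(residual)))  # small; shrinks as the grid is refined
```

The residual is nonzero only because of discretization error; refining the grid makes it smaller.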
In the case of an incompressible fluid, ${\displaystyle \rho}$ is not a function of time or space and the equation reduces to:

${\displaystyle \nabla \cdot \mathbf {v} =0}$

Conservation of momentum

Conservation of momentum is written:

${\displaystyle {\frac {\partial }{\partial t}}\left(\rho \mathbf {v} \right)+\nabla \cdot (\rho \mathbf {v} \otimes \mathbf {v} )=\sum \rho \mathbf {f} }$

Note that ${\displaystyle \mathbf {v} \otimes \mathbf {v} }$ is a tensor, the ${\displaystyle \otimes}$ representing the tensor product. Using the continuity equation, this can be simplified to:

${\displaystyle \rho {\frac {D\mathbf {v} }{Dt}}=\sum \rho \mathbf {f} }$

in which we recognise the usual F = ma.

The equations

General form

The form of the equations

The general form of the Navier-Stokes equations is:

${\displaystyle \rho {\frac {D\mathbf {v} }{Dt}}=\nabla \cdot \mathbb {P} +\rho \mathbf {f} }$

for the conservation of momentum. The tensor ${\displaystyle \mathbb{P}}$ represents the surface forces applied on a fluid particle (the comoving stress tensor). Unless the fluid is made up of spinning degrees of freedom like vortices, ${\displaystyle \mathbb{P}}$ is a symmetric tensor. In general, it has the form:

${\displaystyle \mathbb {P} ={\begin{pmatrix}\sigma _{xx}&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{yy}&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{zz}\end{pmatrix}}=-{\begin{pmatrix}p&0&0\\0&p&0\\0&0&p\end{pmatrix}}+{\begin{pmatrix}\sigma _{xx}+p&\tau _{xy}&\tau _{xz}\\\tau _{yx}&\sigma _{yy}+p&\tau _{yz}\\\tau _{zx}&\tau _{zy}&\sigma _{zz}+p\end{pmatrix}}}$

where the ${\displaystyle \sigma}$ are normal stresses and the ${\displaystyle \tau}$ are shear (tangential) stresses. The trace ${\displaystyle \sigma _{xx}+\sigma _{yy}+\sigma _{zz}}$ is always ${\displaystyle -3p}$ by definition (unless there is bulk viscosity), regardless of whether or not the fluid is in equilibrium.
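The matrix decomposition above, an isotropic pressure part plus a remainder whose trace vanishes, is easy to check numerically. A minimal NumPy sketch (the stress values are illustrative, not from the text):

```python
import numpy as np

# A sample symmetric stress tensor (illustrative values only).
P = np.array([[-3.0, 0.5, 0.2],
              [ 0.5, -2.0, 0.1],
              [ 0.2,  0.1, -4.0]])

# Mechanical pressure from the trace: tr(P) = -3p  =>  p = -tr(P)/3.
p = -np.trace(P) / 3.0

# Traceless (deviatoric) part T, so that P = -p*I + T.
T = P + p * np.eye(3)

print(p)             # 3.0
print(np.trace(T))   # 0.0  (traceless by construction)
```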
To which we add the continuity equation:

${\displaystyle {\frac {D\rho }{Dt}}+\rho \nabla \cdot \mathbf {v} =0}$

Finally, we have:

${\displaystyle \rho {\frac {D\mathbf {v} }{Dt}}=-\nabla p+\nabla \cdot \mathbb {T} +\rho \mathbf {f} }$

where p is the pressure and ${\displaystyle \mathbb{T}}$ is the traceless part of ${\displaystyle \mathbb{P}}$.

The closure problem

These equations are incomplete. To complete them, one must make hypotheses on the form of ${\displaystyle \mathbb{P}}$. In the case of a perfect fluid, for example, the ${\displaystyle \tau}$ components are zero. The equations used to complete the set are equations of state; for example, the pressure can be a function of, notably, density and temperature.

The variables to be solved for are the velocity components, the fluid density, static pressure, and temperature. The flow is assumed to be differentiable and continuous, allowing these balances to be expressed as partial differential equations. The equations can be converted to Wilkinson equations for the secondary variables vorticity and stream function. Solutions depend on the fluid properties (such as viscosity, specific heats, and thermal conductivity), and on the boundary conditions of the domain of study.

The components of ${\displaystyle \mathbb{P}}$ are the stresses on an infinitesimal element of fluid. They represent the normal and shear stresses. ${\displaystyle \mathbb{P}}$ is symmetric unless there is a nonzero spin density. So-called non-Newtonian fluids are simply fluids for which this tensor has no special properties allowing for special solutions of the equations.

Special forms

These are certain usual simplifications of the problem for which solutions are sometimes known.

Newtonian fluids

Main article: Newtonian fluids.
In Newtonian fluids the following assumption holds:

${\displaystyle p_{ij}=-p\delta _{ij}+\mu \left({\frac {\partial v_{i}}{\partial x_{j}}}+{\frac {\partial v_{j}}{\partial x_{i}}}-{\frac {2}{3}}\delta _{ij}\nabla \cdot \mathbf {v} \right)}$

where ${\displaystyle \mu}$ is the viscosity of the fluid.

To see how to "derive" this, we first note that in equilibrium, ${\displaystyle p_{ij}=-p\delta _{ij}}$. For a Newtonian fluid, the deviation of the comoving stress tensor from this equilibrium value is linear in the gradient of the velocity. It obviously can't depend upon the velocity itself because of Galilean covariance. In other words, ${\displaystyle p_{ij}+p\delta _{ij}}$ is linear in ${\displaystyle \partial _{i}v_{j}}$. The fluids that we are considering here are rotationally invariant (i.e., they are not liquid crystals). ${\displaystyle p_{ij}+p\delta _{ij}}$ decomposes into a traceless symmetric tensor and a trace. Similarly, ${\displaystyle \partial _{i}v_{j}}$ decomposes into a traceless symmetric tensor, a trace and an antisymmetric tensor. Any linear map from the latter to the former has to map the antisymmetric part to zero (Schur's lemma) and has two coefficients corresponding to the traceless symmetric part and the trace part. The traceless symmetric part of ${\displaystyle \partial _{i}v_{j}}$ is ${\displaystyle \partial _{i}v_{j}+\partial _{j}v_{i}-{\frac {2}{d}}\delta _{ij}\partial _{k}v_{k}}$, where d is the number of spatial dimensions, and the trace part is ${\displaystyle \delta _{ij}\partial _{k}v_{k}}$. Therefore, the most general rotationally invariant linear map is given by

${\displaystyle p_{ij}+p\delta _{ij}=\mu \left(\partial _{i}v_{j}+\partial _{j}v_{i}-{\frac {2}{d}}\delta _{ij}\nabla \cdot \mathbf {v} \right)+\mu _{B}\delta _{ij}\nabla \cdot \mathbf {v} }$

for some coefficients ${\displaystyle \mu}$ and ${\displaystyle \mu _{B}}$. ${\displaystyle \mu}$ is called the shear viscosity and ${\displaystyle \mu _{B}}$ is called the bulk viscosity. It is an empirical observation that the bulk viscosity is negligible for most fluids of interest, which is why it is often dropped.
This explains the factor of −2/3 appearing in this equation. This factor has to be modified in 1 or 2 spatial dimensions.

${\displaystyle \rho \left({\frac {\partial \mathbf {v} }{\partial t}}+\nabla _{\mathbf {v} }\mathbf {v} \right)=\rho \mathbf {f} -\nabla p+\mu \left(\Delta \mathbf {v} +{\frac {1}{3}}\nabla \left(\nabla \cdot \mathbf {v} \right)\right)}$

${\displaystyle \rho \left({\frac {\partial v_{i}}{\partial t}}+v_{j}{\frac {\partial v_{i}}{\partial x_{j}}}\right)=\rho f_{i}-{\frac {\partial p}{\partial x_{i}}}+\mu \left({\frac {\partial ^{2}v_{i}}{\partial x_{j}\partial x_{j}}}+{\frac {1}{3}}{\frac {\partial ^{2}v_{j}}{\partial x_{i}\partial x_{j}}}\right)}$

where we have used the Einstein summation convention. When written out in full it becomes clear how complex these equations really are (but only if we insist on writing every single component out explicitly):

Conservation of momentum:

${\displaystyle \rho \cdot \left({\partial u \over \partial t}+u{\partial u \over \partial x}+v{\partial u \over \partial y}+w{\partial u \over \partial z}\right)=k_{x}-{\partial p \over \partial x}+{\partial \over \partial x}\left[\mu \cdot \left(2\cdot {\partial u \over \partial x}-{\frac {2}{3}}\cdot (\nabla \cdot \mathbf {v} )\right)\right]+{\partial \over \partial y}\left[\mu \cdot \left({\partial u \over \partial y}+{\partial v \over \partial x}\right)\right]+{\partial \over \partial z}\left[\mu \cdot \left({\partial w \over \partial x}+{\partial u \over \partial z}\right)\right]}$

${\displaystyle \rho \cdot \left({\partial v \over \partial t}+u{\partial v \over \partial x}+v{\partial v \over \partial y}+w{\partial v \over \partial z}\right)=k_{y}-{\partial p \over \partial y}+{\partial \over \partial y}\left[\mu \cdot \left(2\cdot {\partial v \over \partial y}-{\frac {2}{3}}\cdot (\nabla \cdot \mathbf {v} )\right)\right]+{\partial \over \partial z}\left[\mu \cdot \left({\partial v \over \partial z}+{\partial w \over \partial y}\right)\right]+{\partial \over \partial x}\left[\mu \cdot \left({\partial u \over \partial y}+{\partial v \over \partial x}\right)\right]}$

${\displaystyle \rho \cdot \left({\partial w \over \partial t}+u{\partial w \over \partial x}+v{\partial w \over \partial y}+w{\partial w \over \partial z}\right)=k_{z}-{\partial p \over \partial z}+{\partial \over \partial z}\left[\mu \cdot \left(2\cdot {\partial w \over \partial z}-{\frac {2}{3}}\cdot (\nabla \cdot \mathbf {v} )\right)\right]+{\partial \over \partial x}\left[\mu \cdot \left({\partial w \over \partial x}+{\partial u \over \partial z}\right)\right]+{\partial \over \partial y}\left[\mu \cdot \left({\partial v \over \partial z}+{\partial w \over \partial y}\right)\right]}$

Conservation of mass:

${\displaystyle {\partial \rho \over \partial t}+{\partial (\rho \cdot u) \over \partial x}+{\partial (\rho \cdot v) \over \partial y}+{\partial (\rho \cdot w) \over \partial z}=0}$

Since density is an unknown, another equation is required.

Conservation of energy:

${\displaystyle \rho \left({\partial e \over \partial t}+u{\partial e \over \partial x}+v{\partial e \over \partial y}+w{\partial e \over \partial z}\right)=\left({\partial \over \partial x}\left(\lambda \cdot {\partial T \over \partial x}\right)+{\partial \over \partial y}\left(\lambda \cdot {\partial T \over \partial y}\right)+{\partial \over \partial z}\left(\lambda \cdot {\partial T \over \partial z}\right)\right)-p\cdot \left(\nabla \cdot \mathbf {v} \right)+\mathbf {k} \cdot \mathbf {v} +\rho \cdot {\dot {q}}_{s}+\mu \cdot \Phi }$

${\displaystyle \Phi =2\cdot \left[\left({\partial u \over \partial x}\right)^{2}+\left({\partial v \over \partial y}\right)^{2}+\left({\partial w \over \partial z}\right)^{2}\right]+\left({\partial v \over \partial x}+{\partial u \over \partial y}\right)^{2}+\left({\partial w \over \partial y}+{\partial v \over \partial z}\right)^{2}+\left({\partial u \over \partial z}+{\partial w \over \partial x}\right)^{2}-{\frac {2}{3}}\cdot \left({\partial u \over \partial x}+{\partial v \over \partial y}+{\partial w \over \partial z}\right)^{2}}$

${\displaystyle \Phi}$ is sometimes referred to as "viscous dissipation". ${\displaystyle \Phi}$ can often be neglected unless dealing with extreme flows such as high supersonic and hypersonic flight (e.g., hypersonic planes and atmospheric reentry). Assuming an ideal gas:

${\displaystyle e=c_{p}\cdot T-{\frac {p}{\rho }}}$

The above is a system of six equations and six unknowns (u, v, w, T, e and ${\displaystyle \rho}$).

Bingham fluids

Main article: Bingham plastic.

In Bingham fluids, we have something slightly different:

${\displaystyle \tau _{ij}=\tau _{0}+\mu {\frac {\partial v_{i}}{\partial x_{j}}},\;{\frac {\partial v_{i}}{\partial x_{j}}}>0}$

Those are fluids capable of bearing some shear before they start flowing. Some common examples are toothpaste and Silly Putty.

Power-law fluid

Main article: Power-law fluid.

It is an idealised fluid for which the shear stress, ${\displaystyle \tau}$, is given by

${\displaystyle \tau =K\left({\frac {\partial u}{\partial y}}\right)^{n}}$

This form is useful for approximating all sorts of general fluids.

Incompressible fluids

Main article: Incompressible fluids.

The Navier-Stokes equations are

${\displaystyle \rho {\frac {Du_{i}}{Dt}}=\rho f_{i}-{\frac {\partial p}{\partial x_{i}}}+{\frac {\partial }{\partial x_{j}}}\left[2\mu \left(e_{ij}-{\frac {\Delta \delta _{ij}}{3}}\right)\right]}$

for momentum conservation and ${\displaystyle \nabla \cdot \mathbf {v} =0}$ for conservation of mass.
${\displaystyle \rho}$ is the density, ${\displaystyle u_i}$ (${\displaystyle i = 1, 2, 3}$) the three components of velocity, ${\displaystyle f_i}$ the body forces (such as gravity), ${\displaystyle p}$ the pressure, and ${\displaystyle \mu}$ the dynamic viscosity of the fluid at a point; ${\displaystyle e_{ij}={\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{j}}}+{\frac {\partial u_{j}}{\partial x_{i}}}\right)}$; ${\displaystyle \Delta =e_{ii}}$ is the divergence, and ${\displaystyle \delta_{ij}}$ is the Kronecker delta.

If ${\displaystyle \mu}$ is uniform over the fluid, the momentum equation above simplifies to

${\displaystyle \rho {\frac {Du_{i}}{Dt}}=\rho f_{i}-{\frac {\partial p}{\partial x_{i}}}+\mu \left({\frac {\partial ^{2}u_{i}}{\partial x_{j}\partial x_{j}}}+{\frac {1}{3}}{\frac {\partial \Delta }{\partial x_{i}}}\right)}$

(if ${\displaystyle \mu = 0}$ the resulting equations are known as the Euler equations; there, the emphasis is on compressible flow and shock waves). If now in addition ${\displaystyle \rho}$ is assumed to be constant we obtain the following system:

${\displaystyle \rho \left({\partial v_{x} \over \partial t}+v_{x}{\partial v_{x} \over \partial x}+v_{y}{\partial v_{x} \over \partial y}+v_{z}{\partial v_{x} \over \partial z}\right)=\mu \left[{\partial ^{2}v_{x} \over \partial x^{2}}+{\partial ^{2}v_{x} \over \partial y^{2}}+{\partial ^{2}v_{x} \over \partial z^{2}}\right]-{\partial p \over \partial x}+\rho g_{x}}$

${\displaystyle \rho \left({\partial v_{y} \over \partial t}+v_{x}{\partial v_{y} \over \partial x}+v_{y}{\partial v_{y} \over \partial y}+v_{z}{\partial v_{y} \over \partial z}\right)=\mu \left[{\partial ^{2}v_{y} \over \partial x^{2}}+{\partial ^{2}v_{y} \over \partial y^{2}}+{\partial ^{2}v_{y} \over \partial z^{2}}\right]-{\partial p \over \partial y}+\rho g_{y}}$

${\displaystyle \rho \left({\partial v_{z} \over \partial t}+v_{x}{\partial v_{z} \over \partial x}+v_{y}{\partial v_{z} \over \partial y}+v_{z}{\partial v_{z} \over \partial z}\right)=\mu \left[{\partial ^{2}v_{z} \over \partial x^{2}}+{\partial ^{2}v_{z} \over \partial y^{2}}+{\partial ^{2}v_{z} \over \partial z^{2}}\right]-{\partial p \over \partial z}+\rho g_{z}}$

Continuity equation (assuming incompressibility):

${\displaystyle {\partial v_{x} \over \partial x}+{\partial v_{y} \over \partial y}+{\partial v_{z} \over \partial z}=0}$

Simplified version of the N-S equations. Adapted from Incompressible Flow, second edition, by Ronald Panton.

Note that the Navier-Stokes equations can only describe fluid flow approximately and that, at very small scales or under extreme conditions, real fluids made out of mixtures of discrete molecules and other material, such as suspended particles and dissolved gases, will produce different results from the continuous and homogeneous fluids modelled by the Navier-Stokes equations. Depending on the Knudsen number of the problem, statistical mechanics may be a more appropriate approach. However, the Navier-Stokes equations are useful for a wide range of practical problems, providing their limitations are borne in mind.

This page uses Creative Commons Licensed content from Wikipedia (view authors).
{"url":"https://engineering.fandom.com/wiki/Navier-Stokes_equations","timestamp":"2024-11-08T15:43:06Z","content_type":"text/html","content_length":"380644","record_id":"<urn:uuid:9457e6f7-44a6-4378-9fab-ea3934e89c33>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00668.warc.gz"}
ball mill charge pattern

The screw action pulls the ball charge up the center of the mill, and the charge eventually cascades over the edge of the screw, creating a general downward flow pattern at the mill perimeter. This pattern of flow, coupled with the low velocities involved, ensures that the grinding media and particles stay in contact with one another, thus ... WhatsApp: +86 18203695377

Particle flows in a 5 m diameter ball mill are presented. The charge behaviour, torque and power draw are analysed for a range of rotation rates from 50 to 130% of the critical speed. Sensitivity of the results to the choice of friction and restitution coefficients and to the particle size distribution are examined. ... A convex pattern surface ...

Abstract. In this investigation, we optimize the grinding circuit of a typical chromite beneficiation plant in India. The run-of-mine ore is reduced to a particle size of less than 1 mm in the ...

The ball charge and ore charge volume is a variable, subject to the target for that operation. The type of mill is also a factor: if it is an overflow mill (subject to the diameter of the discharge port) the charge is usually up to about 40–45%. If it is a grate discharge you will have more flexibility of the total charge.

For the planetary ball mill samples which did not exhibit phase transformation, an exceptional fraction (15%) of pentacoordinate Al ions (AlO5) was observed, suggesting incipient transformation of ...

Abstract. Ball size distribution is commonly used to optimise and control the quality of the mill product. A simulation model combining milling circuit and ball size distribution was used to determine the best make-up ball charge. The objective function was to find the ball mix that guarantees maximum production of the floatable size range ( ...

Finite Element Analysis.
To ensure the structural integrity of the ball mill, a quasi-static stress analysis simulation was conducted in SolidWorks. The parts were meshed using 4-node solid tetrahedral elements (Figure 16) with a minimum element size of 8 mm in zones with high stress gradients.

It also gives a rough interpretation of the ball charge efficiency. Ball top size (Bond formula): calculation of the top size grinding media (balls or cylpebs). Modification of the ball charge: this calculator analyses the granulometry of the material inside the mill and proposes a modification of the ball charge in order to improve the ...

ball ration used as an initial charge when the mill is started up. Ball rationing is considered for one or more of the following purposes: 1) to increase throughput of the ... balls on each screen will show a certain pattern, or distribution of the charge by weight, whether ball wear varies in direct proportion to its surface area as D ...

Refining the powder-diffraction pattern using SrFe12O19 and Fe3O4 results in a satisfactory fit of the data, ... 78%, for the sample being treated for the longest time (42 h) in the ball ...

Ball mill: 9000 HP. Motor speed: 990 rpm. Mill speed: 14 rpm. Mill diameter: 5 m. Mill bearing: 22 m. Project A, two-pinion girth gear drive (Fig. 2A): 1 annulus, 2 pinions; 2 reducing gears, each 2-stage; 2 main motors; 2 turning gears. Project B, central drive (Fig. 2B): 1 double planetary gear; 1 main motor; 1 turning gear.

We carried out a detailed study on the effect of particle load and ball load on grinding kinetics
by carrying out experiments at two different mill speeds (55 and 70% critical) and four levels ...

The basic parameters used in ball mill design (power calculations), rod mill or any tumbling mill sizing are: material to be ground, characteristics, Bond Work Index, bulk density, specific density, desired mill tonnage capacity DTPH, operating % solids or pulp density, feed size as F80 and maximum 'chunk size', product size as P80 and maximum a...

The difference between the average surface area of the primary ball charge pattern ( m²/ton) and the design ball charge ( m²/ton) in compartment 2 was the motivation for increasing the average surface area of balls by changing the ball charge pattern. Generally, in a ball mill, larger balls and smaller balls are charged for crushing and ...

The effects of ball charge pattern, cement fineness and two additive materials (limestone and pozzolan) on the performance of the CBM unit and the quality of cement were investigated.

Material Charge (gms)/ Split feed sample to obtain 8 to 12 samples slightly smaller than IPP. Also split out a sample for the Material Charge. Place Material Charge and Ball Charge in Mill. Run x revolutions; x = number of revolutions based on estimate of work index; usually 50, 100, 150 or 200 revolutions. Dump Mill, separate balls and Material ...

The mill speed is one of the vital parameters in ball mills, which is normally specified as a fraction of critical speed. It determines whether the load behavior is predominantly the cascading regime, the cataracting regime or the centrifuging regime.
In general, the industrial ball mill's rotational speed operates at 70%~80% of critical.

There is a similar pattern of change for both the power and breakage rate. Table 2 lists the values of power and the rate constant, both expressed as a fraction of their respective values at the optimum condition of 45 vol.% solid. ... This shifts the center of gravity of the ball charge closer to the mill axis, causing a decrease in cascading ...

Discrete element method simulations of a 1:5-scale laboratory ball mill are presented in this paper to study the influence of the contact parameters on the charge motion and the power draw. The position density limit is introduced as an efficient mathematical tool to describe and to compare the macroscopic charge motion in different scenarios, with different values of the contact ...
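Several excerpts above quote mill speed as a percentage of "critical" speed, the rotation rate at which centrifugal force just pins the charge to the shell. As a rough illustration, using the standard textbook approximation N_c = (60/2π)·√(2g/D) ≈ 42.3/√D rpm, which is not taken from these excerpts:

```python
import math

def critical_speed_rpm(diameter_m: float) -> float:
    """Critical speed of a tumbling mill in rpm: the rotation rate at
    which centrifugal force just holds the charge against the shell.
    N_c = (60 / (2*pi)) * sqrt(2*g / D)  ~=  42.3 / sqrt(D)."""
    g = 9.81  # m/s^2
    return (60.0 / (2.0 * math.pi)) * math.sqrt(2.0 * g / diameter_m)

# For the 5 m diameter mill quoted in the excerpts:
nc = critical_speed_rpm(5.0)
print(round(nc, 1))          # ~18.9 rpm
print(round(14.0 / nc, 2))   # the quoted 14 rpm is ~0.74 of critical
```

For the 5 m mill this gives about 18.9 rpm, so the quoted operating speed of 14 rpm is roughly 74% of critical, consistent with the 70%~80% range mentioned above.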
{"url":"https://tresorsdejardin.fr/ball/mill/charge/pattern-5934.html","timestamp":"2024-11-13T02:07:49Z","content_type":"application/xhtml+xml","content_length":"20674","record_id":"<urn:uuid:38b0edcc-14e0-4f6d-886c-c30ac8db2a22>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00575.warc.gz"}
Invariant version of cardinality quantifiers in superstable theories

We generalize Shelah's analysis of cardinality quantifiers from Chapter V of Classification Theory and the Number of Nonisomorphic Models for a superstable theory. We start with a set of bounds for the cardinality of each formula in some general invariant family of formulas in a superstable theory (in Classification Theory, a uniform family of formulas is considered) and find a set of derived bounds for all formulas. The set of derived bounds is sharp: up to a technical restriction, every model that satisfies the original bounds has a sufficiently saturated elementary extension that satisfies the original bounds and such that for each formula the set of its realizations in the extension has arbitrarily large cardinality below the corresponding derived bound of the formula.
• Cardinality quantifiers
• Superstable theories
{"url":"https://cris.ariel.ac.il/en/publications/invariant-version-of-cardinality-quantifiers-in-superstable-theor-3","timestamp":"2024-11-09T19:39:57Z","content_type":"text/html","content_length":"51381","record_id":"<urn:uuid:f745caff-4e3f-4a45-a7db-1f271d34824d>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00296.warc.gz"}
Simple Gear Train: Learn its Working Principle & Applications What uses a simple gear train? Real-life examples of simple gear trains include engines, lathes, clocks, and the gearbox and differential of cars, among other applications. What are the 4 types of gear trains? The four types of gear trains are: Simple, Compound, Epicyclic, and Reverted gear train. What are the advantages of a simple gear train? The simple gear train transfers large amounts of power without 'slip'. Power can be transmitted over a large centre distance or a minimal one, as required, and idler gears can be used to set the direction of rotation as desired. These are the advantages of this gear train. Which is better, a simple or a compound gear train? With smaller gears, a larger gear reduction can be obtained from the first to the last gear in a compound gear train than in a simple gear train. Hence, the compound gear train is the better choice when a large reduction is needed. What is a simple gear train? A gear train which has only one gear per shaft in the whole gear assembly is called a simple gear train.
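The defining property of a simple train can be sketched in a few lines. Because every intermediate (idler) gear is simultaneously driven and driving, its tooth count cancels out of the overall ratio; only the first and last gears matter, while each external mesh reverses the direction of rotation. A small illustrative sketch (function name and tooth counts are my own, not from the article):

```python
def simple_train_ratio(teeth: list) -> tuple:
    """Speed ratio (output/input) and rotation direction for a simple
    gear train given the tooth counts of the gears in mesh order.

    Idler tooth counts cancel, so |ratio| = first_teeth / last_teeth.
    Each external mesh flips direction: direction = (-1) ** number_of_meshes
    (+1 means the output turns the same way as the input).
    """
    ratio = teeth[0] / teeth[-1]
    direction = (-1) ** (len(teeth) - 1)
    return ratio, direction

# A 20-tooth driver, a 30-tooth idler, and a 40-tooth output gear:
ratio, direction = simple_train_ratio([20, 30, 40])
print(ratio, direction)  # 0.5 1  -- output at half speed, same direction
```

Note that the 30-tooth idler changes the direction (two meshes, so two reversals) but not the magnitude of the ratio, which is exactly the point made above about idlers.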
{"url":"https://testbook.com/mechanical-engineering/simple-gear-train-definition-diagram-and-applications","timestamp":"2024-11-09T00:19:32Z","content_type":"text/html","content_length":"864699","record_id":"<urn:uuid:56fa8f2a-09d9-4d9c-89a7-42428fd5a6da>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00028.warc.gz"}
The Hidden Reality of Quantum Physics With Sean Carroll Sean Carroll, the American theoretical physicist who specializes in quantum mechanics, gravity and cosmology, explains the hidden reality of the micro-world. The fundamental theory in physics which describes the physical properties of nature at the scale of atoms and subatomic particles is known as quantum mechanics. It is the foundation of all quantum physics, including quantum chemistry, quantum field theory, quantum technology, and quantum information science. Sean Carroll explains to us in simple terms the mysterious properties of quantum physics. He thinks that we should try to understand quantum mechanics even though prominent physicists like Richard Feynman thought no one could understand it. Quantum mechanics arose gradually from theories devised to explain observations which could not be reconciled with classical physics, such as Max Planck's solution in 1900 to the black-body radiation problem. Sean Carroll mentions how the predictions of quantum mechanics have been verified experimentally to an extremely high degree of accuracy. He also explains the collapse of the wave function in quantum mechanics. Another spooky aspect of quantum mechanics arises when quantum systems interact with each other: the result can be the creation of quantum entanglement, in which their properties become so intertwined that a description of the whole solely in terms of the individual parts is no longer possible. Contrary to popular belief, entanglement does not allow sending signals faster than light. Quantum mechanics is perhaps the most successful theory ever formulated. For almost a century, experimenters have subjected it to rigorous tests, none of which has called its foundations into question. It is truly one of the major triumphs of modern physics.
{"url":"https://immutabledistribution.com/the-hidden-reality-of-quantum-physics-with-sean-carroll/","timestamp":"2024-11-04T21:55:49Z","content_type":"text/html","content_length":"63019","record_id":"<urn:uuid:7e23c384-d4d7-4e74-8bae-316130a76622>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00357.warc.gz"}
Normalized Determinant Function (NDF) The normalized determinant function was first described by Platzker and Struble in a classic IEEE MTT-S paper and a GaAs IC Symposium paper back in 1993. NDF stands for normalized determinant function; many people say "the NDF function", even though the "F" in NDF already stands for function. If you are a serious power amplifier designer, you will need to dive into this topic, at least enough to distinguish stable from potentially unstable networks by examining origin encirclements of the NDF plotted from −∞ to +∞. Below is a description of what NDF is all about, contributed by one of the original authors on this important topic, Wayne Struble. Thanks, Wayne! All linear systems can be described with a characteristic equation whereby the natural and forced time response of that system is governed by this equation. This characteristic equation can be written as a ratio of two polynomials in s as E = P(s)/Q(s). Stability of the system is governed by the roots of E (i.e. poles and zeroes of the characteristic equation). If any poles of the system fall in the right half plane (RHP), the system's time response will grow exponentially over time (i.e. the system is unstable). Conversely, if there are no RHP poles, the system's time response will not grow exponentially over time (i.e. the system is stable). The NDF is simply a way to determine whether there are any RHP poles in the characteristic equation of the system. It does this by counting the number of RHP zeroes in the full network determinant (which is the same as counting the number of RHP poles of the characteristic equation of the system). We do this by using the "Principle of the Argument" theorem from complex analysis.
Basically, one plots the complex NDF function (magnitude and phase) versus frequency on a polar plot and counts the number of clockwise encirclements of the origin (0,0) as frequency increases from –infinity to +infinity. The number of encirclements is equal to the number of RHP poles in the system’s characteristic equation. If there are any RHP poles present, the system is unstable. This technique is mathematically rigorous for all linear systems. At the bottom of the page are some good references, and here are some presentations you can download from Microwaves101 (contributed by Wayne Struble, thanks again!) Stability Analysis for RF and Microwave Circuit Design, Wayne Struble and Aryeh Platzker. Appendix 1: NDF Stability Analysis of Linear Networks from Return Ratios, Wayne Struble and Aryeh Platzker. [1] E. Routh, “Dynamics of a System of Rigid Bodies”, 3rd Ed., Macmillan, London, 1877 [2] H. Nyquist, “Regeneration Theory”, Bell System Technical Journal, Vol. 11, pp. 126-147, Jan. 1932 [3] A. Platzker, W. Struble, and K. Hetzler, “Instabilities Diagnosis and the Role of K in Microwave Circuits”, IEEE MTT-S Digest, vol. 3, pp. 1185-1188, Jun. 1993 [4] W. Struble and A. Platzker, “A Rigorous Yet Simple Method For Determining Stability of Linear N-port Networks”, 15th Annual GaAs IC Symposium Digest, pp. 251-254, Oct. 1993 [5] A. Platzker and W. Struble, “Rigorous Determination of The Stability of Linear N-node Circuits From Network Determinants and The Appropriate Role of The Stability Factor K of Their Reduced Two-Ports”, 3rd International Workshop on Integrated Nonlinear Microwave and Millimeterwave Circuits, pp. 93-107, Oct. 1994 Below are two PDF files on NDF, make sure you grab them both. Download the Stability Analysis presentation Download the Stability Analysis appendices
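The encirclement count itself is just a winding number, and it is easy to compute from sampled NDF data: accumulate the signed phase change between consecutive samples of the curve and divide by 2π. A minimal sketch (function name and the test curves are illustrative, not from the references above; it assumes the curve is sampled densely enough that no step sweeps more than half a turn):

```python
import cmath
import math

def clockwise_encirclements(curve) -> int:
    """Net clockwise encirclements of the origin by a densely sampled
    complex curve (e.g. the NDF evaluated over a frequency sweep).

    The phase of b/a is the signed angle swept from sample a to sample b;
    summing these and dividing by 2*pi gives the winding number
    (counter-clockwise positive), which we negate for a clockwise count.
    """
    total = 0.0
    for a, b in zip(curve, curve[1:]):
        total += cmath.phase(b / a)
    return round(-total / (2 * math.pi))

# Sanity checks on synthetic curves:
t = [2 * math.pi * k / 1000 for k in range(1001)]
circle_cw = [cmath.exp(-1j * x) for x in t]      # unit circle, clockwise
print(clockwise_encirclements(circle_cw))          # 1
print(clockwise_encirclements([z + 3 for z in circle_cw]))  # 0: origin outside
```

In a real stability check, `curve` would be the NDF sampled from −∞ to +∞ (in practice, over a sufficiently wide frequency range exploiting conjugate symmetry); a nonzero clockwise count flags RHP poles and hence instability.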
{"url":"https://www.microwaves101.com/encyclopedias/normalized-determinate-function-ndf","timestamp":"2024-11-14T02:00:25Z","content_type":"application/xhtml+xml","content_length":"36095","record_id":"<urn:uuid:363fa465-a291-4dbe-881b-fca32487d809>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00719.warc.gz"}
• About this project British Premier League II This example shows the application of DDR to predicting the outcomes of football matches. The predicted season is 2020-2021. The basic concept and betting rules are explained in the previous article; they are not repeated here. The inputs are changed in this model. Since the top 17 teams from the previous season are playing in the new season, we can use previous scores as indicators of the strength of the teams involved. For any two teams with conventional names HOME and AWAY, we can find 16 matches played against the same opponents, where the HOME team played at home and the AWAY team played away. Here, for example, are the results for 2019-2020 from Wikipedia: The row with the name 'Chelsea' shows scores in matches with the other 19 teams when Chelsea played at home. The column with the name 'BOU' shows scores with all other 19 teams when Bournemouth played away. This information can be used for predicting the match 'Chelsea-Bournemouth' in the next season. The number of inputs for each team is reduced to 3. The first input is the sum of all goal differences in games where the team lost, the second input is the sum of all goal differences in games where the team won, and the third input is the number of draws. So we have 6 inputs per match, and the model target is the goal difference as a real number. For training we used historical records from 2004 through 2020. They are available in multiple locations; we used oddsportal.com. The predicted season is 2020-2021. The previous season, 2019-2020, was taken as the initial scoring table, and this table was updated with the new result after the prediction of each game. Draws were excluded from predictions, and the predicted result was either HOME or AWAY depending on the sign of the predicted goal difference. The balance summary can be seen below: The number of bets, 272, is less than 380 because we rely only on the top 17 teams from the previous season and place bets only on games where these 17 teams are playing.
Similar to the previous model, the bets are placed selectively, mostly on underestimated underdogs. 78 correct predictions out of 272 bets give a positive balance, whereas betting on favorites typically gives close to 50% correct predictions with a 5% to 15% loss of money. This model is more stable than the previous one; the result varies in each execution due to random initialization, but the profit is almost always between 15% and 20%.
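The three per-team inputs described above are simple aggregates over a team's past results. A minimal sketch of the feature construction (function name and the toy scores are illustrative, not from the article's dataset):

```python
def team_features(results):
    """Build the three inputs for one team from a list of
    (goals_for, goals_against) tuples:
      1) summed goal difference over lost games (negative),
      2) summed goal difference over won games (positive),
      3) number of draws.
    Two such triples, one for HOME and one for AWAY, give the
    model's 6 inputs per match.
    """
    loss_diff = sum(gf - ga for gf, ga in results if gf < ga)
    win_diff = sum(gf - ga for gf, ga in results if gf > ga)
    draws = sum(1 for gf, ga in results if gf == ga)
    return loss_diff, win_diff, draws

# e.g. one loss 0-2, two wins 3-1 and 2-0, and one draw 1-1:
print(team_features([(0, 2), (3, 1), (2, 0), (1, 1)]))  # (-2, 4, 1)
```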
{"url":"http://openkan.org/BPL2.html","timestamp":"2024-11-04T07:28:45Z","content_type":"text/html","content_length":"7861","record_id":"<urn:uuid:d849b9b4-d749-491a-ad24-bffdecd668f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00477.warc.gz"}
dccfit(spec, data, out.sample = 0, solver = "solnp", solver.control = list(), fit.control = list(eval.se = TRUE, stationarity = TRUE, scale = FALSE), cluster = NULL, fit = NULL, VAR.fit = NULL, realizedVol = NULL, ...)

data: A multivariate data object of class xts or one which can be coerced to such.

out.sample: A positive integer indicating the number of periods before the last to keep for out of sample forecasting.

solver: Either "nlminb", "solnp", "gosolnp" or "lbfgs". It can also optionally be a vector of length 2 with the first solver being used for the first stage univariate GARCH estimation (in which case the option of "hybrid" is also available).

fit.control: Control arguments passed to the fitting routine. The 'eval.se' option determines whether standard errors are calculated (see details below). The 'stationarity' option is for the univariate stage GARCH fitting routine, whilst for the second stage DCC this is imposed by design. The 'scale' option is also for the first stage univariate GARCH fitting routine.

cluster: A cluster object created by calling makeCluster from the parallel package. If it is not NULL, then this will be used for parallel estimation (remember to stop the cluster on completion).

VAR.fit: (optional) A previously estimated VAR object returned from calling the varxfit function.

Details: The 2-step DCC estimation fits a GARCH-Normal model to the univariate data and then proceeds to estimate the second step based on the chosen multivariate distribution. Because of this 2-step approach, standard errors are expensive to calculate and therefore the use of parallel functionality, built into both the fitting and standard error calculation routines, is key. The switch to turn off the calculation of standard errors through the 'fit.control' option could be quite useful in rolling estimation such as in the dccroll routine. The optional 'fit' argument allows to pass your own uGARCHmultifit object instead of having the routine estimate it.
This is very useful in cases of multiple use of the same fit and problems in convergence which might require a more hands on approach to the univariate fitting stage. However, it is up to the user to ensure consistency between the ‘fit’ and supplied ‘spec’.
{"url":"https://www.rdocumentation.org/packages/rmgarch/versions/1.3-7/topics/dccfit-methods","timestamp":"2024-11-06T20:59:14Z","content_type":"text/html","content_length":"71575","record_id":"<urn:uuid:9143e9a5-4770-4b72-8e01-3445208de9e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00819.warc.gz"}
This Is The Thread Where I Fix Problems I've been dispensing copious amounts of advice to people in my life lately and I think it's really turned out well and I've helped some people. I'd like to do the same for my online friends here at This is the thread where you post a problem that you have and I tell you how to solve the problem. (In before Jbutler posts "My problem is arrogant asswipes posting threads about fixing other people's problems".) Nov 6, 2014 Reaction score Dear Berghouse Forum, I never thought it would happen to me. I made it an entire year without buying my wife flowers, which I last gave her on Valentine's Day 2014. I try to be an attentive and caring partner, but flowers were something that I never quite grasped as a gift despite the fact that I know my wife enjoys receiving them. The question, then: do I go the traditional flower route tomorrow because I know she likes them despite the risk that it might also draw her attention to the fact that the last time I gave her flowers was...a year ago on Valentine's day? Or do I go for something a bit more personal and specific, but make a note for myself to send her flowers in a few weeks when she's not expecting them. Counting Flowers On The Wall Nov 2, 2014 Reaction score Not Bergs, but she's a woman so she probably has it noted in her head the last time you bought her flowers. You may be better off getting them for her now, so she can't say it's been over a year since you got me flowers. Trust me she knows. (source - single guy in his 40s) Oct 30, 2014 Reaction score When is it good to 4 bet preflop with 35os, aside from knowing its Chickens favorite hand? Feb 25, 2014 Reaction score Dear Berghouse Forum, I never thought it would happen to me. I made it an entire year without buying my wife flowers, which I last gave her on Valentine's Day 2014. 
I try to be an attentive and caring partner, but flowers were something that I never quite grasped as a gift despite the fact that I know my wife enjoys receiving them. The question, then: do I go the traditional flower route tomorrow because I know she likes them despite the risk that it might also draw her attention to the fact that the last time I gave her flowers was...a year ago on Valentine's day? Or do I go for something a bit more personal and specific, but make a note for myself to send her flowers in a few weeks when she's not expecting Counting Flowers On The Wall go with something more personal, may I suggest this? youre welcome Aug 24, 2013 Reaction score i have ample ginger ale and ice but am dangerously low on crown. halp? Aug 12, 2013 Reaction score I like to play poker at people's houses who are getting ready to sell. I then like to drink a lot and spill equal amounts on the floor to give it that nice lived in look. I'm having difficulties finding a game. When are you selling your place. I mean, when can I come play? Apr 2, 2013 Reaction score I don't know much about ice dams. Can you help me understand the damn ice? Oct 29, 2014 Reaction score I like to play poker at people's houses who are getting ready to sell. I then like to drink a lot and spill equal amounts on the floor to give it that nice lived in look. I'm having difficulties finding a game. When are you selling your place. I mean, when can I come play? I plan to be in this house for twenty years Mike...come on down. problem #2 from me: a guy posted a thread wherein he purports to fix problems, but has evidently forgotten about the thread. i phoned lisa ortolani, but she is unable to locate OP and claims to be too busy in her political campaigning to look for him any longer. i have nowhere else to turn. Oct 28, 2014 Reaction score I play at a game with lagtards so laggy that my only defense is to be a nit. How do I shed my nitty image? 
Dear Berghouse Forum, I never thought it would happen to me. I made it an entire year without buying my wife flowers, which I last gave her on Valentine's Day 2014. I try to be an attentive and caring partner, but flowers were something that I never quite grasped as a gift despite the fact that I know my wife enjoys receiving them. The question, then: do I go the traditional flower route tomorrow because I know she likes them despite the risk that it might also draw her attention to the fact that the last time I gave her flowers was...a year ago on Valentine's day? Or do I go for something a bit more personal and specific, but make a note for myself to send her flowers in a few weeks when she's not expecting Counting Flowers On The Wall Dear Flower Fucker: The easy answer here is to get her flowers every 6 months, plus or minus one month as determined by the roll of a 4 sided die. I suspect that you don't play Dungeons and Dragons because you have a sexual relationship with another live human being, which further invalidates any chance that you have a 4 sided die. You could go to a hobby store and buy a 4 sided die, but there are two problems with this - number one, you'll be on a Federal watch list, and number two, it's not going to help your current predicament. You need to find a way to give her flowers, twice, on the same day, and at the same time acknowledge that you've been a degenerate partner and terrible husband and haven't shown your undying affection for her through other material means. Here's what you're gonna do - it's a 3 step plan: 1) Give her a dozen red roses with a vase and all that other happy horseshit. Just tell the floral people to do the needful, they'll hook you up. 2) Get an extra rose - MAKE SURE YOU GET THE THORNS REMOVED - trust me on this. It's vital to step #3 3) Stuff the other rose down your pants and about an hour after you give her the dozen roses, drop your drawers and loudly exclaim "I GOT YOU ROSES AGAIN!"
This will ensure that she knows you care about her enough to get her roses on more than one occasion. It also has the happy circumstance of being truly unforgettable. The ancillary benefit is that you're not getting laid until August 2016, which means more time to go play in your regular 2/4 LHE game at the Borg, you fucking nittard. Your pal, A Thorn By Any Other Name - - - - - - - - - Updated - - - - - - - - - Not Bergs, but she's a woman so she probably has it noted in her head the last time you bought her flowers. You may be better off getting them for her now, so she can't say it's been over a year since you got me flowers. Trust me she knows. (source - single guy in his 40s) Don't take advice regarding flowers from a guy named "Chippy". Also, never take any advice from a guy whose first name also appears in his last name. Anyone named "Chippy McChiperson" hasn't had sex since the Carter administration. can i rescind my "like"? i hadn't gotten to the $2/4 LHE part when i committed to the click. When is it good to 4 bet preflop with 35os, aside from knowing its Chickens favorite hand? This is serious poker advice. I'm not kidding here. The only right time to 4 bet 35o is when your situation meets one of the following characteristics: 1) There are 7 field callers in front of you - if you smash the flop you don't want to be heads-up and lose the other player with a modest lead post-flop. 2) At least 6 of the other players are better than you - if you smash the flop you want to make it look like you're overplaying QQ or JJ because you're intimidated by your competition 3) You're very, very deep - you really want to get paid off when you connect with the flop. For example - on a KT5sss flop when you're holding red 53...you're jamming there. You connected with the flop and you want people with middle or top pair to call so you can scoop them when you spike a 3 on the river.
Also, you want made flushes to call because if you go runner-runner 53, you're totally getting all of their chips. P.S. It helps a lot if you only do this from the small blind or big blind so that you're guaranteed first in vigorish. That bullshit about position applies to games like PLO and Go Fish. If you really want position, you want to be first, not last. Who wants to be last at anything? That's dumb fucking advice. Be first, play this from the blinds, jam any flop. Post pictures of your stacks after you felt the rest of the players. Your buddy, Jam McJamerson Nov 6, 2014 Reaction score I have two adult daughters (20 and 25). I would like them to make better boyfriend decisions. Please tell me how to fix this. i have ample ginger ale and ice but am dangerously low on crown. halp? Just wow. Not only do you misspell "help" with the wrong vowel, but you've successfully stocked up on ginger ale and ice, but forgotten the booze. Is it that hard to stock up on ice. Jesus H. Christ on a popsicle stick, I live in the Northeast and I have ice on my roof. And the side of the house. And in my driveway. My house is like the fucking Ross Ice Shelf. But you've got ice. Well, congratufuckinglations on that accomplishment. And ginger ale? Ginger ale (by itself) is something only consumed when someone is massively hungover. I mean, it's fine to have a year's supply of ginger ale and crackers if you like to throw them back with authority...but first you have to get drunk, and you're apparently too inept to realize that you need to PURCHASE ALCOHOL IN ORDER TO GET FUCKED UP. I think you might be beyond help, but I'm gonna try anyway. This is really tilting at windmills here, but let's see if we can straighten you out. Here's what you're gonna do. MAKE SURE YOU DO THIS IN ORDER. If you mix it up, I'm not responsible for whatever fucked up predicament you find yourself in. Step 1: Drive to downtown St. Louis. 
Step 2: Hold a hundred dollar bill out the window Step 3: When the nice man comes to your window and ask what you need, tell him you want the "H". Just trust me on this. Step 4: Inject whatever he gives you into your femoral artery Step 5: There is no step 5. Well, there is, but you're not really gonna care after step 4. If you do this wrong, you'll end up in a bathtub in a Motel 6 in rural Missouri with your kidneys farmed. Actually, that might happen either way. But just rest in peace knowing that your harvested organs are going to the greater good. Some 85 year old Stud/8 player is going to get your kidney. It's for the best. Your best friend ever, Captain Morgan Jan 10, 2014 Reaction score Advice? Perhaps some of us need "Dirty Deeds.... Done Dirt Cheap" Let me know when you open that thread. I like to play poker at people's houses who are getting ready to sell. I then like to drink a lot and spill equal amounts on the floor to give it that nice lived in look. I'm having difficulties finding a game. When are you selling your place. I mean, when can I come play? Dear Dickwad, Last year I hosted a game with 2 full tables. We had BBQ downstairs. Halfway through the game, right when the BBQ came out, my 9 year old half-crippled, fat, mostly deaf, and probably partially retarded bulldog ambled downstairs. He walked around, checked out the situation, and then decided to take a massive dump right in front of the BBQ table. Did this stop people from playing? No. Did this stop people from eating? Fuck no. Did anyone really care at all that an animal half their size came downstairs and defecated while they were inhaling pork feet or whatever the hell I was serving that night at $1.25/lb? Helll no. Come over and spill whatever you want. I don't give a flying shit. Just understand this. You need $1K to play at my game and you're sitting in between me and a fucked-up Guinness who is overplaying 35o out of the blinds and is half out of his mind. 
Also, he's wearing a flannel shirt and won't shut up. And we're quintruple straddling and 9-betting every hand preflop. Also, when you go upstairs, walk around the right side of the table. Trust me on this. There's some shit you don't want to see on the left hand side of the table. I'm serious. Warmest Regards, Stan the Shitcan Man Jan 10, 2014 Reaction score Well, maybe could use some advice here. How do I get rich quick in real estate by using "other people's money" with no credit, and no down payment? This could end up being one of the funniest threads I don't know much about ice dams. Can you help me understand the damn ice? Dear O'Shea Jackson, An "ice damn" is when you overplay 35o out of the blinds and you get 5 bet, so you 6-bet jam as a Level 7 move, but you don't realize that the guy you're playing against is like Level 9 and shit, so you're behind but you're behind to 36o when you both get it all in for 6500bb. So the flop comes out JKQsss and you're like OK, that's cool because I have the 5s and the Level 9 fucking super-magi you're playing against doesn't have a black card in his hand and then the red deuce comes on the turn and you're like "whatever dude, I'm totally winning or at worst chopping" and you're fantasizing about all of his chips in your stack and you're like totally the mack fucking daddy but then the red 4 comes on the river and Raistlin across the friggin' table from you is like "6 plays biatch, totally ship it" and you start to complain but you're dazzled by his rocks and you decide "you know what, fuck this, I'm going back to the room and totally jerking off to channel 77 on pay per view"' and that's when this guys smiles and says "Yo, you got froze". Sorry, you got froze. Your bestest pal forever, Last edited: Aug 12, 2013 Reaction score red 4 red4 red 4. Not red 5. Damnit bergs. problem #2 from me: a guy posted a thread wherein he purports to fix problems, but has evidently forgotten about the thread. 
i phoned lisa ortolani, but she is unable to locate OP and claims to be too busy in her political campaigning to look for him any longer. i have nowhere else to turn. Dear Mrs. New Jersey, For an ostensibly accomplished attorney, you're pretty fucking stupid. There isn't a single question in your statement above. Please don't use my forum to post your weird outlandish political bullshit. Nobody cares about your political views. I'm dispensing valuable advice here. I'm making today a changing day in people's lives. Awareness without action is worthless. You don't need a pack of wild horses to learn how to make a sandwich. It's hard to see your own face without a mirror. Sometimes you just gotta give yourself what you wish someone else would give you. Also, Chris Christie is 15 minutes from mistaking your state for a chocolate covered donut and eating the entire thing anyway. Cliff notes: Ask me a question, fucktard. Love and Kisses, Phil McGraw - - - - - - - - - Updated - - - - - - - - - I play at a game with lagtards so laggy that my only defense is to be a nit. How do I shed my nitty image? Dear Nitwadius Maximus: I can definitely help you. I have the advantage of having played with you before. You probably don't remember because you were too busy discussing the 78 foot worm that lived inside your intestine for 2 months, but I was paying attention, and I have an idea. Your problem can be easily fixed. Here's what you need to do to start giving the impression that you're mixing up your game and playing loose and fast like the rest of the guys: 1) Don't bring 8 day old tuna sandwiches to a poker game. Just trust me on this. Infecting other players with airborne salmonella is not the way to open up your game, unless you want to play HU with a person with a strong immune system and no sense of smell. 2) Stop bringing your own water to games. It makes people think that you're too cheap to ask them to drink their water.
I like the preparedness, but you look like a Boy Scout. Boy Scouts don't jam 3) Bring lots of money to the game. Tons of money. And make sure you buy in with all of it. For a typical .25/.50 game, $15,000 oughta do it. 4) Jam everything blind for the first 4 hours. And drink a lot. Keg stands are compulsory. I'm pretty sure that if you follow all of my recommendations here, nobody will think you're a nit. Now, in the interest of openness, there is a pretty good chance that after 4 or 5 sessions you'll end up confirmed busto and living behind the Burger King in Salem, New Hampshire accompanied only by your pet groundhog "Harvey". But that's OK. Just think of all the free burgers you're going to get out of the trash at about 11:05pm every night. Good luck! P.S. If you get low on funds, let me know and I'll totally paypal you. - Doug "Toolbox" Lee - - - - - - - - - Updated - - - - - - - - - red 4 red4 red 4. Not red 5. Damnit bergs. I meant red 4, not a red 5. Sorry. I was busy reading jbutler's post trying to figure out where dickhead was asking was a question. Your best buddy, Major Richard Cranium, USN - - - - - - - - - Updated - - - - - - - - - can i rescind my "like"? i hadn't gotten to the $2/4 LHE part when i committed to the click. Dear Premature Eclickator: No, you can't rescind a "like". I mean, I couldn't give two flying shits about what you think for a myriad of reasons that I can't even begin to delve into here, but that's not the point. My point is this - why are you prematurely liking? I mean, shit dude, if you need to go to the doctor and get that shit cleaned up than man up and do what you gotta do, but don't just throw it out there on the internet that you prematurely like. Jesus, talk about inviting abuse. I understand that some of this stuff is physiological and can't be helped, but you just sorta threw that out there, and I'm not sure that people are going to treat you the same now that they know you're a premature liker. 
Cliffs: No, read the entire post. P.S. I redid your avatar for you. Consider this a gift from someone that has everything to someone with a 1st grade education who is fond of pointing and grunting at things he wants like a 4 year old in the Toys section at Wal-Mart. Your penpal, Justice Alfred Moore Nov 2, 2014 Reaction score Real life problem. I signed up for the PLOSAY free roll, but it looks like I'm surrounded by people who eat hockey pucks drowned in maple syrup for breakfast. Do you know any degens from the good ole U S of freakin A who might want to fill in the last couple of slots? I have two adult daughters (20 and 25). I would like them to make better boyfriend decisions. Please tell me how to fix this. Dear Eminem: You've really got two choices. You can shoot the daughters, or you can shoot the boyfriends. The problem with shooting the boyfriends is that you'll probably have to keep doing it for a while, and after both daughters have dated like 8 or 9 guys each and they've shown up on the evening news, they're probably going to come to the realization that overprotective Dad is busting a cap in their I'm not going to advocate shooting the daughters. It might be easier to shoot yourself, but that's also messy and usually invalidates insurance claims. There's a third option - it's a bit of a long-range plan, but this works, trust me. Step 1: Study to become a ship's captain. Step 2: Acquire a small freighter flying the flag of a nation that can't possibly defend itself Step 3: Wander very close to the coast of Somalia There is no step 4. Some dude that weighs 87 pounds and has been chomping on khat for the last 35 days is going to heave-ho your ass into the Somalian Sea, and your bloated corpse will wash up somewhere in Kenya a few weeks later. The upside is that you won't care at all about who your daughters are dating at that point. Best of luck! -Richard Phillips - - - - - - - - - Updated - - - - - - - - - Real life problem.
I signed up for the PLOSAY free roll, but it looks like I'm surrounded by people who eat hockey pucks drowned in maple syrup for breakfast. Do you know any degens from the good ole U S of freakin A who might want to fill in the last couple of slots? Dear Jim Craig, I'm assuming that you're referring to the people that I charitably refer to as "Canucks". I think it's a moral imperative to ensure that stupid Canadians don't beat us at anything except hockey, which isn't fair anyway because they have a hockey stick entwined in their DNA. Honestly, would anyone miss the State of Canada if it were wiped off the face of the planet? OK, that sounds horrible, but let's look at this realistically...if aliens invaded the Earth and said "we require your natural resources", is there any reason why the UN wouldn't be like "fuck it, your green ass can have Canada"? Canada is full of soil and ice and rocks and trees and shit. Most of the people there have been drunk since they were 14 years old. Shit, the British Empire was crumbling like a piece of overcooked toast, but they took one glance at Canada and said "OK, fine, go fuck yourselves, do whatever you want". Keep in mind that England is a country that went to war with Argentina over a bunch of small islands called the "Falklands". I can't name one thing that has ever come out of the Falklands. Not a single fucking thing. But England thought the Falklands were more important than Canada. Jesus Christ...England fought a battle with the Scottish over their own independence, and the Scots haven't seen the Sun in 4 millennia. They wear kilts. They consider a cow's stomach with the cow's organs boiled within to be a "delicacy". These fuckers have never heard of Ruth Chris' Steakhouse. But there the Brits are, fighting the Scots for Scotland, and giving stupid Canada a lick and a promise. Fuck Canada. Fuck Canada with Guy Lafleur's dick. Just win the tournament and tell me about it afterwards.
Your benefactor and number one fan, - Gordie Goddamn Howe

Dear provider of real world wisdom, Tell me how to win the lottery. Not those cheap 2 million dollar ones but the big 500+ million dollar ones and not just one of them... all of them! The end

Why are the Cleveland Browns more entertaining in the off season? Ray Farmer is going to have draft picks taken away, Josh Gordon is suspended for the whole year, Johnny Manziel is in Rehab and season ticket prices are increasing. Also since when did it become acceptable to chew with ones mouth open?

Dear provider of real world wisdom, Tell me how to win the lottery. Not those cheap 2 million dollar ones but the big 500+ million dollar ones and not just one of them... all of them! The end

Dear Lord of Lactose: Before I answer your question (and there is an answer - I watched the relevant parts of Real Genius), I need to show you something. It's a fascinating, wondrous, captivating world called "mathematics". In "mathematics" people use logic and reason to create formulas and shit, and these are used to prove or disprove certain beliefs. For example - you want to win a lottery. Excuse me, I stand corrected. Your unrealistic, over-optimistic cheese munching ass wants to "win all the lotteries". We can apply a simple mathematical formula to this to determine whether this is realistic, or whether you're just being a fucking moron. Here's the relevant excerpt from Wikipedia, the fount of all human knowledge since the Dawn of Man: Bayes' Theorem, A Quick Introduction We all know that the probability of a hypothesis being true often changes in light of the evidence. Wouldn't it be cool if math could help us show how it works? Fortunately, math is cool enough to help out here thanks to something called Bayes' theorem. In this article I'll introduce Bayes' theorem and the insights it gives about how evidence works.
In my next blog entry I'll show how Bayes' theorem can be applied in the service of theism. One Form of Bayes' Theorem Bayes' theorem is often used to mathematically show how the probability of some hypothesis changes in light of new evidence. Bayes' theorem is named after Reverend Thomas Bayes, an ordained Christian minister and mathematician, who presented the theorem in 1764 in his Essay towards solving a problem in the doctrine of chances. Before showing what the theorem is, I'll recap some basic probability symbolism.

Pr(A) = the probability of A being true; e.g. Pr(A) = 0.5 means "The probability of A being true is 50%."
Pr(A|B) = the probability of A being true given that B is true. For example: Pr(I am wet|It is raining) = 0.8 means "The probability that I am wet given that it is raining is 80%."
Pr(¬A) = the probability of A being false (¬A is read as "not-A"); e.g. Pr(¬A) = 0.5 means "The probability of A being false is 50%."
Pr(B ∪ C) = the probability that B or C (or both) are true.
Pr(B ∩ C) = the probability that B and C are both true.
Pr(A|B ∩ C) = the probability of A given that both B and C are true.

Some alternate forms: Pr(¬A) may be written Pr(~A) or Pr(−A); Pr(B ∪ C) may be written Pr(B ∨ C); Pr(B ∩ C) may be written Pr(B ∧ C) or Pr(B&C). The alternate forms can be combined, e.g. an alternate form of Pr(H|E) is P(H/E). Bayes' theorem comes in a number of varieties, but here's one of the simpler ones, where H is the hypothesis and E is the evidence:

Pr(H|E) = Pr(H) × Pr(E|H) / Pr(E)

In the situation where hypothesis H explains evidence E, Pr(E|H) basically becomes a measure of the hypothesis's explanatory power. Pr(H|E) is called the posterior probability of H.
Pr(H) is the prior probability of H, and Pr(E) is the prior probability of the evidence (very roughly, a measure of how surprising it is that we'd find the evidence). Prior probabilities are probabilities relative to background knowledge, e.g. Pr(E) is the likelihood that we'd find evidence E relative to our background knowledge. Background knowledge is actually used throughout Bayes' theorem however, so we could view the theorem this way, where B is our background knowledge:

Pr(H|E&B) = Pr(H|B) × Pr(E|H&B) / Pr(E|B)

So what does this tell us? It tells us that people who are really, fantastically, unbelievably smart can't figure out how to win the lottery even once, never mind every time. Do you know how I figured this out? Do you think I understand Bayes Theorem? Fuck no, I can't even do pot odds. I know smart people can't figure out how to win the lottery even once because if they could THEY WOULD BE ON A GODDAMN BEACH SOMEWHERE. Think about it. If I could win the goddamn lottery, would I be dispensing free (albeit invaluable) information on a cheap knockoff of ChipTalk? I don't expect you to understand any of this. You're from Minn-e-zota, the land that forgot sun and the consonant 'S'. Just try to keep up and don't mention the Vikings, OK, champ? Edward Teller - - - - - - - - - Updated - - - - - - - - - Why are the Cleveland Browns more entertaining in the off season? Ray Farmer is going to have draft picks taken away, Josh Gordon is suspended for the whole year, Johnny Manziel is in Rehab and season ticket prices are increasing. Also since when did it become acceptable to chew with ones mouth open? Dear Forlorn One, Firstly, let's get some housekeeping out of the way. Ask me one question at a time. Is this so goddamn hard?
Jesus, Mary, and Ed...I've got the Communist from New Jersey asking me absolutely nothing and just making random statements comprised primarily of sentence fragments that eschewed punctuation, and then you trying to double up and get extra value from a double question single post. I can understand the frustration you must feel. I can sense your overwhelming ire. You're a Cleveland fan. You probably go to bed at night thinking about Craig Ehlo crumpling to the ground after MJ hit the 18 footer. It's gotta be tough with your basketball team's major highlight focusing on the other team and being subject to a Gatorade commercial with legs. And your baseball team's most famous exports are Charlie Sheen and Albert Belle's shivering forearm. But you'll always have football. Good 'ole American football. So let's get to question number one, and then out of the benevolence of my gracious heart, I'll also give you a succinct answer to question number two. Just don't pull this shit again, OK, Knute? Question #1: Why are the Cleveland Browns more entertaining in the off season? Ray Farmer is going to have draft picks taken away, Josh Gordon is suspended for the whole year, Johnny Manziel is in Rehab and season ticket prices are increasing. You're making the dangerous assumption that the Cleveland Browns are entertaining in any season. They're not, but let's entertain you for the moment and assume they are. Firstly, taking away Ray Farmer's draft picks is really a favor to Ray Farmer. I don't even know who Ray Farmer is. I'll assume he's not Kevin Costner's character in "The Draft" and that he's, you know, a live human being who purports to be the Cleveland Browns General Manager. Do you really want to hear the commissioner say "With the most useless pick in the 2015 NFL draft, the Cleveland (choking back laughter) Browns select....."? Do you really want him to end that sentence? Do you want to hear the sound of your GM bending over and taking one right in the cornhole?
That's basically what's going to happen. Just be happy that the Browns have less exposure to massive embarrassment on ESPN. Honestly, it's best if your franchise just moves to Los Angeles and calls themselves the "Mastadons". Josh Gordon and Johnny Manziel are in the wrong business. Can you imagine if there was a Professional Beer Pong League (BPL)? These guys would be THE BEST OF ALL FUCKING TIME! There would be nobody better, ever. It's like MJ and LeBron playing on the same team. Like Gehrig and Ruth (wait, that actually happened). Like Montana and Brady (OK, that makes no sense because they're both quarterbacks - whatfuckingever). They would be UNSTOPPABLE. Unfortunately, these fucktards play football. Josh Gordon can't keep the bong out of his mouth and Johnny Football is the biggest bust since Ryan Leaf (who was incidentally just incarcerated, again). I don't have a lot of advice for you, except these 3 options:

1) Kill yourself.
2) Volunteer as DetroitDad's First Mate on his cruise to the east of Africa.
3) Become a Cubs fan. Then kill yourself.

Regarding your second question ("Also since when did it become acceptable to chew with ones mouth open?"), I've found that it largely depends on the situation. If I was 6'6", weighed 325 lbs, and had a body fat of 4%, I'd pretty much take a shit on your face and there isn't much that you can do about it. Be happy that it's just chewing and get over it. You can always do what I do and stare uncomfortably at the menu while wondering how many beers you're going to get down before your significant other says "Isn't 14 enough for dinner?" - Brandon Weeden's Brandon Weeden Face - - - - - - - - - Updated - - - - - - - - - Advice? Perhaps some of us need "Dirty Deeds.... Done Dirt Cheap" Let me know when you open that thread. Thanks for your writing, King Zilla.
I'm in a serious relationship at the moment, and am unfortunately not interested in your overt sexual advances, but let me know if you turn into that chick with the big boobs that's banging Justin Verlander. Best wishes, - A Guy That Wishes He Was Justin Verlander for 18 Minutes

Dear Fixer, The NCAA men's basketball tournament starts in a month. I'd really like Louisville to win it all. Please fix.

Canada is full of soil and ice and rocks and trees and shit. Most of the people there have been drunk since they were 14 years old. Shit, the British Empire was crumbling like a piece of overcooked toast, but they took one glance at Canada and said "OK, fine, go fuck yourselves, do whatever you want". Keep in mind that England is a country that went to war with Argentina over a bunch of small islands called the "Falklands". I can't name one thing that has ever come out of the Falklands. Not a single fucking thing. But England thought the Falklands were more important than Canada. Jesus Christ...England fought a battle with the Scottish over their own independence, and the Scots haven't seen the Sun in 4 millennia. They wear kilts. They consider a cow's stomach with the cow's organs boiled within to be a "delicacy". These fuckers have never heard of Ruth Chris' Steakhouse. But there the Brits are, fighting the Scots for Scotland, and giving stupid Canada a lick and a promise.

omg literally crying on the stationary bike at the gym reading this
{"url":"https://www.pokerchipforum.com/threads/this-is-the-thread-where-i-fix-problems.5102/","timestamp":"2024-11-04T20:15:39Z","content_type":"text/html","content_length":"286871","record_id":"<urn:uuid:3bd9a8e5-7cd6-465e-84a1-7f5a5fe3351c>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00701.warc.gz"}
At a certain distance from the center of the Earth, a 4.0-kg object has a weight of 3.1 N. Find this distance. If the object is released at this location and allowed to fall toward the Earth, what is its initial acceleration? If the object is now moved twice as far from the Earth, by what factor does its weight change? By what factor does its initial acceleration change?
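The problem above is straightforward to work through in code. The sketch below assumes standard values for Newton's constant and the Earth's mass, since the problem statement itself gives neither:

```python
import math

# Assumed constants (not given in the problem statement)
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg

m = 4.0  # kg, the object's mass
W = 3.1  # N, its weight at the unknown distance r

# Newton's law of gravitation: W = G * M * m / r^2, solved for r
r = math.sqrt(G * M_EARTH * m / W)

# Released from rest, the initial acceleration follows from a = W / m
a = W / m

# At twice the distance, the inverse-square law scales the weight
# (and hence the initial acceleration) by (1/2)^2 = 1/4
factor = 0.5 ** 2

print(f"r = {r:.2e} m")   # roughly 2.3e7 m from Earth's center
print(f"a = {a:.3f} m/s^2")
print(f"weight and acceleration scale by {factor}")
```

Note that the initial acceleration needs no new physics at all: it is just the weight divided by the mass, 3.1 N / 4.0 kg = 0.775 m/s².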
{"url":"https://justaaa.com/physics/1290608-at-a-certain-distance-from-the-center-of-the","timestamp":"2024-11-02T22:05:15Z","content_type":"text/html","content_length":"39979","record_id":"<urn:uuid:3f9122b0-530e-40c4-8313-4150cd338a06>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00133.warc.gz"}
Randomized Algorithms for Comparison-based Search Part of Advances in Neural Information Processing Systems 24 (NIPS 2011) Dominique Tschopp, Suhas Diggavi, Payam Delgosha, Soheil Mohajer This paper addresses the problem of finding the nearest neighbor (or one of the $R$-nearest neighbors) of a query object $q$ in a database of $n$ objects, when we can only use a comparison oracle. The comparison oracle, given two reference objects and a query object, returns the reference object most similar to the query object. The main problem we study is how to search the database for the nearest neighbor (NN) of a query, while minimizing the number of questions. The difficulty of this problem depends on properties of the underlying database. We show the importance of a characterization: \emph{combinatorial disorder} $D$ which defines approximate triangle inequalities on ranks. We present a lower bound of $\Omega(D\log \frac{n}{D}+D^2)$ average number of questions in the search phase for any randomized algorithm, which demonstrates the fundamental role of $D$ for worst case behavior. We develop a randomized scheme for NN retrieval in $O(D^3\log^2 n+ D\log^2 n \log\log n^{D^3})$ questions. The learning requires asking $O(n D^3\log^2 n+ D \log^2 n \log\log n^{D^3})$ questions and $O(n\log^2n/\log(2D))$ bits to store.
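The access model in the abstract can be made concrete with a toy sketch. The oracle wrapper and the naive linear-scan search below are illustrative assumptions of mine, not the paper's scheme — the paper's point is precisely that a randomized search can do far better than the $n-1$ oracle calls this baseline spends:

```python
def make_comparison_oracle(dist):
    """Wrap a metric so the searcher may only ask: given reference
    objects a and b and query q, which reference is more similar to q?
    The numeric distances themselves are never exposed to the caller."""
    def oracle(a, b, q):
        return a if dist(a, q) <= dist(b, q) else b
    return oracle

def naive_nn(database, q, oracle):
    """Baseline NN retrieval using n - 1 oracle questions: keep the
    winner of successive pairwise comparisons against the query."""
    best = database[0]
    for candidate in database[1:]:
        best = oracle(best, candidate, q)
    return best

# Toy 1-D database with absolute difference as the hidden metric
db = [0.0, 3.0, 7.5, 10.0]
oracle = make_comparison_oracle(lambda a, b: abs(a - b))
print(naive_nn(db, 6.9, oracle))  # 7.5 is the nearest neighbor of 6.9
```

The combinatorial disorder $D$ of the database does not matter for this brute-force baseline; it only becomes relevant for schemes that try to prune candidates using rank information, as the paper's algorithm does.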
{"url":"https://proceedings.nips.cc/paper_files/paper/2011/hash/eb86d510361fc23b59f18c1bc9802cc6-Abstract.html","timestamp":"2024-11-03T04:36:16Z","content_type":"text/html","content_length":"9141","record_id":"<urn:uuid:c5662606-50c8-486b-8059-a93613714fa8>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00396.warc.gz"}
The Planck length as a minimal length

The best scientific arguments are those that are surprising at first sight, yet at second sight they make perfect sense. The following argument, which goes back to Mead's 1964 paper "Possible Connection Between Gravitation and Fundamental Length," is of this type. Look at the abstract and note that it took more than 5 years from submission to publication of the paper. Clearly, Mead's argument seemed controversial at this time, even though all he did was to study the resolution of a microscope taking into account gravity. For all practical purposes, the gravitational interaction is far too weak to be of relevance for microscopy. Normally, we can neglect gravity, in which case we can use Heisenberg's argument that I first want to remind you of before adding gravity. In the following, the speed of light and Planck's constant ℏ are equal to one, unless they are not. If you don't know how natural units work, you should watch this video, or scroll down past the equations and just read the conclusion. Consider a photon with frequency ω, moving in direction x, which scatters on a particle whose position on the x-axis we want to measure (see image below). The scattered photons that reach the lens (red) of the microscope have to lie within an angle ε to produce an image from which we want to infer the position of the particle. According to classical optics, the wavelength λ ≈ 1/ω of the photon sets a limit to the possible resolution: Δx ≳ λ/sin ε. But the photon used to measure the position of the particle has a recoil when it scatters and transfers a momentum to the particle.
Since one does not know the direction of the photon to better than ε, this results in an uncertainty for the momentum of the particle in direction x of Δp ≳ ω sin ε. Taken together, one obtains Heisenberg's uncertainty principle: Δx Δp ≳ 1. We know today that Heisenberg's uncertainty principle is more than a limit on the resolution of microscopes; up to a factor of order one, the above inequality is a fundamental principle of quantum mechanics. Now we repeat this little exercise by taking into account gravity. Since we know that Heisenberg's uncertainty principle is a fundamental property of nature, it does not make sense, strictly speaking, to speak of the position and momentum of the particle at the same time. Consequently, instead of speaking about the photon scattering off the particle as if that would happen in one particular point, we should speak of the photon having a strong interaction with the particle in some region of size R (shown in the above image). With gravity, the relevant question now will be what happens to the measured particle due to the gravitational attraction of the test particle. For any interaction to take place and subsequent measurement to be possible, the time elapsed between the interaction and measurement has to be at least of the order of the time, τ, the photon needs to travel the distance R, so that τ is larger than R. (The blogger editor has an issue with the "larger than" and "smaller than" signs, which is why I avoid using them.) The photon carries an energy that, though in general tiny, exerts a gravitational pull on the particle whose position we wish to measure. The gravitational acceleration acting on the particle is at least of the order of a ≈ Gω/R². Here G is Newton's constant which is, in natural units, the square of the Planck length l[Pl]. Assuming that the particle is non-relativistic and much slower than the photon, the acceleration lasts about the duration the photon is in the region of strong interaction.
From this, the particle acquires a velocity of v ≈ aR ≈ Gω/R. Thus, in the time R, the acquired velocity allows the particle to travel a distance of d ≈ vR ≈ Gω. Since the direction of the photon was unknown to within ε, the direction of the acceleration and the motion of the particle is also unknown. Projection on the x-axis then yields the additional uncertainty of Δx ≳ Gω sin ε. Combining this with the usual uncertainty Δx ≳ 1/(ω sin ε) (multiply both, then take the square root), one obtains Δx ≳ √G = l[Pl]. Thus, we find that the distortion of the measured particle by the gravitational field of the particle used for measurement prevents the resolution of arbitrarily small structures. Resolution is bounded by the Planck length, which is about 10⁻³³ cm. The Planck length thus plays the role of a minimal length. (You might criticize this argument because it makes use of Newtonian gravity rather than general relativity, so let me add that, in his paper, Mead goes on to show that the estimate remains valid also in general relativity.) As anticipated, this minimal length is far too small to be of relevance for actual microscopes; its relevance is conceptual. Given that Heisenberg's uncertainty turned out to be a fundamental property of quantum mechanics, encoded in the commutation relations, we have to ask then if not this modified uncertainty too should be promoted to fundamental relevance. In fact, in the last 5 decades this simple argument has inspired a great many works that attempted exactly this. But that is a different story and shall be told another time. To finish this story, let me instead quote from a letter that Mead, the author of the above argument, wrote to Physics Today in 2001.
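A small numerical scan makes the punchline visible: turning up the photon energy tightens the optics bound but loosens the gravity bound, and the best achievable resolution bottoms out at √G. The value of G below is arbitrary (natural units, chosen purely for illustration), and order-one factors are ignored, as in the argument above:

```python
import math

G = 1e-6  # Newton's constant in arbitrary natural units (illustrative only)

def resolution_bound(ws):
    """Best resolution at a given value of w*sin(eps): the larger of
    the optics bound ~1/(w sin eps) and the gravity bound ~G w sin eps."""
    return max(1.0 / ws, G * ws)

# Scan photon energies over six orders of magnitude
ws_values = [10 ** (k / 10) for k in range(0, 61)]
best = min(resolution_bound(ws) for ws in ws_values)

print(best, math.sqrt(G))  # the bound never drops below sqrt(G)
```

The minimum sits where the two bounds cross, at ω sin ε ≈ 1/√G, reproducing Δx ≳ √G without any calculus.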
In it, he recalls how little attention his argument originally received: "[In the 1960s], I read many referee reports on my papers and discussed the matter with every theoretical physicist who was willing to listen; nobody that I contacted recognized the connection with the Planck proposal, and few took seriously the idea of [the Planck length] as a possible fundamental length. The view was nearly unanimous, not just that I had failed to prove my result, but that the Planck length could never play a fundamental role in physics. A minority held that there could be no fundamental length at all, but most were then convinced that a [different] fundamental length..., of the order of the proton Compton wavelength, was the wave of the future. Moreover, the people I contacted seemed to treat this much longer fundamental length as established fact, not speculation, despite the lack of actual evidence for it." 31 comments: 1. While extraordinary claims demand extraordinary evidence, I am always amazed when claims which are obvious in hindsight, even mathematical theorems, are treated sceptically. Maybe it is because people don't want to admit that they are ashamed that they didn't think of it first. 2. http://www.nature.com/scitable/topicpage/human-chromosome-number-294 Feynman's Nobel Prize pivoted on a V−A (left-handed) Lagrangian for weak interactions. Everybody knew it was S-T. It had to be S-T. Sudarshan was correct much earlier on. He was told to piss off, as were Yang and Lee. Yang and Lee had Madame Wu. Feynman had Feynman (and Gell-Mann). Sudarshan got nothing. Gravitation is not a fashion statement. The plural of "anecdote" is not "data," and data are not information. Why must the vacuum be fundamentally continuous and isotropic toward fermionic mass? 3. I dunno, Al. Why not? You tell us. 4. Also, thanks Bee, for this wonderful blogpost. I can't help thinking though, you've only told us half a story, which is cool, because it builds interest therefore anticipation. 
This is the sort of stuff that makes Science fun! Can't wait to hear how this pans out. Good on ya. 5. Hi Bee, I am just trying to orientate from your perspective.:)At that Planck length of course one runs into trouble with some geometrical description so how indeed would some quasi-description ever be satisfied as to defining the shape of things in a matter orientated world? LISA will be sensitive to waves in the frequency band between 0.03 milliHertz to 100 milliHertz, including signals from massive black holes that merge at the center of galaxies, or that consume smaller compact objects; from binaries of compact stars in our Galaxy; and possibly from other sources of cosmological origin, such as the very early phase of the Big Bang, and speculative astrophysical objects like cosmic strings and domain boundaries Sir Roger Penrose of course has his own ideas too. What is the basis of his experimental views? Accepting that wavefunctions are physically real, Penrose believes that things can exist in more than one place at one time. In his view, a macroscopic system, like a human being, cannot exist in more than one position because it has a significant gravitational field. A microscopic system, like an electron, has an insignificant gravitational field, and can exist in more than one location almost indefinitely. See:The Penrose interpretation 6. Bee, Your choice of font makes it difficult to perceive italicized elements of quoted sources as they are demonstrated in comment section. Has comment section been given an italicized choice? This is new I think? If no desire to change will adapt to the way quotes are demonstrated according to that selection. No problem for the future. 7. Shouldn't that be generalized to a Planck volume instead? Which you could combine the the Heisenberg uncertainty principle to ensure that singularities are not observable? \Delta volume \times \Delta mass \geq \frac{\hbar \times length_planck}{c} 8. 
Hi Bee, The derivation of the Heisenberg principle, interestingly enough, does not involve the electric charge, i.e., how the photon couples to the particle is not relevant. For the gravity case, G enters, however, we are only after Δx. I suppose a careful argument using EM analogous to the Mead argument would yield Δ x ≥ fine structure constant * compton wavelength (or some such). Since the gravity bound is much more stringent than this, I can see why people were skeptical of Mead's argument. Thanks for the post! 9. Hi Plato, It's not so much my choice of font as the default that came with the template. You are right, the italics are difficult to see. I'll try to change the font, I'm not a big fan of arial anyway. 10. Hi Bee, The Compton wavelength limit normally would have a priority. But I guess he considers non relativistic physics only. 11. Hi Giotis, No, he does it fully relativistic also, I just haven't added the more complete argument here. If you turn up the energy of the photon you can get down the wavelength. If you don't take into account gravity, you can do this arbitrarily. The point is here that when you reach Planckian energies, you start perturbing the particle you are trying to measure in such a way that going to even higher frequencies doesn't help. Best, 12. Hi Aaron, Yes, without spherical symmetry one may expect that volumes are the relevant quantity to talk about. This argument has been made eg here (page 5/6), and though plausible it is not particularly bloggable if you see what I mean. Best, 13. and what is the relation with the Compton wavelength? 14. I am probably misunderstanding the question. The Compton wavelength of what? 15. The Compton wavelength as a limit for which positions measurements become ill defined due to creation of particles. I mean if the energy of the photon exceeds some limit according to QFT particles would be created. 16. The photon's energy always exceeds the Planck energy in some restframe. 
But yes, if that is what you mean, if it interacts with the particle at very high energies, a microscope isn't anymore a very good analogy since, as you say, you'd have a very inelastic scattering and you'd have to figure out what was going on from the outgoing particles rather than watching photons on a screen. 17. Thanks Bee for taking the time to answer these questions. 18. Where could I get a free copy of the paper? I am very interested into study it! Best wishes! 19. Hi Juan, Unfortunately, I cannot be of help. I only have a printed version of the paper, and presently no journal access, so cannot download a PDF. Maybe somebody else can send one? Best, 20. This comment has been removed by the author. 21. Hi All, Mead Article This link expires in 10 days. Best wishes 22. Given frequency, wavelength inversely varies with refractive index. Anomalous dispersion reduces local speed of light to 340 m/s (ruby), to 17 meters/sec (BEC), and to zero. Electromagnetic Planck length may suffer interaction wildly altering scale. Gravitation is interaction. Equivalence Principle (EP) composition, field, and photon tests are inert to 5x10^(-14) difference/average. Physics cannot derive fermionic mass parity asymmetries. They are manually inserted. If gravitation is geometry, the test is massed geometry. Opposite shoes violate the EP. Chemically and macroscopically identical, enantiomorphic crystal lattices violate the EP. 1) Left- versus right-handed alpha-quartz or gamma-glycine single crystals, Eotvos experiment. 2) North-south aligned enantiomorphic crystal lattices and changing enthalpies of fusion to identical achiral melts over 24 hours, benzil. Physics, like Euclid, contains no errors. Euclid fails on a sphere. Falsification occurs external to founding postulates. 23. Hi Bee, It’s indeed interesting to wonder what if anything can be defined as the minimum of length and yet as J.S. 
Bell would point out quite another thing to consider what exactly it is we are attempting to have measured as to be so defined. "The concept of 'measurement' becomes so fuzzy on reflection that it is quite surprising to have it appearing in physical theory at the most fundamental level. Less surprising is perhaps that mathematicians, who need only simple axioms about otherwise undefined objects, have been able to write extensive works on quantum measurement theory - which experimental physicists do not find it necessary to read. Mathematics has been well called 'the subject in which we never know what we are talking about' [Bell quoting Bertrand Russell]. Physicists confronted with such questions, are soon making measurement a matter of degree, talking of 'good' measurements and 'bad' ones. But the postulates quoted know nothing of 'good' and 'bad'. And does not any analysis of measurement require concepts more fundamental than measurement? And should not the fundamental theory be about these more fundamental concepts?" -J.S. Bell, "Quantum Mechanics for Cosmologists", Speakable and Unspeakable in Quantum Mechanics (First Edition) p.117-118 24. Hi Genorb, Thanks! That is very useful. Best, 25. Does lightphotons and neutrinos interact at all? Neutrinos are still leptons? At c does the photon behave relativistic also then? A photon is massless. c is measured by em-force as Uncle Al points out. You say Planck energy is always higher. 26. In dense aether model the observable reality appears like fractal landscape under the fog (or like the undulating water surface being observed via its own ripples). The density fluctuations of dark matter replicate the foamy structure of space-time at short scales (Higgs field). After then two AdS/CFT dual approaches could be applied here: 1) Nothing smaller than these density fluctuations can be observed there in similar way, like the objects outside of visibility scope of landscape under the fog.
2) These Universe at such small scales doesn't differ from our Universe at the human observer scale, we just cannot observe it clearly because of omnipresent quantum noise. 27. http://physicsforme.wordpress.com/2012/01/22/are-opera-neutrinos-faster-than-light-because-of-non-inertial-reference-frames/ 28. L.H. Thomas used something Frame like to analyze the precession of the electron, parallel transported around an orbit, so something is not quite a geometrical point. Perhaps one might argue that quanta are more fundamental than idealized spacetime geometry, but merely consistent with it. 29. Just as exceeding the speed of light is "impossible" because it would turn time "inside out," so exceeding the Plank length limit is "impossible" because it would turn space "inside out." So maybe this is where we need to look when we look for those tiny, curled up "extra dimensions." 30. I guess patience is a virtue. 31. About the speed of light limit. I would like to say that in multitemporal relativities in which other speeds of light could appear (I know, it is a crazy idea, there is no evidence of that although DM/DE are puzzling too, and people usually argue hardly against multitime theories and other geometries beyond the riemannian one), and other extensions of relativistic symmetries, the limit is no longer a limit. The speed of light limit is closely related to the Lorentz invariance of our 3+1 world. Either if you change the (pseudo)riemannian structure of the theory, or you include new "degrees of freedom" like extra-times (curiously it doesn't happen with spatial-like coordinates) or some other multivector structures related to Finsler-like objects, there is no problem with speeds greater than c. If vacuum is some kind of medium, it is also reasonable that could exist something faster than light. I found myself puzzled when Bee published her paper about quantum superpositions of speeds of light. Indeed, there is something there to be understood better. 
Study On The Return On Capital Employed Finance Essay - Free Essay Example - 3008 Words | StudyDriver.com Return on Capital Employed (ROCE) is a financial measure of how well a company generates cash flow relative to the capital it has invested in its business. It is defined as net operating profit less adjusted taxes divided by invested capital, and is usually expressed as a percentage. When the return on capital is greater than the cost of capital (usually measured as the WACC), the company creates value; when it is lower, it destroys value. The ROCE for Iggle Plc is 35% while for Piggle Plc it is 20%, which means the return on capital employed is 15 percentage points higher for Iggle Plc, making it the healthier company for long term investment. Return on Equity: It is one of the most crucial profit indicators for the shareholders of the firm. It is the ratio Net Income / Average Equity, where Net Income is the Profit after Tax. Return on Equity for Iggle Plc is 20% and for Piggle Plc is 10%, which says that Iggle Plc gives more return to the shareholders on the money employed by them when compared to Piggle Plc. Iggle Plc is the better option considering long term investment. Average Settlement Period for Debtors: The average settlement period is the average time taken for debtors to pay the amount outstanding. For Iggle Plc the average settlement period is 78 days and for Piggle it is 45 days. This means that debtors take longer to repay Iggle than Piggle; they enjoy a longer credit period, which also shows the standing of Iggle Plc. Average Settlement Period for Creditors: The average settlement period is the average time taken by the business to pay its creditors. For Iggle Plc it is 85 days and for Piggle Plc it is 45 days, which means the creditors of Iggle Plc have allowed it more days to pay off its credit.
It may be because of their goodwill or strong relationship with buyers. It shows a positive relationship of Iggle Plc with its suppliers. Gross Profit Margin: The gross profit margin ratio shows profit relative to sales after deduction of direct production costs. It is the ratio Gross Profit / Net Sales, where Net Sales = Sales - Excise Duty. The gross profit margin for Iggle Plc is 44% and for Piggle Plc is 27%, which means that Iggle Plc is more efficient in its production operations and in the relation between production costs and selling price. Since the gross profit reflects efficiency in production-related activities, Iggle Plc is the more efficient. Fixed Asset Turnover Ratio: Also known as the Sales to Fixed Assets ratio, it measures the efficiency and profit-earning capacity of the firm. It is the ratio Cost of Sales / Net Fixed Assets. The fixed asset turnover ratio for Iggle Plc is 15 times, which is high; the higher the ratio, the more intensive the utilisation of fixed assets. For Piggle it is just 3 times, indicating less intensive utilisation of fixed assets. Capital Gearing Ratio: Related to the solvency ratios, it is usually used to analyse the capital structure of the firm. It refers to the proportion between equity share capital and other fixed interest bearing funds and loans. The capital gearing ratio is Equity Share Capital / Fixed Interest Bearing Funds. The capital gearing ratio for Iggle Plc is 65% while for Piggle Plc it is 15%, which means that the capital structure of Iggle Plc is stronger when compared to Piggle, with equity share capital carrying much more weight than the other funds and loans. Current Ratio: It is the ability of the enterprise to meet its current obligations, and involves the relationship between current assets and current liabilities.
In the case of Iggle Plc, the current ratio is 1.8:1, which means that the firm has current assets 1.8 times its current liabilities. In the case of Piggle Plc, the current ratio is 2.9:1, meaning the firm has current assets 2.9 times its current liabilities. Since the ideal current ratio is 2:1, Piggle Plc has the better ability to meet its short term debts and is the healthier of the two. Acid Test Ratio: Also known as the quick ratio, it shows the relationship between current assets and current liabilities after deducting inventories from current assets. It is named the quick test because it gives the ability of the firm to pay its liabilities without relying on the sale and recovery of inventories. For Iggle Plc the acid test ratio is 0.6:1 - after deducting inventory, the firm has current assets only 0.6 times its liabilities. For Piggle Plc the acid test ratio is 2.9:1, which shows that Piggle Plc is far better placed to meet its current liabilities. Price Earnings Ratio: It is calculated by dividing the market price of the stock by earnings per share. If we consider that Iggle's P/E ratio per share is 6/1 (60%) while Piggle Plc's P/E ratio is 10/1 (100%), then we can conclude that the price earning capacity of Piggle Plc is greater than Iggle's. The P/E ratio method is useful as long as the firm is a viable business entity and its real value is reflected in its profits. Net Profit Margin (PBT): It shows the earnings left for shareholders as a percentage of net sales and tells the overall efficiency of the firm. It is the relationship between Net Profit and Net Sales. The net profit margin for Iggle Plc is 15% compared to Piggle Plc's 9%. Profit for Iggle is greater in terms of net sales than for Piggle, which shows the efficiency of Iggle in production, administration, selling, pricing and tax management.
Stock Holding Period: The stock holding period states how long inventory lies in stock. It is computed as 365 / average inventory turnover. For Iggle Plc the stock holding period is 88 days while for Piggle it is 21 days, which suggests a lack of demand for Iggle's goods in the market when compared to Piggle Plc. Inference: Based on the above ratio analysis, I would advise potential investors that Iggle Plc is stronger in terms of gross profit margin, net profit margin, average settlement period for both debtors and creditors, return on capital employed, ROE and the capital gearing ratio, as all these ratios relate to long-term investment decisions and are favourable to stakeholders. Piggle Plc, on the other hand, is better placed to meet short-term requirements, as its current and acid test ratios show. Since both companies are software companies, they are highly exposed to market innovation and technology upgrades. These ratios, as a source of information, indicate a more stable position for Iggle Plc when compared to Piggle Plc. Part 2 Piggle Plc is making investment appraisals of two potential long-term projects, A and B. Since the initial investment for both projects is £2m, both will be considered in terms of profitability and long-term returns. Capital expenditure decisions occupy a very important place in corporate finance: once taken, they have far-reaching effects over a long period and influence the risk complexion of the company. Moreover, they involve large amounts of money, are irreversible once taken, and carry the opportunity cost of other equally or more fruitful viable investment opportunities.
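Each ratio in this analysis is a simple quotient. As an illustration in Python (not part of the essay; the pound amounts below are hypothetical, chosen only so the results match the figures quoted for Iggle Plc):

```python
def roce(operating_profit, capital_employed):
    """Return on Capital Employed, as a fraction of capital invested."""
    return operating_profit / capital_employed

def roe(net_income, average_equity):
    """Return on Equity: net income (profit after tax) over average equity."""
    return net_income / average_equity

def current_ratio(current_assets, current_liabilities):
    return current_assets / current_liabilities

def acid_test(current_assets, inventory, current_liabilities):
    """Quick ratio: current assets excluding inventory over current liabilities."""
    return (current_assets - inventory) / current_liabilities

# Hypothetical amounts (in £000s) reproducing Iggle Plc's quoted ratios:
print(roce(350, 1000))               # 0.35 -> 35%
print(roe(200, 1000))                # 0.2  -> 20%
print(current_ratio(1800, 1000))     # 1.8  -> 1.8 : 1
print(acid_test(1800, 1200, 1000))   # 0.6  -> 0.6 : 1
```

The same functions applied to Piggle's underlying figures would reproduce its side of the comparison.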
Piggle Plc, before taking any decision regarding Project A or Project B, should scrutinize the opportunity in terms of initial investment and returns, considering the time value of money and cost-benefit analysis. Evaluation criteria fall into two groups: non-discounting criteria, which do not include the time value of money (payback period, ARR), and discounting criteria, which do (net present value, internal rate of return). Decisions based on the Payback Period: With the initial investment being £2m, the payback is 4 years for Project A and 5 years for Project B. The payback period measures the time required to recover the initial investment in a project. It does not include the time value of money and is based on simple calculations. According to the payback period, Project A should be accepted, since the £2m initially invested is recovered in the first 4 years, while for Project B it would be recovered in 5 years, with £400,000 as the net annual cash flow over 5 years. However, decisions based on the payback period alone are not wise, because the method does not consider the time value of money. The basic conclusion of the payback method is that the more quickly the cost of an investment can be recovered, the more desirable the project. I would suggest Piggle Plc not base its decision to accept or reject the projects on the payback period alone, as it may become risky to revert once a decision is approved. Decisions based on Accounting Rate of Return: The accounting rate of return is a method of estimating the rate of return from an investment using a simple straight-line approach (it does not consider the time value of money). The rate of return is determined by dividing the average annual profit (total profit divided by the number of years) by the investment cost.
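The payback calculation described above is just a running total against the outlay. A sketch in Python (the yearly cash flows are hypothetical, chosen only to reproduce the essay's 4- and 5-year paybacks):

```python
def payback_period(initial_investment, annual_cash_flows):
    """Number of years until cumulative cash flow recovers the outlay (None if never)."""
    cumulative = 0
    for year, cash in enumerate(annual_cash_flows, start=1):
        cumulative += cash
        if cumulative >= initial_investment:
            return year
    return None

# Hypothetical cash flows in £000s against the £2m (2000) investment:
project_a = [600, 500, 500, 400, 300]
project_b = [400, 400, 400, 400, 400]
print(payback_period(2000, project_a))  # 4
print(payback_period(2000, project_b))  # 5
```

Note that the function ignores everything after the recovery year, which is exactly the criticism the essay makes of this method.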
This method is simple and is used by major decision makers for project approvals. Accounting Rate of Return (ARR) = Average Profit After Tax / Average Book Value of the Investment. Project A: ARR = 15%. Project B: ARR = 20%. Considering the average capital invested, the average profit for Project B is greater than for Project A. Since the accounting rate of return is computed on average profit after depreciation, it gives a reasonable view of the decision's viability. Piggle Plc should therefore move ahead with accepting Project B for this capital budgeting decision. Likewise, if Piggle Plc finds the concept of a rate of return more familiar and easier to work with than absolute quantities, it should accept Project B. Decisions based on Net Present Value: The net present value is the present value of future cash flows less any immediate cash outflow. In the case of Piggle Plc, the immediate cash flow is the investment, and the net present value is therefore the present value of all future cash inflows minus the initial investment. Decisions based on the NPV consider factors such as the discounting rate, the number of years, inflation and interest rates. The general criterion is that if the net present value is positive after deducting the initial investment, the project is accepted; if it is negative, it is rejected. In the case of Piggle Plc, the initial investment is £2m, and the net present value is 145 for Project A and 120 for Project B. Since the NPV, after considering the time value of money and future cash inflows, is positive for both projects, both pass. The NPV is a conceptually sound criterion of investment appraisal since the time value of money is considered.
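Both criteria reduce to short formulas. A sketch in Python (the profit figure matches Project A's quoted 15% ARR, but the cash flows and the 10% discount rate are assumptions for illustration; the essay does not state the rate it used):

```python
def arr(average_profit_after_tax, average_book_value):
    """Accounting Rate of Return as a fraction; ignores the time value of money."""
    return average_profit_after_tax / average_book_value

def npv(rate, initial_investment, cash_flows):
    """Net Present Value: discounted future inflows minus the immediate outflow."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_investment

print(arr(150, 1000))  # 0.15 -> a 15% ARR, as quoted for Project A
# £600k per year for 5 years, discounted at an assumed 10%, against £2m invested:
print(npv(0.10, 2000, [600] * 5) > 0)  # True -> positive NPV, so the project passes
```

The accept/reject rule from the text is then simply `npv(...) > 0`.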
Piggle Plc could approve both Project A and Project B based on the above statistics, but it should go ahead with accepting Project B, because capital decisions and net present value represent the contribution to the wealth of the shareholders and stakeholders; maximising NPV is the correct statement of the objective of investment decision making, which is "maximising shareholders' wealth". Decisions based on Internal Rate of Return: The internal rate of return considers the time value of money. It is the rate of interest at which the net present value of a project equals zero, or the rate which equates the present value of cash outflows with the present value of cash inflows. Under the net present value method the discounting rate is known (the firm's cost of capital); under IRR, the rate which makes the NPV zero has to be found. In the case of Piggle Plc, the internal rate of return is 16% for Project A and 13% for Project B. Both projects can be approved based on the IRR, because at their respective rates of 16% and 13% the discounted inflows equal the initial investment, all other factors being the same. Only external factors such as technical and market appraisal would make me approve Project A, since its IRR is 16%. Both projects are viable, and it is difficult to choose between mutually exclusive projects that do not differ significantly in outlays. Conclusion: If I have to choose between the two projects, summarising all the investment methods, I would go ahead with Project B, since the ARR is higher for Project B and, supporting the time value of money, the net present value for Project B is positive as well.
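The IRR has no closed form in general; it has to be searched for numerically, as the text says ("this rate which makes NPV zero has to be traced out"). A minimal root-finding sketch in Python, using bisection on hypothetical cash flows (not the essay's data):

```python
def npv(rate, initial_investment, cash_flows):
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv - initial_investment

def irr(initial_investment, cash_flows, lo=0.0, hi=1.0, tol=1e-7):
    """Bisect for the rate where NPV = 0 (assumes NPV falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, initial_investment, cash_flows) > 0:
            lo = mid   # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical flows: £600k a year for 5 years against £2m invested.
rate = irr(2000, [600] * 5)
print(round(rate * 100, 1), "%")   # the break-even discount rate, in percent
```

Comparing this rate with the firm's cost of capital gives the IRR accept/reject rule.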
Part 3 The main sources of finance available to Piggle Plc to finance the chosen project, i.e. sources for the initial investment or cash outflow: Firms need finance mainly for two purposes: to fund long term decisions, and to meet working capital requirements. Since these are long term decisions for Piggle Plc, they may shape the firm's future: expansion, diversification, consolidation and other major capital expenditure decisions. Given the nature of Piggle Plc's project, long term sources of funds are the best suited means of funding. A factor to be considered here for an investment decision is proper asset-liability management. The sources of finance available to Piggle Plc are: RAISING CAPITAL: Piggle can issue three types of capital - equity, debenture and preference capital - distinguished on the basis of risk, return and ownership pattern. Equity Capital: The majority of the capital can be raised through equity capital, and the subscribers thereby become owners of the company. They receive residual profits after the preference shareholders and other creditors have been paid. One added benefit for Piggle Plc of raising equity capital is that the issuing firm has no fixed obligation for dividend payment, while the capital is permanent with limited liability for repayment. Preference Capital: Piggle can also raise part of its capital as preference capital. This is similar to equity capital except for a few differences; for example, preference dividend is not tax deductible, and preference shareholders earn a fixed rate of return as dividend.
If Piggle Plc is not able to pay the dividend in a particular year, cumulative preference shareholders receive the arrears of dividend for that period later. Preference shares may be: cumulative or non-cumulative; redeemable or perpetual; convertible or non-convertible. Debenture Capital: Piggle Plc can raise debenture capital as a marketable legal contract whereby Piggle Plc promises to pay the owner a specified rate of interest for a defined period of time and to repay the principal amount at the maturity of the period. Debentures come in several types. Non-Convertible: after the maturity period they are not converted into equity shares and are redeemed with the principal amount. Fully Convertible: at the end of the maturity period they are converted into equity share capital, at once or in instalments. Partly Convertible: at the end of the maturity period part of the debentures is redeemed with the principal amount and part is converted into equity capital, the proportions being decided in advance. Companies issue securities to the public in the primary market through IPOs and get them listed on stock exchanges; stocks and securities are then traded in the secondary market. Term loans are one of the major sources of debt finance, repayable in more than one year but less than 10 years, and carry a rate of interest. To obtain a term loan, Piggle Plc has first to mortgage, or deposit the title deeds of, immovable properties. The advantage of this source of finance is its post-tax cost, which is lower compared to equity and preference capital. Private placement involves selling securities directly to a limited number of institutional investors or high net worth investors. It involves low cost, faster access to funds and few procedural formalities. Piggle can also float its stock in foreign capital markets.
Companies going global can issue Global Depository Receipts and Euro Convertible Bonds, which are issued abroad and listed and traded on a foreign exchange; once converted to equity, they can be traded on the domestic exchange. Part 3 (ii) The major budgeting techniques which Piggle Plc should incorporate to support the successful running of the chosen project are: Payback Period: the cumulative time taken to recover or recoup the initial investment, in years, without considering the time value of money. Since this method is easy, the initial decision of Piggle could be based on the payback period. Internal Rate of Return: the IRR is the discount rate that equates the present value of the future net cash flows from an investment project with the project's initial cash outflow. It considers the time value of money, provides an exact figure for Piggle, and is the most used capital decision technique. Net Present Value: based on the time value of money, it is the present value of all cash inflows less the initial investment. Profitability Index: also known as the Benefit-Cost Ratio, it is the ratio of the present value of the future cash flows to the initial investment. Piggle Plc can directly approve or reject a project based on the benefit-cost ratio: if the BCR > 1, accept the project; if the BCR < 1, reject it. These are the four techniques on which Piggle Plc should base the choice and running of the project.
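The BCR rule lends itself to a one-line implementation. A sketch in Python (the cash flows and the 10% discount rate are hypothetical, for illustration only):

```python
def profitability_index(rate, initial_investment, cash_flows):
    """Benefit-Cost Ratio: present value of future inflows over the initial outlay."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    return pv / initial_investment

# Hypothetical flows and an assumed 10% rate; BCR > 1 means accept.
bcr = profitability_index(0.10, 2000, [600] * 5)
print(bcr > 1)   # True
```

Because the BCR divides rather than subtracts, a BCR above 1 corresponds exactly to a positive NPV at the same rate.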
Plus on minus what gives? Positive and negative numbers were invented by mathematicians. The rules for multiplying and dividing positive and negative numbers were also invented by mathematicians. We need to learn these rules in order to tell mathematicians what they want to hear from us. It's easy to remember the rules for multiplying or dividing positive and negative numbers. If two numbers have different signs, the result will always have a minus sign. If two numbers have the same sign, the result will always have a plus sign. Let's consider all possible options. What does plus by minus give? When multiplying and dividing, plus by minus gives a minus. What does minus by plus give? When multiplying and dividing, we also get a minus sign as a result. As you can see, all the options for multiplying or dividing numbers with different signs have been exhausted, but the plus sign has not appeared. What does minus by minus give? There will always be a plus if we do multiplication or division. What does plus by plus give? It's quite simple here: multiplying or dividing plus by plus always gives plus. If everything is clear with the multiplication and division of two pluses (the result is again a plus), then with two minuses, nothing is clear. Why do a minus and a minus make a plus? I can assure you that, intuitively, mathematicians have correctly solved the problem of multiplying and dividing pluses and minuses. They wrote the rules down in textbooks without giving us any reasons. To answer the question correctly, we need to figure out what the plus and minus signs mean in mathematics. One mathematics teacher told us in class: "Mathematics is an exact science; if you lie twice, you get the truth." This statement was very useful to me. Once I was solving a difficult problem with a long solution. I knew exactly what the result should be.
But the result was different. I searched for an error in the calculations for a long time, but could not find it. Then, a few steps before the final result, I changed one number so that the result came out correct. I had lied twice in the calculations and got the correct result. This is very similar to the minus-on-minus-equals-plus rule, isn't it?
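The four sign rules described above are easy to check mechanically; for instance, in Python:

```python
# The four sign rules for multiplication, plus one division case:
assert (+6) * (-2) == -12   # plus times minus gives a minus
assert (-6) * (+2) == -12   # minus times plus gives a minus
assert (-6) * (-2) == +12   # minus times minus gives a plus
assert (+6) * (+2) == +12   # plus times plus gives a plus
assert (-12) / (-2) == +6   # division follows the same sign rules
print("all sign rules hold")
```

Differing signs always give a minus, matching signs always give a plus, just as the rules say.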
Optimal Classification/Rypka Method/Equations/Separatory/Characteristic/Empirical/Target Set - Wikibooks, open books for an open world Target Set Truth Table Value ${\displaystyle t_{i}=\sum _{j=0}^{K}v_{i,j}V^{(K-j)}}$, where:[1]
• v[i,j] is the element's attribute value,
• i is the ith element's index value, where i = 0...G and G is the number of elements in the bounded class,
• j is the jth characteristic's index value, where j = 0...K,
• K is the number of characteristics in the target set,
• V is the highest value of logic in the group,
• V^(K-j) is the positional value of the jth characteristic.
${\displaystyle n_{t_{i}}=n_{t_{i}}+1}$, where n[t[i]] contains the multiset count for each truth table value[2].
1. ↑ As the characteristic with the greatest separatory value is moved to the next most significant position, K is incremented to expand the target set size from two characteristics to the number of characteristics in the group, or to the number of characteristics which result in 100% separation. For the initial target set with one characteristic, the separatory value is computed individually for each characteristic in the group to find the initial characteristic with the highest separatory value.
2. ↑ coefficient of association, ${\displaystyle coa={\frac {n_{t_{i}}}{n_{R}}}}$, see page 172 of the primary reference
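The truth table value is each element's attribute row read as a base-V positional numeral. A sketch in Python (the attribute matrix and function names are hypothetical, not from the reference; V is taken as 2 for two-valued logic):

```python
from collections import Counter

def truth_table_value(row, V):
    """t_i: read an element's attribute values as a base-V positional numeral.

    row[j] holds v[i, j]; the characteristic at position j carries the
    positional value V**(K - j), with K = len(row) - 1.
    """
    K = len(row) - 1
    return sum(v * V ** (K - j) for j, v in enumerate(row))

# Hypothetical attribute matrix: four elements, three characteristics, binary logic.
matrix = [[1, 0, 1],
          [1, 0, 1],
          [0, 1, 1],
          [1, 1, 0]]
values = [truth_table_value(row, V=2) for row in matrix]
print(values)              # [5, 5, 3, 6]
counts = Counter(values)   # the multiset counts n_{t_i}
print(counts[5])           # 2: two elements share the same truth table value
```

Elements that share a truth table value (here the first two rows) are exactly the ones the chosen target set of characteristics fails to separate, which is what the multiset counts measure.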
International Version Math Mammoth User Guide, Grade 2 International Version (Canada) This document has the following sections: Basic principles in using Math Mammoth Complete Curriculum Math Mammoth Complete Curriculum is not a scripted curriculum. In other words, it does not spell out in exact detail what the teacher is to do or say in a specific lesson. Instead, Math Mammoth gives you, the teacher, various tools for teaching: • The two student worktexts (part A and B) are the most important part of the curriculum. These contain all the lesson material and exercises, and INCLUDE the explanations of the concepts (the teaching part) in blue boxes. The worktexts also contain some advice for the teacher in the “Introduction” of each chapter. The teacher can read the teaching part of each lesson before the lesson, or read and study it together with the student in the lesson, or let the student read and study on his own. If you are a classroom teacher, you can copy the examples from the “blue teaching boxes” to the board and go through them on the board. • There are hundreds of videos matched to the curriculum available at https://www.mathmammoth.com/videos/ . You can simply have the author teach your child or student! • The “Introduction” part of each chapter (within the student worktext) has a link list to various free online games and other resources on the Internet. These games can be used to supplement the math lessons, for learning math facts, or just for some fun. • The student books contain some mixed revision lessons, and the curriculum also provides you with additional cumulative revision lessons. • There is a chapter test for each chapter of the curriculum, and a comprehensive end-of-year test. • The worksheet maker allows you to make additional worksheets for most calculation-type topics in the curriculum. It is a single html file, and you will need Internet access to be able to use it.
• You can use the free online exercises at https://www.mathmammoth.com/practice/ • Some grade levels have cutouts to make fraction manipulatives or geometric solids. • And of course there are answer keys to everything. How to get started Have ready the first lesson from the student worktext. Go over the first teaching part (within the blue boxes) together with your child. Go through a few of the first exercises together, and then assign some problems for your child to do on their own. Repeat this if the lesson has other blue teaching boxes. Naturally, you can also use the videos at https://www.mathmammoth.com/videos/ Many children can eventually study the lessons completely on their own — the curriculum becomes self-teaching. However, children definitely vary in how much they need someone to be there to actually teach them. Pacing the curriculum The lessons in Math Mammoth complete curriculum are NOT intended to be done in a single teaching session or class. Sometimes you might be able to go through a whole lesson in one day, but more often, a lesson might span 3-5 pages and take 2-3 days or classes to complete. Therefore, it is not possible to say exactly how many pages a student needs to do in one day. This will vary. However, it is helpful to calculate a general guideline as to how many pages per week should be covered in order to get through the curriculum in one school year (or whatever span of time you are allotting to it). The table below lists how many pages there are for the student to finish in this particular grade level, and gives you a guideline for how many pages per day to finish, assuming a 180-day school year:

| Grade level | Lesson pages | School days | Days for tests and reviews | Days for studying the student book | Pages to study per day | Pages to study per week |
|---|---|---|---|---|---|---|
| 2-A | 131 | 89 | 10 | 79 | 1.66 | 8.3 |
| 2-B | 136 | 91 | 10 | 81 | 1.68 | 8.4 |
| Grade 2 total | 267 | 180 | 20 | 160 | 1.67 | 8.34 |

The table below is for you to fill in.
First fill in how many days of school you intend to have. Allow several days for tests and for review lessons before tests — at least twice the number of chapters in the curriculum. For example, if a particular grade has 8 chapters, allow at least 16 days for tests & review. Then, to get a count of “pages/day”, divide the number of pages by the number of available days. Then, multiply this number by 5 to get the approximate number of pages to cover in a week.

| Grade level | Lesson pages | School days | Days for tests and reviews | Days for studying the student book | Pages to study per day | Pages to study per week |
|---|---|---|---|---|---|---|
| 2-A | 131 | | | | | |
| 2-B | 136 | | | | | |
| Grade 2 total | 267 | | | | | |

Now, let's assume you determine that you need to study about 1.6 pages a day, or about 8 pages a week, in order to get through the curriculum, based on the number of school days in your school year. As you study each lesson, keep in mind that sometimes most of the page might be filled with those blue teaching boxes, and very few exercises. You might be able to "cover" two full pages on such a day. Then some other day you might only assign one page of word problems. Also, you might be able to go through the pages quicker in some chapters, for example when studying the clock, because the large clock pictures fill the page so that one page does not have many problems. When you have a page or two filled with lots of similar practice problems ("drill") or large sets of problems, feel free to only assign 1/2 or 2/3 of those problems. If your child "gets it" with a smaller number of exercises, then that is perfect! If not, you can always assign him/her the rest of the problems some other day. In fact, you could even use these unassigned problems the next week or next month for some additional revision. In general, 1st-2nd graders might spend 25-40 minutes a day on maths, 3rd-4th graders might spend 30-60 minutes, and 5th-7th graders 45-75 minutes a day.
If your child finds maths enjoyable, he/she can of course spend more time with it! However, it is not good to drag out the lessons on a regular basis, because that can then negatively affect the child's attitude towards maths. Working space and the usage of additional paper The curriculum generally includes working space directly on the page for students to work out the problems. However, feel free to let your students use extra paper when necessary. They can use it, not only for the "long" algorithms (where you line up numbers to add, subtract, multiply and divide), but also to draw diagrams and pictures to help organize their thoughts. Some students won't need the additional space (and may resist the thought of extra paper), while some will benefit from it. Use your discretion. Some exercises don't have any working space, but just an empty line for the answer (e.g. 200 + _____ = 1 000). Typically, I have intended such exercises to be done using MENTAL MATHS. However, there are some students who struggle with mental maths (often this is because of not having studied and used it in the past). As always, the teacher has the final say (not me!) as to how to approach the exercises and how to use the curriculum. We do want to prevent extreme frustration (to the point of tears). The goal is always to provide SOME challenge, but not too much, and to let students experience enough success that they can continue enjoying learning maths. Students struggling with mental maths will probably benefit from studying the basic principles of mental calculations from the earlier grades. This article gives you a few such principles, but to study all of them, one would need to go through all the earlier grade levels of Math Mammoth curriculum and find the lessons that list mental maths strategies. They are taught in the chapters about addition, subtraction, place value, multiplication and division.
Using tests For each chapter, there is a chapter test, which can be administered right after studying the chapter. The tests are optional. Some families might prefer not to give tests at all. The main reason I have provided tests is for diagnostic purposes, and so that homeschooling families can use them for their record keeping. These tests are not aligned or matched to any standards. In the digital version of the curriculum, the tests are provided both as PDF files and as html files. Normally, you would use the PDF files. The html files are provided so that you can edit them (in a word processor such as Microsoft Word or LibreOffice) and change the numbers or the problems in them, in case you want your student to take the test a second time. Remember to save the edited file under a different file name, or you will lose the original file. The end-of-year test is best administered as a diagnostic or assessment test, which will tell you how well the student remembers and has mastered the mathematics content of the entire grade level. Using cumulative revisions and the worksheet maker The student books contain mixed revision lessons which revise concepts from earlier chapters. The curriculum also comes with additional cumulative revision lessons, which are just like the mixed revision lessons in the student books, with a mix of problems covering various topics. These cumulative revisions are optional; use them as needed. They are named to indicate which chapters of the main curriculum the problems in the revision come from. For example, the revision titled “Cumulative Revision, Chapters 1 - 4” includes problems that cover topics from chapters 1-4. In the digital version of the curriculum, the cumulative revisions are provided both as PDF files and as html files. Again, the html versions are editable. Both the mixed and cumulative revisions allow you to spot areas that the student has not grasped well or has forgotten.
Another sign that the student has not understood a concept or skill is if he/she cannot do word problems in the curriculum that require that concept or skill. When you find such a topic or concept, you have several options:

1. Check if the worksheet maker lets you make worksheets for that topic (for example, conversions between measuring units or equivalent fractions).
2. Check for any online games and resources in the Introduction part of the particular chapter in which this topic or concept was taught.
3. If you have the digital version, you could simply reprint the lesson from the student worktext, and have the student restudy that.
4. Perhaps you only assigned 1/2 or 2/3 of the exercise sets in the student book at first, and can now use the remaining exercises.
5. Check if our online practice area at https://www.mathmammoth.com/practice/ has something for that topic. We are constantly adding more exercises and games to this.
6. Khan Academy has free online exercises, articles, and videos for almost any maths topic imaginable.

Frequently asked questions

Please read the FAQ at the Math Mammoth website. In case of any further questions (but please first check the FAQ!) about the curriculum, you can contact me at www.mathmammoth.com/contact.php. I wish you success in teaching maths! Maria Miller
Using your own models

David Garcia-Callejas and cxr team

One of the design principles of cxr is to allow users to estimate parameters from their own models. cxr already includes four families of well-known population dynamic models: Beverton-Holt (BH), Lotka-Volterra (LV), Ricker (RK), and Law-Watkinson (LW), all in their discrete time formulation (see the accompanying publication of cxr for the formulation of these general families). Users can choose between these default families by setting the model_family argument to the acronym of the chosen model. In this vignette we show, step by step, how other user-defined models can be integrated in the cxr workflow. First of all, a common feature of the families included is that they rely on a common set of parameters, namely lambda, alpha (separated into alpha_intra and alpha_inter), lambda_cov, and alpha_cov. For now, user-defined models within cxr must be restricted to these parameters. A detailed description of these parameters can be found in previous vignettes. This does not preclude users from hard-coding other model parameters inside the model functions, and future releases will aim to relax this assumption. At the end of this vignette we include the full R code of a model template that can be used directly in cxr. This template is also included as a stand-alone R file in the source files of the package (cxr_model_template.R), and here we first explain each section of it. First, model functions should be defined with the common set of arguments recognized by cxr.

family_pm_alpha_form_lambdacov_form_alphacov_form <- function(par,
                                                              fitness,
                                                              neigh_intra_matrix = NULL,
                                                              neigh_inter_matrix = NULL,
                                                              covariates = NULL,
                                                              fixed_parameters = NULL)

Assuming data for \(n\) observations of a focal species and \(s\) neighbour species (including itself), the function arguments are:
• par: one-dimensional numeric vector including all parameters to fit.
• fitness: one-dimensional vector of length \(n\) with fitness observations in log scale.
• neigh_intra_matrix: matrix of \(n\) rows and 1 column, with observations of intraspecific neighbours.
• neigh_inter_matrix: matrix of \(n\) rows and \(s-1\) columns, with observations of interspecific neighbours.
• covariates: dataframe with \(n\) rows and as many columns as covariates.
• fixed_parameters: list with values of parameters that are not to be fitted.

The name of the function should be adapted to the parameters you want to fit:
• ‘family’ is the two-letter acronym of your model family.
• ‘alpha_form’ is one of ‘alpha_none’, ‘alpha_global’, or ‘alpha_pairwise’, depending on whether no alphas, a single alpha, or pairwise alphas are included in your model.
• ‘lambda_cov_form’ is one of ‘lambda_cov_none’ or ‘lambda_cov_global’, depending on whether you include the effect of covariates over lambda.
• ‘alpha_cov_form’ is one of ‘alpha_cov_none’, ‘alpha_cov_global’, or ‘alpha_cov_pairwise’, depending on whether and how to include the effect of covariates over alpha values: no inclusion, a global parameter for each covariate on the alpha values, or interaction-specific parameters for each covariate.

Thus, if you code a model from your User Model (UM) family that includes pairwise alphas, but no effects of covariates over lambda or alpha, your function must be named:

UM_pm_alpha_pairwise_lambdacov_none_alphacov_none <- function(par,
                                                              fitness,
                                                              neigh_intra_matrix = NULL,
                                                              neigh_inter_matrix = NULL,
                                                              covariates = NULL,
                                                              fixed_parameters = NULL)

The first step inside the function is to retrieve the different parameters from the one-dimensional par vector. You should not have to change this section except for uncommenting or commenting out the sections where parameters are retrieved. For example, if your model does not include lambda_cov or alpha_cov, you should comment out their sections. Note that it includes a final parameter, sigma, that should always be there, as it keeps track of the error associated with the fit.
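The par-vector bookkeeping just described is language-agnostic: a single flat vector is walked with a position counter, and any parameter supplied via fixed_parameters is skipped. As a minimal illustration (in Python rather than R, with hypothetical names and dimensions), the same scheme looks like this:

```python
def unpack_par(par, n_inter, fixed=None):
    """Walk a flat parameter vector in the order
    {lambda, alpha_intra, alpha_inter, sigma}, skipping any entry
    supplied via `fixed` (mirroring cxr's fixed_parameters list)."""
    fixed = fixed or {}
    pos = 0
    out = {}
    for name, size in [("lambda", 1), ("alpha_intra", 1),
                       ("alpha_inter", n_inter)]:
        if name in fixed:
            # fixed parameters are NOT in `par`, so pos does not move
            out[name] = fixed[name]
        else:
            chunk = par[pos:pos + size]
            out[name] = chunk[0] if size == 1 else chunk
            pos += size
    out["sigma"] = par[-1]  # sigma is always the last entry
    return out
```

For example, `unpack_par([1.5, 0.2, 0.1, 0.3, 0.05], n_inter=2)` yields lambda 1.5, alpha_intra 0.2, alpha_inter [0.1, 0.3], and sigma 0.05.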
This is how this part of your ‘UM_pm_alpha_pairwise_lambdacov_none_alphacov_none’ model would look:

# retrieve parameters -----------------------------------------------------
# parameters to fit are all in the "par" vector,
# so we need to retrieve them one by one
# order is {lambda,lambda_cov,alpha,alpha_cov,sigma}
# comment or uncomment sections for the different parameters
# depending on whether your model includes them
# note that the section on alpha_inter includes two
# possibilities, depending on whether a single alpha is
# fitted for all interactions (global) or each pairwise alpha is
# different (pairwise)
# both are commented, you need to uncomment the appropriate one
# likewise for the section on alpha_cov
# --------------------------------------------------------------------------

pos <- 1
# if a parameter is passed within the "par" vector,
# it should be NULL in the "fixed_parameters" list

# lambda
if(is.null(fixed_parameters[["lambda"]])){
  lambda <- par[pos]
  pos <- pos + 1
}else{
  lambda <- fixed_parameters[["lambda"]]
}

# lambda_cov
# if(is.null(fixed_parameters[["lambda_cov"]])){
#   lambda_cov <- par[pos:(pos+ncol(covariates)-1)]
#   pos <- pos + ncol(covariates)
# }else{
#   lambda_cov <- fixed_parameters[["lambda_cov"]]
# }

# alpha_intra
if(!is.null(neigh_intra_matrix)){
  # intra
  if(is.null(fixed_parameters[["alpha_intra"]])){
    alpha_intra <- par[pos]
    pos <- pos + 1
  }else{
    alpha_intra <- fixed_parameters[["alpha_intra"]]
  }
}else{
  alpha_intra <- NULL
}

# alpha_inter
if(is.null(fixed_parameters[["alpha_inter"]])){
  # uncomment for alpha_global
  # alpha_inter <- par[pos]
  # pos <- pos + 1
  # uncomment for alpha_pairwise
  alpha_inter <- par[pos:(pos+ncol(neigh_inter_matrix)-1)]
  pos <- pos + ncol(neigh_inter_matrix)
}else{
  alpha_inter <- fixed_parameters[["alpha_inter"]]
}

# alpha_cov
# if(is.null(fixed_parameters[["alpha_cov"]])){
#   # uncomment for alpha_cov_global
#   # alpha_cov <- par[pos:(pos+ncol(covariates)-1)]
#   # pos <- pos + ncol(covariates)
#   # uncomment for alpha_cov_pairwise
#   # alpha_cov <- par[pos:(pos+(ncol(covariates)*
#   #   (ncol(neigh_inter_matrix)+ncol(neigh_intra_matrix)))-1)]
#   # pos <- pos +
#   #   (ncol(covariates)*(ncol(neigh_inter_matrix)+ncol(neigh_intra_matrix)))
# }else{
#   alpha_cov <- fixed_parameters[["alpha_cov"]]
# }

# sigma - this is always necessary
sigma <- par[length(par)]

At this point, you can code your model. This is how the equivalent Beverton-Holt model looks, for reference:

# MODEL CODE HERE ---------------------------------------------------------
# the model should return a "pred" value
# a function of lambda, alpha_intra, alpha_inter, lambda_cov, alpha_cov
# and neigh_intra_matrix, neigh_inter_matrix, and covariates

# we do not differentiate alpha_intra from alpha_inter in this model
# so, put together alpha_intra and alpha_inter, and the observations
# with intraspecific ones at the beginning
if(!is.null(alpha_intra)){
  alpha <- c(alpha_intra,alpha_inter)
  all_neigh_matrix <- cbind(neigh_intra_matrix,neigh_inter_matrix)
}else{
  alpha <- alpha_inter
  all_neigh_matrix <- neigh_inter_matrix
}

# create the denominator term for the model
term <- 1
for(z in 1:ncol(all_neigh_matrix)){
  term <- term + alpha[z]*all_neigh_matrix[,z]
}
pred <- lambda/term
# MODEL CODE ENDS HERE ----------------------------------------------------

Note how there are no hard-coded checks ensuring that alpha values and neigh_matrix columns are sorted consistently. These checks are performed in the cxr_pm_fit function, which internally sorts the data and calls this model for optimization. The last part of the model function returns the negative log-likelihood value for that particular parameter combination. This is the value that the numerical optimization algorithms aim to minimize. In general, you should not need to change this code.
# the routine returns the sum of negative log-likelihoods of the data and model:
# DO NOT CHANGE THIS
llik <- dnorm(fitness, mean = (log(pred)), sd = (sigma), log = TRUE)
return(sum(-1*llik))

When you have a functioning model, you can use it within cxr as soon as you load it into the global environment, which is done simply by ‘sourcing’ the R file named after your model, i.e. ‘UM_pm_alpha_pairwise_lambdacov_none_alphacov_none.R’ in our example. Thus, to fit a dataset with your custom model, you would call the cxr_pm_fit function with the appropriate parameters:

# load your model into the global environment
source("UM_pm_alpha_pairwise_lambdacov_none_alphacov_none.R")
# fit your data
custom_fit <- cxr_pm_fit(data = custom_data, # assuming custom_data is already set...
                         focal_column = my_focal, # assuming my_focal is already set...
                         model_family = "UM",
                         covariates = NULL, # as we have no covariate effects
                         alpha_form = "pairwise",
                         lambda_cov_form = "none",
                         alpha_cov_form = "none")

If the function cannot find a model matching the provided ‘model_family’, ‘alpha_form’, ‘lambda_cov_form’, and ‘alpha_cov_form’, it will output a message and return NULL. On a final note, if you think your bright new model can be useful to fellow scientists, we encourage you to make a pull request to integrate it into the set of default model families included in cxr. An interesting update to the package would be, in addition to including other model families, to implement non-linear relationships of covariate effects over lambda and alpha values. There are several issues that you should keep in mind when developing custom models. First, recall that numerical optimization methods may be bounded or unbounded. If your model does not return meaningful values for part of the parameter space defined by the hypervolume of parameter boundaries, bounded optimization methods (such as ‘L-BFGS-B’ or ‘bobyqa’) may fail.
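For reference, the quantity being minimized is simply the sum of negative Gaussian log-densities of the log-scale fitness around log(pred). The following Python sketch is a generic illustration of that objective, not cxr code; the Beverton-Holt prediction and the +inf guard for undefined regions are assumptions of the sketch:

```python
import math

def bh_pred(lam, alpha, neigh):
    """Beverton-Holt prediction: lambda / (1 + sum_z alpha_z * N_z)."""
    return lam / (1.0 + sum(a * n for a, n in zip(alpha, neigh)))

def neg_log_lik(par, fitness, neigh_rows):
    """Sum of negative Gaussian log-densities of log-scale fitness
    around log(pred).  par = [lambda, alpha_1..alpha_s, sigma]."""
    lam, alpha, sigma = par[0], par[1:-1], par[-1]
    total = 0.0
    for f, row in zip(fitness, neigh_rows):
        pred = bh_pred(lam, alpha, row)
        if pred <= 0 or sigma <= 0:   # undefined log-likelihood:
            return float("inf")       # penalise so optimisers back off
        mu = math.log(pred)
        total += 0.5 * math.log(2 * math.pi * sigma ** 2) \
                 + (f - mu) ** 2 / (2 * sigma ** 2)
    return total
```

Returning +inf (or a large penalty) whenever pred or sigma is non-positive is one simple way to keep bounded optimizers from stepping into regions where the log-likelihood is undefined.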
This is well exemplified by the discrete Lotka-Volterra model, \(\lambda_i - \alpha_{ii}N_i - \alpha_{ij}N_j\), which can easily produce negative values and thus undefined log-likelihoods. The second point to note is that user-defined population dynamics models are sufficient to obtain tailored lambda and alpha values, but importantly, several MCT metrics are also model-specific.

Model-specific coexistence metrics

Among the coexistence metrics that can be calculated with cxr, five are model-specific, i.e. they have different formulations depending on the underlying population dynamics model from which lambda and alpha values are obtained. These are:

• demographic ratio (and therefore, average fitness differences in the MCT formulation)
• competitive ability
• competitive effects and responses
• species fitness in the absence of niche differences

All these have different formulations for different model families. Therefore, if you want to estimate coexistence metrics based on custom models, you should provide a full set of formulations for these metrics. Here we show their formulation for Beverton-Holt models, for reference. The supplementary material of Godoy and Levine (2014) and that of Hart et al. (2018) are good places to explore the mathematical underpinnings of these particularities.

• competitive ability:

BH_competitive_ability <- function(lambda, pair_matrix){
  if(all(pair_matrix >= 0)){
    (lambda - 1)/sqrt(pair_matrix[1,1] * pair_matrix[1,2])
  }
}

• competitive effects and responses: these are obtained with an optimization function, cxr_er_fit, which is similar to cxr_pm_fit (see vignette 1). We include at the end of this vignette the complete model template for a custom effect and response model. The formulation of the Beverton-Holt effect and response model is simply where each part potentially depends on neighbour densities and covariates.
• species fitness in the absence of niche differences:

Therefore, in order to compute coexistence metrics based on your custom population dynamics model, you should code your own versions of these metrics, save them with appropriate names (i.e. changing the model family acronym at the beginning of the functions), and source them in the global environment, so that the cxr interface is able to find and use them.

Template for population dynamics models

family_pm_alpha_form_lambdacov_form_alphacov_form <- function(par,
                                                              fitness,
                                                              neigh_intra_matrix = NULL,
                                                              neigh_inter_matrix = NULL,
                                                              covariates = NULL,
                                                              fixed_parameters = NULL){

  # retrieve parameters -----------------------------------------------------
  # parameters to fit are all in the "par" vector,
  # so we need to retrieve them one by one
  # order is {lambda,lambda_cov,alpha_intra,alpha_inter,alpha_cov,sigma}
  # comment or uncomment sections for the different parameters
  # depending on whether your model includes them
  # note that the section on alpha_inter includes two
  # possibilities, depending on whether a single alpha is
  # fitted for all interactions (global) or each pairwise alpha is
  # different (pairwise)
  # both are commented, you need to uncomment the appropriate one
  # likewise for the section on alpha_cov
  # --------------------------------------------------------------------------

  pos <- 1
  # if a parameter is passed within the "par" vector,
  # it should be NULL in the "fixed_parameters" list

  # lambda
  if(is.null(fixed_parameters[["lambda"]])){
    lambda <- par[pos]
    pos <- pos + 1
  }else{
    lambda <- fixed_parameters[["lambda"]]
  }

  # lambda_cov
  if(is.null(fixed_parameters[["lambda_cov"]])){
    lambda_cov <- par[pos:(pos+ncol(covariates)-1)]
    pos <- pos + ncol(covariates)
  }else{
    lambda_cov <- fixed_parameters[["lambda_cov"]]
  }

  # alpha_intra
  if(!is.null(neigh_intra_matrix)){
    # intra
    if(is.null(fixed_parameters[["alpha_intra"]])){
      alpha_intra <- par[pos]
      pos <- pos + 1
    }else{
      alpha_intra <- fixed_parameters[["alpha_intra"]]
    }
  }else{
    alpha_intra <- NULL
  }

  # alpha_inter
  if(is.null(fixed_parameters[["alpha_inter"]])){
    # uncomment for alpha_global
    # alpha_inter <- par[pos]
    # pos <- pos + 1
    # uncomment for alpha_pairwise
    # alpha_inter <- par[pos:(pos+ncol(neigh_inter_matrix)-1)]
    # pos <- pos + ncol(neigh_inter_matrix)
  }else{
    alpha_inter <- fixed_parameters[["alpha_inter"]]
  }

  # alpha_cov
  if(is.null(fixed_parameters[["alpha_cov"]])){
    # uncomment for alpha_cov_global
    # alpha_cov <- par[pos:(pos+ncol(covariates)-1)]
    # pos <- pos + ncol(covariates)
    # uncomment for alpha_cov_pairwise
    # alpha_cov <- par[pos:(pos+(ncol(covariates)*
    #   (ncol(neigh_inter_matrix)+ncol(neigh_intra_matrix)))-1)]
    # pos <- pos +
    #   (ncol(covariates)*(ncol(neigh_inter_matrix)+ncol(neigh_intra_matrix)))
  }else{
    alpha_cov <- fixed_parameters[["alpha_cov"]]
  }

  # sigma - this is always necessary
  sigma <- par[length(par)]

  # now, parameters have appropriate values (or NULL)
  # next section is where your model is coded

  # MODEL CODE HERE ---------------------------------------------------------
  # the model should return a "pred" value
  # a function of lambda, alpha_intra, alpha_inter, lambda_cov, alpha_cov
  # and neigh_intra_matrix, neigh_inter_matrix, and covariates
  pred <- 0
  # MODEL CODE ENDS HERE ----------------------------------------------------

  # the routine returns the sum of negative log-likelihoods of the data and model:
  # DO NOT CHANGE THIS
  llik <- dnorm(fitness, mean = (log(pred)), sd = (sigma), log = TRUE)
  return(sum(-1*llik))
}

Template for effect and response models

family_er_lambdacov_form_effectcov_form_responsecov_form <- function(par,
                                                                     fitness,
                                                                     target,
                                                                     covariates,
                                                                     fixed_parameters){

  num.sp <- nrow(target)

  # parameters to fit are all in the "par" vector,
  # so we need to retrieve them one by one
  # order is {lambda,lambda_cov,effect,effect_cov,response,response_cov,sigma}
  # comment or uncomment sections for the different parameters
  # depending on whether your model includes them
  # note that effect and response models must always include
  # lambda, effect, and response, at least.

  pos <- 1
  # if a parameter is passed within the "par" vector,
  # it should be NULL in the "fixed_parameters" list

  # lambda
  if(is.null(fixed_parameters[["lambda"]])){
    lambda <- par[pos:(pos + num.sp - 1)]
    pos <- pos + num.sp
  }else{
    lambda <- fixed_parameters[["lambda"]]
  }

  # lambda_cov
  # the covariate effects are more efficient in a matrix form
  # with species in rows (hence byrow = T, because by default
  # the vector is sorted first by covariates)
  if(is.null(fixed_parameters[["lambda_cov"]])){
    lambda_cov <- matrix(par[pos:(pos+(ncol(covariates)*num.sp)-1)],
                         nrow = num.sp,
                         byrow = TRUE)
    pos <- pos + ncol(covariates)*num.sp
  }else{
    lambda_cov <- fixed_parameters[["lambda_cov"]]
  }

  # effect
  if(is.null(fixed_parameters[["effect"]])){
    effect <- par[pos:(pos + num.sp - 1)]
    pos <- pos + num.sp
  }else{
    effect <- fixed_parameters[["effect"]]
  }

  # effect_cov
  if(is.null(fixed_parameters[["effect_cov"]])){
    effect_cov <- matrix(par[pos:(pos+(ncol(covariates)*num.sp)-1)],
                         nrow = num.sp,
                         byrow = TRUE)
    pos <- pos + ncol(covariates)*num.sp
  }else{
    effect_cov <- fixed_parameters[["effect_cov"]]
  }

  # response
  if(is.null(fixed_parameters[["response"]])){
    response <- par[pos:(pos + num.sp - 1)]
    pos <- pos + num.sp
  }else{
    response <- fixed_parameters[["response"]]
  }

  # response_cov
  if(is.null(fixed_parameters[["response_cov"]])){
    response_cov <- matrix(par[pos:(pos+(ncol(covariates)*num.sp)-1)],
                           nrow = num.sp,
                           byrow = TRUE)
    pos <- pos + ncol(covariates)*num.sp
  }else{
    response_cov <- fixed_parameters[["response_cov"]]
  }

  # sigma - this is always necessary
  sigma <- par[length(par)]

  # now, parameters have appropriate values (or NULL)
  # next section is where your model is coded

  # MODEL CODE HERE ---------------------------------------------------------
  # the model should return a "pred" value
  # a function of lambda, effect, response, lambda_cov, effect_cov, response_cov
  pred <- 0
  # MODEL CODE ENDS HERE ----------------------------------------------------

  # the routine returns the sum of negative log-likelihoods of the data and model:
  # DO NOT CHANGE THIS
  llik <- dnorm(fitness, mean = (log(pred)), sd = (sigma), log = TRUE)
  return(sum(-1*llik))
}

Godoy, O., & Levine, J. M. (2014). Phenology effects on invasion success: insights from coupling field experiments to coexistence theory. Ecology, 95(3), 726-736.

Hart, S. P., Freckleton, R. P., & Levine, J. M. (2018).
How to quantify competitive ability. Journal of Ecology, 106(5), 1902-1909.
Is My Network Module Preserved and Reproducible? In many applications, one is interested in determining which of the properties of a network module change across conditions. For example, to validate the existence of a module, it is desirable to show that it is reproducible (or preserved) in an independent test network. Here we study several types of network preservation statistics that do not require a module assignment in the test network. We distinguish network preservation statistics by the type of the underlying network. Some preservation statistics are defined for a general network (defined by an adjacency matrix) while others are only defined for a correlation network (constructed on the basis of pairwise correlations between numeric variables). Our applications show that the correlation structure facilitates the definition of particularly powerful module preservation statistics. We illustrate that evaluating module preservation is in general different from evaluating cluster preservation. We find that it is advantageous to aggregate multiple preservation statistics into summary preservation statistics. We illustrate the use of these methods in six gene co-expression network applications including 1) preservation of cholesterol biosynthesis pathway in mouse tissues, 2) comparison of human and chimpanzee brain networks, 3) preservation of selected KEGG pathways between human and chimpanzee brain networks, 4) sex differences in human cortical networks, 5) sex differences in mouse liver networks. While we find no evidence for sex specific modules in human cortical networks, we find that several human cortical modules are less preserved in chimpanzees. In particular, apoptosis genes are differentially co-expressed between humans and chimpanzees. Our simulation studies and applications show that module preservation statistics are useful for studying differences between the modular structure of networks. 
Data, R software and accompanying tutorials can be downloaded from the following webpage: http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/ModulePreservation. Author Summary In network applications, one is often interested in studying whether modules are preserved across multiple networks. For example, to determine whether a pathway of genes is perturbed in a certain condition, one can study whether its connectivity pattern is no longer preserved. Non-preserved modules can either be biologically uninteresting (e.g., reflecting data outliers) or interesting (e.g., reflecting sex specific modules). An intuitive approach for studying module preservation is to cross-tabulate module membership. But this approach often cannot address questions about the preservation of connectivity patterns between nodes. Thus, cross-tabulation based approaches often fail to recognize that important aspects of a network module are preserved. Cross-tabulation methods make it difficult to argue that a module is not preserved. The weak statement (“the reference module does not overlap with any of the identified test set modules”) is less relevant in practice than the strong statement (“the module cannot be found in the test network irrespective of the parameter settings of the module detection procedure”). Module preservation statistics have important applications, e.g. we show that the wiring of apoptosis genes in a human cortical network differs from that in chimpanzees. Citation: Langfelder P, Luo R, Oldham MC, Horvath S (2011) Is My Network Module Preserved and Reproducible? PLoS Comput Biol 7(1): e1001057. https://doi.org/10.1371/journal.pcbi.1001057 Editor: Philip E. Bourne, University of California San Diego, United States of America Received: December 18, 2009; Accepted: December 13, 2010; Published: January 20, 2011 Copyright: © 2011 Langfelder et al. 
This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Funding: This work was supported by grants 1U19AI063603-01, 5P30CA016042-28, P50CA092131, and DK072206. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Competing interests: The authors have declared that no competing interests exist. Network methods are frequently used in genomic and systems biologic studies, but also in general data mining applications, to describe the pairwise relationships of a large number of variables [1], [2]. For example, gene co-expression networks can be constructed on the basis of gene expression data [3]–[10]. In many network applications, one is interested in studying the properties of network modules and their change across conditions [11]–[16]. For example, [17]–[19] studied modules across multiple mouse tissues, [20] studied module preservation between human brain and blood tissue, and [21] studied module preservation between human and mouse brains. This article describes several module preservation statistics for determining which properties of a network module are preserved in a second (test) network. The module preservation statistics allow one to quantify which aspects of within-module topology are preserved between a reference network and a test network. For brevity, we will refer to these aspects as connectivity patterns, but we note that our statistics are not based on network motifs. We use the term “module” in a broad sense: a network module is a subset of nodes that forms a sub-network inside a larger network. Any subset of nodes inside a larger network can be considered a module. This subset may or may not correspond to a cluster of nodes.
Many cluster validation statistics proposed in the literature can be turned into module preservation statistics. In the following, we briefly review cluster validation statistics. Traditional cluster validation (or quality) statistics can be split into four broad categories: cross-tabulation, density, separability, and stability statistics [22]–[24]. Since cross-tabulation statistics compare cluster assignments in the reference and test clusterings, they require that a clustering procedure is also applied to the test data. On the other hand, density and separability statistics do not require a clustering in the test data set. These statistics typically evaluate clusters by how similar objects are within each cluster and/or how dissimilar objects are between different clusters [25]. Stability statistics typically study cluster stability when a controlled amount of artificial noise is added to the data. Although stability statistics also evaluate clusters, they are more relevant to comparing clustering procedures rather than quantifying cluster preservation, and hence we do not consider them here. While many cluster validation statistics are based on within- and/or between-cluster variance, several recent articles used prediction error to evaluate the reproducibility (or validity) of clusters in gene expression data [24], [26], [27]. These papers argued that the use of a measure of test set clusters defined by a classifier made from the reference data is an appropriate approach to cluster validation when the aim is to identify reproducible clusters of genes or microarrays with similar expression profiles. For example, the in-group proportion (IGP), which is similar to the cluster cohesion statistic [28], is defined as the proportion of observations classified to a cluster whose nearest neighbor is also classified to the same cluster [24]. One can also calculate a significance level (p-value) for the IGP statistic.
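As a concrete reading of that definition, here is a small Python sketch of the IGP, with nearest neighbours taken under Euclidean distance (the toy data in the usage example are hypothetical; this is an illustration of the definition, not the classifier-based machinery of the cited work):

```python
def in_group_proportion(points, labels, group):
    """IGP of `group`: proportion of its members whose nearest
    neighbour (over all other points) carries the same label."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    members = [i for i, g in enumerate(labels) if g == group]
    hits = 0
    for i in members:
        nn = min((j for j in range(len(points)) if j != i),
                 key=lambda j: dist2(points[i], points[j]))
        hits += labels[nn] == group
    return hits / len(members)
```

For two well-separated, tight clusters the IGP of each cluster is 1.0; a singleton cluster surrounded by members of another cluster scores 0.0.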
A comparison of the IGP statistic to alternative cluster quality statistics found that the IGP performs well [24]. Thus, we use the IGP statistic as a benchmark statistic for assessing the use of module preservation statistics in case modules are defined as clusters. Our simulation studies and applications show that one of our module preservation statistics is sometimes closely correlated with the IGP statistic if the modules are defined as clusters. But cluster validation statistics (such as the IGP) may not be appropriate when modules are not defined as clusters. In general, assessing module preservation is a different task from assessing cluster preservation. In our simulations, we demonstrate that module preservation statistics can detect aspects of module preservation that are missed by existing cluster validation statistics.

Overview of module preservation statistics

Table 1 presents an overview of the module preservation statistics studied in this article. We distinguish between cross-tabulation based and network based preservation statistics. Cross-tabulation based preservation statistics require independent module detection in the test network and take the module assignments in both reference and test networks as input. Several cross-tabulation based statistics are described in the first section of Supplementary Text S1. While cross-tabulation approaches are intuitive, they have several disadvantages. To begin with, they are only applicable if the module assignment in the test data results from applying a module detection procedure to the test data. For example, a cross-tabulation based module preservation statistic would be meaningless when modules are defined as gene ontology categories since both reference and test networks contain the same sets of genes. But a non-trivial question is whether the network connections of a module (gene ontology category) in the reference network resemble those of the same module in the test network.
To measure the resemblance of network connectivity, we propose several measures based on network statistics. Network terminology is reviewed in Table 2 and in Methods. Even when modules are defined using a module detection procedure, cross-tabulation based approaches face potential pitfalls. A module found in the reference data set will be deemed non-reproducible in the test data set if no matching module can be identified by the module detection approach in the test data set. Such non-preservation may be called weak non-preservation: “the module cannot be found using the current parameter settings of the module detection procedure”. On the other hand, one is often interested in strong non-preservation: “the module cannot be found irrespective of the parameter settings of the module detection procedure”. Strong non-preservation is difficult to establish using cross-tabulation approaches that rely on module assignment in the test data set. A second disadvantage of a cross-tabulation based approach is that it requires that for each reference module one finds a matching test module. This may be difficult when a reference module overlaps with several test modules or when the overlaps are small. A third disadvantage is that cross-tabulating module membership between two networks may miss the fact that the patterns of connectivity between module nodes are highly preserved between the two networks. Network based statistics do not require the module assignment in the test network but require the user to input network adjacency matrices (described in Methods). We distinguish the following 3 types of network based module preservation statistics: 1) density based, 2) separability based, and 3) connectivity based preservation statistics. Density based preservation statistics can be used to determine whether module nodes remain highly connected in the test network.
Separability based statistics can be used to determine whether network modules remain distinct (separated) from one another in the test network. While numerous measures proposed in the literature combine aspects of density and separability, we keep them separate and provide evidence that density based approaches can be more useful than separability based approaches in determining whether a module is preserved. Connectivity based preservation statistics can be used to determine whether the connectivity pattern between nodes in the reference network is similar to that in the test network. As detailed in Methods, several module preservation statistics are similar to previously proposed cluster quality and preservation statistics, while others (e.g. connectivity based statistics) are novel. Table 1 reports the required input for each preservation statistic. Since each preservation statistic is used to evaluate the preservation of modules defined in a reference network, it is clear that each statistic requires the module assignment from the reference data. But the statistics differ with regard to the module assignment in the test data. Only cross-tabulation based statistics require a module assignment in the test data. Network based preservation statistics do not require a test set module assignment. Instead, they require the test set network adjacency matrix (for a general network) or the test data set of numeric variables (for a correlation network). We distinguish network statistics by the underlying network. Some preservation statistics are defined for a general network (defined by an adjacency matrix) while others are only defined for a correlation network (constructed on the basis of pairwise correlations between numeric variables). Our applications show that the correlation structure facilitates the definition of particularly powerful module preservation statistics. 
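To make the density/connectivity distinction concrete, the following Python sketch computes one simplified density statistic (mean within-module adjacency) and one simplified connectivity statistic (the correlation of intramodular connectivity, kIM, between a reference and a test network). Both are stand-ins for the statistics defined in the paper's Methods, not their exact formulations:

```python
def mean_adj(adj, module):
    """Density: average adjacency among module nodes (off-diagonal)."""
    m = list(module)
    total = sum(adj[i][j] for i in m for j in m if i != j)
    return total / (len(m) * (len(m) - 1))

def cor_kim(adj_ref, adj_test, module):
    """Connectivity: Pearson correlation of intramodular
    connectivity kIM between reference and test networks."""
    m = list(module)
    def kim(adj):
        return [sum(adj[i][j] for j in m if j != i) for i in m]
    x, y = kim(adj_ref), kim(adj_test)
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5
```

Note that a module can keep a perfect connectivity correlation (e.g. when the test adjacency is a rescaled copy of the reference) even while its density changes, which is why the two types of statistics capture different aspects of preservation.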
Preservation statistics 4–11 (Table 1) can be used for general networks while statistics 12–19 assume correlation networks. Network density and module separability statistics only need the test set adjacency matrix while the connectivity preservation statistics also require the adjacency matrix in the reference data. It is often not clear whether an observed value of a preservation statistic is higher than expected by chance. As detailed in Methods, we attach a significance level (permutation test p-value) to observed preservation statistics, by using a permutation test procedure which randomly permutes the module assignment in the test data. Based on the permutation test we are also able to estimate the mean and variance of the preservation statistic under the null hypothesis of no relationship between the module assignments in reference and test data. By standardizing each observed preservation statistic with regard to this mean and variance, we define a Z statistic for each preservation statistic. Under certain assumptions, each Z statistic (approximately) follows the standard normal distribution if the module is not preserved. The higher the value of a Z statistic, the stronger the evidence that the observed value of the preservation statistic is significantly higher than expected by chance.

Composite preservation statistics and threshold values

Because preservation statistics measure different aspects of module preservation, their results may not always agree. We find it useful to aggregate different module preservation statistics into composite preservation statistics. Composite preservation statistics also facilitate a fast evaluation of many modules in multiple networks. We define several composite statistics.
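The permutation standardization described above can be sketched as follows. This is an illustrative implementation, not the paper's R code; the statistic to be standardized is passed in as a function, and module labels are reassigned at random in the test network to mimic the null hypothesis of no relationship between the module assignments.

```python
import numpy as np

def permutation_z(observed: float, adj_test: np.ndarray,
                  module_size: int, stat, n_perm: int = 200,
                  seed: int = 0) -> float:
    """Standardize an observed preservation statistic against its
    permutation null distribution: Z = (observed - mean) / sd."""
    rng = np.random.default_rng(seed)
    n = adj_test.shape[0]
    # null distribution: the statistic evaluated on random "modules"
    null = np.array([stat(adj_test, rng.choice(n, module_size, replace=False))
                     for _ in range(n_perm)])
    return (observed - null.mean()) / null.std(ddof=1)
```

A large positive Z indicates that the observed preservation is far stronger than expected for a randomly assigned gene set of the same size.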
For correlation networks based on quantitative variables, the density preservation statistics are summarized by Zdensity (Equation 30), the connectivity based statistics are summarized by Zconnectivity (Equation 31), and all individual Z statistics are summarized by Zsummary, defined in Equation 1. As detailed in the Methods, our simulations suggest the following thresholds for Zsummary: if Zsummary > 10, there is strong evidence that the module is preserved; if 2 < Zsummary < 10, there is weak to moderate evidence of preservation; if Zsummary < 2, there is no evidence that the module is preserved. For general networks defined by an adjacency matrix, we find it expedient to summarize the preservation Z statistics into a summary statistic (Equation 35). Since biologists are often more familiar with p-values as opposed to Z statistics, our R implementation in function modulePreservation also calculates empirical p-values. Analogous to the case of the Z statistics, the p-values of the individual preservation statistics are summarized into a descriptive summary measure. The smaller this summary p-value, the stronger the evidence that the module is preserved. In practice, we observe an almost perfect inverse relationship (Spearman correlation close to -1) between the summary p-value and Zsummary. The Z statistics and permutation test p-values often depend on the module size (i.e. the number of nodes in a module). This fact reflects the intuition that it is more significant to observe that the connectivity patterns among hundreds of nodes are preserved than to observe the same among, say, only a handful of nodes. Having said this, there will be many situations when the dependence on module size is not desirable, e.g., when preservation statistics of modules of different sizes are to be compared. In this case, we recommend to either focus on the observed values of the individual statistics or alternatively to summarize them using the composite module preservation statistic medianRank (Equation 34).
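A compact sketch of the composite summary and its interpretation thresholds follows. Assuming (as Equation 1 suggests) that Zsummary averages the density and connectivity summaries; this is an illustrative paraphrase, not the WGCNA source code.

```python
def z_summary(z_density: float, z_connectivity: float) -> float:
    # Equation 1 (assumed form): average of the two composite Z statistics
    return (z_density + z_connectivity) / 2

def interpret(z: float) -> str:
    """Thresholds suggested by the simulation studies in Methods."""
    if z > 10:
        return "strong evidence of preservation"
    if z > 2:
        return "weak to moderate evidence of preservation"
    return "no evidence of preservation"
```

For example, a module with Zdensity = 4 and Zconnectivity = 8 yields Zsummary = 6, i.e. weak to moderate evidence of preservation.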
The medianRank statistic is useful for comparing relative preservation among multiple modules: a module with a lower median rank tends to exhibit stronger observed preservation statistics than a module with a higher median rank. Since medianRank is based on the observed preservation statistics (as opposed to Z statistics or p-values), we find that it is much less dependent on module size.

Application 1: Preservation of the cholesterol biosynthesis module between mouse tissues

Several studies have explored how co-expression modules change between mouse tissues [19] and/or sexes [18]. Here we re-analyze gene expression data from the liver, adipose, muscle, and brain tissues of an F2 mouse intercross described in [13], [17]. The expression data contain measurements of 17104 genes across the following numbers of microarray samples: 137 (female (F) adipose), 146 (male (M) adipose), 146 (F liver), 145 (M liver), 125 (F muscle), 115 (M muscle), 148 (F brain), and 141 (M brain). We consider a single module defined by the genes of the gene ontology (GO) term “Cholesterol biosynthetic process” (CBP, GO id GO:0006695 and its GO offspring). Of the 28 genes in the CBP, 24 could be found among our 17104 genes. Cholesterol is synthesized in the liver, and we used the female liver network as the reference network. As test networks we considered the CBP co-expression networks in other tissue/sex combinations. Each circle plot in Figure 1 visualizes the connection strengths (adjacencies) between CBP genes in a different mouse tissue/sex combination. The color and width of the lines between pairs of genes reflect the correlations of their gene expression profiles across a set of microarray samples. Before delving into a quantitative analysis, we invite the reader to visually compare the patterns of connections. Clearly, the male and female liver networks look very similar.
Because of the ordering of the nodes, the hubs are concentrated in the upper right section of the circle and the right side of the network is denser. The adipose tissues also show this pattern, albeit much more weakly. On the other hand, the figures for the brain and muscle tissues do not show these patterns. Thus, the figure suggests that the CBP module is more strongly preserved between liver and adipose tissues than between liver and brain or muscle. The module is defined as a signed weighted correlation network among genes from the GO category Cholesterol Biosynthetic Process. Module preservation statistics allow one to quantify similarities between the depicted networks. The figure depicts the connectivity patterns (correlation network adjacencies) between cholesterol biosynthesis genes in 4 different mouse tissues from male and female mice of an F2 mouse cross. The thickness of each line reflects the absolute correlation. The line is colored in red if the correlation is positive and green if it is negative. The size of each black circle indicates the connectivity of the corresponding gene; hub genes (i.e., highly connected genes) are represented by larger circles. Visual inspection suggests that the male and female liver networks are rather similar and show some resemblance to those of the adipose tissue. Module preservation statistics can be used to measure the similarity of connectivity patterns between pairs of networks. We now turn to a quantitative assessment of this example. We start out by noting that a cross-tabulation based approach to module preservation is meaningless in this example since the module is a GO category whose genes can trivially be found in each network. However, it is a very meaningful exercise to measure the similarity of the connectivity patterns of the module genes across networks.
To provide a quantitative assessment of the connectivity preservation, it is useful to adapt network concepts (also known as network statistics or indices) that are reviewed in Methods. Figure 2 provides a quantitative assessment of the preservation of the connectivity patterns of the cholesterol biosynthesis module between the female liver network and networks from other sex/tissue combinations. Figure 2A presents the composite summary statistic Zsummary (Equation 1) in each test network. Overall, we find strong evidence of preservation (Zsummary > 10) in the male liver network but no evidence (Zsummary < 2) of preservation in the female brain and muscle networks. We find that the connectivity of the female liver CBP module is most strongly preserved in the male liver network. It is also weakly preserved in adipose tissue, but we find no evidence for its preservation in muscle and brain tissues. The summary preservation statistic measures both aspects of density and of connectivity preservation. We now evaluate which of these aspects are preserved. Figure 2B shows that the module exhibits strong evidence of density preservation (Zdensity, Equation 30) in the male liver network but negligible density preservation in the other networks. Interestingly, Figure 2C shows that the module has moderate connectivity preservation (Zconnectivity, Equation 31) in the adipose networks. Quantitative evaluation of the similarities among the networks depicted in Figure 1: As the reference module, we define a correlation network among the genes of the GO term “Cholesterol biosynthetic process” (CBP) in the female mouse liver network. Panels A–C show summary preservation statistics in other tissue and sex combinations. Panel A shows the composite preservation statistic Zsummary. The CBP module in the female liver network is highly preserved in the male liver network (Zsummary > 10) and moderately preserved in the adipose networks. There is no evidence of preservation in the brain or muscle tissue networks.
Panels B and C show the density and connectivity Z statistics, respectively. Panel D shows the results of the in-group proportion (IGP) analysis [24]. According to the IGP analysis, the CBP module is equally preserved in all networks. Panels E–K show scatter plots of the eigengene-based connectivity kME in one test data set (indicated in the title) vs. the female liver reference set. Each point corresponds to a gene; Pearson correlations and the corresponding p-values are displayed in the title of each scatter plot. The eigengene-based connectivity is strongly preserved between adipose and liver tissues; it is not preserved between female liver and the muscle and brain tissues. The Zconnectivity measure summarizes the statistical significance of 3 connectivity based preservation statistics. Two of our connectivity measures evaluate whether highly connected intramodular hub nodes in the reference network remain hub nodes in the test network. Preservation of intramodular connectivity reflects the preservation of hub gene status between the reference and test network. One measure of intramodular connectivity is the module eigengene-based connectivity measure kME (Equation 17), which is also known as the module membership measure of a gene [13], [29], [30]. Genes with high values of kME are highly correlated with the summary profile of the module (the module eigengene, defined as the first principal component; see the fifth section in Supplementary Text S1). A high correlation of kME between the reference and test network can be visualized using a scatter plot and quantified using the correlation coefficient cor.kME. For example, Figure 2I shows that kME in the female liver module is highly correlated with that of the male liver network (correlation and p-value displayed in the panel title). Further, the scatter plots in Figure 2 show that the kME measures between the liver and adipose networks exhibit strong correlation (preservation), while the correlations between kME in the female liver and the brain and muscle data sets are not significant.
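The kME computation described above can be sketched as follows: standardize the module genes, extract the first principal component (the eigengene), and correlate each gene with it. This is an illustrative numpy implementation, not the WGCNA code; cor.kME is then simply the correlation of the two kME vectors across data sets.

```python
import numpy as np

def kME(expr: np.ndarray, module_cols: np.ndarray) -> np.ndarray:
    """Eigengene-based connectivity (module membership): correlation of
    each module gene with the module eigengene, i.e. the first principal
    component of the standardized module submatrix (samples x genes)."""
    X = expr[:, module_cols]
    Xc = (X - X.mean(0)) / X.std(0)
    # first left singular vector of the standardized data = eigengene
    u, s, vt = np.linalg.svd(Xc, full_matrices=False)
    eigengene = u[:, 0]
    return np.array([np.corrcoef(Xc[:, j], eigengene)[0, 1]
                     for j in range(Xc.shape[1])])
```

cor.kME between a reference and a test data set is then `np.corrcoef(kME(ref, idx), kME(test, idx))[0, 1]`; note that the sign of an eigengene is arbitrary, so in practice only the magnitude and consistency of kME matter.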
This example demonstrates that connectivity preservation measures can uncover a link between CBP in liver and adipose tissues that is missed by density preservation statistics. We briefly compare the performance of our network based statistics with those from the IGP method [24]. The R implementation of the IGP statistic requires that at least 2 modules be evaluated. To get it to work for this application, which involves only a single module, we defined a second module by randomly sampling half of the genes from the rest of the entire network. Figure 2D shows high, nearly constant values of the IGP statistic across networks, which indicates that the CBP module is present in all data sets. Note that the IGP statistic does not allow us to argue that the CBP module in the female liver network is more similar to the CBP module in the male liver than in other networks. This reflects the fact that the IGP statistic, which is a cluster validation statistic, does not measure connectivity preservation.

Application 2: Preservation of human brain modules in chimpanzee brains

Here we study the preservation of co-expression modules between human and chimpanzee brain gene expression data. The data set consists of 18 human brain and 18 chimpanzee brain microarray samples [31]. The samples were taken from 6 regions in the brain; each region is represented by 3 microarray samples. Since we used the same weighted gene co-expression network construction and module identification settings as in the original publication, our human modules are identical to those in [32]. Because of the relatively small sample size, only a few relatively large modules could be detected in the human data. The resulting modules were labeled by colors: turquoise, blue, brown, yellow, green, black, red (see Figure 3A). Oldham et al (2006) determined the biological meaning of the modules by examining over-expression of module genes in individual brain regions.
For example, heat maps of module expression profiles revealed that the turquoise module contains genes highly expressed in cerebellum, the yellow module contains genes highly expressed in caudate nucleus, the red module contains genes highly expressed in anterior cingulate cortex (ACC) and caudate nucleus, and the black module contains mainly genes expressed in white matter. The blue, brown and green modules contained genes highly expressed in cortex, which is why we refer to these modules as cortical modules. Visual inspection of the module color bands below the dendrograms in Figures 3A and 3B suggests that most modules show fairly strong preservation. Oldham et al argued that modules corresponding to evolutionarily older brain regions (turquoise, yellow, red, black) show stronger preservation than the blue and green cortical modules [32]. Here we re-analyze these data using module preservation statistics. A. Hierarchical clustering tree (dendrogram) of genes based on the human brain co-expression network. Each “leaf” (short vertical line) corresponds to one gene. The color rows below the dendrogram indicate module membership in the human modules (defined by cutting branches of this dendrogram at the red line) and in the chimpanzee network (defined by cutting branches of the dendrogram in panel B). The color rows show that most human and chimpanzee modules overlap (for example, the turquoise module). B. Hierarchical clustering tree of genes based on the chimpanzee co-expression network. The color rows below the dendrogram indicate module membership in the human modules (defined by cutting branches of the dendrogram in panel A) and in the chimpanzee network (defined by cutting branches of the dendrogram in this panel). C. Cross-tabulation of human modules (rows) and chimpanzee modules (columns). Each row and column is labeled by the corresponding module color and the total number of genes in the module.
In the table, numbers give counts of genes in the intersection of the corresponding row and column modules. The table is color-coded by the Fisher exact test p-value, according to the color legend on the right. Note that the human yellow module is highly preserved while the human blue module is only weakly preserved in the chimpanzee network. The most common cross-tabulation approach starts with a contingency table that reports the number of genes that fall into modules of the human network (corresponding to rows) versus modules of the chimpanzee network (corresponding to columns). The contingency table in Figure 3C shows that there is high agreement between the human and chimpanzee module assignments. The human modules black, brown, red, turquoise, and yellow have well-defined chimpanzee counterparts (labeled by the corresponding colors). On the other hand, the human green cortical module appears not to be preserved in chimpanzee since most of its genes are classified as unassigned (grey color) in the chimpanzee network. Further, the human blue cortical module (360 genes) appears to split into several parts in the chimpanzee network: 27 genes are part of the chimpanzee blue module, 85 genes are part of the chimpanzee brown module, 52 fall in the chimpanzee turquoise module, 155 genes are grey in the chimpanzee network, etc. To arrive at a more quantitative measure of preservation, one may quantify the module overlap or use Fisher's exact test to attach a significance level (p-value) to each module overlap (as detailed in the first section of Supplementary Text S1). The contingency table in Figure 3C shows that every human module has significant overlap with a chimpanzee module. However, even if the resulting p-value of preservation were not significant, it would be difficult to argue that a module is truly a human-specific module since an alternative module detection strategy in chimpanzee may arrive at a module with more significant overlap.
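The overlap significance test above can be sketched with the hypergeometric upper tail, which equals the one-sided Fisher exact test p-value for a 2x2 contingency table. This is an illustrative stdlib-only implementation; the function name and interface are ours.

```python
from math import comb

def overlap_p(ref_module: set, test_module: set, n_genes: int) -> float:
    """One-sided Fisher exact (hypergeometric) p-value for the overlap
    between a reference and a test module among n_genes measured genes:
    P(overlap >= observed) under random module assignment."""
    a = len(ref_module & test_module)      # observed overlap
    r = len(ref_module)
    t = len(test_module)
    # hypergeometric upper tail, summed exactly with integer binomials
    num = sum(comb(t, k) * comb(n_genes - t, r - k)
              for k in range(a, min(r, t) + 1))
    return num / comb(n_genes, r)
```

A small p-value for a (reference, test) module pair indicates a larger overlap than expected by chance, which is how the color coding of Figure 3C is computed.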
In order to quantify the preservation of human modules in chimpanzee samples more objectively, one needs to consider statistics that do not rely on a particular module assignment in the chimpanzee data. We now turn to approaches for measuring module preservation that do not require that module detection has been carried out in the test data set. Figures 4A,B show composite module preservation statistics of human modules in chimpanzee samples. The overall significance of the observed preservation statistics can be assessed using Zsummary (Equation 1), which combines multiple preservation Z statistics into a single overall measure of preservation (Figure 4A). Note that Zsummary shows a strong dependence on module size, which reflects the fact that observing module preservation of a large module is statistically more significant than observing the same for a small module. However, here we want to consider all modules on an equal footing irrespective of module size. Therefore, we focus on the composite statistic medianRank, which shows no dependence on module size (Figure 4B). The median rank is useful for comparing relative preservation among modules: a module with a lower median rank tends to exhibit stronger observed preservation statistics than a module with a higher median rank. Figure 4B shows the median ranks of the human brain modules. The median rank of the yellow module is 1, while the median rank of the blue module is 6, indicating that the yellow module is more strongly preserved than the blue module. Our quantitative results show that modules expressed mainly in evolutionarily more conserved brain areas such as cerebellum (turquoise) and caudate nucleus (yellow and partly red) are more strongly preserved than modules expressed primarily in the cortex, which is very different between humans and chimpanzees (green and blue modules). Thus the module preservation results of medianRank corroborate Oldham's original finding regarding the relative lack of preservation of cortical modules. A.
The summary statistic Zsummary (y-axis; Equation 1) as a function of the module size. Each point represents a module, labeled by color and a secondary numeric label (1=turquoise, 2=blue, 3=brown, 4=yellow, 5=green, 6=red, 7=black). The dashed blue and green lines indicate the thresholds Zsummary = 2 and Zsummary = 10. B. The composite statistic medianRank (y-axis; Equation 34) as a function of the module size. Each point represents a module, labeled by color and a secondary numeric label as in panel A. Low numbers on the y-axis indicate high preservation. C. Observed IGP statistic (Kapp and Tibshirani, 2007) versus module size. D. P-value of the IGP statistic versus module size. E. and F. show scatter plots between the observed IGP statistic and Zsummary and medianRank, respectively. In this example, where modules are defined as clusters, the IGP statistic has a high positive correlation with Zsummary and a moderately large negative correlation with medianRank. The negative correlation is expected since low median ranks indicate high preservation. Since the modules of this application are defined as clusters, it makes sense to evaluate their preservation using cluster validation statistics. Figure 4C shows that the IGP statistic implemented in the R package clusterRepro [24] also shows a strong dependence on module size in this application. The IGP values of all modules are relatively high. However, the permutation p-values (panels C and D) identify the green module as less preserved than the other modules (Bonferroni corrected p-value 0.43). Figures 4E,F show scatter plots between the observed IGP statistic and Zsummary and medianRank, respectively; as noted above, the positive correlation with Zsummary and the negative correlation with medianRank are expected since low median ranks indicate high preservation. While composite statistics summarize the results, it is advisable to understand which properties of a module are preserved (or not preserved).
For example, module density based statistics allow us to determine whether the genes of a module (defined in the reference network) remain densely connected in the test network. As an illustration, we compare the module preservation statistics for the human yellow module, whose genes are primarily expressed in the caudate nucleus (an evolutionarily old brain area), and the human blue module, whose genes are expressed mostly in the cortex, which underwent large evolutionary changes between humans and chimpanzees. In chimpanzees, the mean adjacency of the genes comprising the human yellow module is significantly higher than expected by chance, with a highly significant permutation Z statistic. The corresponding permutation Z statistic for the human blue module is only weakly significant (see Supplementary Text S2 and Supplementary Table S1 for the numeric values). Thus, the mean adjacency permutation statistic suggests that the blue module is less preserved than the yellow module. For co-expression modules, one can define an alternative density measure based on the module eigengene (Figures 5A and E). The higher the proportion of variance explained by the module eigengene (defined in the fifth section in Supplementary Text S1) in the test set data, the tighter is the module in the test set. The human yellow module exhibits a high proportion of variance explained and a highly significant permutation Z statistic. In contrast, the human blue module exhibits a lower proportion of variance explained and a less significant permutation Z statistic (Supplementary Table S1). The permutation statistics again suggest that the yellow module is more preserved than the blue module. A. Heatmaps and eigengene plots for visualizing the gene expression profiles of the yellow module genes (rows) across human brain microarray samples (columns). In the heat map, green indicates under-expression, red over-expression, and white mean expression.
The module eigengene expression depicted underneath the heat map shows how the eigengene expression (y-axis) changes across the samples (x-axis), which correspond to the columns of the heat map. The eigengene can be interpreted as a weighted average gene expression profile. The color bar below the eigengene indicates the region from which the sample was taken: light blue indicates cortical samples, magenta indicates cerebellum samples, and orange indicates caudate nucleus samples. Scatter plots B.–D. show that the connectivity patterns of the yellow module genes tend to be highly preserved between the two species. B. Scatter plot of gene-gene correlations in chimpanzee samples (y-axis) vs. human samples (x-axis) within the human yellow module. Each point corresponds to a gene-gene pair. The scatter plot exhibits a significant correlation (cor.cor and p-value displayed in the title), indicating that the correlation pattern among the genes is preserved between the human and chimpanzee data. C. Scatter plot of intramodular connectivities kIM (Equation 7) of genes in the human yellow module in chimpanzee samples (y-axis) vs. human samples (x-axis). Each point corresponds to one gene. The scatter plot exhibits a significant correlation (cor.kIM and p-value displayed in the title), indicating that the hub gene status in the human yellow module is preserved in the chimpanzee samples. D. Scatter plot of eigengene-based connectivities kME (Equation 17) of genes in chimpanzee samples (y-axis) vs. human samples (x-axis). Each point corresponds to one gene. The scatter plot exhibits a significant correlation (cor.kME and p-value displayed in the title), indicating that fuzzy module membership in the human yellow module is preserved in the chimpanzee samples. Scatter plots E.–H. show that the human blue module is less preserved in the chimpanzee network. Note that the correlations in scatter plots F.–H.
are lower than the corresponding correlations in the yellow module plots B.–D., indicating weaker preservation of the human blue module in the chimpanzee samples. Overall, these results agree with those from the cross-tabulation based analysis reported in Figure 3. Although density based approaches are intuitive, they may fail to detect another form of module preservation, namely the preservation of connectivity patterns among module genes. For example, network module connectivity preservation can mean that, within a given module, a pair of genes with a high connection strength (adjacency) in the reference network also exhibits a high connection strength in the test network. This property can be quantified by correlating the pairwise adjacencies or correlations between the reference and test networks. For the genes in the human yellow module, the scatter plot in Figure 5B shows pairwise correlations in the human network (x-axis) versus the corresponding correlations in the chimpanzee network (y-axis). The correlation between pairwise correlations (denoted by cor.cor) is high and highly significant (values displayed in the figure title). The analogous correlation for the blue module, Figure 5F, is lower, 0.56, but still highly significant, in part because of the higher number of genes in the blue module. A related but distinct connectivity preservation statistic quantifies whether intramodular hub genes in the reference network remain intramodular hub genes in the test network. Intramodular hub genes are genes that exhibit strong connections to other genes within their module. This property can be quantified by the intramodular connectivity kIM (Equation 7): hub genes are genes with high kIM. Intramodular hub genes often play a central role in the module [5], [33]–[35]. Preservation of intramodular connectivity reflects the preservation of hub gene status between the reference and test network.
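The cor.cor statistic just described can be sketched in a few lines: compute the gene-gene correlation matrix in each data set and correlate the two upper triangles. This is an illustrative numpy version under our own naming, not the paper's R implementation.

```python
import numpy as np

def cor_cor(expr_ref: np.ndarray, expr_test: np.ndarray) -> float:
    """cor.cor: correlate the vector of pairwise gene-gene correlations
    in the reference data with the corresponding vector in the test data
    (upper triangle only, so each gene pair is counted once)."""
    cr = np.corrcoef(expr_ref, rowvar=False)   # genes in columns
    ct = np.corrcoef(expr_test, rowvar=False)
    iu = np.triu_indices_from(cr, k=1)
    return np.corrcoef(cr[iu], ct[iu])[0, 1]
```

A high cor.cor indicates that gene pairs that are strongly (or weakly) correlated in the reference data set are also strongly (or weakly) correlated in the test data set.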
For example, the intramodular connectivity of the human yellow module is well preserved between the human and chimpanzee samples (Figure 5C). In contrast, the human blue (cortical) module exhibits a lower correlation (preservation) of intramodular connectivity (Figure 5G); its p-value is nevertheless more significant because of the higher number of genes in the blue module. Another intramodular connectivity measure is the eigengene-based connectivity kME, which turns out to be highly related to kIM [29]. Figure 5D shows that kME for the human yellow module is highly preserved in the chimpanzee network. The corresponding correlation for the human blue module is lower (Figure 5H). In summary, the observed preservation statistics show that the human yellow module (related to the caudate nucleus) is more strongly preserved in the chimpanzee samples than the human blue module (related to the cortex).

Application 3: Preservation of KEGG pathways between human and chimpanzee brains

To further illustrate that modules do not have to be clusters, we now describe an application where modules correspond to KEGG pathways. KEGG (Kyoto Encyclopedia of Genes and Genomes) is a knowledge base for systematic analysis of gene functions, linking genomic information with higher order functional information [36]. KEGG also provides graphical representations of cellular processes, such as signal transduction, metabolism, and membrane transport. To illustrate the use of the module preservation approach, we studied the preservation of selected KEGG pathway networks across human and chimpanzee brain correlation networks. While pathways in the KEGG database typically describe networks of proteins, our analysis describes the correlation patterns between mRNA expression levels of the corresponding genes. As before, we define a weighted correlation network adjacency matrix between the genes (described in the third section of Supplementary Text S1 and [5]).
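The hub-preservation statistics discussed above (kIM and cor.kIM) can be sketched directly from the adjacency matrices. This is an illustrative numpy version under our own naming, not the paper's R code.

```python
import numpy as np

def kIM(adj: np.ndarray, module_idx: np.ndarray) -> np.ndarray:
    """Intramodular connectivity (Equation 7): for each module gene, the
    sum of its adjacencies to all other genes in the same module."""
    sub = adj[np.ix_(module_idx, module_idx)]
    return sub.sum(axis=1) - np.diag(sub)  # exclude self-adjacency

def cor_kIM(adj_ref: np.ndarray, adj_test: np.ndarray,
            module_idx: np.ndarray) -> float:
    """Hub gene preservation: correlate intramodular connectivities
    between the reference and the test network."""
    return np.corrcoef(kIM(adj_ref, module_idx),
                       kIM(adj_test, module_idx))[0, 1]
```

A high cor.kIM means that genes that act as intramodular hubs in the reference network remain hubs in the test network.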
For the sake of brevity, we focused the analysis on the following 8 signaling pathways: Hedgehog signaling pathway (12 genes in our data sets), apoptosis (24 genes in our data sets), TGF-beta signaling pathway (26 genes), Phosphatidylinositol signaling system (39 genes), Wnt signaling pathway (55 genes), Endocytosis (59 genes), Calcium signaling pathway (78 genes), and MAPK signaling pathway (93 genes). All of these pathways have been shown to play critical roles in normal brain development and function [37]–[41]. We provide a brief description of the functions of these pathways in Methods; more detailed descriptions can be found in the KEGG database and in numerous textbooks. Figures 6A,B show the composite preservation statistics Zsummary and medianRank. Both statistics indicate that the apoptosis module is the least preserved module. To visualize the lack of preservation, consider the circle plots of apoptosis genes in Figures 7L, M, which show pronounced differences in the connectivity patterns among apoptosis genes. While we caution the reader that additional data are needed to replicate these differences, prior literature points to an evolutionary difference for apoptosis genes. For example, a scan for positively selected genes in the genomes of humans and chimpanzees found that a large number of genes involved in apoptosis show strong evidence for positive selection [42]. Further, it has been hypothesized that natural selection for increased cognitive ability in humans led to a reduced level of neuron apoptosis in the human brain [43]. Here we present the composite statistics Zsummary (panel A) and medianRank (panel B), and the IGP statistic (panels C and D). Panels E. and F. show scatter plots between the observed IGP statistic and Zsummary and medianRank, respectively. Here we find no significant relationship between the IGP statistic and the composite module preservation statistics. Since KEGG modules do not correspond to clusters, it is not clear whether cluster preservation statistics are useful in this example.
The first column presents summary preservation statistics (y-axis) for selected KEGG pathways (interpreted as modules) versus the number of genes in the pathway (x-axis). Panel A shows Zsummary (Equation 1), panel B shows the density summary statistic Zdensity (Equation 30), and panel C shows the connectivity summary statistic Zconnectivity (Equation 31). Pathway names are shortened for readability. Panel A shows that the MAPK, Calcium, Endocytosis, Wnt, and Phosphatidylinositol pathways show strong evidence of preservation (Zsummary > 10) while the apoptosis module is not preserved. Panel C shows that this preservation signal mainly reflects connectivity preservation (Zconnectivity, Equation 31) while panel B reveals that most modules have only weak to moderate density preservation (Zdensity, Equation 30). Note that the apoptosis pathway shows no evidence of preservation. Panels D–H display scatter plots of eigengene-based connectivities kME in the chimpanzee data (y-axis) vs. in the human data (x-axis). Each point represents a gene in the pathway. A higher correlation means that the internal co-expression structure of the pathway is more strongly preserved. The apoptosis pathway has the lowest cor.kME statistic, while the Phosphatidylinositol pathway has the highest. The circle plots in panels L and M show connection strengths among apoptosis genes in humans and chimpanzees, respectively. Figure 6A shows that Zsummary exhibits some dependence on module size. Since we want to compare module preservation irrespective of module size, we focus on the results for the medianRank statistic (Figure 6B). A reviewer of this article hypothesized that gene sets (modules) known to be controlled by coexpression (such as Wnt, TGF-beta, SRF, interferon, lineage specific differentiation markers, and NF kappa B) would show stronger evidence of preservation than gene sets without an a priori reason for suspecting such control (calcium signaling, MAPK, apoptosis, chemotaxis, endocytosis). Interestingly, the results for the medianRank statistic largely validate this hypothesis.
Specifically, the 4 most highly preserved pathways according to medianRank are Wnt (controlled by coexpression), calcium (not controlled), Hedgehog (controlled), and Phosphatidylinositol (not commented upon). The 4 least preserved pathways are apoptosis (not controlled), TGF-beta (controlled), MAPK (not controlled), and endocytosis (not controlled). Since KEGG pathways are not defined via a clustering procedure, it is not clear whether cluster preservation statistics are appropriate for analyzing this example. But to afford a comparison, we also report the findings for the IGP statistic [24]. Figures 6C and D show that IGP identifies Phosphatidylinositol and TGF-beta as the least preserved modules while apoptosis genes are highly preserved. We find no significant relationship between the IGP statistic and our module preservation statistics Zsummary and medianRank (Figures 6E and F). This example highlights that module preservation statistics can lead to very different results from cluster preservation statistics. To understand which aspects of the pathways are preserved, one can study the preservation of density statistics (Figure 7B) and of connectivity statistics (Figure 7C). The co-expression network formed by apoptosis genes is not preserved: it shows evidence of neither connectivity preservation nor density preservation. The Hedgehog pathway also shows no evidence of density preservation, but it shows weak evidence of connectivity preservation. The relatively low preservation Z statistics of the Hedgehog pathway may reflect a higher variability due to its small module size (it contains only 12 genes while the other pathways contain at least 22 genes). To explore this further, we studied the observed preservation statistics, which are less susceptible to network size effects than the corresponding Z statistics. The scatter plots in Figures 7D–H show the correlations between eigengene-based connectivity measures between the two species.
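The eigengene-based connectivity (kME) comparison behind Figure 7D–H can be sketched in a few lines. The paper's actual implementation is in the R WGCNA package; the Python sketch below is an illustration under common assumptions (module eigengene taken as the first principal component of the module's expression data, kME as each gene's correlation with the eigengene), and all function names here are our own.

```python
import numpy as np

def module_eigengene(expr):
    """First principal component of a (samples x genes) expression matrix,
    standardized per gene; serves as a summary profile of the module."""
    X = (expr - expr.mean(0)) / expr.std(0)
    # Left singular vector u[:, 0] is the eigengene (defined up to sign).
    u, s, vt = np.linalg.svd(X, full_matrices=False)
    return u[:, 0]

def kME(expr, eigengene):
    """Eigengene-based connectivity: Pearson correlation of each gene's
    profile with the module eigengene."""
    X = (expr - expr.mean(0)) / expr.std(0)
    e = (eigengene - eigengene.mean()) / eigengene.std()
    return X.T @ e / len(e)

def cor_kME(expr_ref, expr_test):
    """Preservation of hub gene status: correlate the kME vectors computed
    separately in the reference and test data sets."""
    k_ref = kME(expr_ref, module_eigengene(expr_ref))
    k_test = kME(expr_test, module_eigengene(expr_test))
    return np.corrcoef(k_ref, k_test)[0, 1]
```

Because the eigengene sign is arbitrary, the sign of cor.kME may flip; in practice one orients the eigengene (e.g., toward positive average correlation) before comparing.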
For the Hedgehog pathway, we find an observed cor.kME value that turns out to be higher than that of the TGF-beta pathway. In contrast, the lack of preservation of the apoptosis pathway cannot be explained in terms of low module size: Figure 7E shows that it has the lowest observed cor.kME statistic. This application outlines how module preservation statistics can be used to study the preservation of KEGG pathway networks. The analysis presented here is but a first step towards characterizing molecular pathway preservation between human and chimpanzee brains, and should be extended through more detailed analyses with additional data sets in the future. A limitation of our microarray data is that they measured expression levels in heterogeneous mixtures of cells. KEGG and GO (gene ontology) pathways essentially describe interactions that take place within cells. So when data have been generated from a heterogeneous mixture of different cell types, it is possible that these relationships are somewhat obscured. Moreover, it is not obvious that all of the elements of a KEGG pathway should be co-expressed, particularly since the pathways describe protein-protein interactions.
Application 4: Preservation of modules between male and female cortex co-expression networks
We briefly describe an application that quantifies module preservation between male and female cortical samples. The details are described in Supplementary Text S3 and in Supplementary Table S2. We used microarray data from a recent publication [30] to construct consensus modules [44] in male samples from 2 different data sets. We then studied the preservation of these modules in the corresponding female samples. Cross-tabulation measures indicate that for 3 of the male modules there are no corresponding modules in the female data. However, our network preservation statistics show that in fact these three modules show moderate to strong evidence of preservation.
Thus, in this application the network preservation statistics protect one from making erroneous claims of significant sex differences.
Application 5: Preservation of female mouse liver co-expression modules in male mice
In Supplementary Text S4, we re-analyze the mouse liver samples of the F2 mouse intercross [13], [17] to study whether “female” co-expression modules (i.e., modules found in a network based on female mice) are preserved in the corresponding male network. This application demonstrates that module preservation statistics allow us to identify invalid, non-reproducible modules caused by array outliers. A comprehensive table of module preservation statistics for this application is presented in Supplementary Table S3.
Application 6: Preservation of consensus modules
Our preservation statistics allow one to evaluate whether a given module is preserved in another network. A related but distinct data analysis task is to construct modules that are present in several networks. By construction, a consensus module can be detected in each of the underlying networks. A challenge of many real data applications is that it is difficult to obtain independent information (a “gold standard”) that allows one to argue that a module is truly preserved. To address this challenge, we use the consensus network application where, by construction, modules are known to be preserved. This allows us to determine the range of values of preservation statistics when modules are known to be preserved. In Supplementary Text S5 and Supplementary Table S4, we report three empirical studies of consensus modules [44], which are constructed in such a way that genes within consensus modules are highly co-expressed in all given input microarray data sets. The consensus module application provides further empirical evidence that module preservation statistics and the recommended threshold values provide sufficient statistical power to implicate preserved modules.
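Consensus modules are present in every input network by construction. One common construction (used, for example, in consensus co-expression analyses along the lines of [44]) takes a component-wise minimum across the input adjacency or topological overlap matrices, so an edge is only as strong as its weakest version across networks. The Python sketch below illustrates that single idea, not the full consensus module detection procedure.

```python
import numpy as np

def consensus_adjacency(adjacencies):
    """Component-wise minimum across a list of adjacency matrices.
    Modules detected in this consensus network are, by construction,
    densely connected in every input network."""
    return np.minimum.reduce([np.asarray(a, dtype=float) for a in adjacencies])
```

In practice a quantile (rather than the strict minimum) is sometimes used to make the consensus less sensitive to a single noisy network.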
Relationships among module preservation statistics
In Table 1, we categorize the statistics according to which aspects of module preservation they measure. For example, we present several seemingly different versions of density- and connectivity-based preservation statistics. But for correlation network modules, close relationships exist between them, as illustrated in Figure 8. The hierarchical clustering trees in Figure 8 show the correlations between the observed preservation statistics in our real data applications. As input to hierarchical clustering, we used a dissimilarity between the observed preservation statistics, defined as one minus their correlation across all studied reference and test data sets. Overall, we observe that statistics within one category tend to cluster together. We also observe that separability appears to be only weakly related to the density and connectivity preservation statistics. Cross-tabulation statistics correlate strongly with density and connectivity statistics in the study of human and chimpanzee brain data, but the correlation is weak in the study of sex differences in human brain data. The (average linkage) hierarchical cluster trees visualize the correlations between the preservation statistics. The preservation statistics are colored according to their type: density statistics in red, connectivity preservation statistics in blue, separability in green, and cross-tabulation statistics in black. Note that statistics of the same type tend to cluster together. We derive some of these relationships in the sixth section of Supplementary Text S1.
In particular, the geometric interpretation of correlation networks [29], [45] can be used to describe situations when close relationships exist among the density-based preservation statistics, among the connectivity-based preservation statistics, and between the separability statistics. These relationships justify aggregating the module preservation statistics into composite preservation statistics such as Zsummary (Equation 1) and medianRank (Equation 34).
Simulation studies and comparisons
To illustrate the utility and performance of the proposed methods, we consider 7 different simulation scenarios that were designed to reflect various correlation network applications. An overview of these simulations can be found in Figure 9. A more detailed description of the simulation scenarios is provided below. The first column outlines 6 (out of 7) simulation scenarios. Results for the seventh simulation scenario can be found in Supplementary Text S6. Preserved and non-preserved modules are marked in red and black, respectively. The grey module (labeled 0) represents genes whose profiles are simulated to be independent (that is, without any correlation structure). The second and third columns report values of the composite statistics Zsummary and medianRank, respectively, as a function of module size. The blue and green horizontal lines mark the thresholds Z = 2 and Z = 10. Each figure title reports the Kruskal-Wallis test p-value for testing whether the preservation statistics differ between preserved and non-preserved modules. Note that the proposed thresholds (Zsummary > 10 for preserved and Zsummary < 2 for non-preserved modules) work quite well. The fourth column shows the permutation p-values of IGP obtained by the R package clusterRepro. The blue and brown lines show p-value thresholds of 0.05 and its Bonferroni correction, respectively. The IGP permutation p-value is less successful than Zsummary at distinguishing preserved from non-preserved modules.
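The aggregation into a composite Z statistic can be sketched in code. We assume, consistent with the text but simplified relative to Equation 1 of the Methods, that individual permutation Z statistics are first summarized by the median within the density and connectivity categories and the two medians are then averaged; the thresholds follow the paper's rule of thumb (Z > 10 strong evidence of preservation, Z < 2 none).

```python
import statistics

def composite_Z(density_Zs, connectivity_Zs):
    """Aggregate permutation Z statistics into a composite score:
    median within each category, then the average of the two medians."""
    Z_density = statistics.median(density_Zs)
    Z_connectivity = statistics.median(connectivity_Zs)
    return (Z_density + Z_connectivity) / 2

def interpret(z):
    """Rule-of-thumb thresholds: Z > 10 strong evidence of preservation,
    Z < 2 no evidence, in between weak-to-moderate."""
    if z > 10:
        return "strong"
    if z < 2:
        return "none"
    return "weak-to-moderate"
```

Taking medians within each category keeps a single outlying statistic from dominating the composite score.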
The fifth and last column shows scatter plots of the observed IGP versus the observed composite preservation statistic. We observe that the two tend to be highly correlated when modules correspond to clusters with varying extents of preservation. Table 3 shows the performance grades of module preservation statistics in the different simulation scenarios; the highest grade of 4 indicates excellent performance. We find that the proposed composite statistics Zsummary and medianRank perform very well in distinguishing preserved from non-preserved modules. In contrast, cross-tabulation based statistics obtain a considerably lower mean grade. Since several simulation scenarios test the ability to detect connectivity preservation (as opposed to density preservation), it is no surprise that on average cluster validation statistics do not perform well in these simulations. For example, the IGP cluster validation statistic (Table 4) obtains a low mean grade across the scenarios. But the IGP performs very well (grade 4) when studying the preservation of strongly preserved clusters (scenario 2). Table 3 also shows the performance of individual preservation statistics. Note that density-based preservation statistics perform well in scenarios 1 through 5 but fail in scenarios 6 and 7. On the other hand, all connectivity-based statistics perform well in scenarios 6 and 7. A statistic's relatively poor performance across scenarios was one of the reasons for not including it in our composite statistics. In the following, we describe the different simulation scenarios in more detail. 1. In the weak preservation simulation scenario, we simulate a total of 20 modules in the reference data. Each of the reference modules contains 200 nodes, but only a subset of the modules is simulated to be preserved in the test network. We call it the weak preservation simulation since the intramodular correlations of preserved modules are relatively low. The intramodular correlations of non-preserved modules are expected to be zero.
Note that the summary statistic Zsummary successfully distinguishes preserved from non-preserved modules (second column of Figure 9). Similarly, the medianRank statistic distinguishes preserved from non-preserved modules (third column of Figure 9). In comparison, the IGP permutation p-value (fourth column of Figure 9) is less successful: only some of the preserved modules pass the Bonferroni-corrected threshold, and the modules that pass the threshold include both preserved and non-preserved ones. In this simulation we observe a moderate relationship between the observed IGP and the composite preservation statistic. 2. In the half-preserved simulation scenario, we simulate modules of varying sizes, labeled 1–10. Modules 1–5 are preserved in the test set, while modules 6–10 are not preserved. All 5 preserved modules score above the preservation threshold, and all non-preserved modules score below it. Likewise, medianRank separates preserved and non-preserved modules. Permutation p-values of IGP are also successful with respect to the Bonferroni-corrected threshold. In this simulation we observe a strong correlation between IGP and the composite preservation statistic. 3. In the permuted simulation scenario, none of the 10 modules are preserved. Specifically, we simulate modules of varying sizes in the reference set and modules of the same sizes in the test set, but there is no relationship between the modules: the module membership is randomly permuted between the networks. The low value of the summary preservation statistic accurately reflects that none of the modules are preserved. In contrast, the IGP permutation p-value for 2 of the 10 modules is lower than the Bonferroni-corrected threshold. In this simulation the correlation between IGP and the composite preservation statistic is not significant. 4. In the half-permuted simulation scenario, we simulate modules labeled 1–10 in the reference set. Modules 1–5 are preserved in the test set, while modules 6–10 are not.
The test set contains modules 6′–10′ of the same sizes as modules 6–10, but their module membership is randomly permuted with respect to modules 6–10. The summary preservation statistic is quite accurate: all preserved modules score above the preservation threshold and all non-preserved modules score below it. The observed values of the IGP statistic are highly correlated with the composite preservation statistic, but the IGP permutation p-values do not work well: 2 preserved modules have an IGP p-value above the significance threshold. 5. In the intramodular permuted scenario, we simulate modules whose density is preserved but whose intramodular connectivity is not. Specifically, we simulate a total of 10 modules labeled 1–10 in the reference set. The density of modules 1–5 is preserved in the test set, but the node labels inside each module are permuted, which entails that their intramodular connectivity patterns are no longer preserved in the test network. For modules 6–10, neither the density nor the connectivity is preserved. Both composite statistics work well, though not as well as in the previous studies; in particular, both successfully detect the density preservation. IGP performs quite well: it misclassifies only one non-preserved module as preserved. In this simulation we observe a strong correlation between IGP and the composite preservation statistic. 6. In the pathway simulations scenario, we simulate (preserved) modules whose connectivity patterns are preserved but whose density is not. Further, we simulate modules for which neither connectivity nor density is preserved. In the following description, we refer to the modules from scenario 4 as clusters to distinguish them from the non-cluster modules studied here. The preserved (non-preserved) modules of the pathway scenario are created by randomly selecting nodes from the preserved (non-preserved) clusters in scenario 4. Thus, the preserved modules contain nodes from multiple preserved clusters of scenario 4.
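The permuted-membership scenarios above can be mimicked with a toy example: plant a dense module in an adjacency matrix, then score a randomly permuted membership. The permuted set picks up mostly background edges, which is why density statistics flag it as non-preserved. This is a simplified Python sketch with a uniform background and hypothetical helper names, not the paper's simulation code.

```python
import numpy as np

def planted_module_adjacency(n_module, n_total, strength=0.8, seed=0):
    """Toy adjacency matrix: the first n_module nodes form a dense block
    (connection strength `strength`); all other entries are weak background."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, 0.1, (n_total, n_total))
    A[:n_module, :n_module] = strength
    A = np.triu(A, 1)
    A = A + A.T  # symmetric, zero diagonal
    return A

def module_density(A, members):
    """Mean off-diagonal adjacency among the given member nodes."""
    sub = A[np.ix_(members, members)]
    n = len(members)
    return (sub.sum() - np.trace(sub)) / (n * (n - 1))
```

Scoring the true membership yields the planted density (0.8 here), while a random permutation of the labels yields roughly the background level.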
Since the pairwise correlations between and within the preserved clusters (of scenario 4) are preserved, the intramodular connectivity patterns of the resulting pathway modules are preserved in the test network. But since nodes from different clusters may have low correlations, the density of the pathway modules tends to be low. The two pathway simulations differ by module size: in the small scenario, modules range from 25 to 100 nodes; in the large scenario, modules range from 100 to 500 nodes. Because module membership is trivially preserved between reference and test networks, cross-tabulation statistics are not applicable. The composite statistics Zsummary and medianRank distinguish preserved from non-preserved modules (Figure 9) since they also measure aspects of connectivity preservation. Considering individual preservation statistics, we find that all connectivity preservation statistics successfully distinguish preserved from non-preserved modules. As expected, density-based statistics and the IGP statistic fail to detect the preservation of the connectivity patterns of the preserved modules (Figure 9), but these statistics correctly indicate that the density is not preserved. Detailed results are provided in Supplementary Text S6. Additional descriptions of the simulations can be found in Supplementary Text S6 and in Supplementary Table S5. As a caveat, we mention that we only considered 7 scenarios that aim to emulate selected situations encountered in co-expression networks. The performance of these preservation statistics may change in other scenarios. A comprehensive evaluation in other scenarios is needed but lies beyond our scope. R software tutorials describing the results of our simulation studies can be found on our web page and will allow the reader to compare different methods using our simulated data.
Software implementation
Preservation statistics described in this article have been implemented in the freely available statistical language and environment R.
A complete evaluation of observed preservation statistics and their permutation statistics is implemented in the function modulePreservation, which is included in the updated WGCNA package originally described in [46]. For each user-defined reference network, both preservation and quality statistics are calculated considering each of the remaining networks as a test network. Our tutorials illustrate the use of the modulePreservation function on real and simulated data. All data, code and tutorials can be downloaded from http://www.genetics.ucla.edu/labs/horvath/CoexpressionNetwork/ModulePreservation. This article describes powerful module preservation statistics that capture different aspects of module preservation. The network-based preservation statistics only assume that each module forms a sub-network of the original network. Thus, we define a module as a subset of nodes with their corresponding adjacencies. In particular, our connectivity preservation statistics do not assume that modules are defined as clusters. While we have used connectivity-based statistics in biological applications (e.g., modular preservation in human and mouse networks [20], [21]), this article provides the first methodological description and evaluation of these and other module preservation statistics. We also demonstrate that it is advantageous to aggregate multiple preservation statistics into the composite statistics Zsummary and medianRank. While we propose module preservation statistics for general networks, all of our applications involve gene co-expression networks. For a special class of networks, called approximately factorizable networks, one can derive simple relationships between network concepts [29], [45].
Analogously, we characterize correlation modules where simple relationships exist between i) density-based preservation statistics, ii) connectivity-based preservation statistics, and iii) separability-based preservation statistics (see the sixth section of Supplementary Text S1). We also briefly describe relationships between preservation statistics in general networks. Table 3 shows the performance grades of module preservation statistics in different simulation scenarios. We find that the composite statistics Zsummary and medianRank perform very well in distinguishing preserved from non-preserved modules. While the dependence of Zsummary on module size is often attractive, our applications show situations when it is unattractive. In this case, we recommend using the composite statistic medianRank, which has an added bonus: its computation is much faster than that of Zsummary since it does not involve a permutation test procedure. Our applications provide evidence that the medianRank statistic can lead to biologically meaningful preservation rankings among modules.
Uses of module preservation statistics
Our applications provide a glimpse of the types of research questions that can be addressed with module preservation statistics. In general, methods for quantifying module preservation have several uses. First and foremost, they can be used to determine which properties of a network module are preserved in another network. Thus, module preservation statistics are a valuable tool for validation as well as for differential network analysis. Second, they can be used to define a global measure of module structure preservation by averaging a preservation statistic across multiple modules or by determining the proportion of modules that are preserved. A third use of module preservation statistics is to define measures of module quality (or robustness), which may inform the module definition.
For example, to measure how robustly a module is defined in a given correlation network, one can use resampling techniques to create reference and test sets from the original data and evaluate module preservation across the resulting networks. Thus, any module preservation statistic naturally gives rise to a module quality statistic by applying it to repeated random splits (interpreted as reference and test set) of the data. By averaging the module preservation statistic across multiple random splits of the original data, one arrives at a module quality statistic. We briefly point out situations when alternative procedures may be more appropriate. To identify modules that are present in multiple data sets, it can be preferable to consider all data sets simultaneously in a consensus module detection procedure. For example, the consensus module approach described in application 6 results in modules that are present in multiple networks by construction. To identify individual genes that diverge between two data sets, one can use standard discriminative analysis techniques. For example, differentially expressed genes can be found with differential expression analysis, and differentially co-expressed genes can be found using differential co-expression analysis [17].
Comparison to cluster preservation statistics
While cluster analysis and network analysis are different approaches for studying high-dimensional data, there are some commonalities. For example, it is straightforward to turn a network adjacency matrix (which is a similarity measure) into a dissimilarity measure which can be used as input to a clustering procedure (e.g., hierarchical clustering or partitioning around medoids) [25]. If a module is defined using a clustering procedure, one can use cluster preservation statistics as module preservation statistics.
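The similarity-to-dissimilarity transformation mentioned above is a one-liner in either direction. A minimal Python sketch (function names are ours), assuming adjacency entries in [0, 1]:

```python
import numpy as np

def adjacency_to_dissimilarity(A):
    """Turn a similarity (adjacency) matrix with entries in [0, 1] into a
    dissimilarity suitable as input to hierarchical clustering or PAM."""
    D = 1.0 - np.asarray(A, dtype=float)
    np.fill_diagonal(D, 0.0)  # a node is at distance 0 from itself
    return D

def dissimilarity_to_adjacency(D):
    """The reverse transform, so cluster-defining dissimilarities can feed
    network-based preservation statistics."""
    A = 1.0 - np.asarray(D, dtype=float)
    np.fill_diagonal(A, 1.0)  # diagonal of an adjacency matrix is 1
    return np.clip(A, 0.0, 1.0)
```

The two transforms are inverses of each other on valid inputs, which is exactly why module and cluster preservation statistics can be applied to the same data.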
Conversely, our adjacency-based module preservation statistics give rise to cluster preservation statistics, since a dissimilarity measure (used for the cluster definition) can also be transformed into a network adjacency matrix. In some of our applications where modules are defined as clusters, we find that our composite preservation statistic is highly correlated with the IGP cluster validation statistic [24] across modules. In our simulations, we observe that the IGP and the composite statistic tend to be highly correlated when modules correspond to clusters with varying extents of preservation. This illustrates that our approach leads to sensible results in the special case when modules are defined as clusters. When modules are not defined via a clustering procedure (e.g., in our KEGG pathway application), we find pronounced differences between our statistics and the IGP statistic. The proposed composite preservation statistics Zsummary and medianRank outperform (or tie with) the IGP statistic in all simulation scenarios (see Table 4). More comprehensive comparisons involving additional simulation scenarios and other cluster preservation statistics are needed but lie beyond our scope.
Module quality measures
Although not the focus of this work, we mention that a major application of density-based statistics is to measure module quality in the reference data (for example, to compare various module detection procedures). Module quality measures can be defined using density-based and separability-based module preservation measures: the density and separability of a module in the reference network measure its homogeneity and separateness, respectively. In contrast, connectivity-based measures (which contrast the reference adjacency matrix with the test network adjacency matrix) are not directly related to module quality measures (unless a data splitting approach is used in the reference data). Module quality measures based on density and separability can be used to confirm that the reference modules are well defined.
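The data-splitting approach mentioned above can be sketched as a split-half procedure: repeatedly split the samples at random, treat the halves as reference and test data, and average a preservation statistic over the splits to obtain a module quality score. The Python sketch below uses an illustrative statistic of our own choosing (mean absolute intramodular correlation in the test half); all function names are ours, not the WGCNA API.

```python
import numpy as np

def mean_abs_cor(expr_ref, expr_test, members):
    """Illustrative statistic: mean absolute pairwise correlation among
    module genes, evaluated in the test half."""
    sub = np.corrcoef(expr_test[:, members].T)
    n = len(members)
    return (np.abs(sub).sum() - n) / (n * (n - 1))

def split_half_quality(expr, members, stat, n_splits=20, seed=0):
    """Module quality via repeated random sample splits: one half plays
    the reference set, the other the test set; the chosen preservation
    statistic is averaged over splits."""
    rng = np.random.default_rng(seed)
    n = expr.shape[0]
    scores = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        half = n // 2
        scores.append(stat(expr[idx[:half]], expr[idx[half:]], members))
    return float(np.mean(scores))
```

A well-defined module (genes driven by a shared signal) scores high; an arbitrary gene set scores near the noise floor.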
A section in Supplementary Text S1 describes the module quality measures that are implemented in the R function modulePreservation. The proposed preservation statistics have several limitations, including the following. First, our statistics only apply to undirected networks; generalization to directed networks is possible but outside of our scope. A second limitation concerns statistics of connectivity preservation that are based on correlating network adjacencies, intramodular connectivities, etc., between the reference and the test networks. Because the Pearson correlation is sensitive to outliers, it may be advantageous to use an outlier-resistant correlation measure, e.g., the Spearman correlation or the biweight midcorrelation [47], [48] implemented in the WGCNA package [46]. Robust correlation options have been implemented in the R function modulePreservation. A third limitation is that a high value of a preservation statistic does not necessarily imply that the module could be found by a de novo module detection analysis in the test data set. For example, if a module is defined using cluster analysis, then the resulting test set modules may not have significant overlap with the original reference module in a cross-tabulation table. As explained before, this potential limitation is a small price to pay for making a module preservation analysis independent of the vagaries of module detection. A fourth limitation is that it is difficult to pick thresholds for preservation statistics. To address this issue, we use permutation tests to adjust preservation statistics for random chance by defining Z statistics (Equation 29). The R function modulePreservation also calculates empirical p-values for the preservation statistics. A potential disadvantage of permutation-test-based preservation statistics (compared to observed statistics) is that they typically depend on module size. The choice of thresholds is discussed in the Methods section.
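The permutation adjustment behind the Z statistics (Equation 29) can be sketched as: evaluate a statistic on the real module, re-evaluate it on many random node sets of the same size, and standardize the observed value against that null distribution. A Python sketch under those assumptions, with hypothetical helper names:

```python
import numpy as np

def permutation_Z(A_test, members, stat, n_perm=200, seed=0):
    """Z = (observed - mean of permuted) / sd of permuted, where permuted
    values come from random node sets of the same size as the module
    (cf. Equation 29)."""
    rng = np.random.default_rng(seed)
    n = A_test.shape[0]
    members = np.asarray(members)
    observed = stat(A_test, members)
    null = np.array([stat(A_test, rng.permutation(n)[:len(members)])
                     for _ in range(n_perm)])
    return (observed - null.mean()) / null.std(ddof=1)
```

Standardizing against the permutation null is what lets very different raw statistics (densities, connectivity correlations) be compared on a common Z scale.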
A fifth limitation is computational speed when it comes to calculating permutation-test-based statistics such as Zsummary. When only observed preservation statistics are of interest, we recommend avoiding the computationally intensive permutation test procedure by setting nPermutations=0 in the modulePreservation function. A sixth limitation is that the different preservation statistics may disagree with regard to the preservation of a given module: while certain aspects of a module may be preserved, others may not be. In our simulation studies, we present scenarios where connectivity statistics show high preservation but density measures do not, and vice versa. Since both types of preservation statistics will be of interest in practice, our R function modulePreservation outputs all preservation statistics. Although we aggregate several preservation statistics into composite statistics, we recommend considering all of the underlying preservation statistics to determine which aspects of a module are preserved. While we describe situations when cross-tabulation based preservation statistics are not applicable, we should point out that cross-tabulation statistics also have the following advantages. First, they are often intuitive. Second, they can be applied when no network structure is present. Third, they work well when module assignments are strongly preserved and the modules remain separate in the test network. In the first section of Supplementary Text S1, we describe cross-tabulation based module preservation statistics which we have found to be useful.
Discussion of the functional significance of co-expression relationships
We note that the interpretation of gene co-expression relationships depends heavily on biological context.
For example, in a dataset consisting of samples from multiple tissue types, co-expression modules (that is, modules defined by co-expression similarity) will often distinguish genes that are expressed in tissue-specific patterns (e.g., [32], [49]). In a dataset consisting of samples from a single tissue type, co-expression modules may distinguish sets of genes that are preferentially expressed in distinct cell types that comprise that tissue (e.g., [30]). In a dataset consisting of samples from a homogeneous cellular population, co-expression modules may correspond more directly to sets of genes that work in tandem to perform various intracellular functions. In many cases, co-expression modules may not present immediate functional interpretations. However, previous work has shown that many co-expression modules are conserved across phylogeny [4], [21], [32], [50], enriched with protein-protein interactions [7], [21], [30], and enriched with specific functional categories of genes, including ribosomal, mitochondrial, synaptic, immune, hypoxic, mitotic, and many others [7], [21], [30], [33]. Although elucidating the functional significance of identified co-expression modules requires substantial effort from biologists and bioinformaticians, the importance of co-expression modules lies not only in their functional interpretation, but also in their reproducibility. Because transcriptome organization in a given biological system is highly reproducible [30], co-expression modules provide a natural framework for comparisons between species, tissues, and pathophysiological states. This framework can reduce dimensionality by approximately three orders of magnitude (e.g., moving from say 40,000 transcripts to 40 modules) [29], [33], while simultaneously placing identified gene expression differences within specific cellular and functional contexts (inasmuch as the cellular and functional contexts of the modules are understood). 
The co-expression modules themselves are simply summaries of interdependencies that are already present in the data. Preservation statistics can be used to address an important question in co-expression module based analyses: how to show whether the modules are robust and reproducible across data sets. Given the above-mentioned limitations, it is reassuring that the proposed module preservation statistics perform well in 6 real data applications and in 7 simulation scenarios. Although it would be convenient to have a single statistic and a corresponding threshold value for deciding whether a module is preserved, this simplistic view fails to recognize that module preservation should be judged according to multiple criteria (e.g., density preservation, connectivity preservation, etc.). Individual preservation statistics provide a more nuanced and detailed view of module preservation. Before deciding on module preservation, the data analyst should decide which aspects of module preservation are of interest.
Cross-tabulation based preservation statistics
Due to space limitations, we have moved our description of cross-tabulation based preservation statistics to the first section of Supplementary Text S1. We briefly mention related measures reported in the literature. Our co-clustering statistic (in the first section of Supplementary Text S1) is similar to the cluster robustness measure [23], [51], and the accuracy-based measures are conceptually related to a cluster discrepancy measure proposed in [23]. Cluster validation measures and approaches are reviewed in [52]. Many cross-tabulation based methods have been proposed to compare two clusterings (module assignments), e.g., the Rand index [53] or prediction-based statistics [26], [27].
Review of network adjacency matrix and network concepts
Our methods are applicable to weighted or unweighted networks that are specified by an adjacency matrix $A = (a_{ij})$, an $n \times n$ matrix with entries in $[0, 1]$.
The component $a_{ij}$ encodes the network connection strength between nodes $i$ and $j$. In an unweighted network, the nodes $i$ and $j$ can be either connected ($a_{ij} = 1$) or disconnected ($a_{ij} = 0$). In a weighted network, the adjacency $a_{ij}$ takes on a value in $[0,1]$ that encodes the connection strength between the nodes. Networks do not have to be defined with regard to correlations. Instead, they may reflect protein binding information, participation in molecular pathways, etc. In the following, we assume that we are dealing with an undirected network encoded by a symmetric adjacency matrix: $a_{ij} = a_{ji}$. But several of our module preservation statistics can easily be adapted to the case of a directed network represented by a non-symmetric adjacency matrix. To simplify notation, we introduce the function $vectorizeMatrix$ that takes a symmetric matrix $M$ and turns it into a vector of its non-redundant components, (2) $vectorizeMatrix(M) = (M_{12}, M_{13}, \ldots, M_{1n}, M_{23}, \ldots, M_{(n-1)n})$. We assume that the diagonal of the matrix is fixed (for example, if $M$ is an adjacency matrix, the diagonal is defined to be 1), so we leave the diagonal elements out. Thus, the vector contains $n(n-1)/2$ components. A network represented by its adjacency matrix $A$ can be characterized by a number of network concepts (also known as network indices) [29], [45]. The network density is the mean adjacency, (3) $Density = \frac{\sum_i \sum_{j \neq i} a_{ij}}{n(n-1)} = mean(vectorizeMatrix(A))$. Higher density means more (or more strongly) interconnected nodes. The connectivity (also known as degree) of node $i$ is defined as $k_i = \sum_{j \neq i} a_{ij}$. The connectivity of node $i$ measures its connection strength with the other nodes. The higher $k_i$, the more centrally located is the node in the network. The Maximum Adjacency Ratio (MAR) [29] of node $i$ is defined as (4) $MAR_i = \frac{\sum_{j \neq i} a_{ij}^2}{\sum_{j \neq i} a_{ij}}$. The $MAR_i$ is only useful for distinguishing the connectivity patterns of nodes in a weighted network since it is constant ($MAR_i = 1$) in unweighted networks. The clustering coefficient [54] of node $i$ is defined as (5) $ClusterCoef_i = \frac{\sum_{l \neq i} \sum_{m \neq i, m \neq l} a_{il} a_{lm} a_{mi}}{\left(\sum_{l \neq i} a_{il}\right)^2 - \sum_{l \neq i} a_{il}^2}$. While the clustering coefficient was originally defined for unweighted networks, Equation 5 can be used to extend its definition to weighted networks [5]: one can easily show that $a_{ij} \in [0,1]$ implies $ClusterCoef_i \in [0,1]$.
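The basic network concepts above (density, connectivity, MAR, and clustering coefficient) can be sketched in a few lines of numpy. This is an illustrative translation, not the paper's implementation (which is the modulePreservation function in the WGCNA R package); the function name is ours:

```python
import numpy as np

def network_concepts(A):
    """Basic network concepts from a symmetric adjacency matrix A with
    entries in [0, 1] and unit diagonal (Equations 3-5)."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    off = A - np.diag(np.diag(A))            # zero out the diagonal
    k = off.sum(axis=1)                      # connectivity k_i
    density = off.sum() / (n * (n - 1))      # mean off-diagonal adjacency
    mar = (off ** 2).sum(axis=1) / k         # maximum adjacency ratio
    # clustering coefficient: numerator sums (weighted) closed triangles
    num = np.diag(off @ off @ off)
    den = k ** 2 - (off ** 2).sum(axis=1)
    clustercoef = num / den
    return density, k, mar, clustercoef
```

Note that for an unweighted network ($a_{ij} \in \{0,1\}$) the MAR of every node is 1, as stated in the text.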
Intramodular network concepts Many network analyses define modules, that is, subsets of nodes that form a sub-network in the original network. Modules are labeled by integer labels $q = 1, 2, \ldots, Q$, and sometimes by color labels. Color labels can be convenient for visualizing modules in network plots. For module $q$ with $n^{(q)}$ nodes, the $n^{(q)} \times n^{(q)}$ dimensional adjacency matrix between the module nodes is denoted by $A^{(q)}$. Denote by $I^{(q)}$ the set of node indices of the nodes in module $q$. Network concepts (such as the connectivity, clustering coefficient, MAR, etc.) defined for $A^{(q)}$ are referred to as intramodular network concepts. For example, the density of module $q$ is defined as the mean adjacency of $A^{(q)}$: (6) $Density^{(q)} = mean(vectorizeMatrix(A^{(q)}))$. The intramodular connectivity of node $i$ in module $q$ is defined as the sum of connection strengths to the other nodes within the same module, (7) $kIM_i = \sum_{j \in I^{(q)}, j \neq i} a_{ij}$. Nodes with high intramodular connectivity are referred to as intramodular hub nodes. Module preservation statistics for general networks Here we describe module preservation statistics that can be used to determine whether a module that is present in a reference network (with adjacency $A^{[ref]}$) can also be found in an independent test network (with adjacency $A^{[test]}$). Specifically, assume the vector $Cl^{[ref]}$ encodes the module assignments in the reference network. Thus $Cl_i^{[ref]} = q$ if node $i$ is assigned to module $q$. We reserve the label $0$ (and the color grey) for nodes that are not assigned to any module. For a given module $q$ with $n^{(q)}$ nodes, the module adjacency matrices are denoted $A^{[ref](q)}$ and $A^{[test](q)}$ in the reference and test networks, respectively. We propose network concepts that can be useful for determining whether a module (found in the reference network) is preserved in the test network. Intuitively, one may call a module preserved if it has a high density in the test network. We define the mean adjacency for module $q$ as the module density in the test network, (8) $meanAdj^{[test](q)} = mean(vectorizeMatrix(A^{[test](q)}))$. Some of the density statistics such as the mean adjacency are similar to previously described methods based on within-cluster and between-cluster dissimilarities [22].
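The intramodular density (Equation 6) and intramodular connectivity (Equation 7) amount to restricting the adjacency matrix to the module's nodes. A minimal numpy sketch (our own function name, not part of the published software):

```python
import numpy as np

def intramodular_concepts(A, labels, q):
    """Intramodular density (Eq. 6) and intramodular connectivity kIM
    (Eq. 7) of module q, given the full adjacency A and a vector of
    module labels (0 = unassigned/grey)."""
    idx = np.flatnonzero(np.asarray(labels) == q)
    Aq = np.asarray(A, dtype=float)[np.ix_(idx, idx)]
    off = Aq - np.diag(np.diag(Aq))          # drop the fixed diagonal
    nq = len(idx)
    density_q = off.sum() / (nq * (nq - 1))  # mean intramodular adjacency
    kIM = off.sum(axis=1)                    # intramodular connectivity
    return density_q, kIM
```

Nodes with the largest entries of `kIM` are the intramodular hubs referred to in the text.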
For example, the mean intramodular adjacency (Equation 8) is inversely related to the within-module scatter used in assessing the quality of clusters based on a dissimilarity [55]. The network density measure can be considered a generalization of the cluster cohesiveness measure [28] to (possibly weighted) networks. Other network concepts may be used to obtain a summary statistic of a module. For example, our R function modulePreservation also calculates preservation statistics based on the mean clustering coefficient (Equation 5): (9) $meanClusterCoef^{[test](q)} = mean_{i \in I^{(q)}}(ClusterCoef_i^{[test]})$, and the mean MAR (Equation 4): (10) $meanMAR^{[test](q)} = mean_{i \in I^{(q)}}(MAR_i^{[test]})$, in the test network. Connectivity preservation statistics quantify how similar the connectivity pattern of a given module is between a reference and a test network. For example, module connectivity preservation can mean that, within a given module $q$, nodes with a high connection strength in the reference network also exhibit a high connection strength in the test network. This property can be quantified by the correlation of intramodular adjacencies in the reference and test networks. Specifically, if the entries of the first adjacency matrix are correlated with those of the second adjacency matrix, then the adjacency pattern of the module is preserved in the second network. Therefore, we define the adjacency correlation of the module network as (11) $cor.adj^{(q)} = cor\left(vectorizeMatrix(A^{[ref](q)}), vectorizeMatrix(A^{[test](q)})\right)$. A high $cor.adj^{(q)}$ indicates that adjacencies within the module in the reference and test networks exhibit similar patterns. If module $q$ is preserved in the second network, the highly connected hub nodes in the reference network will often be highly connected hub nodes in the test network. In other words, the intramodular connectivity in the reference network should be highly correlated with the corresponding intramodular connectivity in the test network. Thus, we define the correlation of intramodular connectivities, (12) $cor.kIM^{(q)} = cor\left(kIM^{[ref](q)}, kIM^{[test](q)}\right)$, where $kIM^{[ref](q)}$ and $kIM^{[test](q)}$ are the vectors of intramodular connectivities of all nodes in module $q$ in the reference and test networks, respectively.
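The correlation of intramodular connectivities (Equation 12) can be illustrated as follows; this is a sketch under our own naming, not the WGCNA implementation:

```python
import numpy as np

def cor_kim(A_ref, A_test, labels, q):
    """Connectivity preservation: Pearson correlation of the intramodular
    connectivities of module q in the reference and test networks (Eq. 12)."""
    idx = np.flatnonzero(np.asarray(labels) == q)

    def kim(A):
        Aq = np.asarray(A, dtype=float)[np.ix_(idx, idx)]
        return (Aq - np.diag(np.diag(Aq))).sum(axis=1)

    # high values mean reference hubs are also hubs in the test network
    return np.corrcoef(kim(A_ref), kim(A_test))[0, 1]
```

When the test network equals the reference network, the statistic is exactly 1.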
Analogously, we define the correlation of clustering coefficients and maximum adjacency ratios, (13) $cor.ClusterCoef^{(q)} = cor\left(ClusterCoef^{[ref](q)}, ClusterCoef^{[test](q)}\right)$, (14) $cor.MAR^{(q)} = cor\left(MAR^{[ref](q)}, MAR^{[test](q)}\right)$. Correlation networks. Correlation networks are a special type of undirected network in which the adjacency is constructed on the basis of correlations between quantitative measurements that can be described by an $m \times n$ matrix $X$, where the column indices correspond to network nodes ($i = 1, \ldots, n$) and the row indices ($l = 1, \ldots, m$) correspond to sample measurements. We refer to the $i$-th column $x_i$ as the $i$-th node profile across the $m$ sample measurements. For example, if $X$ contains data from expression microarrays, the columns correspond to genes (or probes), the rows correspond to microarrays, and the entries report transcript abundance measurements. Networks based on gene expression data are often referred to as gene co-expression networks. An important choice in the construction of a correlation network concerns the treatment of strong negative correlations. In signed networks, negatively correlated nodes are considered unconnected. In contrast, in unsigned networks, nodes with high negative correlations are considered connected (with the same strength as nodes with high positive correlations). As detailed in Supplementary Text S1, a signed weighted adjacency matrix can be defined as follows [5], [56]: (15) $a_{ij} = \left(\frac{1 + cor(x_i, x_j)}{2}\right)^{\beta}$, and an unsigned adjacency by (16) $a_{ij} = |cor(x_i, x_j)|^{\beta}$. The choice of signed vs. unsigned networks depends on the application; both signed [56] and unsigned [13], [30], [33] weighted gene networks have been successfully used in gene expression analysis. Weighted correlation networks enjoy several advantages over unweighted networks, including the following: i) they preserve the continuous nature of the underlying correlation structure; ii) they are highly robust with respect to parameters (e.g., the soft-thresholding power $\beta$) used in the network construction [5]; iii) they allow for a geometric interpretation of network concepts [29].
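Constructing a signed (Equation 15) or unsigned (Equation 16) weighted adjacency from a data matrix can be sketched as below. The function name and the default power are ours; a power of 6 is a common soft-thresholding choice in co-expression analysis, not a universal prescription:

```python
import numpy as np

def correlation_adjacency(X, beta=6, signed=True):
    """Weighted correlation-network adjacency from an m x n data matrix X
    (rows = samples, columns = nodes).
    Signed:   a_ij = ((1 + cor(x_i, x_j)) / 2) ** beta   (Eq. 15)
    Unsigned: a_ij = |cor(x_i, x_j)| ** beta             (Eq. 16)"""
    C = np.corrcoef(X, rowvar=False)     # n x n correlation matrix
    A = ((1 + C) / 2) ** beta if signed else np.abs(C) ** beta
    np.fill_diagonal(A, 1.0)             # diagonal fixed to 1 by convention
    return A
```

In the signed version, strongly negatively correlated nodes get adjacency near 0; in the unsigned version they get adjacency near 1.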
The default method for defining modules in weighted correlation networks is to use average linkage hierarchical clustering coupled with dynamic branch cutting [5], [57]. The eigennode summarizes a correlation module and provides a measure of module membership. Many module construction methods lead to correlation network modules comprised of highly correlated variables. For such modules, one can summarize the corresponding module profiles using the first principal component, denoted by $E^{(q)}$ (fifth section of Supplementary Text S1), referred to as the module eigennode (ME) or (in gene co-expression networks) the module eigengene. For example, the gene expression profiles of a given co-expression module can be summarized with the module eigengene [19], [29], [44]. To visualize the meaning of the module eigengene, consider the heat map in Figure 5A. Here rows correspond to genes inside a given module and columns correspond to microarray samples. The heat map color-codes high (red) and low (green) gene expression values. The barplot underneath the heat map visualizes the expression level of the corresponding module eigengene. Note that the module eigengene has a high expression value for samples (columns) where the module genes tend to be over-expressed. The module eigengene can be considered the best summary of the standardized module expression data since it explains the maximum proportion of variance of the module expressions. The module eigennode can be used to define a quantitative measure of module membership [29] of node $i$ in module $q$: (17) $kME_i^{(q)} = cor\left(x_i, E^{(q)}\right)$, where $x_i$ is the profile of node $i$. The module membership $kME_i^{(q)}$ lies in $[-1, 1]$ and specifies how close node $i$ is to module $q$. $kME$ is sometimes referred to as module eigengene-based connectivity [13], [17]. Both intramodular network concepts (e.g., $kIM$) and intermodular network concepts (e.g., the module separability, Equation 27) can be used to study the preservation of network modules.
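The module eigennode and the module membership measure $kME$ (Equation 17) can be computed from the standardized module data via the singular value decomposition. A minimal sketch with our own naming and sign convention (the published code lives in the WGCNA R package):

```python
import numpy as np

def module_eigennode(Xq):
    """Module eigennode: first principal component of the standardized
    module data Xq (rows = samples, columns = module nodes), together
    with the module membership kME_i = cor(x_i, E) of each node (Eq. 17)."""
    Z = (Xq - Xq.mean(axis=0)) / Xq.std(axis=0)   # standardize each node
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    E = U[:, 0]                                   # eigennode across samples
    # orient E so that it correlates positively with the average profile
    if np.corrcoef(E, Z.mean(axis=1))[0, 1] < 0:
        E = -E
    kME = np.array([np.corrcoef(Z[:, i], E)[0, 1] for i in range(Z.shape[1])])
    return E, kME
```

For a tight module, all $kME$ values are close to 1, reflecting that the eigennode is a good summary of the module.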
By measuring how these network concepts are preserved from a reference network to a test network, one can define network module preservation statistics as described below. Module preservation statistics for correlation networks The specific nature of correlation networks allows us to define additional module preservation statistics. The underlying information carried by the sign of the correlation can be used to further refine the statistics, irrespective of whether a signed or unsigned similarity is used in network construction. To simplify notation, we define (18) $cor_{ij} = cor(x_i, x_j)$. We will use the notation $cor^{(q)}$ for the correlation matrix restricted to the nodes in module $q$. We define the mean correlation density of module $q$ as (19) $meanSignAwareCorDat^{(q)} = mean\left(sign\left(vectorizeMatrix(cor^{[ref](q)})\right) * vectorizeMatrix(cor^{[test](q)})\right)$. Thus the correlation measure of module preservation is the mean correlation in the test network multiplied by the sign of the corresponding correlations in the reference network. We note that a correlation that has the same sign in the reference and test networks increases the mean, while a correlation that changes sign decreases the mean. Because the preservation statistic keeps track of the sign of the corresponding correlation in the reference network, we call it the mean sign-aware correlation. To measure the preservation of connectivity patterns within module $q$ between the reference and test networks, we define a correlation-based measure similar to the $cor.adj$ statistic (Equation 11): (20) $cor.cor^{(q)} = cor\left(vectorizeMatrix(cor^{[ref](q)}), vectorizeMatrix(cor^{[test](q)})\right)$. In our applications we find that the correlation-based preservation statistic $cor.cor$ is preferable to its general network counterpart $cor.adj$; therefore, we only report $cor.cor$. Eigennode-based density preservation statistics. The concept of the module eigennode also gives rise to several preservation statistics that in effect measure module density or, from a different point of view, how well the eigennode represents the whole module. For example, one can use the proportion of variance explained (defined in the fifth section of Supplementary Text S1) by the module eigennode to arrive at a density measure.
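The mean sign-aware correlation (Equation 19) takes the non-redundant entries of the module's test-set correlation matrix and weights each by the sign of the matching reference correlation. An illustrative sketch (function name ours):

```python
import numpy as np

def mean_sign_aware_cor(C_ref, C_test):
    """Mean sign-aware correlation density (Eq. 19): mean of the test-set
    module correlations, each multiplied by the sign of the corresponding
    reference correlation. Inputs are the module correlation matrices."""
    iu = np.triu_indices_from(np.asarray(C_ref), k=1)  # non-redundant entries
    return np.mean(np.sign(np.asarray(C_ref)[iu]) * np.asarray(C_test)[iu])
```

Correlations that keep their sign between data sets contribute positively; sign flips pull the statistic down.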
In Supplementary Text S1, we prove that the proportion of variance explained (PVE) can also be calculated as a mean squared module membership: (21) $propVarExplained^{(q)} = mean_{i \in I^{(q)}}\left(\left(kME_i^{[test](q)}\right)^2\right)$, where $E^{[test](q)}$ is the eigennode of module $q$ in the test network. The mean sign-aware module membership is defined as (22) $meanSignAwareKME^{(q)} = mean_{i \in I^{(q)}}\left(sign\left(kME_i^{[ref](q)}\right) * kME_i^{[test](q)}\right)$. It measures the mean module membership (Equation 17), in which nodes whose module memberships in the reference and test networks have the same sign contribute positively, and nodes whose module memberships in the reference and test networks have opposite signs contribute negatively. Our statistic is conceptually related to the homogeneity score [22], [24], which is defined as the average correlation between a cluster's centroid and the members of the cluster. While [24] define the cluster centroid by an average, we use the first principal component (the module eigennode) as cluster centroid since it explains the maximum amount of variation. In several applications, we have found that the use of either cluster centroid leads to very similar results. Eigennode-based connectivity preservation statistics. Intuitively, if the internal structure of a module is preserved between a reference and a test network, we expect that a variable with a high module membership in the reference network will have a high module membership in the test network as well; conversely, variables with relatively low module membership in the reference network should also have a relatively low module membership in the test network. In other words, intramodular hubs in the reference network should also be intramodular hubs in the test network. For a given module $q$, we define the $cor.kME$ statistic as (23) $cor.kME^{(q)} = cor_{i \in I^{(q)}}\left(kME_i^{[ref](q)}, kME_i^{[test](q)}\right)$, where the correlation runs only over variables that belong to module $q$. We also define an analogous statistic by correlating the module membership of all network variables in the reference and test networks: (24) $cor.kMEall^{(q)} = cor_{i = 1, \ldots, n}\left(kME_i^{[ref](q)}, kME_i^{[test](q)}\right)$. The advantage of using all nodes is that the statistic is less dependent on cutoffs (for example, branch cut parameters) of the method used to define modules.
On the other hand, for relatively small modules (compared to the size of the full network), the signal of the few nodes with high module membership may be overwhelmed by the noise contribution of the many nodes that have very low module membership. Module separability statistics. A network module is distinct if it is well separated from the other modules in the network. A distinct module in a reference network may be considered well preserved in a test network if it remains well separated from the other modules in the test network. In the following, we describe several separability based preservation statistics. Denote by $I^{(q_1)}$ and $I^{(q_2)}$ the sets of node indices that correspond to modules $q_1$ and $q_2$, respectively. Our separability statistics contrast intermodular adjacencies with intramodular adjacencies. To measure the intermodular adjacencies between modules $q_1$ and $q_2$, we use the mean intermodular adjacency, (25) $meanAdj^{(q_1, q_2)} = mean_{i \in I^{(q_1)}, j \in I^{(q_2)}}(a_{ij})$, but alternative measures based on the minimal or the maximal intermodular adjacency could also be defined. As a measure of mean intramodular adjacency in the two modules, we use the geometric mean of the two module densities (Equation 8): (26) $\sqrt{meanAdj^{(q_1)} \cdot meanAdj^{(q_2)}}$. We define separability statistics as 1 minus the ratio of the intermodular adjacency divided by the intramodular density: (27) $separability^{(q_1, q_2)} = 1 - \frac{meanAdj^{(q_1, q_2)}}{\sqrt{meanAdj^{(q_1)} \cdot meanAdj^{(q_2)}}}$. The separability statistics take on (possibly negative) values smaller than 1. The closer a separability statistic value is to 1, the more separated (distinct) are the two modules. Since the mean is statistically more robust than the maximum or minimum based separability measures, it is in general preferable, but in specific applications the minimum and maximum based measures may be useful as well. In clustering applications based on Euclidean distance, it is customary to measure module distinctiveness, or separability, by the between-cluster distance. For correlation networks, we propose to measure module separability by 1 minus the correlation of their respective eigennodes.
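The adjacency-based separability (Equation 27) compares the mean intermodular adjacency against the geometric mean of the two module densities. A minimal sketch with our own function name:

```python
import numpy as np

def separability(A, labels, q1, q2):
    """Adjacency-based separability of two modules (Eq. 27):
    1 - (mean intermodular adjacency) / sqrt(density(q1) * density(q2)).
    Values close to 1 indicate well-separated modules."""
    A = np.asarray(A, dtype=float)
    i1 = np.flatnonzero(np.asarray(labels) == q1)
    i2 = np.flatnonzero(np.asarray(labels) == q2)
    inter = A[np.ix_(i1, i2)].mean()           # mean intermodular adjacency

    def density(idx):
        Aq = A[np.ix_(idx, idx)]
        off = Aq - np.diag(np.diag(Aq))
        return off.sum() / (len(idx) * (len(idx) - 1))

    return 1 - inter / np.sqrt(density(i1) * density(i2))
```

For a block-structured network with dense blocks and weak between-block adjacency, the statistic approaches 1.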
Specifically, for two modules $q_1, q_2$, their test separability is defined as (28) $separability^{[test](q_1, q_2)} = 1 - cor\left(E^{[test](q_1)}, E^{[test](q_2)}\right)$. Low test separability suggests the modules are not preserved as separate clusters. Differences in separability between networks may also reflect biologically interesting differences in correlation relationships between whole modules. In the sixth section of Supplementary Text S1, we outline when close relationships exist between the adjacency-based separability and the eigennode-based separability. Since the eigennode-based separability can be computed much more efficiently, we focus on the eigennode-based separability in our applications. Our separability statistic is conceptually related to the separability score used in [22], [24], which for cluster $q$ is the weighted average of the correlations between the centroid of cluster $q$ and every other centroid. Since we wanted to put all modules on the same footing irrespective of module size, we do not use module size in our definition of the separability statistics. Having said this, it is straightforward to adapt our definitions to include module size. Assessing significance of observed statistics by permutation tests Typical values of module preservation statistics depend on many factors, for example on network size, module size, number of observations, etc. Thus, instead of attempting to define thresholds for considering a preservation statistic significant, we use permutation tests. Specifically, we randomly permute the module labels in the test network and calculate the corresponding preservation statistics. This procedure is repeated many times (the number of permutations is a user-specified parameter). For each statistic, labeled by index $s$, we then calculate the mean and the standard deviation of the permuted values. We define the corresponding Z statistic as (29) $Z_s = \frac{observed_s - mean(perm_s)}{sd(perm_s)}$, where $observed_s$ is the observed value for statistic $s$. Under certain conditions, one can prove that under the null hypothesis of no preservation the statistic $Z_s$ asymptotically follows the standard normal distribution $N(0,1)$.
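The permutation Z statistic (Equation 29) can be sketched generically: permute the module labels, recompute the statistic, and standardize the observed value against the permutation distribution. Function names are ours, and `n_perm=200` mirrors a common default rather than a requirement:

```python
import numpy as np

def permutation_z(stat_fn, labels, q, n_perm=200, seed=0):
    """Permutation Z score (Eq. 29) for one preservation statistic.
    stat_fn(labels, q) evaluates the statistic for module q under a
    given module assignment; labels are randomly permuted in the test
    network to generate the null distribution."""
    rng = np.random.default_rng(seed)
    observed = stat_fn(labels, q)
    labels = np.asarray(labels)
    perm_vals = np.empty(n_perm)
    for b in range(n_perm):
        perm_vals[b] = stat_fn(rng.permutation(labels), q)
    # standardize against the permutation mean and standard deviation
    return (observed - perm_vals.mean()) / perm_vals.std(ddof=1)
```

For a genuinely dense module, the observed density far exceeds the permuted densities and the Z score is large.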
Thus, under the assumption that the number of permutations is large enough to approximate the asymptotic regime, one can convert the Z statistics to p-values using the standard normal distribution. Our R function modulePreservation outputs the asymptotic p-values for each statistic. But we should point out that it would be preferable to use a full permutation test to calculate permutation test p-values. We often report Z statistics (instead of p-values) for the following two reasons. First, permutation p-values of preserved modules are often astronomically significant, and it is more convenient to report the results on a Z scale. The second reason is computational speed: the calculation of a Z statistic only requires one to estimate the mean and variance under the null hypothesis, for which fewer permutations are needed. To estimate such a small permutation test p-value accurately would require computational time far beyond practical limits. Composite preservation statistic for correlation networks In the sixth section of Supplementary Text S1, we describe when close relationships exist between many of the preservation statistics presented above. This suggests that one can combine the individual preservation statistics into a composite preservation statistic. We propose two composite preservation statistics. The first composite statistic, $Z_{summary}$ (Equation 1), summarizes the individual Z statistic values that result from the permutation test. The second composite statistic, $medianRank$ (Equation 34), summarizes the ranks of the observed preservation statistics. The relationships derived in Supplementary Text S1 suggest to summarize the density based preservation statistics as follows: (30) $Z_{density} = \underset{s \in \text{density statistics}}{median}\left(Z_s\right)$. Similarly, the connectivity based preservation statistics can be summarized as follows: (31) $Z_{connectivity} = \underset{s \in \text{connectivity statistics}}{median}\left(Z_s\right)$. When density and connectivity based preservation statistics are equally important for judging the preservation of a network module, one can consider the composite summary statistic (Eq.
1): $Z_{summary} = \frac{Z_{density} + Z_{connectivity}}{2}$. Alternatively, a weighted average of $Z_{density}$ and $Z_{connectivity}$ can be formed to emphasize different aspects of module preservation. Future research could investigate alternative ways of aggregating preservation statistics. While our simulations and applications show that $Z_{summary}$ works well for distinguishing preserved from non-preserved modules, we do not claim that it is optimal. In practice, we recommend considering all individual preservation statistics. Our simulated as well as empirical data show that the separability statistic tends to have low agreement (as measured by correlation) with the other preservation statistics (Figure 8). Since the separability statistic often performs poorly, we did not include it in our composite statistics. Calculating empirical p-values for module preservation Since $Z_{summary}$ is not a permutation statistic but rather the median of other statistics, we do not use it to calculate a p-value. Instead, the R function modulePreservation calculates a summary p-value as follows. For each permutation Z statistic, it calculates the corresponding p-value assuming that, under the null, the Z statistic has a standard normal distribution $N(0,1)$. The normal distribution can be justified using relatively weak assumptions described in statistics textbooks. As a caveat, we mention that we use preservation p-values as descriptive (and not inferential) measures. On the other hand, we cannot assume normality for $Z_{summary}$. Hence, instead of calculating a p-value corresponding to $Z_{summary}$, we calculate a summary log-p-value instead, given as the median of the log-p-values of the corresponding permutation statistics. Because of the often extremely significant p-values associated with the permutation statistics, it is desirable to use logarithms (here base 10). We emphasize that the summary log-p-value is not directly associated with $Z_{summary}$; rather, it is a separate descriptive summary statistic that summarizes the p-values of the individual permutation statistics.
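The aggregation step described above (Equations 30, 31, and 1) reduces to medians and a mean; a minimal sketch, with our own function name:

```python
import numpy as np

def z_summary(z_density_stats, z_connectivity_stats):
    """Composite preservation statistic (Eq. 1): the median of the
    density Z scores and the median of the connectivity Z scores,
    averaged with equal weight."""
    z_density = np.median(z_density_stats)          # Eq. 30
    z_connectivity = np.median(z_connectivity_stats)  # Eq. 31
    return (z_density + z_connectivity) / 2         # Eq. 1
```

A weighted average of the two medians could be substituted to emphasize density or connectivity preservation, as the text notes.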
Thresholds for $Z_{summary}$ It seems intuitive to call a module with $Z_{summary} > 2$ preserved, but our simulation studies argue for a more stringent threshold. We recommend the following threshold guidelines: if $Z_{summary} > 10$, there is strong evidence that the module is preserved. If $2 < Z_{summary} < 10$, there is weak to moderate evidence of preservation. If $Z_{summary} < 2$, there is no evidence that the module is preserved. As discussed below, these threshold values should be considered rough guidelines since more (or less) stringent thresholds may be required depending on the application. The modulePreservation R function calculates multiple preservation statistics and corresponding asymptotic p-values. Similar to the case of Z statistics, a threshold that is appropriate in one context may not be appropriate in another. The choice of thresholds depends not only on the desired significance level but also on the research question. When several preservation statistics are analyzed individually for any indication of module preservation, then the threshold should correct for these multiple comparisons. Since several "tests" for preservation are considered, an obvious choice is to use one of the standard correction approaches (e.g., Bonferroni correction) for determining the threshold to apply across the multiple tests. Toward this end, one can use the uncorrected, individual preservation statistics and p-values output by the modulePreservation function. A Bonferroni correction would be a conservative but probably too stringent approach in light of the fact that many of the preservation statistics are closely related (see the 6th section in Supplementary Text S1). Given the strong relationships among some preservation statistics, we have found it useful to aggregate the statistics (and optionally the empirical p-values) in a statistically robust fashion using the median, but many alternative procedures are possible.
To provide some guidance, we recommend thresholds for $Z_{summary}$ that we have found useful in our simulation studies (Supplementary Text S6) and in our empirical studies. Composite preservation statistic medianRank In some applications, such as the human vs. chimpanzee comparison described above, one is interested in ranking modules by their overall preservation in the test set, i.e., one is interested in a relative measure of module preservation. Since our simulations and applications reveal that $Z_{summary}$ (Equation 1) strongly depends on module size, this statistic may not be appropriate when comparing modules of very different sizes. Here we define an alternative rank-based measure that relies on the observed preservation statistics rather than on the permutation statistics. For each statistic $s$, we rank the modules based on the observed values $observed_s$. Thus, each module is assigned a rank for each observed statistic. We then define the median density and connectivity ranks, (32) $medianRank_{density} = \underset{s \in \text{density statistics}}{median}\left(rank_s\right)$, (33) $medianRank_{connectivity} = \underset{s \in \text{connectivity statistics}}{median}\left(rank_s\right)$. Analogously to the definition of $Z_{summary}$, we then define the $medianRank$ statistic as the mean of $medianRank_{density}$ and $medianRank_{connectivity}$, (34) $medianRank = \frac{medianRank_{density} + medianRank_{connectivity}}{2}$. Alternatively, a weighted average of the ranks could be formed to emphasize different aspects of module preservation. It is worth repeating that a composite rank preservation statistic is only useful for studying the relative preservation of modules; e.g., we use $medianRank$ for studying which human brain co-expression modules are least preserved in chimpanzee brain networks. Composite preservation statistic for general networks While all examples in this article relate to correlation (in particular, co-expression) networks, we have also implemented methods and an R function that can be applied to general networks specified only by an adjacency matrix. For example, this function could be used to study module preservation in protein-protein interaction networks. We also define a composite statistic $Z_{summary.adj}$, which is defined for a general network specified by an adjacency matrix (Eq. 35): (35) $Z_{summary.adj} = \frac{Z_{density.adj} + Z_{connectivity.adj}}{2}$, where $Z_{density.adj}$ and $Z_{connectivity.adj}$ are the medians of the Z scores of the adjacency-based density and connectivity statistics, respectively.
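The rank-based composite statistic (Equations 32-34) can be sketched as follows; the function name, array layout, and the convention that higher observed values get better (lower) ranks are ours:

```python
import numpy as np

def median_rank(density_obs, connectivity_obs):
    """Rank-based composite preservation statistic (Eqs. 32-34).
    Each input is an (n_modules, n_statistics) array of observed
    statistics. Returns the medianRank of each module; low values
    indicate relatively well-preserved modules."""
    def col_ranks(obs):
        obs = np.asarray(obs, dtype=float)
        r = np.zeros_like(obs)
        for j in range(obs.shape[1]):
            order = np.argsort(-obs[:, j])    # descending: best module first
            r[order, j] = np.arange(1, obs.shape[0] + 1)
        return r

    med_d = np.median(col_ranks(density_obs), axis=1)       # Eq. 32
    med_c = np.median(col_ranks(connectivity_obs), axis=1)  # Eq. 33
    return (med_d + med_c) / 2                              # Eq. 34
```

Because ranks are scale-free, this measure is less sensitive to module size than $Z_{summary}$, which is the motivation given in the text.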
Note that $Z_{summary.adj}$ is only computed with regard to a subset of the individual statistics. To invoke this preservation statistic, set dataIsExpr=FALSE in the modulePreservation R function. Detailed methods description in Supplementary Text S1 A detailed description of the methods is provided in Supplementary Text S1, which contains the following sections. In the first section of Supplementary Text S1, we describe standard cross-tabulation based module preservation statistics. Specifically, we present three basic cross-tabulation based statistics for determining whether modules in a reference data set are preserved in a test data set. These statistics do not assume that a test network is available. Instead, module assignments in both the reference and the test networks are needed. In the second section, we briefly review a hierarchical clustering procedure for module detection. Many methods exist for defining network modules. In this section, we describe the method used in our applications, but it is worth repeating that our preservation statistics apply to most alternative module detection procedures. In the third section, we review the definition of signed and unsigned correlation networks. Correlation networks are a special case of general undirected networks in which the adjacency is constructed on the basis of correlations between quantitative variables. In the fourth section, we present module quality statistics, which are implemented in the modulePreservation R function. While our main article focuses on statistics that measure preservation of modules between a reference and a test network, we briefly discuss the application of some of the preservation statistics to the related but distinct task of measuring module quality in a single (reference) network. More precisely, the density and separability statistics can be applied to the reference network without reference to a test network.
The results can then be interpreted as measuring module quality, that is, how closely interconnected the nodes of a module are, or how well a module is separated from other modules in the network. In the fifth section, we review the notation for the singular value decomposition and for defining a module eigennode. The section describes conditions under which the eigenvector is an optimal way of representing a correlation module. It also reviews the definition of PVE (the proportion of the variance explained by the eigennode). We derive a relationship between PVE and the module membership measures kME, which will be useful for deriving relationships between preservation statistics. In the sixth section, we investigate relationships between preservation statistics in correlation networks. Brief overview of KEGG pathways studied in Application 3 The KEGG database and many textbooks describe these fundamental pathways in more detail, but the following terse descriptions may be helpful. The Wnt signaling pathway describes a network of proteins best known for their roles in embryogenesis and cancer, but also involved in normal physiological processes in adult animals. The Hedgehog signaling pathway is one of the key regulators of animal development, conserved from flies to humans. The apoptosis pathway mediates programmed cell death. Endocytosis is the process by which cells absorb molecules (such as proteins) from outside the cell by engulfing them with their cell membrane. The transforming growth factor beta (TGF-β) signaling pathway is involved in many cellular processes in both the adult organism and the developing embryo, including cell growth, cell differentiation, apoptosis, cellular homeostasis, and other cellular functions. The Phosphatidylinositol signaling system facilitates environmental information processing and signal transduction.
The mitogen-activated protein kinase (MAPK) cascade is a highly conserved pathway that is involved in various cellular functions, including cell proliferation, differentiation, and migration. The Calcium signaling pathway describes how calcium can act in signal transduction after influx resulting from activation of ion channels, or as a second messenger caused by indirect signal transduction pathways such as G protein-coupled receptors. Supporting Information Preservation statistics of human brain modules in chimpanzee samples and vice versa. This table contains observed preservation statistics and their permutation Z scores of human brain modules in chimpanzee samples and vice versa. Columns indicate the reference data set, test data set, module label (color), module type, module size, observed preservation statistics, their Z scores, empirical p-values, and Bonferroni-corrected empirical p-values. The grey (improper) modules contain all unassigned genes, and the gold module is a random sample representing the entire network as a single module. (0.03 MB CSV) Preservation statistics of male human brain modules in the corresponding female samples and vice versa. This table contains observed preservation statistics and their permutation Z scores of male human brain modules in the corresponding female samples and vice versa. Columns indicate the reference data set, test data set, module label (color), module type, module size, observed preservation statistics, their Z scores, empirical p-values, and Bonferroni-corrected empirical p-values. The grey (improper) modules contain all unassigned genes, and the gold module is a random sample of genes representing the entire network as a single module. (0.26 MB CSV) Preservation statistics of female mouse liver modules in the corresponding male samples. This table contains observed preservation statistics and their permutation Z scores of female mouse liver modules in the corresponding male samples.
Columns indicate the reference data set, test data set, module label (color), module size, observed preservation statistics, their Z scores, empirical p-values, and Bonferroni-corrected empirical p-values. (0.02 MB CSV) Preservation statistics of consensus modules across the data sets in which they were identified. This table contains observed preservation statistics and their permutation Z scores of consensus modules across the data sets from which the consensus modules were obtained. Columns indicate the reference data set, test data set, module label (color), module type, module size, observed preservation statistics, their Z scores, empirical p-values, and Bonferroni-corrected empirical p-values. The grey (improper) modules contain all unassigned genes, and the gold module is a random sample representing the entire network as a single module. (0.34 MB CSV) Preservation statistics of simulated modules. This table contains observed preservation statistics and their permutation Z scores of simulated modules in our simulation studies. Columns indicate simulation model, module label, simulated status (preserved or non-preserved), observed preservation statistics, Z scores, empirical p-values, and Bonferroni-corrected empirical p-values. The grey (improper) modules contain all unassigned genes, and the gold module is a random sample representing the entire network as a single module. (0.16 MB CSV) Detailed methods description. A detailed description of the methods is provided which contains the following sections. First, we describe standard cross-tabulation based module preservation statistics. Specifically, we present three basic cross-tabulation based statistics for determining whether modules in a reference data set are preserved in a test data set. These statistics do not assume that a test network is available. Instead, module assignments in both the reference and the test networks are needed.
Second, we briefly review a hierarchical clustering procedure for module detection. Many methods exist for defining network modules. In this section, we describe the method used in our applications, but it is worth repeating that our preservation statistics apply to most alternative module detection procedures. Third, we review the definition of signed and unsigned correlation networks. Correlation networks are a special case of general undirected networks in which the adjacency is constructed on the basis of correlations between quantitative variables. Fourth, we present module quality statistics, which are implemented in the modulePreservation R function. While our main article focuses on statistics that measure preservation of modules between a reference and a test network, we briefly discuss the application of some of the preservation statistics to the related but distinct task of measuring module quality in a single (reference) network. More precisely, the density and separability statistics can be applied to the reference network without reference to a test network. The results can then be interpreted as measuring module quality, that is, how closely interconnected the nodes of a module are or how well a module is separated from other modules in the network. Fifth, we review the notation for the singular value decomposition and for defining a module eigennode. The section describes conditions when the eigenvector E is an optimal way of representing a correlation module. It also reviews the definition of the proportion of the variance explained (PVE) by the eigennode. We derive a relationship between PVE and the module membership measure kME, which will be useful for deriving relationships between preservation statistics. Sixth, we investigate relationships between preservation statistics in correlation networks.
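The eigennode, PVE, and kME quantities referenced above can be illustrated numerically. The sketch below (Python with numpy, purely illustrative; the data are simulated and the authors' actual implementation lives in the modulePreservation R function) builds a small synthetic module, takes its SVD, and computes all three quantities:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate expression of a 50-gene module across 20 samples:
# each gene is a noisy copy of one underlying "seed" profile.
seed = rng.normal(size=20)
X = np.array([seed + rng.normal(scale=0.5, size=20) for _ in range(50)])

# Standardize each gene across samples, then take the SVD.
Xs = (X - X.mean(axis=1, keepdims=True)) / X.std(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(Xs, full_matrices=False)

# Module eigennode (eigengene): the first right-singular vector.
E = Vt[0]

# Proportion of variance explained (PVE) by the eigennode.
pve = S[0] ** 2 / np.sum(S ** 2)

# Module membership kME: correlation of each gene with the eigennode.
kME = np.array([np.corrcoef(g, E)[0, 1] for g in Xs])

# For a tight module, PVE is high and |kME| is uniformly large.
print(round(pve, 2), round(np.mean(np.abs(kME)), 2))
```

For a tightly interconnected module the eigennode summarizes its genes well, which is the setting in which E is a near-optimal one-dimensional representation.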
An advantage of an (unsigned) weighted correlation network is that it allows one to derive simple relationships between network concepts (Horvath and Dong 2008). We characterize correlation modules where simple relationships exist between i) density-based preservation statistics, ii) connectivity-based preservation statistics, and iii) separability-based preservation statistics. Apart from studying relationships among preservation statistics in correlation networks, we also briefly describe relationships between preservation statistics in general networks. (0.17 MB PDF)

Details regarding module preservation between human and chimpanzee brain networks. In this document we provide detailed results regarding the preservation of human brain modules in chimpanzee brains. (0.22 MB PDF)

Detailed description of the human brain application. In this document we provide detailed results of Application 4: Preservation of cortical modules between male and female samples. (2.51 MB PDF)

Detailed description of female mouse liver modules in male mice. Detailed results of Application 5: Preservation of female mouse liver modules in male mice. (3.82 MB PDF)

Detailed description of the consensus module application. Here we study preservation of consensus modules constructed previously, namely the consensus modules across human and chimpanzee brain samples, across samples from 4 tissues of female mice, and across samples from male and female mouse livers. (1.41 MB PDF)

Detailed description of the simulation study. Detailed performance analysis of the proposed module preservation statistics in seven simulation scenarios. The design and main results of the simulations are summarized in Figure 9 of the main text. (2.78 MB PDF)

We thank our UCLA collaborators Tova Fuller, Chaochao Cai, Lin Song, Jeremy Miller, Dan Geschwind, Giovanni Coppola, Aldons J. Lusis, Art Arnold, and Roel Ophoff for their input. The mouse data were generated by the lab of A.J. Lusis.
Author Contributions

Conceived and designed the experiments: PL SH. Analyzed the data: PL RL MCO. Wrote the paper: PL SH.
Purpose: To compare the ability of linear mixed models with different random effect distributions to estimate rates of visual field loss in glaucoma patients. Methods: Eyes with five or more reliable standard automated perimetry (SAP) tests were identified from the Duke Glaucoma Registry. Mean deviation (MD) values from each visual field and associated timepoints were collected. These data were modeled using ordinary least square (OLS) regression and linear mixed models using the Gaussian, Student’s t, or log-gamma (LG) distributions as the prior distribution for random effects. Model fit was compared using the Watanabe–Akaike information criterion (WAIC). Simulated eyes of varying initial disease severity and rates of progression were created to assess the accuracy of each model in predicting the rate of change and likelihood of declaring progression. Results: A total of 52,900 visual fields from 6558 eyes of 3981 subjects were included. Mean follow-up period was 8.7 ± 4.0 years, with an average of 8.1 ± 3.7 visual fields per eye. The LG model produced the lowest WAIC, demonstrating optimal model fit. In simulations, the LG model declared progression earlier than OLS (P < 0.001) and had the greatest accuracy in predicted slopes (P < 0.001). The Gaussian model significantly underestimated rates of progression among fast and catastrophic progressors. Conclusions: Linear mixed models using the LG distribution outperformed conventional approaches for estimating rates of SAP MD loss in a population with glaucoma. Translational Relevance: Use of the LG distribution in models estimating rates of change among glaucoma patients may improve their accuracy in rapidly identifying progressors at high risk for vision loss.

Detection of disease progression is essential in caring for patients with glaucoma. Standard automated perimetry (SAP) is the main testing modality used to evaluate functional vision loss in this patient population.
An accurate assessment of rates of SAP change is essential in clinical decision-making to determine aggressiveness of therapy and follow-up. Identifying patients who exhibit fast rates of progression as soon as possible is paramount, as these individuals are at greatest risk for developing visual disability. Estimation of rates of change has traditionally been made with ordinary least square (OLS) regression applied to global parameters such as mean deviation (MD) over time. However, OLS-derived estimates can be very imprecise in the presence of few measurements, a situation that is commonly seen in clinical practice. Previous studies have shown that, on average, clinicians obtain less than one visual field test per year for glaucoma patients. With such low frequency of testing, OLS-derived rates of change would take more than 7 years to detect eyes progressing at a moderate rate of visual field loss. OLS-derived rates of change utilize only measurements of the individual patient without accounting for the overall population from which the patient comes. Previous work has shown that estimates of rates of change can be improved using linear mixed models, which allow data regarding the overall population to influence these estimates; the accuracy of estimates can be increased by “borrowing strength” from the population when fewer data points are available for a particular patient. Mixed model estimates include a fixed-effect component that represents the overall rate of a population and a random-effect component that reflects the degree of deviation of an individual eye from the population average. This process creates eye-specific intercepts and slopes. Although not yet incorporated in routine clinical practice, estimates of rates of change using linear mixed models have been widely applied in research settings. A standard linear mixed model assumes that the random effects follow a Gaussian distribution. 
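The "borrowing strength" idea behind mixed-model estimates can be sketched with a simple precision-weighted average, which is a simplified stand-in for the full mixed-model machinery; all numbers below are hypothetical, not values from the study:

```python
# "Borrowing strength": with few tests, an eye's OLS slope is noisy, so a
# mixed model pulls it toward the population mean in proportion to that
# noise. A precision-weighted average illustrates the idea.
pop_mean, pop_var = -0.3, 0.16   # hypothetical population slope (dB/y)
ols_slope, ols_var = -2.5, 1.0   # hypothetical noisy per-eye OLS estimate

# Weight on the eye's own data = its precision relative to total precision.
w = (1 / ols_var) / (1 / ols_var + 1 / pop_var)
shrunk = w * ols_slope + (1 - w) * pop_mean
print(round(shrunk, 2))  # the -2.5 dB/y estimate is pulled toward -0.3
```

Note how strongly the extreme estimate is shrunk toward the population mean; this is the same mechanism that, under a Gaussian random-effects assumption, can mask true fast progressors.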
When applied to estimating rates of change, the assumption is that these rates are normally distributed in the population. However, it is known that only a relatively small proportion of glaucoma patients exhibit moderate or fast progression, which leads to a skewed distribution of rates of change in the population. Prior work has demonstrated that the assumption of normally distributed random effects may cause biased estimations of parameters when heterogeneity is present in a population, as would be expected in the rates of progression of glaucoma patients. Thus, fast progressors may not be properly identified due to shrinkage to the population mean in a Gaussian model. Given how ubiquitous the use of mixed models is in glaucoma research and their potential for clinical applications, it is essential to determine whether the use of a normal distribution of random effects is appropriate in this context. In the present work, we investigated the impact of the random effects distribution on the estimates of rates of visual field loss, and we assessed whether different distributions, such as Student’s t and log-gamma (LG), would allow for more accurate estimation of rates of change and detection of eyes exhibiting fast progression.

Data Collection

The dataset used in this study was derived from the Duke Glaucoma Registry developed by the Vision, Imaging and Performance Laboratory of Duke University. Institutional review board approval was obtained for this analysis, and a waiver of informed consent was provided due to the retrospective nature of this work. All methods adhered to the tenets of the Declaration of Helsinki for research involving human participants. The database contained clinical information from baseline and follow-up visits, including patient diagnostic and procedure codes, medical history, and imaging and functional tests.
The study included patients previously diagnosed with primary open-angle glaucoma or suspected of glaucoma based on International Classification of Diseases (ICD) codes. Patients were excluded if they presented with other ocular or systemic diseases that could affect the optic nerve or visual field, including retinal detachment, retinal or malignant choroidal tumors, non-glaucomatous disorders of the optic nerve and visual pathways, atrophic and late-stage dry age-related macular degeneration, amblyopia, uveitis, and/or venous or arterial retinal occlusion, according to ICD codes. Tests performed after treatment with panretinal photocoagulation, according to Current Procedural Terminology (CPT) codes, were excluded. ICD and CPT codes used to construct this database have been extensively detailed in a previous work. In addition, eyes that underwent trabeculectomy or aqueous shunt surgery were identified using CPT codes. For those eyes, only visual fields obtained before surgery were included, given the likely abrupt postsurgical alteration in the rate of change of SAP MD. Glaucomatous eyes were identified as having an abnormal visual field at baseline (i.e., Glaucoma Hemifield Test [GHT] “outside normal limits” or pattern standard deviation [PSD] probability < 5%). Eyes suspected of glaucoma were identified with a “normal” or “borderline” GHT result or PSD probability > 5% at baseline. All eligible subjects had SAP testing completed using the Humphrey Field Analyzer II or III (Carl Zeiss Meditec, Jena, Germany). SAP tests included 24-2 and 30-2 Swedish Interactive Threshold Algorithm tests with size III white stimulus. Visual fields were excluded from this analysis if they had greater than 15% false-positive errors, greater than 33% fixation losses, or greater than 33% false-negative errors or if the result of the GHT was “abnormally high sensitivity.” For this study, subjects were required to have five or more visual fields and ≥2 years of follow-up time.
Model Formulation

OLS regressions were completed using standard linear regression for each eye. Bayesian linear mixed models were subsequently constructed. Bayesian statistics provide a probabilistic framework to address questions of uncertainty, such as the true rate of change in a glaucomatous eye. Prior distributions, which reflect an initial belief, are used in conjunction with available data (referred to as the likelihood) in order to generate estimates of specified parameters (posterior distributions). For these models, a random-intercept and random-slope Bayesian hierarchical model was fitted for the SAP MD data:
\begin{eqnarray*}
Y_{it} = \beta_0 + \beta_{0i} + (\beta_1 + \beta_{1i})\,x_{it} + \varepsilon_{it}
\end{eqnarray*}
Here, Y_it represents the SAP MD value of eye i at time t, β0 represents the fixed intercept for the overall population, β1 represents the fixed slope for the overall population, and β0i and β1i represent eye-specific random intercepts and slopes, respectively. In all models, the prior distributions for β0, β1, and the error term (ε_it) were normally distributed. However, the prior distributions of the random effects differed as noted below. A correlation term with an unstructured correlation matrix was included in the model to account for associations between intercept and slope values. Of note, random effects were placed at the eye level; a more complex model with the eye nested within the patient did not provide additional improvement in the model, thus the simpler model is described here. The correlation between intercept and slope was modeled using an unstructured covariance for the Gaussian and Student’s t-distribution models, whereas a covariance structure previously described was used for the log-gamma model. Gaussian, Student's t, and LG distributions were used to model the random effects of intercepts and slopes.
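The study fitted this hierarchical model in Stan; as a rough illustration of its structure only, the following numpy sketch (with illustrative parameter values, not the study's estimates) simulates data from the model and recovers the fixed slope by averaging per-eye OLS slopes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Population-level (fixed) effects: baseline MD and rate of change.
beta0, beta1 = -4.0, -0.3      # dB and dB/y (illustrative values)

n_eyes, n_visits = 200, 8
t = np.linspace(0.0, 5.5, n_visits)          # follow-up times in years

# Eye-specific (random) deviations from the population intercept/slope.
b0 = rng.normal(0.0, 2.0, size=n_eyes)       # random intercepts
b1 = rng.normal(0.0, 0.4, size=n_eyes)       # random slopes

# Y_it = beta0 + b0_i + (beta1 + b1_i) * x_it + eps_it
eps = rng.normal(0.0, 1.0, size=(n_eyes, n_visits))
Y = (beta0 + b0)[:, None] + (beta1 + b1)[:, None] * t[None, :] + eps

# Per-eye OLS slopes recover beta1 on average, with eye-to-eye spread
# coming from the random-slope term.
ols_slopes = np.polyfit(t, Y.T, 1)[0]
print(round(float(ols_slopes.mean()), 2))
```

Fitting the actual posterior requires a sampler such as Stan's HMC; this sketch only shows what the generative model looks like.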
The LG distribution is a left-skewed distribution that is sufficiently flexible to allow for more extreme negative values while maintaining a peak close to zero. Recent work has suggested that the LG distribution may be a more appropriate distribution in estimating the intercepts and slopes of MD and visual field index (VFI), given the inherent left skew of these data; the majority of eyes have values of intercepts and slopes near zero, but a smaller proportion of eyes have more extreme values. In each model, the same distribution was used to model the random effects for both the intercept and slope. All statistical analyses were performed using R 3.6.3 (R Foundation for Statistical Computing, Vienna, Austria). For the Gaussian and Student’s t distributions, the package in R was utilized. This package computes estimates of the posterior distributions using Stan, which is a C++ probabilistic Bayesian programming interface using open-source Hamiltonian Monte Carlo (HMC) sampling (Stan Development Team). HMC sampling is thought to be superior to traditional Markov chain Monte Carlo sampling, as this method can achieve a more effective exploration of the posterior probability space without inducing high rates of autocorrelation. For the LG distribution, the prior distribution was directly coded into Stan via the R package.

Data Analysis

Bayesian linear mixed models were compared using the Watanabe–Akaike information criterion (WAIC), a metric that reflects the overall fit of a Bayesian model. For each model, estimates of the posterior distributions of the parameters were obtained after running four chains with 8000 iterations (burn-in of 1000 iterations) per chain (i.e., a total of 28,000 iterations). These models were completed using high-performance computing servers on the University of Miami Triton supercomputer. Convergence of the generated samples was confirmed by evaluating trace plots and autocorrelation diagnostics.
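WAIC can be computed from a draws-by-observations matrix of pointwise log-likelihoods. A minimal numpy sketch with a fabricated matrix follows (for illustration only; in practice the matrix comes from the fitted Bayesian model, and packages such as loo in R compute this directly):

```python
import numpy as np

rng = np.random.default_rng(2)

# log_lik[s, i]: log-likelihood of observation i under posterior draw s.
# Fabricated here purely to demonstrate the computation.
S, N = 4000, 100
log_lik = rng.normal(loc=-1.5, scale=0.2, size=(S, N))

# lppd: log pointwise predictive density (log of the mean likelihood
# across draws, summed over observations).
lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))

# p_waic: effective number of parameters (per-point variance of the
# log-likelihood across draws, summed over observations).
p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))

# WAIC on the deviance scale; lower values indicate better expected
# out-of-sample fit, which is how the models were ranked.
waic = -2.0 * (lppd - p_waic)
print(round(float(waic), 1))
```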
Summary measures, including posterior estimates of the fixed effects (β0 and β1), were calculated. Mean posterior estimated intercepts and slopes were calculated for each eye by adding the fixed and random effects of each draw and averaging these values for all draws corresponding to each eye. Eyes were defined as progressors if the one-sided Bayesian P value was less than 0.05. OLS progressors were defined as those with a statistically significant negative rate of change (P < 0.05, one-sided). For predictive modeling, OLS and Bayesian models were constructed using different numbers of visual fields, and their ability to predict future observations was assessed. For example, a model using the MD values from the first three visual fields was constructed. This model was then used to generate a predicted value for the MD of the fourth, fifth, sixth, seventh, and eighth visual field. This process was repeated using the first four visual fields to predict the MD of the fifth, sixth, seventh, and eighth visual field and so on, up to a model that used the first seven visual fields to predict the MD of the eighth visual field. The mean squared prediction error (MSPE) of all Bayesian models and OLS was compared at each visual field. Bootstrapped 95% confidence intervals were calculated for the MSPE for each model and at each visual field visit using 200 bootstrap samples. In addition to confidence intervals, we used ANOVA to perform a formal statistical hypothesis test that compared the MSPE across models. Tukey's honest significant difference test was used to test pairwise comparisons.

Simulation Description

In preparation for simulations, the observed dataset from the Duke Glaucoma Registry was split in an 80%–20% fashion at the patient level. The 80% portion was used to train the Gaussian, Student’s t, and LG models that were subsequently used to evaluate the simulated eyes.
The remaining 20% of the observed dataset was used to create a distribution of residuals for use in the simulations as detailed below. In order to evaluate the ability of the models to estimate a diverse range of potential rates of change in glaucomatous and stable eyes, a set of simulated eyes was created. A total of 15 different “settings” were then generated from the combination of three intercept categories (mild, moderate, and severe) and five slope categories (non-progressor, slow, moderate, fast, and catastrophic). An intercept corresponding to mild, moderate, and severe disease at baseline was defined as an eye with a baseline MD between 0 and −6 decibels (dB), −6 and −12 dB, and −12 and −18 dB, respectively. These values were chosen to simulate patients with mild, moderate, and severe glaucoma at baseline using the Hodapp–Anderson–Parrish classification system. Non-progressors were defined as those with a slope of 0 dB/y. Slow, moderate, fast, and catastrophic progressors were defined as eyes with a slope between 0 and −0.5 dB/y, −0.5 and −1.0 dB/y, −1.0 and −2.0 dB/y, and −2.0 and −4.0 dB/y, respectively. These categories have been previously defined and were chosen to simulate eyes with varying rates of disease progression. A total of 100 simulated eyes were generated for each setting, with the individual intercept and slope values randomly selected from the respective range of values. For each eye, a longitudinal sequence of visual field tests was simulated. Simulated timepoints of visual field testing were 0, 0.5, 1.5, 2.5, 3.5, 4.5, 5, and 5.5 years. Given that clinicians typically obtain visual fields every 6 to 12 months, timepoints with intervals of 6 or 12 months were chosen. At each timepoint, the “true” MD value was based on the simulated intercept and slope. For example, assuming a “true” intercept of −4 dB and a “true” slope of −1 dB/y, “true” MD values would be −4, −4.5, −5.5, −6.5, −7.5, −8.5, −9, and −9.5 dB at the simulated timepoints. 
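The worked example of "true" MD values above can be reproduced directly:

```python
# Reproduce the worked example: "true" MD values for a simulated eye with
# intercept -4 dB and slope -1 dB/y at the study's simulated timepoints.
timepoints = [0, 0.5, 1.5, 2.5, 3.5, 4.5, 5, 5.5]   # years
intercept, slope = -4.0, -1.0                        # dB, dB/y

true_md = [intercept + slope * t for t in timepoints]
print(true_md)
# [-4.0, -4.5, -5.5, -6.5, -7.5, -8.5, -9.0, -9.5]
```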
As visual field data are affected by noise, we added a residual value to each “true” MD value, according to a previously described methodology. As noted above, 20% of the observed dataset was set aside and not used to train the models but rather was used to create a distribution of residuals binned to each decibel value. The only exceptions were MD values ≤ −23 dB; in order to create a sufficiently large sampling distribution, residuals binned to these extreme values were pooled. Each bin contained at least 45 residuals. This process constructed multiple distributions of residuals that reflect the heterogeneity in test variability that exists across the spectrum of disease severity. For each test in the sequence of visual fields, a residual value was randomly sampled from the distribution corresponding to the “true” MD. This noise component was then added to the “true” value. For example, for a “true” MD of −4 dB, the distribution of residuals corresponding to −4 dB would be randomly sampled and a residual of +0.5 dB might be selected. This sampling would result in a simulated MD value of −3.5 dB. Using randomly selected residuals for the example above, a simulated set of values might be −3.5, −3, −4.7, −7, −7.8, −9, −8.3, and −11 dB. This simulated eye, with data points mimicking “real-world” observations and their inherent variability, was then evaluated by the OLS and Bayesian models as described below.

Evaluating the Simulated Data

These 1500 simulated eyes were then independently evaluated by the OLS, Gaussian, Student’s t, and LG models (which had been trained on 80% of the dataset) to obtain estimates of the eye-specific intercepts and slopes. Performance of the models was assessed within each simulation setting using bias and by calculating the rates of declared glaucomatous progression.
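The residual-sampling step just described can be sketched as follows. The residual bins here are synthetic stand-ins (a hypothetical noise model with variability growing as MD worsens), not the actual distributions built from the held-out 20% of the Duke data:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical held-out residuals keyed by the nearest whole-dB "true" MD;
# in the study these came from the 20% of eyes withheld from training.
residuals_by_bin = {
    db: rng.normal(0.0, 0.5 + 0.05 * abs(db), size=60)  # noisier when worse
    for db in range(-23, 3)
}

def simulate_md(true_md, rng):
    """Add test-retest noise by sampling a residual from the bin
    matching the (rounded, clamped) true MD value."""
    db = int(np.clip(round(true_md), -23, 2))
    return true_md + rng.choice(residuals_by_bin[db])

true_values = [-4.0, -4.5, -5.5, -6.5, -7.5, -8.5, -9.0, -9.5]
noisy = [simulate_md(v, rng) for v in true_values]
print([round(v, 1) for v in noisy])
```

The pooling of bins below −23 dB mirrors the paper's handling of extreme values, where too few residuals exist per whole-dB bin.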
Bias was defined as the difference between the true and estimated posterior slope; negative values of bias indicate underestimation of slope, positive values indicate overestimation of slope, and values closer to zero reflect more optimal prediction. Bias values were pooled across the intercept groups and were compared at each timepoint and for each progressor group using the Kruskal–Wallis test with the Dunn test for pairwise comparisons with Šidák correction. Given multiple comparisons, Bonferroni correction was applied, and an alpha value of 0.05/24 = 0.0021 was used to determine statistical significance. The 95% credible intervals of bias for each model at each setting were also calculated. These intervals reflect that there is a 95% probability that the true value of bias lies within the calculated range. The 95% credible intervals lying entirely below zero indicate significant underestimation of the slope (e.g., a 95% credible interval of −2.5 to −0.5 indicates that the model significantly underestimates the slope). Rates of declaring progression were presented using cumulative event curves to compare the percentage of eyes that were declared to be progressing at each timepoint by the different models. To make these rates comparable, the P value cutoff used for declaring progression in these simulated eyes was set to only allow 2.5% of non-progressor eyes to be erroneously identified as progressors (i.e., a false-positive rate of 2.5%). P value cutoffs were specified uniquely for each intercept range and timepoint. For each setting, the log-rank test was used to determine if the curves were significantly different. Finally, hazard ratios were calculated to determine if the Bayesian models differed in time to declaring progression, using Cox proportional hazards regression. Median time to declared progression was calculated for the different settings as the timepoint at which ≥50% of simulated eyes were declared as progressors.
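Both the bias definition and the calibrated progression cutoff described above can be sketched with hypothetical numbers (illustrative values only, not the study's output):

```python
import numpy as np

rng = np.random.default_rng(4)

# Bias = true slope - estimated slope: negative bias means the model made
# the slope shallower than it really is (under-estimated progression).
true_slope = -2.0                               # a fast progressor (dB/y)
est_slopes = rng.normal(-1.4, 0.3, size=100)    # hypothetical model fits
bias = true_slope - est_slopes
print(round(float(bias.mean()), 1))             # negative: underestimation

# Calibrating the progression cutoff: choose the p-value threshold at
# which only 2.5% of truly stable eyes are falsely declared progressors.
stable_p = rng.uniform(0.0, 1.0, size=2000)     # p-values, non-progressors
cutoff = np.quantile(stable_p, 0.025)
false_pos = np.mean(stable_p < cutoff)
print(round(float(false_pos), 3))               # ~0.025 by construction
```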
The study included 6558 eyes of 3981 subjects with a mean age of 58.7 ± 16.0 years at the time of the baseline visual field. A total of 52,900 visual fields were deemed reliable and evaluated. Mean follow-up period was 8.7 ± 4.0 years, with an average of 8.1 ± 3.7 visual fields per eye (range, 5–34). Table 1 contains additional patient characteristics. Female subjects comprised 58.2% of the cohort, and 31.5% identified as black. A total of 4615 eyes (70.4%) had glaucomatous disease at baseline, and 1943 eyes (29.6%) were suspected of glaucoma. Mean MD at baseline was −4.23 ± 5.29 dB in the overall cohort. There was large variation in the baseline MD of the eyes, ranging from −31.72 to 2.58 dB. Distributions of OLS slopes and the posterior estimated slopes of Bayesian models varied greatly (Fig. 1, Table 2). The Gaussian model demonstrated a substantial shrinkage of estimates with a smaller range of slopes. In contrast, the range of slopes of the Student’s t model included both extreme negative and positive values, whereas the range of slopes of the LG model captured extreme negative values without extreme positive slopes (Fig. 2; “eye-specific slopes” in Table 2). The LG model produced the lowest WAIC value, indicating that the LG model provided the optimal fit for the data compared with the Gaussian and Student's t models (Table 2). When comparing the results of predictive modeling using a limited number of visual fields, Bayesian models consistently performed better compared with OLS, producing lower MSPE values for each predicted visual field MD value (Fig. 2). Overall mean MSPE values of the OLS, Gaussian, Student's t, and LG predictions were 232.6 ± 91.3, 5.2 ± 0.3, 24.2 ± 9.6, and 7.9 ± 0.7, respectively, with significant differences noted between each Bayesian model and OLS (P < 0.01 for pairwise comparisons) at each timepoint until five visual fields were utilized in the models.
At this point, the Student's t predictions were no longer significantly different compared with those of OLS (P = 0.84), but MSPE from the Gaussian and LG models remained significantly lower than those of OLS (P = 0.01 and P = 0.02, respectively). When seven visual fields had been utilized, Bayesian model predictions were no longer significantly different from those of OLS. Of note, differences in the MSPE of the Gaussian, Student's t, and LG predictions were not statistically significant at any timepoint. The distributions of slopes of all eyes and progressors from the various models are presented in Tables 3 and 4, respectively. Compared with the Gaussian model, the LG and Student's t models identified a greater number of eyes with faster rates of MD loss among all eyes and progressors. For example, the Gaussian model identified only 8.0% of all progressors as having fast progression and only 0.5% as having catastrophic progression. The LG model identified almost 2 times more progressor eyes as having fast progression (15.2%) and over 5 times more as having catastrophic progression (2.7%) (Table 4). Simulations demonstrated that the LG model was optimal in terms of accuracy as evidenced by the lowest degree of bias. Bias from the LG model was significantly lower than that of the Gaussian and Student's t models in all settings, most notably among fast and catastrophic progressors (Fig. 3). Among fast and catastrophic progressors, mean bias values from the LG, Student's t, and Gaussian models were −0.51 ± 0.49, −0.62 ± 0.51, and −1.20 ± 0.67 dB/y, respectively (P = 0.008, Kruskal–Wallis). When evaluating 95% credible intervals of bias, Gaussian models persistently underestimated the true slope. Gaussian credible intervals excluded zero when using the first three visual fields among moderate progressors; when using the first three, four, or five visual fields among fast progressors; and when using the first three, four, five, six, or seven visual fields among catastrophic progressors.
In contrast, all 95% credible intervals of the Student's t and LG models contained zero, indicating that these models did not severely underestimate the slope. Cumulative event curves demonstrated a significant difference among the regression models (P < 0.01, log-rank) (Fig. 4). Although all three Bayesian models performed similarly in terms of time to declaring progression (P > 0.05, Cox hazard ratio), they were significantly quicker to identify progression compared to OLS among moderate, fast, and catastrophic progressors (P < 0.001, Cox hazard ratio). Median times to progression in the moderate, fast, and catastrophic progressors were consistently lower among Bayesian models compared with OLS (Table 5). The average median time to progression was lower in the LG model (2.8 years) compared with the Student's t (3.0 years), Gaussian (3.2 years), and OLS (4.0 years) models.

In this study, we compared the effect of various random effect distributions on estimating rates of visual field change using Bayesian linear mixed models with a large dataset of over 6000 eyes. Bayesian models provided significantly improved predictions compared with conventional OLS regression when only a limited number of visual fields were available. Among the distributions tested for Bayesian models, the LG was optimal in terms of overall model fit with the lowest WAIC value. In addition, simulations showed that the LG model had the lowest bias and was sufficiently flexible to rapidly identify fast progressors. These findings suggest that Bayesian models using the LG distribution may offer significant advantages compared with more traditional approaches in modeling rates of change in glaucoma. Our results showed the value of the Bayesian models compared with OLS regression when estimating rates of change in the presence of relatively few observations. Bayesian models consistently outperformed OLS in quickly declaring progression, especially among fast and catastrophic progressors.
For example, after only 1.5 years (three visual fields) in the “mild/catastrophic” setting (Fig. 4), the LG and Gaussian models declared progression in over 80% of true progressors, whereas OLS detected only 18% of progressors. Wu et al. previously demonstrated that, with the use of OLS, 80% of eyes progressing at −2 dB/y would be identified as progressors only after 2.1 years if three visual fields were performed per year (i.e., after six visual fields were completed). Although the benefit of Bayesian linear mixed models over OLS appeared to decline when seven tests were available (Fig. 2), obtaining visual fields at a sufficiently high frequency to procure such a large number of tests is often challenging in clinical practice. The reduction in time to progression using a minimal number of visual fields with Bayesian modeling may be of great value to the clinician. Median time to progression was lower among Bayesian models, especially for the LG model (Table 5). The LG model demonstrated the greatest accuracy with the lowest amount of bias among different progressor groups (Fig. 3). Zhang et al. previously demonstrated the value of the LG model, as it provided a better fit for SAP data derived from 203 patients in a prospective study compared with a Gaussian model. The authors also constructed a joint longitudinal model using functional SAP and structural optical coherence tomography data, which demonstrated a stronger correlation between functional and structural rates of change when the LG model was utilized. Our work confirms the better fit of the LG model in a much larger dataset. We found it interesting that the Bayesian models identified fewer eyes as progressors compared with OLS. Prior studies had indicated that OLS identified fewer progressors compared with Bayesian models, although these studies evaluated smaller datasets of eyes with fewer numbers of tests.
We believe that this discrepancy is due to the greater number of tests that were available in the current dataset (mean of 8.1 visual fields per eye). When evaluating those eyes identified as progressors by OLS but not by the Gaussian, Student's t, and LG models, the average OLS rates of change were −0.30 ± 0.25 dB/y (interquartile range [IQR], −0.35 to −0.14), −0.31 ± 0.26 dB/y (IQR, −0.38 to −0.14), and −0.28 ± 0.21 dB/y (IQR, −0.34 to −0.14), respectively. These values are reflective of slow rates of change, which would not be as worrisome to the clinician and would be unlikely to lead to severe vision loss. In contrast, when evaluating eyes identified as progressors by the Gaussian, Student's t, and LG models but not by OLS, the average OLS rates of change were −0.64 ± 0.57 dB/y (IQR, −0.78 to −0.30), −0.62 ± 0.57 dB/y (IQR, −0.76 to −0.26), and −0.68 ± 0.58 dB/y (IQR, −0.86 to −0.31), respectively. OLS was unable to confirm progression among these eyes displaying a faster rate of change, which would be of greater concern and clinical importance. Although the Bayesian models may have identified fewer progressors, the clinical relevance of the progressors identified by these models appears to be greater. Given the higher percentage of eyes with faster rates of change in the LG model (Table 4), one might also be concerned about overestimation of slopes. However, bias data from the simulations demonstrated that the 95% credible intervals of the LG model always contained zero, indicating that this model did not significantly overestimate rates of change. Although MSPE values were comparable between LG and Gaussian models (Fig. 2), the LG model was able to estimate the rate of fast and catastrophic progressors more accurately. In contrast, the Gaussian model underestimated these rates, with bias values twice as large on average. In the observed data, the Gaussian model was more likely to shrink estimates closer to the population mean (Tables 3 and 4).
These findings serve as a warning that linear mixed models using the Gaussian distribution to describe visual field data will likely underestimate the rates of change among this subset of patients. These individuals are arguably the most important to identify because they are at high risk for visual disability. Although most glaucoma patients will progress if followed for a sufficient amount of time, rates of change vary greatly. The magnitudes of these rates are crucial to clinical care; whereas slow progressors may be carefully observed, fast progressors may need to be treated more aggressively in order to prevent vision loss. Therefore, accurate estimation of rates of change is essential to characterizing the nature of a patient's disease. The LG model was able to accurately identify fast progressors while still characterizing the majority of eyes as slow or non-progressors. Limitations of this study include the assumption that eyes exhibit a linear rate of change over time. In particular, fast progressors may demonstrate some degree of nonlinear change. These nonlinear rates of change might have affected the magnitude of residuals in the sampling distribution utilized in the simulation portion of this study. Although it is likely that visual field losses are nonlinear over the full course of the disease, a linear fit is likely a reasonable approximation within the limited timeframe used to make most clinical decisions. We also assumed a constant correlation between intercept and slope regardless of disease severity. In clinical practice, severe glaucoma patients are often aggressively monitored and treated, leading to a reduction in correlation between baseline disease (i.e., the intercept) and rate of change (the slope). In addition, the retrospective data collection does not provide insight into augmentation of medical therapy, which could affect the rate of progression. 
It is possible that additional medical or laser therapies may have occurred between visual field tests. The censoring protocol described above only pertained to surgical glaucoma cases. Finally, other potential distributions exist to model random effects that were not evaluated in the present study. We empirically chose the Gaussian, Student’s t, and LG models for comparison given clinical knowledge regarding the distribution of SAP MD in large populations. Further studies should investigate whether other distributions may provide advantages compared with the ones assessed in our work. In summary, we have demonstrated that a Bayesian hierarchical model using the LG distribution provides the optimal model fit for a large SAP dataset compared with Gaussian and Student’s t distributions. The LG model is sufficiently flexible to accurately characterize non-progressors, slow progressors, and fast progressors. Although Gaussian and LG models are comparable in predicting future SAP MD values, Gaussian models tend to underestimate fast progressors. The LG model was optimal in predicting the rates of change with greatest accuracy while rapidly identifying progressors. These findings may have significant implications for estimation of rates of visual field progression in research and clinical practice. This research was conducted using the resources of the University of Miami Institute for Data Science and Computing. Supported in part by grants from the National Eye Institute, National Institutes of Health (EY029885 and EY031898 to FAM), American Glaucoma Society (Mentoring for the Advancement of Physician Scientists [MAPS] grant to SSS), and NIH Center Core Grant P30EY014801. The funding organizations had no role in the design or conduct of this research. Disclosure: S.S. Swaminathan, Sight Sciences (C), Ivantis (C), Heidelberg Engineering (S), Lumata Health (C); S.I. Berchuck, None; A.A. Jammal, None; J.S. Rao, None; F.A. 
Medeiros, Alcon Laboratories (C, L, S), Allergan (C, L), Aerie Pharmaceuticals (C), Galimedix (C), Stuart Therapeutics (C), Google (S), Genentech (S), Apple (L), Bausch & Lomb (F), Carl Zeiss Meditec (C, S), Heidelberg Engineering (L), nGoggle (P), Reichert (C, S), National Eye Institute, National Institutes of Health (S). Crabb DP, Russell RA, Malik R, et al. Frequency of visual field testing when monitoring patients newly diagnosed with glaucoma: mixed methods and modelling. HS&DR. 2014; 2: 1–102. Wu Z, Saunders LJ, Daga FB, Diniz-Filho A, Medeiros FA. Frequency of testing to detect visual field progression derived using a longitudinal cohort of glaucoma patients. Ophthalmology. 2017; 124: 786–792. Medeiros FA, Leite MT, Zangwill LM, Weinreb RN. Combining structural and functional measurements to improve detection of glaucoma progression using Bayesian hierarchical models. Invest Ophthalmol Vis Sci. 2011; 52: 5794–5803. Medeiros FA, Zangwill LM, Mansouri K, Lisboa R, Tafreshi A, Weinreb RN. Incorporating risk factors to improve the assessment of rates of glaucomatous progression. Invest Ophthalmol Vis Sci. 2012; 53: 2199–2207. Medeiros FA, Weinreb RN, Moore G, Liebmann JM, Girkin CA, Zangwill LM. Integrating event- and trend-based analyses to improve detection of glaucomatous visual field progression. Ophthalmology. 2012; 119: 458–467. Medeiros FA, Zangwill LM, Girkin CA, Liebmann JM, Weinreb RN. Combining structural and functional measurements to improve estimates of rates of glaucomatous progression. Am J Ophthalmol. 2012; 153: 1197–1205.e1. Abe RY, Diniz-Filho A, Costa VP, Gracitelli CP, Baig S, Medeiros FA. The impact of location of progressive visual field loss on longitudinal changes in quality of life of patients with glaucoma. Ophthalmology. 2016; 123: 552–557. Jammal AA, Thompson AC, Mariottoni EB, et al. Impact of intraocular pressure control on rates of retinal nerve fiber layer loss in a large clinical population. Ophthalmology. 2021; 128: 48–57. 
Chauhan BC, Malik R, Shuba LM, Rafuse PE, Nicolela MT, Artes PH. Rates of glaucomatous visual field change in a large clinical population. Invest Ophthalmol Vis Sci. 2014; 55: 4135–4143. Heijl A, Buchholz P, Norrgren G, Bengtsson B. Rates of visual field progression in clinical glaucoma care. Acta Ophthalmol. 2013; 91: 406–412. Fujino Y, Asaoka R, Murata H, et al. Evaluation of glaucoma progression in large-scale clinical data: the Japanese Archive of Multicentral Databases in Glaucoma (JAMDIG). Invest Ophthalmol Vis Sci. 2016; 57: 2012–2020. Verbeke G, Lesaffre E. A linear mixed-effects model with heterogeneity in the random-effects population. J Am Stat Assoc. 1996; 91: 217–221. Jammal AA, Thompson AC, Mariottoni EB, et al. Rates of glaucomatous structural and functional change from a large clinical population: the Duke Glaucoma Registry Study. Am J Ophthalmol. 2020; 222: 238–247. Bradley JR, Holan SH, Wikle CK. Computationally efficient multivariate spatio-temporal models for high-dimensional count-valued data (with discussion). Bayesian Anal. 2018; 13: 253–310. Zhang P, Song PX, Qu A, Greene T. Efficient estimation for patient-specific rates of disease progression using nonnormal linear mixed models. Biometrics. 2008; 64: 29–38. Zhang P, Luo D, Li P, Sharpsten L, Medeiros FA. Log-gamma linear-mixed effects models for multiple outcomes with application to a longitudinal glaucoma study. Biom J. 2015; 57: 766–776. Nishio M, Arakawa A. Performance of Hamiltonian Monte Carlo and No-U-Turn Sampler for estimating genetic parameters and breeding values. Genet Sel Evol. 2019; 51: 73. Hodapp E, Parrish RK, Anderson DR. Clinical Decisions in Glaucoma. St. Louis, MO: Mosby; 1993. Fung SS, Lemer C, Russell RA, Malik R, Crabb DP. Are practical recommendations practiced? A national multi-centre cross-sectional study on frequency of visual field testing in glaucoma. Br J Ophthalmol. 2013; 97: 843–847. Gracitelli CPB, Zangwill LM, Diniz-Filho A, et al. 
Detection of glaucoma progression in individuals of African descent compared with those of European descent. JAMA Ophthalmol. 2018; 136: 329–335. Stagg B, Mariottoni EB, Berchuck SI, et al. Longitudinal visual field variability and the ability to detect glaucoma progression in black and white individuals [published online ahead of print May 13, 2021]. Br J Ophthalmol. Russell RA, Garway-Heath DF, Crabb DP. New insights into measurement variability in glaucomatous visual fields from computer modelling. PLoS One. 2013; 8: e83595. Strouthidis NG, Vinciotti V, Tucker AJ, Gardiner SK, Crabb DP, Garway-Heath DF. Structure and function in glaucoma: the relationship between a functional visual field map and an anatomic retinal map. Invest Ophthalmol Vis Sci. 2006; 47: 5356–5362. Schlottmann PG, De Cilla S, Greenfield DS, Caprioli J, Garway-Heath DF. Relationship between visual field sensitivity and retinal nerve fiber layer thickness as measured by scanning laser polarimetry. Invest Ophthalmol Vis Sci. 2004; 45: 1823–1829. Hood DC, Kardon RH. A framework for comparing structural and functional measures of glaucomatous damage. Prog Retin Eye Res. 2007; 26: 688–710.
What is the Maximum Height of a Semi Trailer? [Answered 2023] | Prettymotors When hauling large freight, a semi trailer’s maximum height is an important consideration. While there is no federally regulated maximum height for trailers, there are some general guidelines. In general, a trailer’s overall height must not exceed 13 feet 6 inches or 14 feet, depending on the state. However, certain exceptions apply if a road or bridge requires lower clearance. For these reasons, different types of trailers are available to accommodate lower height requirements. Depending on the type of cargo being hauled, a 53-foot trailer’s interior height can vary from 110 to 114 inches. The trailer’s door height remains constant at 104 inches. Flatbed trailers, on the other hand, aren’t enclosed, so their usable cargo height is limited by the overall legal height rather than by a roof. The inside height of a 53-foot trailer is usually 110 to 114 inches. The height of a semi-truck trailer varies by manufacturer and state. However, some roads have lower clearances than the average commercial motor vehicle. Height restrictions for semi-trucks are often enforced by each state's Department of Transportation. For example, the Department of Transportation of California limits the maximum height of semi-trucks to 14 feet, while the Pennsylvania Department of Transportation restricts them to 13 feet. The UK, on the other hand, limits maximum height to 13.1 feet. What is the Standard Height of a 53 Trailer? You might be wondering: what is the standard height of a 53 foot trailer? It depends on your state’s road laws. Generally, a semi trailer is 8.5 feet wide and 48 to 53 feet long. It may take as few as twenty-four standard 48×40-inch pallets, but a clever shipper can pack thirty such pallets into a single trailer load. These are the dimensions of a standard truck trailer. A 53-foot step-deck trailer has a maximum cargo and freight weight of approximately 65,000 pounds. 
These trailers come in a wide range of lengths, but typically have a standardized width of 8.5 feet (102 inches). Interior dimensions of 53′ step-deck trailers vary slightly from truck to trailer. Overall interior dimensions are about 630 inches long by 102 inches wide, which doesn’t include the driver’s cabin. Semi-trailers come in a wide range of lengths, but the most common ones are 48 to 53 feet long. A 53-foot trailer can hold thirteen pallets lengthwise, in two rows. These are the dimensions of most freight trailers that you’ll see on the road. If you’re wondering what the standard height of a 53 trailer is, you’ll find the answer in the article below. What is the Max Height of a 53 Foot Trailer? In order to determine how high a load a 53-foot semi trailer can safely carry, you must first determine the overall height of the vehicle, measured from the ground to the top of the trailer. The interior dimensions are 98.5 inches wide and 108 inches high. A 53-foot trailer can carry approximately 3,489 cubic feet of cargo. It weighs around 13,500 pounds. Each state and county has its own minimum and maximum specifications. To get more information about 53-foot trailers, read on! The maximum height of a semi trailer varies by manufacturer, state, and type. There are certain legal heights for semi-trailers, as well as widths: a 53-foot trailer is typically eight feet six inches wide. Despite these limitations, many trucks exceed standard heights when traveling. Nevertheless, you should check your vehicle manufacturer’s website before you purchase a trailer. 
Different types of trailers can help you carry your freight without a permit. If you’re planning to transport oversized loads, you should know the standard height and width of a semi trailer. While some states have strict height limits, others have no standards. In addition to these guidelines, your logistics agent will know the standard height and width of a semi trailer and can recommend the correct trailer for your equipment. Here are a few examples. To learn more, check out the following guide. The width of a semi trailer is determined by the widest point of the cargo unit; generally, this measurement is taken from the outside fenders of the trailer. You can also measure the overall length of the trailer, which includes the tongue. The length is generally measured with a long tape measure, so be sure to take two measurements and double-check the results. A standard length is 48 feet. What is the Standard Loading Dock Height? Often, loading docks are not designed for the height of standard over-the-road trailers. The height required for the trailer to dock at a loading dock is based on the slope of the approach. The slope is measured by attaching a string to the floor of the dock 50′ from the trailer. The drop of the line in inches over that 50-foot (600-inch) run, divided by 6, gives the slope as a percentage. For every 1% of slope, you will need to add 1 inch to the bumper’s projection. You should use 3/4″ or 5/8″ lag bolts and a minimum of 8″ long “J” bolts. A loading dock should be low enough for the trailer doors to swing down. Docks should be 51 inches high or lower. If the dock is recessed in a driveway, the height should be lowered one inch for each one percent of the driveway’s grade. If the dock height is still greater than the trailer bed height, lower the dock height by an additional 1 inch. What is the Height of an 18 Wheeler? Listed below is information on the height of an 18-wheeler trailer. 
Unlike standard passenger cars, 18 wheelers are considerably taller. They are also much wider: compared to seven-foot-wide cars, semi-trucks are almost nine feet wide. They have a turning radius of about 55 feet and take up most of the width of a typical U.S. road lane. Because of these dimensions, semi-trucks require special licensing and must follow strict state and local laws. How Tall is a 53 Foot Dry Van? A 53 foot dry van can accommodate up to 28 48×45-inch pallets, which are common in the automotive industry. It can also handle 48×48-inch pallets. To find out how much each pallet configuration can hold, consult How Are Dry Van Rates Calculated. You will also find helpful information there on loading these pallets onto a 53 foot van. The interior height of a 53 foot dry van is about 108 inches. If you plan to transport hazardous materials or other large objects, a 53 foot dry van is a great choice. Its enclosed interior protects freight from the elements. A 53 foot dry van stands about as tall as a flatbed trailer with its load, though the enclosed body adds roughly 2,000 pounds of weight. You can also find a 53 foot trailer suited to loading-dock use. Those specifications are important for a dry van. Dry van trailers come in different sizes, and the most common size is 53′ x 8′6″. You can also find other trailer sizes depending on the type of shipment you’re shipping. A 53 foot dry van trailer can hold up to 45,000 pounds of freight. Because the trailer is enclosed, there’s a limit to how many standard pallets – and therefore how much cargo – it can hold. What is the Inside Height of a 53 Dry Van? Listed below are the specifications of a 53 foot dry van. These trucks are typically used for truckload shipping. They can fit up to 26 pallets single-stacked. The inside width of a 53 dry van is about 102 inches. If you’re wondering what you can load inside one of these trucks, read How Are Dry Van Rates Calculated. 
This article will provide you with the information you need to load a 53 dry van. A 53-foot dry van trailer is one of the most common types of shipping trailers. Its maximum cargo height is between 108 inches and 110 inches, which is the most common range. Semi-trucks come in many different sizes and dimensions. Typical semi-trucks in the United States measure about 72 feet long, 13.5 feet tall, and 8.5 feet wide. This makes them an ideal option for transporting household goods and nonperishable foods.
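The pallet counts quoted throughout this article follow from simple floor arithmetic: divide the trailer's interior length by the pallet depth to get positions per row, and the interior width by the pallet width to get rows. A quick sketch (the dimensions below are the commonly cited ones, not guaranteed for any particular trailer, and the function name is our own):

```python
def pallet_positions(trailer_len_in, trailer_width_in, pallet_depth_in, pallet_width_in):
    """Single-stacked pallet positions: slots along the length times pallets across the width."""
    per_row = trailer_len_in // pallet_depth_in   # how many pallets fit nose-to-tail
    rows = trailer_width_in // pallet_width_in    # how many pallets fit side by side
    return per_row * rows

# 53 ft = 636 in long, ~98 in usable interior width; standard 48x40-inch pallets
print(pallet_positions(636, 98, 48, 40))  # 26 - matches the "26 pallets single-stacked" figure
```

The same arithmetic explains the 48-foot trailer figure: 576 inches of floor fits 12 pallets per row, for 24 positions.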
Irrational Numbers Define Irrational Numbers with Examples. Definition: A decimal number which is non-terminating and non-recurring is called an irrational number. Irrational numbers cannot be expressed in the form p/q where p and q are integers and q≠0. The square roots of all non-square numbers, the cube roots of all non-cube numbers, etc. are non-terminating and non-recurring decimal numbers, so they are irrational numbers. π (pi) is also an irrational number. Here are some examples: √2 = 1.4142135… √3 = 1.7320508… √5 = 2.2360679… √7 = 2.6457513… π = 3.14159265… are non-terminating and non-recurring decimal numbers, so they are irrational numbers. Properties of irrational numbers: Here are some of the properties of irrational numbers: 1. Irrational numbers are non-terminating and non-recurring decimal numbers. 2. Irrational numbers are not closed under the operation of addition, i.e. the sum of two irrational numbers may not be an irrational number. For example, √2 + (−√2) = 0, which is rational. 3. Irrational numbers are not closed under the operation of subtraction, i.e. the difference of two irrational numbers may not be an irrational number. For example, √2 − √2 = 0, which is rational. 4. Irrational numbers are not closed under the operation of multiplication, i.e. the product of two irrational numbers may not be an irrational number. 5. Irrational numbers are not closed under the operation of division, i.e. the quotient of two irrational numbers may not be an irrational number. What are the differences between rational and irrational numbers? Following are the differences between rational and irrational numbers: 1. Rational numbers can be expressed in the form p/q where q ≠ 0 and p and q are integers, whereas irrational numbers cannot be expressed in such a form. 2. 
Rational numbers may be terminating, or non-terminating but recurring, decimal numbers, whereas irrational numbers are always non-terminating and non-recurring decimal numbers. 3. Nonzero rational numbers are closed under the operations of multiplication and division, but irrational numbers are not closed under multiplication and division. Some Questions and Answers on Irrational Numbers Question: How is π (pi) an irrational number, when it can be expressed in the form p/q as 22/7? Answer: 22/7 is not the exact value of π (pi); it is an approximate value only. The exact value of π (pi) is 3.141592653589793238..., which is non-terminating and non-recurring, so π (pi) is an irrational number. Question: Is the sum of two irrational numbers always irrational? Answer: No. For example, √2 + (−√2) = 0, which is rational. Question: Are all square roots irrational numbers? Answer: The square roots of non-square numbers are all irrational numbers. Question: Are irrational numbers closed under multiplication? Answer: No, irrational numbers are not closed under multiplication. Question: Is every irrational number a real number? Answer: Yes, every irrational number is a real number. Question: What about the sum of a rational and an irrational number? Answer: The sum of a rational and an irrational number is always an irrational number. Question: How many irrational numbers are there between 1 and 6? Answer: There are infinitely many irrational numbers between any two distinct numbers, so there are infinitely many irrational numbers between 1 and 6. Question: Find at least two irrational numbers between 2 and 3. Answer: √5 and √6 are two irrational numbers between 2 and 3. Question: Find an irrational number between 3 and 4. Answer: √10 is an irrational number between 3 and 4. 
Question: Find an irrational number between 1 and 2. Answer: √3 is an irrational number between 1 and 2. Question: Find two irrational numbers between 0.5 and 0.55. Answer: √0.26 and √0.27 are two irrational numbers between 0.5 and 0.55. Question: Find two irrational numbers between 0.7 and 0.77. Answer: √0.50 and √0.51 are two irrational numbers between 0.7 and 0.77. Question: What about the sum of a rational number and an irrational number? Answer: The sum of a rational number and an irrational number is an irrational number. Question: What about the difference between a rational number and an irrational number? Answer: The difference between a rational number and an irrational number is an irrational number. Question: Is the product of two irrational numbers always irrational? Answer: No, the product of two irrational numbers is not always irrational. For example: √2 × √8 = √16 = 4, which is a rational number. If you have any questions or problems regarding Irrational Numbers, you can ask here, in the comment section below.
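The product and quotient examples above can be sanity-checked numerically. Note that floating-point arithmetic can only illustrate these facts, not prove irrationality; this is just a quick check of the worked examples:

```python
import math

# The product of two irrationals can be rational: √2 × √8 = √16 = 4.
assert math.isclose(math.sqrt(2) * math.sqrt(8), 4.0)

# But it need not be: √2 × √3 = √6 = 2.449489..., which is not an integer.
assert math.sqrt(6) != int(math.sqrt(6))

# The quotient of two irrationals can also be rational: √8 / √2 = √4 = 2.
assert math.isclose(math.sqrt(8) / math.sqrt(2), 2.0)
```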
Partial Passwords Done Right - Magic of Security Partial passwords are an authentication scheme where users enter 2-3 randomly chosen characters of their password. While this makes random guessing easier, attackers must eavesdrop on more than one logon to recover the full password. How do you implement partial passwords correctly? The problem made us think, and we eventually realised why online banking systems that authenticate with partial passwords ask for only two or three letters and also limit the maximum length of passwords. Note: actually, the above is only true for well-designed systems. If a system asks users for two letters, its database will store all possible pairs of letters that appear in a user's password – all "obfuscated" (e.g., with a cryptographic hash function). We say obfuscated because a brute-force attack on data with about 10 bits of entropy (roughly 1,000 possible options) is trivial. Those systems cannot ask for more than two letters because more letters would require much more database space. A two-letter partial password system (with eight-character passwords) requires 56 strings (hash values), while a three-letter system would require 336 strings – easily taking more than 5 kB of data for a single password. It appears that most companies indeed store every combination of letters a user may enter in a database. The second option is to store the password unencrypted, or encrypted with a symmetric algorithm (e.g., 3DES, AES). Passwords cannot be stored as hash values, as the system needs access to plain-text passwords to compare against the letters entered by users. Better Solution We have suggested a solution based on Shamir's secret sharing scheme. This saves database space and is also fast, as only a polynomial computation (in the simplest form) is needed for verification. A. Global Parameters At the beginning, someone has to define the global parameters of the system. 
Well, there is actually just one – how many letters users will have to select. Let us call the parameter N. The maximum length of a password (L) matters only from the database point of view – you will need to store a 32-bit number for each character. B. Adding a New User 1. The user chooses a password P. It consists of letters p1, p2, p3, … pk. 2. The system generates a secret key K, at least 32 bits long, unique for each user. 32 bits is enough if N is 3; for larger N (the number of letters users have to correctly select each time) you may want to increase the length of K. 3. The system also generates N-1 random 32-bit numbers R1, R2, … R(N-1). 4. The next step is the computation of k points (k being the length of the password) on a polynomial: y = K + R1*x + R2*x^2 + … + R(N-1)*x^(N-1), for x = 1, 2, … k. Let us denote the results as y1, y2, …, yk. 5. The values s1=(y1-p1), s2=(y2-p2), … sk=(yk-pk) are stored in the database. Each number takes 32 bits. One will also need to store K, or a hash of K. C. Authentication The next part of the system is user authentication, which is very simple and fast. 1. The system selects N positions in the password – i1, i2, … iN. 2. The user enters N letters of his/her password at the specified positions, so that we have pairs (p'1, i1), (p'2, i2), …, (p'N, iN). 3. The system recovers the yi values for the indices i selected in step 1 – simply by adding the stored values (see step 5 above) to the values p'i entered by the user. 4. Now we solve for the polynomial's constant term to obtain K'. The equation looks horrible, but it is quick to compute and can even be partially pre-computed, as it uses only the indices (positions of letters): K' = \sum_i [ yi * [ (\PI_j (j) ) / (\PI_j (i-j)) ] ], where i and j run over i1, i2, …, iN (step 1), and j skips the current i. Example: say the user selected the 2nd, 3rd, and 4th letters; the solution is: K'=y2*( 3*4/[(2-3)*(2-4)] )+y3*( 2*4/[(3-2)*(3-4)] ) + y4*( 2*3/[(4-2)*(4-3)] ) 5. 
The last step is to compare K and K'. If they are equal, the user entered the correct values and is logged in. One note here – the secret is not K, but the values yi reconstructed when the user enters his/her letters. Pros and Cons The highlights of this solution are: 1. The number of letters used for authentication and the length of passwords are not really limited. 2. The database space needed to store all necessary information is linear in the length of a password, not quadratic. 3. Secrets are not stored in plain text and need a certain computation. 4. All the data can still be encrypted for storage if needed. 5. Faster to compute – verification is several multiplications of 32-bit numbers, which is much faster than computing a hash value. The low points are: 1. It is not a straightforward solution. 2. The security is still not increased beyond the difficulty of finding the correct N letters. The discussion actually continued, and a stranger – Tom – showed that storing the constant K in plaintext allows for an attack. I had initially suggested that one could store this constant in plaintext or as a hash value. We now have to recommend using the hash value. For those interested in the details, here is an excerpt from Tom's email. (It assumes password letters come from a set of 100 characters – hence the magic constant of 100.) For a match (p1p2..pN) I think you can compute the whole password in "constant" time O(100*k), by using the function again for each next unknown password character pN+1..pk, as an equation with 1 variable. E.g., we've got a match (p1p2…pN); then to reveal pN+1 we solve the equation for the set 1, 2, …, N-1, N+1 with the just-discovered (y1, y2, …, yN-1), and with an unknown yN+1: K' = \sum_i [ yi * [ (\PI_j (j) ) / (\PI_j (i-j)) ] ] would be like K' = yN+1 * [ (\PI_j (j) ) / (\PI_j (N+1-j)) ] ] + C where C is a constant (computed). If K is in plain text, then it is a simple equation with 1 variable and can be solved in O(1). 
If K is stored as a hash, then iterate through every candidate character and compare hash(K') with the stored hash(K). Note that in this case, computing each next character requires O(100) operations, for a total of O(100*k) rather than O(100^k). 10^12/16!*10! was suggested as the cost if you brute force N=6 characters with K in plaintext (if using the trick with the modulo, it may be better to brute force the last N characters than the first N, due to the lower character set for p16 than for p1).
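To make the enrollment (B) and authentication (C) steps concrete, here is a minimal sketch. It is an illustration only, not the author's exact construction: we do the arithmetic modulo a Mersenne prime so that the Lagrange interpolation in step C.4 has exact modular inverses (the article works with plain 32-bit integers), we return K directly for brevity where a real system should store hash(K), and the function names and sample password are made up.

```python
import secrets

P = 2**31 - 1  # Mersenne prime used as the field modulus (our choice)

def enroll(password, n):
    """B.2-B.5: pick a secret K and n-1 random coefficients, evaluate the
    degree-(n-1) polynomial at x = 1..k, and store s_i = y_i - p_i."""
    K = secrets.randbelow(P)
    coeffs = [K] + [secrets.randbelow(P) for _ in range(n - 1)]
    record = []
    for x, ch in enumerate(password, start=1):
        y = sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P
        record.append((y - ord(ch)) % P)
    return K, record  # a real system would store hash(K) instead of K

def verify(record, K, challenge):
    """C.2-C.5: challenge maps 1-based position -> letter. Recover y_i = s_i + p_i,
    interpolate the polynomial at x = 0 (Lagrange), and compare with K."""
    ys = {i: (record[i - 1] + ord(ch)) % P for i, ch in challenge.items()}
    acc = 0
    for i, yi in ys.items():
        num, den = 1, 1
        for j in ys:
            if j != i:
                num = num * (-j) % P      # factor (0 - j)
                den = den * (i - j) % P   # factor (i - j)
        acc = (acc + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse of den
    return acc == K
```

With N = 3, `enroll("hunter2pass", 3)` stores one number per password character (linear in the password length, as claimed above), and `verify` returns True only when all challenged letters are correct.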
A new approach to measurement in quantum tomography In this article we propose a new approach to quantum measurement in reference to stroboscopic tomography. Generally, in the stroboscopic approach it is assumed that the information about the quantum system is encoded in the mean values of certain Hermitian operators $Q_1, ..., Q_r$, each of which can be measured more than once. The main goal of stroboscopic tomography is to determine when one can reconstruct the initial density matrix $\rho(0)$ on the basis of the measurement results $\langle Q_i \rangle_{t_j}$. In this paper we propose to treat every complex matrix as a measurable operator. This generalized approach to quantum measurement may bring some improvement to the models of stroboscopic tomography. arXiv e-prints, April 2015. Subjects: Quantum Physics; Mathematical Physics. The article presents the theoretical idea that every complex matrix can be regarded as a measurable operator.