content
stringlengths
86
994k
meta
stringlengths
288
619
100+ Math Riddles for Middle School (Easy/Intermediate/Difficult) Are you looking for fun and challenging math riddles for middle school students? If yes, you are in the right place! We have 100 fun math brain teasers with answers. These math questions are great to be used as an ice breaker activity. To make the list available for everyone, we divided it into three levels: easy, intermediate, and hard. You can start with the easy math questions and move on to the harder ones. Or, if you are really confident, you can jump right into the difficult math riddles. Math Riddles for Middle School Math riddles are puzzles involving math concepts and operations. They challenge critical thinking, reasoning, and problem-solving skills. They can take various forms, such as word problems, number sequences, or geometric puzzles. They make learning math more interactive and enjoyable. Math riddles are beneficial for middle school students as they make learning math enjoyable, engaging, and effective. By presenting mathematical concepts in a puzzle format, riddles promote critical thinking and problem-solving skills. They not only reinforce classroom knowledge but also encourage students to approach math with curiosity and confidence. Easy Math Riddles Let’s start with the easy math riddles for kids. 1. I am an odd number. Take away a letter, and I become even. What number am I? (Answer: Seven – remove the ‘s’ and you get ‘even’) 2. I am a three-digit number. My tens digit is five more than my ones digit. My hundreds digit is eight less than my tens digit. What number am I? (Answer: 194) 3. I am a fraction. My numerator is one less than my denominator. What fraction am I? (Answer: 1/2) 4. I am a prime number greater than 20 but less than 30. What number am I? (Answer: 29) 5. I am an even number. I am greater than 50 and less than 60. What number am I? (Answer: 54) 6. I am a multiple of 9. My digits add up to 9. What number am I? (Answer: 27) 7. I am a polygon with six equal sides and angles. What shape am I? (Answer: Hexagon) 8. I am the result of multiplying 7 by itself. What number am I? (Answer: 49) 9. I am a two-digit number. The sum of my digits is 11. What number am I? (Answer: 47) 10. I am the square root of 81. What number am I? (Answer: 9) 11. I am a multiple of 5. If you add 20 to me, you get 50. What number am I? (Answer: 30) 12. I am an acute angle in a right-angled triangle. What am I? (Answer: Less than 90 degrees) 13. I am the sum of the first ten prime numbers. What number am I? (Answer: 129) 14. I am a fraction equivalent to 0.5. What fraction am I? (Answer: 1/2) 15. I am the product of 6 and 8. What number am I? (Answer: 48) 16. I am the perimeter of a square with sides of length 5. What number am I? (Answer: 200) 17. I am a factor of 12. What number am I? (Answer: 6) 18. I am the result of subtracting 15 from 30. What number am I? (Answer: 15) 19. I am the sum of the first five positive integers. What number am I? (Answer: 15) 20. I am a fraction greater than 1/2 but less than 1. What fraction am I? (Answer: 3/4) 21. I am the difference between 100 and 37. What number am I? (Answer: 63) 22. I am a multiple of 4. If you subtract 10 from me, you get 22. What number am I? (Answer: 32) 23. I am the square of a prime number. What number am I? (Answer: 25) 24. I am an obtuse angle in a right-angled triangle. What am I? (Answer: Greater than 90 degrees) 25. I am the sum of the first three perfect squares. What number am I? (Answer: 14) 26. I am a multiple of 7. If you divide me by 2, what is the result? (Answer: 7) 27. I am the difference between 80 and 45. What number am I? (Answer: 35) 28. I am a polygon with four equal sides and angles. What shape am I? (Answer: Square) 29. I am the result of dividing 50 by 5. What number am I? (Answer: 10) 30. I am the sum of the first four prime numbers. What number am I? (Answer: 17) Intermediate Fun Math Riddles for Middle School Here’s the intermediate-level fun and slightly tricky math riddles. 1. What three positive numbers give the same answer when multiplied and added together? (Answer: 1, 2, and 3) 2. I am thinking of a number. If you add 15 to it and then subtract 10, you get the same result as if you had multiplied the number by 2 and then subtracted 5. What is the number? (Answer: 10) 3. What comes next in the sequence: 2, 6, 12, 20, ___? (Answer: 30 – each number is the sum of consecutive prime numbers) 4. I am an even number. If you subtract 5 from me, you get a prime number. What am I? (Answer: 7) 5. If you have 3 apples and you take away 2, how many apples do you have? (Answer: 2 apples – you took them, but you still have them) 6. What number is represented by Roman numeral XIX? (Answer: 19) 7. I am a two-digit number. My tens digit is three times my ones digit, and the sum of my digits is 12. What number am I? (Answer: 48) 8. If a rooster lays an egg on the peak of a triangular roof, which side will the egg roll down? (Answer: Roosters don’t lay eggs). 9. What is the missing number in the sequence: 8, 27, ___, 125, 216? Answer: 64 (each number is the cube of consecutive integers). 10. I am a fraction. If you add my numerator and denominator, the result is 10. If you subtract my numerator from my denominator, the result is 6. What fraction am I? (Answer: 2/8) 11. If you have a bowl with six apples and you take away four, how many do you have? (Answer: The bowl is still with you; you have the bowl) 12. I am a prime number. If you reverse my digits, I am still a prime number. What number am I? (Answer: 73) 13. What is the product of the first three prime numbers? (Answer: 2 * 3 * 5 = 30) 14. If you have 5 oranges in one hand and 8 apples in the other, what do you have? (Answer: Very large hands) 15. I am a number. Add 7 to me, and you get 30. What number am I? (Answer: 23) 16. What comes next in the sequence: 1, 4, 9, 16, ___? (Answer: 25 – each number is a perfect square) 17. What is the sum of the first 10 positive integers? (Answer: 55) 18. If you have 3 apples and you give away 2, how many apples do you have left? (Answer: 1 apple – you gave away 2) 19. I am a three-digit number. The sum of my digits is 18. My tens digit is three times my one’s digit. What number am I? (Answer: 486) 20. What is the only even prime number? (Answer: 2) 21. If you have a box with 12 chocolates and you eat half of them, how many chocolates do you have left? Answer: You have the other half, so you still have 12 chocolates. 22. What is the next number in the Fibonacci sequence after 1, 1, 2, 3, 5, 8, ___? (Answer: 13) 23. I am a two-digit number. My digits are the same, and their sum is 9. What number am I? (Answer: 45) 24. What is the sum of the first 50 positive integers? (Answer: 1275) 25. I am a fraction. If you double my numerator and triple my denominator, I become 2/3. What fraction am I? (Answer: 2/9) 26. What is the smallest three-digit prime number? (Answer: 101) 27. I am thinking of a number. If you add 20 to it and then divide the result by 5, you get the same result as if you had added 15 to the number. What is the number? (Answer: 15) 28. What is the sum of the first four perfect numbers? (Answer: 8658 because 6+28+496+8128=8658) 29. I am a two-digit number. If you reverse my digits and subtract the smaller from the larger, you get 27. What number am I? (Answer: 63 since ) 30. I am a number. If you multiply me by 7 and then add 11, the result is 88. What number am I? (Answer: 11) Difficult Math Riddles for Middle School Try solving these challenging math questions for middle school students. 1. I am a three-digit number. The sum of my digits is 14, and the product of my digits is 72. What number am I? (Answer: 261) 2. What is the next number in the sequence: 3, 6, 12, 24, ___? (Answer: 48) 3. I am a fraction. My denominator is one less than my numerator. If you add 2 to both, the fraction becomes 3/4. What fraction am I? (Answer: 5/4) 4. I am a prime number. If you reverse my digits, subtract the smaller from the larger, and then divide by 3, you get 3. What number am I? (Answer: 37) 5. What is the sum of all the prime numbers between 30 and 50? (Answer: 233) 6. I am a four-digit number. My thousands digit is three less than my hundreds digit, and the sum of my digits is 20. What number am I? (Answer: 1823) 7. In a garden, there are 15 rows of flowers. In each row, there are 8 flowers. If half of the rows have red flowers and the rest have blue flowers, how many blue flowers are there? (Answer: 60) 8. If you multiply a number by 7 and then subtract 12, you get 99. What is the number? (Answer: 15) 9. I am a fraction. If you add my numerator and denominator, the result is 17. If you subtract my numerator from my denominator, the result is 5. What fraction am I? (Answer: 6/11) 10. What is the product of the first five prime numbers? (Answer: 210) 11. I am a two-digit number. If you reverse my digits and subtract the smaller from the larger, you get 9. What number am I? (Answer: 45) 12. In a right-angled triangle, the length of the hypotenuse is 13, and one side is 5. What is the length of the other side? (Answer: 12) 13. I am thinking of a number. If you multiply it by 3 and then add 7, the result is the same as if you double it and subtract 10. What is the number? (Answer: 17) 14. A number is divided by 3, and the result is increased by 5. If the final result is 11, what is the original number? (Answer: 6) 15. I am a three-digit palindrome. If you subtract me from the number formed by reversing my digits, the result is 198. What number am I? (Answer: 292) 16. The sum of three consecutive even numbers is 114. What are the numbers? (Answer: 36, 38, 40) 17. I am a fraction. If you double my numerator and triple my denominator, I become 5/6. What fraction am I? (Answer: 2/3) 18. A number is increased by 25%, and then the result is decreased by 20%. What is the overall percentage change? (Answer: 0%) 19. I am a polygon with the same number of sides as the sum of the first four prime numbers. What shape am I? (Answer: Pentagon) 20. If you have a rectangle with a length of 10 units and a diagonal of length 13 units, what is the width of the rectangle? (Answer: 5 units) 21. I am a number. If you add 25 to me and then divide by 5, the result is the same as if you subtract 5 and then multiply by 3. What is the number? (Answer: 10) 22. The sum of two consecutive odd numbers is 76. What are the numbers? (Answer: 37, 39) 23. I am a fraction. If you subtract 3 from my numerator and add 4 to my denominator, I become 1/3. What fraction am I? (Answer: 7/11) 24. The average of five consecutive integers is 21. What is the largest of these integers? (Answer: 23) 25. I am a perfect square. If you add 16 to me, the result is a perfect cube. What number am I? (Answer: 36) 26. I am a fraction. If you double my numerator and subtract 5 from my denominator, I become 3/8. What fraction am I? (Answer: 7/16) 27. The sum of the angles in a hexagon is equal to the sum of the angles in a rectangle. If one angle in the rectangle is 90 degrees, what is the measure of one angle in the hexagon? (Answer: 120 28. I am a number. If you multiply me by 6 and then add 9, the result is the same as if you double me and subtract 3. What is the number? (Answer: 3) 29. The sum of three consecutive odd integers is 87. What are the integers? (Answer: 27, 29, 31) 30. I am a fraction. If you multiply my numerator by 2 and add 3 to my denominator, I become 1/2. What fraction am I? (Answer: 5/13) Very Challenging Math Questions If you want to try something even more challenging, try solving these bonus math questions. 1. Three people check into a hotel room that costs $30. They each contribute $10, handing $30 to the hotel clerk. Later, the hotel clerk realizes there was a mistake, and the room only costs $25. The hotel clerk gives $5 to the bellboy and asks him to return it to the guests. On the way to the room, the bellboy wonders how to split $5 among three people. He decides to give each guest $1 and keep $2 as a tip. Now, each guest has paid $9 (a total of $27), and the bellboy has kept $2, making a total of $29. What happened to the missing dollar? (Answer: There is no missing dollar. The guests paid a total of $27, and the bellboy kept $2, which adds up to the correct room price of $29) 2. A farmer has 17 sheep, and all but 9 die. How many are left? (Answer: The farmer has 9 sheep left) 3. In a group of people, 90% are married, and the rest are single. If you randomly select three people from the group, what is the probability that at least two of them are married? (Answer: The probability is 1 (or 100%) because everyone is either married or single) 4. If you have a rectangular box and you increase the length, width, and height each by 100%, how much does the volume increase? (Answer: The volume increases by 800% (or becomes 9 times larger) 5. In a town, 80% of the men are married to 90% of the women. What percentage of the population is married? (Answer: The percentage of the population that is married is 72% (80% of 90%)) 6. A man builds a house with four sides of rectangular construction, each having a southern exposure. A big bear comes by. What color is the bear? (Answer: White. The only way for the house to have all four sides facing south is if it is at the North Pole, and there, the only bears are white.) 7. A clock chimes 5 times in 4 seconds. How many times will it chime in 10 seconds? (Answer: 12 times. The clock chimes once every second, so in 10 seconds, it will chime 10 times.) 8. A bag contains 5 red balls, 4 green balls, and 3 blue balls. If you randomly draw two balls without replacement, what is the probability that both balls are of the same color? (Answer: The probability is 29/66 because there are 12 balls in total, and after drawing one, there are 11 left.) 9. A man is three times as old as his son. In 15 years, he will be only twice as old as his son. How old are they now? (Answer: The man is 45 years old, and his son is 15 years old.) 10. If you write all the numbers from 1 to 1000 in words (e.g., one, two, three), how many times do you write the letter ‘a’? (Answer: The letter ‘a’ is used times.) FAQs About Math Riddles for Middle School Here are some of the most frequently asked questions about math riddle questions that are suitable for middle school students. Turn me on my side and I am everything. Cut me in half and I am nothing. What am I? The answer is 8. What are some math brain teasers for middle school? Here are some good math brain teaser for kids: 1. I am an even number. If you subtract 5 from me, you get a prime number. What am I? (Answer: 7.) 2. If you have 3 apples and you take away 2, how many apples do you have? (Answer: 2 apples (you took them, but you still have them)) 3. I am a fraction. My denominator is one more than my numerator. What fraction am I? (Answer: 1/2 4. I am the square root of 81. What number am I? (Answer: 9) 5. I am a multiple of 7. If you divide me by 2, what is the result? (Answer: 7) What is the answer to the “how many sides does a circle have” riddle? The answer is two (inside & outside). What are the benefits of math riddles? Math riddles offer benefits such as developing critical thinking and problem-solving skills, reinforcing mathematical concepts, promoting logical reasoning and creativity, making learning enjoyable, increasing engagement, fostering collaborative learning, and boosting confidence. What are some funny math riddles for middle school? Here are some funny maht riddles for middle school students. • Why was the equal sign so humble? (Answer: It knew it wasn’t less than or greater than anyone else.) • Why did the math book look sad? (Answer: Because it had too many problems.) • What did the zero say to the eight? (Answer: “Nice Belt!”) • Why did the student do multiplication problems on the floor? (Answer: The teacher told them not to use tables.) • What’s a math teacher’s favorite place in New York? (Answer: Times Square.) More Fun Questions If you are looking for more fun questions or activities, check out the following list of articles: Math Riddles for Middle School: Join the Conversation Which one was your favourite math brain teaser question? Let us know in the comments. We’d love to hear from you.
{"url":"https://www.eslactivity.org/math-riddles-for-middle-school/","timestamp":"2024-11-05T00:56:34Z","content_type":"text/html","content_length":"104821","record_id":"<urn:uuid:38767a48-ef19-40f8-b2d5-9b7d3bf46c2b>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00779.warc.gz"}
Math Grade 6 Quiz Understanding Statistical Questions - 6.SP.A.1Math Grade 6 Quiz Understanding Statistical Questions - 6.SP.A.1 | Tutorified.com Quizzes Math Grade 6 Quiz Understanding Statistical Questions – 6.SP.A.1 Statistical questions are those that can be answered by collecting data and where there will be variability in that data. This means that the question anticipates a variety of answers and is not just a single, straightforward question with one answer. Understanding this concept is crucial for initiating the data collection process, anticipating variability, and analyzing data in context, not only in mathematical situations but also in real-life scenarios where data plays a significant role.
{"url":"https://quizzes.tutorified.com/quizzes/math-grade-3-quiz-understanding-statistical-questions-6-sp-a-1/","timestamp":"2024-11-05T21:30:47Z","content_type":"text/html","content_length":"128406","record_id":"<urn:uuid:0114117f-927e-459a-aa55-2f3c6279699f>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00789.warc.gz"}
Digital Math Resources This collection of math examples on Compound Interest offers a comprehensive set of 24 resources designed to enhance students' understanding of this crucial financial concept. The examples progress in complexity, covering a range of skills from basic interest calculations to more advanced compound interest scenarios. By utilizing visual models, these examples simplify complex concepts, making them more accessible to learners. The collection covers various aspects of compound interest, including: Key Concepts • Simple vs. compound interest • Interest rate calculations • Time periods and compounding frequency Advanced Topics • Continuous compounding • Future value calculations • Real-world applications Visual representations play a significant role in these examples, helping students grasp abstract concepts more easily. By providing more examples than typical textbooks or digital curricula, Media4Math allows students to benefit from analyzing multiple scenarios, reinforcing their understanding of compound interest. Subscribers to Media4Math can leverage these resources further by using the Slide Show Creator (https://www.media4math.com/SlideShowCreator) to develop customized presentations. This feature enables educators to tailor their lessons to specific learning objectives and student needs, enhancing the overall learning experience.
{"url":"https://www.media4math.com/MathExamplesCollection--CompoundInterest","timestamp":"2024-11-15T01:46:16Z","content_type":"text/html","content_length":"87314","record_id":"<urn:uuid:80dcfb94-7da4-4196-ba87-a717dfaaea55>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00217.warc.gz"}
Free Printable Multiplication Flash Cards Double Sided Free Printable Multiplication Flash Cards Double Sided - Then simply slice up and start testing yourself. Web our free flash cards. Web free printable multiplication flash cards. Take your students through regular drills for the 2 to 12 times tables with these folding times table flash cards.this resource contains a times table flash card. Web welcome to the math salamanders printable math flash cards for multiplication tables 2, 3, 4, 5 and 10. These cover the 0x through 12x multiplication facts. Web these free printable multiplication flashcards are a great way for kids to practice multiplication facts and gain math fluency. Web small individual student flash card set (2.25 x 3) for use with our picture and story method for teaching the times tables. Web take your students through regular drills for the 2 to 12 times tables with these folding times table flash cards. Use this resource to drill your students individually, in groups or as a class. Elementary Double Sided Multiplication Flash Cards Whole Group Around Great practice for your times table facts on the go! Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. If you love this printable, do not forget to leave a comment down below. Web our free flash cards. Take your students through regular drills for the 2 to 12 times tables with these. FREE Printable Multiplication Flashcards This Reading Mama When printed double sided the answers will print. Cards that are printable for free. To use these cards, simply cut them out, fold them in the middle and glue the two. Great practice for your times table facts on the go! This is a fun multiplication. Free Printable Multiplication Flash Cards Double Sided Printable We have printable multiplication table answers to use to print on the back side. Select any of the multiplication flashcards you want from above. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Printable multiplication flashcards make learning basic multiplication facts super fun! This is a fun multiplication. Free Printable Multiplication Flash Cards 0 10 Free Printable Great practice for your times table facts on the go! Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. If you love this printable, do not forget to leave a comment down below. To get started, simply print. Take your students through regular drills for the 2 to 12 times tables. Times Table Flash Cards and doublesided multiplication poster Pinterest Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. Web start learning for pdf flashcards: Worksheet #4 set of 11 & 12: These cover the 0x through 12x multiplication facts. 92 INFO FLASH CARDS PRINTABLE MULTIPLICATION ZIP PRINT DOWNLOAD * Card Here you will find a wide range of free printable multiplication flashcards, which will help your child learn their multiplication facts. Worksheet #3 set of 9 & 10: Make sure you print double sided so that you can flip the cards (see what is a flashcard ). Print single sided for matching games. This resource contains a times table flash. Free Printable Multiplication Flash Cards Double Sided Printable Web grab these free multiplication flashcards to quickly review multiplication facts with your learners. These cards are very versatile and colorful at the same time. Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. Web free printable multiplication flash cards. Worksheet #3 set of 9 & 10: Multiplication Flashcards 012 Printable Etsy To get started, simply print. This is a fun multiplication. Worksheet #2 set of 6, 7 & 8: Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. Web small individual student flash card set (2.25 x 3) for use with our picture and story method for teaching the times tables. Online multiplication flash cards 012 + printables Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. To get started, simply print. Print single sided for matching games. Then simply slice up and start testing yourself. Worksheet #3 set of 9 & 10: Free Printable Multiplication Flash Cards Double Sided Web printable multiplication flash cards in pdf format for two sided or single sided printing. Download the printable flashcards to your computer print the multiplication flashcards with your printer note: Worksheet #4 set of 11 & 12: Printable multiplication flash cards double sided pdf can be downloaded to your computer by right clicking the image. These cards are very versatile. Each times table is clearly linked to the appropriate year group, based upon the current mathematics curriculum. Web take your students through regular drills for the 2 to 12 times tables with these folding times table flash cards. Web our free flash cards. Web free printable multiplication flash cards. Here you will find a wide range of free printable multiplication flashcards, which will help your child learn their multiplication facts. Worksheet #3 set of 9 & 10: Set of 0, 1 & 2: Web printable multiplication flash cards in pdf format for two sided or single sided printing. Web small individual student flash card set (2.25 x 3) for use with our picture and story method for teaching the times tables. These cover the 0x through 12x multiplication facts. Cards that are printable for free. Use this resource to drill your students individually, in groups or as a class. Web print these free multiplication flash cards to help kids memorize their multiplication facts for school. These simple times table flash cards have been popular for many years now. Select any of the multiplication flashcards you want from above. To use these cards, simply cut them out, fold them in the middle and glue the two. Then simply slice up and start testing yourself. This is a fun multiplication. Make sure you print double sided so that you can flip the cards (see what is a flashcard ). Web welcome to the math salamanders printable math flash cards for multiplication tables 2, 3, 4, 5 and 10. This Resource Contains A Times Table Flash Card For Every Times Table From 2 X 1 To 12 X 12. Set of 0, 1 & 2: Plus, these cards come in two. Cards that are printable for free. Then simply slice up and start testing yourself. Print Single Sided For Matching Games. More collection of printable multiplication flash cards double sided. If you love this printable, do not forget to leave a comment down below. Worksheet #2 set of 6, 7 & 8: Print double sided to do practice drills. Web Welcome To The Math Salamanders Printable Math Flash Cards For Multiplication Tables 2, 3, 4, 5 And 10. We have printable multiplication table answers to use to print on the back side. Web printable multiplication flash cards in pdf format for two sided or single sided printing. Take your students through regular drills for the 2 to 12 times tables with these folding times table flash cards.this resource contains a times table flash card. Web free printable multiplication flash cards. To Use These Cards, Simply Cut Them Out, Fold Them In The Middle And Glue The Two. When printed double sided the answers will print. These cards are very versatile and colorful at the same time. Worksheet #4 set of 11 & 12: These flashcards start at 0 x 0 and end at 12 x 12. Related Post:
{"url":"https://time.ocr.org.uk/en/free-printable-multiplication-flash-cards-double-sided.html","timestamp":"2024-11-06T05:04:21Z","content_type":"text/html","content_length":"30630","record_id":"<urn:uuid:1682e6e4-c756-4a25-ba46-b15015fa5c26>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00405.warc.gz"}
Non-Parametric Tests: The Sign Test, Mann-Whitney U Test, Runs Test, and Rank Sum Test Non-parametric tests cover techniques that do not rely on data belonging to any distribution. These include distribution free methods, which do not rely on assumptions that the data are drawn from a given probability distribution. As such it is the opposite of parametric statistics. It includes non-parametric statistical models, inference and statistical tests. Following are the nonparametric The Sign test: The sign test is one of the simplest nonparametric tests. It is for use with 2 repeated (or correlated) measures and measurement is assumed to be at least ordinal. For each subject, subtract the 2nd score from the 1st, and write down the sign of the difference. (That is write “-” if the difference score is negative, and “+” if it is positive.) The usual null hypothesis for this test is that there is no difference between the two treatments. If this is so, then the number of + signs (or - signs, for that matter) should have a binomial distribution1 with p = .5, and N = the number of subjects. In other words, the sign test is just a binomial test with + and - in place of Head and Tail (or Success and Failure). Mann-Whitney U Test: The null hypothesis assumes that the two sets of scores are samples from the same population; and therefore, because sampling was random, the two sets of scores do not differ systematically from each The alternative hypothesis, on the other hand, states that the two sets of scores do differ systematically. Runs Test: The runs test (also called Wald–Wolfowitz test after Abraham Wald and Jacob Wolfowitz) is a non-parametric statistical test that checks a randomness hypothesis for a two-valued data sequence. More precisely, it can be used to test the hypothesis that the elements of the sequence are mutually independent. Rank Sum Test: The t-test is the standard test for testing that the difference between population means for two non-paired samples are equal. If the populations are non-normal, particularly for small samples, then the t-test may not be valid. The rank sum test is an alternative that can be applied when distributional assumptions are suspect. business research methods March 09, 2018
{"url":"https://notesformba.com/topic/non-parametric-tests/","timestamp":"2024-11-10T09:04:52Z","content_type":"text/html","content_length":"27004","record_id":"<urn:uuid:f6c2d00a-ebe2-45bf-b78e-5fa051a93fa9>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00131.warc.gz"}
Using gumboot Martyn Clark and Kevin Shook Clark et al. (2021) have shown how Goodness of Fit (GOF) statistics for hydrological models can easily be misused. The purpose of gumboot is to evaluate the sampling uncertainty in GOF statistics using both jacknife and bootstrap methods. The sampling uncertainty in the GOF estimates is quantified using a mixture of Jackknife and Bootstrap methods. First, we use the Jackknife and Bootstrap methods to compute the standard error in the GOF estimates. These methods resample from the original data sample using the Non-overlapping Block Bootstrap (NBB) strategy using data blocks of length one year. The use of data blocks of length one year reduces the issues with substantial seasonal nonstationarity in shorter data blocks, while preserving the within-year autocorrelation and seasonal periodicity of streamflow series. Bootstrapping methods are only effective if the blocks used are approximately independent. Second, we use the Bootstrap methods to compute tolerance intervals for the GOF estimates, where the 90% tolerance intervals are defined as the difference between the 95th and 5th percentile of the empirical probability distribution of the GOF estimates. Tolerance intervals differ from confidence intervals, because tolerance intervals are intervals corresponding to a random variable, rather than random confidence intervals around some true value. These bootstrap tolerance intervals are computed using 1000 bootstrap samples. Finally, we use the Jackknife-After-Bootstrap method (Efron, 1992) to estimate the standard error in the Bootstrap tolerance intervals, which enables us to evaluate how sensitive the resulting uncertainty intervals are to individual years (blocks). References: Efron, B. and Gong, G., 1983. A leisurely look at the bootstrap, the jackknife, and cross-validation. The American Statistician, 37(1), pp.36-48. Efron, B., 1992. Jackknife‐after‐bootstrap standard errors and influence functions. Journal of the Royal Statistical Society: Series B (Methodological), 54(1), pp.83-111. How to use gumboot The most important function is bootjack() which computes the bootstap and jackknife statistics for a single data set, which is a data frame containing dates, observed and simulated flows. A data set (flows_1030500) is provided. flows_1030500 <- flows_1030500 #> date obs sim #> 1 1989-10-01 0.753092 1.895452 #> 2 1989-10-02 0.679782 1.738436 #> 3 1989-10-03 0.623800 1.556237 #> 4 1989-10-04 0.582480 1.367636 #> 5 1989-10-05 0.545159 1.185386 #> 6 1989-10-06 0.511836 1.017480 Plotting the values shows fairly good agreement between the simulated and observed values melted <- melt(flows_1030500, id.vars = "date") ggplot(melted, aes(date, value, colour = variable)) + geom_line() + xlab("") + labs(y = bquote('Daily streamflow'~(m^3/s)), x = "") To perform the bootstrap and jackknife analyses, the values are passed to bootjack. There are many options. The default is to calculate statistics for both NSE (Nash-Sutcliffe efficiency), and KGE (Kling-Gupta efficiency) values. In this example, we will compute the statistics of the NSE. Note that the name of the observed variable must be obs and the name of the simulated variable must be sim. NSE_values <- bootjack(flows_1030500, GOF_stat = "NSE") #> GOF_stat seJack seBoot p05 p50 p95 score #> 1 NSE 0.04850763 0.04719664 0.4746045 0.5547303 0.6290767 0.5541248 #> biasJack biasBoot seJab #> 1 -0.00196514 -0.0007371399 0.0445264 In this example, the standard error of the NSE statistic, as calculated by jackknifing is 0.0485076; as calculated by bootstrapping it is 0.0471966, and as calculated by jackknifing after the bootstrapping (JAB) is 0.0445264, showing the uncertainty in the statistic. It is important to note that the bootstrap and jackknife-after-bootstrap standard errors are dependant on random samples of the years, so the values will change with each execution of the function. If you want to get the same values each time, for example to compare your results with another person’s analyses, you have two options. If you set the option seed to have a value, the R random number generator will always return the same sequence of values, as shown below: bootjack(flows_1030500, GOF_stat = "NSE", seed = 1) #> GOF_stat seJack seBoot p05 p50 p95 score #> 1 NSE 0.04850763 0.04716093 0.4741728 0.5564785 0.6237025 0.5541248 #> biasJack biasBoot seJab #> 1 -0.00196514 -0.001495579 0.03160307 bootjack(flows_1030500, GOF_stat = "NSE", seed = 1) #> GOF_stat seJack seBoot p05 p50 p95 score #> 1 NSE 0.04850763 0.04716093 0.4741728 0.5564785 0.6237025 0.5541248 #> biasJack biasBoot seJab #> 1 -0.00196514 -0.001495579 0.03160307 Note that the value of seJack above is identical to the previously determined value, as the jackknifing always uses the same set of years of data. If you want to compare the results with code not written in R you can save the randomly selected years to a file. If the specified file does not exist, it will be written to. If the file does exist, then the years will be read from it. Raw values If you are interested in the values used to calculate the standard errors, you can return them using the option returnSamples = TRUE. The function returns a list with the values for the bootstrap and jackknifing analyses. NSE_samples <- bootjack(flows_1030500, GOF_stat = "NSE", returnSamples = TRUE) #> [1] "statsBoot" "statsJack" You can see the variability of the NSE values (as well as the sampled observed and simulated values) as determined by the bootstrap and jackknife. Multiple locations The function CAMELS_bootjack() applies bootjack() to model runs over the “CAMELS” catchments across the contiguous US (CONUS). The model runs are not supplied, and need to be stored in a NetCDF file. Newman et al. (2015) and Addor et al. (2017) provide details on the hydrometeorological and physiographical characteristics of the CAMELS catchments. The CAMELS catchments are those with minimal human disturbance (i.e., minimal land use changes or disturbances, minimal water withdrawals), and are hence almost exclusively smaller, headwater-type catchments (median basin size of 336 km^2). References: Addor, N., Newman, A.J., Mizukami, N. and Clark, M.P., 2017. The CAMELS data set: catchment attributes and meteorology for large-sample studies. Hydrology and Earth System Sciences, 21 (10), pp.5293-5313. Newman, A.J., Clark, M.P., Sampson, K., Wood, A., Hay, L.E., Bock, A., Viger, R.J., Blodgett, D., Brekke, L., Arnold, J.R. and Hopson, T., 2015. Development of a large-sample watershed-scale hydrometeorological data set for the contiguous USA: data set characteristics and assessment of regional variability in hydrologic model performance. Hydrology and Earth System Sciences, 19(1), In addition to the NetCDF file containing the model runs CAMELS_bootjack() a data frame containing the site numbers and their latitudes and longitudes. This is contained in the supplied data frame CAMELS_sites. Note that you can subset the data frame if you only wish to test some of the runs. CAMELS_bootjack() has most of the same options as bootjack() and returns the same values. Unless you tell it not to (setting quiet = TRUE) a progress bar will be displayed. This example does the analyses for all of the CAMELS data, returning statistics for runs which were optimised using NSE and KGE run targets, using both NSE and KGE goodness of fit statistics. CAMELS_sites <- hcdn_conus_sites nc_file <- "/home/kevin/data/projects/bootstrappR_test/hess2019/results_hcdn_flow.nc" CAMELS_stats <- CAMELS_bootjack(CAMELS_sites, nc_file) Having computed the statistics for all CAMELS basins, the uncertainties in the NSE and KGE values can be plotted using ggplot_estimate_uncertainties(), which returns a ggplot2 plot object. Because some of the CAMELS basins have data sets which do not meet the default criteria for bootjack(), they will return NA_real_ values for their statistics. It is a good idea to first remove these stations from the data set before calling ggplot_estimate_uncertainties(), by using na.omit().
{"url":"https://cran.case.edu/web/packages/gumboot/vignettes/using_gumboot.html","timestamp":"2024-11-04T07:18:54Z","content_type":"text/html","content_length":"212761","record_id":"<urn:uuid:fc9b2ba9-0c75-4694-9e3c-8944a1a6fad8>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00061.warc.gz"}
How to Quote | Citing Quotes in APA, MLA & Chicago Quoting means copying a passage of someone else’s words and crediting the source. To quote a source, you must ensure: • The quoted text is enclosed in quotation marks or formatted as a block quote • The original author is correctly cited • The text is identical to the original The exact format of a quote depends on its length and on which citation style you are using. Quoting and citing correctly is essential to avoid plagiarism which is easy to detect with a good plagiarism checker. How to cite a quote in APA, MLA and Chicago Every time you quote, you must cite the source correctly. This looks slightly different depending on the citation style you’re using. Three of the most common styles are APA, MLA, and Chicago. Citing a quote in APA Style To cite a direct quote in APA, you must include the author’s last name, the year, and a page number, all separated by commas. If the quote appears on a single page, use “p.”; if it spans a page range, use “pp.” An APA in-text citation can be parenthetical or narrative. In a parenthetical citation, you place all the information in parentheses after the quote. In a narrative citation, you name the author in your sentence (followed by the year), and place the page number after the quote. Punctuation marks such as periods and commas are placed after the citation, not within the quotation marks. Examples: APA in-text citation • Evolution is a gradual process that “can act only by very short and slow steps” (Darwin, 1859, p. 510). • Darwin (1859) explains that evolution “can act only by very short and slow steps” (p. 510). Citing a quote in MLA style An MLA in-text citation includes only the author’s last name and a page number. As in APA, it can be parenthetical or narrative, and a period (or other punctuation mark) appears after the citation. Examples: MLA in-text citation • Evolution is a gradual process that “can act only by very short and slow steps” (Darwin 510). • Darwin explains that evolution “can act only by very short and slow steps” (510). Citing a quote in Chicago style Chicago style uses Chicago footnotes to cite sources. A note, indicated by a superscript number placed directly after the quote, specifies the author, title, and page number—or sometimes fuller Unlike with parenthetical citations, in this style, the period or other punctuation mark should appear within the quotation marks, followed by the footnote number. Example: Chicago footnote citation Evolution is a gradual process that “can act only by very short and slow steps.” 1. Darwin, The Origin of the Species, 510. Complete guide to Chicago style Introducing quotes Make sure you integrate quotes properly into your text by introducing them in your own words, showing the reader why you’re including the quote and providing any context necessary to understand it. Don’t present quotations as stand-alone sentences. Example: Quote not properly introduced “A membership referendum held today would be backed by 55 percent of Danish voters” (Levring, 2018, p. 3). There are three main strategies you can use to introduce quotes in a grammatically correct way: The following examples use APA Style citations, but these strategies can be used in all styles. Introductory sentence Introduce the quote with a full sentence ending in a colon. Don’t use a colon if the text before the quote isn’t a full sentence. If you name the author in your sentence, you may use present-tense verbs, such as “states,” “argues,” “explains,” “writes,” or “reports,” to describe the content of the quote. • In Denmark, a recent poll shows that: “A membership referendum held today would be backed by 55 percent of Danish voters” (Levring, 2018, p. 3). • In Denmark, a recent poll shows that support for the EU has grown since the Brexit vote: “A membership referendum held today would be backed by 55 percent of Danish voters” (Levring, 2018, p. 3). • Levring (2018) reports that support for the EU has grown since the Brexit vote: “A membership referendum held today would be backed by 55 percent of Danish voters” (p. 3). Introductory signal phrase You can also use a signal phrase that mentions the author or source, but doesn’t form a full sentence. In this case, you follow the phrase with a comma instead of a colon. • According to a recent poll, “A membership referendum held today would be backed by 55 percent of Danish voters” (Levring, 2018, p. 3). • As Levring (2018) explains, “A membership referendum held today would be backed by 55 percent of Danish voters” (p. 3). Integrated into your own sentence To quote a phrase that doesn’t form a full sentence, you can also integrate it as part of your sentence, without any extra punctuation. • A recent poll suggests that EU membership “would be backed by 55 percent of Danish voters” in a referendum (Levring, 2018, p. 3). • Levring (2018) reports that EU membership “would be backed by 55 percent of Danish voters” in a referendum (p. 3). Quotes within quotes When you quote text that itself contains another quote, this is called a nested quotation or a quote within a quote. It may occur, for example, when quoting dialogue from a novel. To distinguish this quote from the surrounding quote, you enclose it in single (instead of double) quotation marks (even if this involves changing the punctuation from the original text). Make sure to close both sets of quotation marks at the appropriate moments. Note that if you only quote the nested quotation itself, and not the surrounding text, you can just use double quotation marks. Examples: Punctuation mistakes with nested quotations • Carraway introduces his narrative by quoting his father: ““Whenever you feel like criticizing anyone,” he told me, “just remember that all the people in this world haven’t had the advantages that you’ve had”” (Fitzgerald 1). • Carraway introduces his narrative by quoting his father: “‘Whenever you feel like criticizing anyone,’ he told me, ‘just remember that all the people in this world haven’t had the advantages that you’ve had” (Fitzgerald 1). Examples: Correctly formatted nested quotations • Carraway introduces his narrative by quoting his father: “‘Whenever you feel like criticizing anyone,’ he told me, ‘just remember that all the people in this world haven’t had the advantages that you’ve had’” (Fitzgerald 1). • Carraway begins by quoting his father’s invocation to “remember that all the people in this world haven’t had the advantages that you’ve had” (Fitzgerald 1). Note: When the quoted text in the source comes from another source, it’s best to just find that original source in order to quote it directly. If you can’t find the original source, you can instead cite it indirectly. Shortening or altering a quote Often, incorporating a quote smoothly into your text requires you to make some changes to the original text. It’s fine to do this, as long as you clearly mark the changes you’ve made to the quote. Shortening a quote If some parts of a passage are redundant or irrelevant, you can shorten the quote by removing words, phrases, or sentences and replacing them with an ellipsis (…). Put a space before and after the Be careful that removing the words doesn’t change the meaning. The ellipsis indicates that some text has been removed, but the shortened quote should still accurately represent the author’s point. Example: Shortening a quote As Darwin (1859) puts it, “natural selection acts solely by accumulating slight, successive, favourable variations … it can act only by very short and slow steps” (p. 510). QuillBot’s Word Counter tool can help you effectively track the word count of your sentences or texts to help you write more concisely. Altering a quote You can add or replace words in a quote when necessary. This might be because the original text doesn’t fit grammatically with your sentence (e.g., it’s in a different verb tense), or because extra information is needed to clarify the quote’s meaning. Use brackets to distinguish words that you have added from words that were present in the original text. Example: Adding words to a quote Smith (2020) states that “those [participants] with the highest scores were generally older than the average” (p. 33). The Latin term “sic” is used to indicate a (factual or grammatical) mistake in a quotation. It shows the reader that the mistake is from the quoted material, not a typo of your own. Example: Marking a mistake with “sic“ Sill (2022) states that “several problem [sic] can be addressed using this technique” (p. 14). In some cases, it can be useful to italicize part of a quotation to add emphasis, showing the reader that this is the key part to pay attention to. Use the phrase “emphasis added” to show that the italics were not part of the original text. Example: Adding emphasis with italics Because natural selection “acts solely by accumulating slight, successive, favourable variations [emphasis added], it can produce no great or sudden modification; it can act only by very short and slow steps” (Darwin, 1859, p. 510). You usually don’t need to use brackets to indicate minor changes to punctuation or capitalization made to ensure the quote fits the style of your text. Block quotes If you quote more than a few lines from a source, you must format it as a block quote. Instead of using quotation marks, you set the quote on a new line and indent it so that it forms a separate block of text. Block quotes are cited just like regular quotes, except that if the quote ends with a period, the citation appears after the period. Example: MLA block quote Tolkien favors long sentences and detailed descriptions: To the end of his days Bilbo could never remember how he found himself outside, without a hat, a walking-stick or any money, or anything that he usually took when he went out; leaving his second breakfast half-finished and quite unwashed-up, pushing his keys into Gandalf’s hands, and running as fast as his furry feet could carry him down the lane, past the great Mill, across The Water, and then on for a mile or more. (16) When should I use quotes? Avoid relying too heavily on quotes in academic writing. To integrate a source, it’s often best to paraphrase, which means putting the passage in your own words. This helps you integrate information smoothly and keeps your own voice dominant. However, there are some situations in which quoting is more appropriate. When focusing on language If you want to comment on how the author uses language (for example, in literary analysis), it’s necessary to quote so that the reader can see the exact passage you are referring to. Example: Using quotes to analyze language You are writing a paper about the novels of F. Scott Fitzgerald. You will have to quote frequently from the novels in order to analyze their language and style. When giving evidence To convince the reader of your argument, interpretation or position on a topic, it’s often helpful to include quotes that support your point. Quotes from primary sources (for example, interview transcripts or historical documents) are especially credible as evidence. Example: Using quotes as evidence You are working on a research paper about the causes of the French Revolution, and you have studied documents and letters written at the time. You can quote from these sources as evidence in support of your argument. When presenting an author’s position or definition When you’re referring to secondary sources such as scholarly books and journal articles, try to put others’ ideas in your own words when possible. But if a passage does a great job at expressing, explaining, or defining something, and it would be very difficult to paraphrase without changing the meaning or losing the weakening the idea’s impact, it’s worth quoting directly. Example: Quoting to present a theory Your interpretation of survey data is supported by a well-known theory on your topic. You find a sentence that perfectly sums up the theory, so you quote it directly before elaborating on your understanding of the theory. Other interesting articles If you want to know more about ChatGPT, AI tools, citation, and plagiarism, make sure to check out some of our other articles with explanations and examples. Frequently asked questions about quoting sources A quote is an exact copy of someone else’s words, usually enclosed in quotation marks and credited to the original author or speaker. In academic writing, there are three main situations where quoting is the best choice: Don’t overuse quotes; your own voice should be dominant. If you just want to provide information from a source, it’s usually better to paraphrase or summarize. Every time you quote a source, you must include a correctly formatted in-text citation. This looks slightly different depending on the citation style. For example, a direct quote in APA is cited like this: “This is a quote” (Streefkerk, 2020, p. 5). Every in-text citation should also correspond to a full reference at the end of your paper. A block quote is a long quote formatted as a separate “block” of text. Instead of using quotation marks, you place the quote on a new line, and indent the entire quote to mark it apart from your own words. The rules for when to apply block quote formatting depend on the citation style: □ APA block quotes are 40 words or longer. □ MLA block quotes are more than 4 lines of prose or 3 lines of poetry. □ Chicago block quotes are longer than 100 words. If you’re quoting from a text that paraphrases or summarizes other sources and cites them in parentheses, APA and Chicago both recommend retaining the citations as part of the quote. However, MLA recommends omitting citations within a quote: □ APA: Smith states that “the literature on this topic (Jones, 2015; Sill, 2019; Paulson, 2020) shows no clear consensus” (Smith, 2019, p. 4). □ MLA: Smith states that “the literature on this topic shows no clear consensus” (Smith, 2019, p. 4). Footnote or endnote numbers that appear within quoted text should be omitted in all styles. If you want to cite an indirect source (one you’ve only seen quoted in another source), either locate the original source or use the phrase “as cited in” in your citation. In scientific subjects, the information itself is more important than how it was expressed, so quoting should generally be kept to a minimum. In the arts and humanities, however, well-chosen quotes are often essential to a good paper. In social sciences, it varies. If your research is mainly quantitative, you won’t include many quotes, but if it’s more qualitative, you may need to quote from the data you collected. As a general guideline, quotes should take up no more than 5–10% of your paper. If in doubt, check with your instructor or supervisor how much quoting is appropriate in your field. Cite this Scribbr article If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator. McCombes, S. & Caulfield, J. (2024, October 16). How to Quote | Citing Quotes in APA, MLA & Chicago. Scribbr. Retrieved November 4, 2024, from https://www.scribbr.com/working-with-sources/ You have already voted. Thanks :-) Your vote is saved :-) Processing your vote...
{"url":"https://www.scribbr.com/working-with-sources/how-to-quote/","timestamp":"2024-11-05T10:01:26Z","content_type":"text/html","content_length":"216950","record_id":"<urn:uuid:ca23ea19-9b94-4ac4-ad13-d5017d8e66e2>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00834.warc.gz"}
Mom! My plus key is broken! Do you remember the good old point-and-click adventures? They provided plenty of puzzles and riddles with a compelling narrative. I loved them, possibly because I love riddles, puzzles, and this sort of challenge. So I was super excited when my employer supported the initiative of a Coding-Riddle-of-the-Week contest. This is the second issue and I’m going to present it here. Produce a program which increments an integer variable by a value of 1 without using sum or increment operators. You are pretty free to choose whatever integer size and type you prefer (I would say, but bool), and whatever language you want. I stuck with C++ because it was quicker, but most of my solutions can be easily ported to other languages. So, before continuing be sure to give some thought to this riddle to not spoil the fun. Increasing an integer number without using the addition (symbol) is quite a challenge. After some brainstorming, I figured out the following directions – • use algebra (hey, we studied it a life long… let’s put it to some use); • use bit representation (a pattern of bits needs to be turned into another pattern); • use reverse definition – find a number that when decremented results in the argument number. • use other language constructs that allow some mathematics Usual caveats hold – when trying to increment a signed int, you may stumble into an undefined behavior if the result would be greater than the larger number that can be represented using such a type. If the type is unsigned then you don’t have a UB, but if you are thinking of int as a mathematical integer then you may be in for some surprises in comparisons. But let’s start in the same order I found the solutions. Everything is relative This is the very first solution, a plus, according to algebra is just a pair of minus in disguise. – -1 = +1, so subtracting -1 is just as adding 1. // Everything's relative [[nodiscard]] int f0( int n ) noexcept return n - - 1; A pretty boring solution, so let’s try something more exotic. Stringly Typed Language Getting back to the meaning of integers, I remember from primary school that is something stemming from the cardinality of a set, therefore if I create a set with a number of items equal to the number to increment, then add an item and compute the cardinality I got the desired result. I opted for std::string for no specific reason… well I thought it was the only container that could be built with a given number of items, but I was wrong, std::vector can do it as well. Any character can be used since the content of the string is not relevant, but its length is. #include <string> // stringly typed language [[nodiscard]] int f1( int n ) std::string s( n, ' ' ); s.append( " " ); return s.size(); This method is a bit heavy on resources, but with nowadays PC could be affordable. Not as boring as the previous one, but still not that exciting. Time for something different… Meta-programming is always cool C++ metaprogramming is as contorted as it could be, in comparison, INTERCAL is a model of simplicity and mindfulness, but I digress. So I just imagined that we want to compute the increment at compile time. Uninteresting as it may be, here you are the code: template<int N> [[nodiscard]] int f2() struct T { char a[N]; char b; }; return sizeof( T ); This time the argument is not passed as a function parameter, but as a template parameter. So you need to call this function with f2<41>() to get the answer. The same code cannot be used to compute something that’s only known at run-time since you can’t pass a variable as a template argument. This was funny, but the limitation about the run-time was a bit disappointing and this led me looking into bits. Every bit counts What’s an integer, but a sequence of bits? What is left of an electronic engineer in my soul still knows how to design an adder circuit with some logic gates. You take two bits (addition terms) and output two bits. The first bit is the result, while the latter is the carry. In schematics, it would look like: Bit adder You can design an increment by chaining several of such circuits. All ‘A’s ports are connected to operand bits, the first ‘B’ port is set to 1 (we want to add 1), next circuit ‘B’ is connected to the previous carry signal. The translation in software is quite straightforward. // every bit counts int f3( int n ) int carry = 1; int result = 0; int mask = 1; int bit = (n&1); result |= (bit ^ carry)*mask; carry = bit & carry; n >>= 1; mask <<= 1; while( n != 0 ); return result; The most notable part is that the loop completes when there are no more ‘1’ bits in the argument. I picked this as an alternative to sweeping through all the bits of the operand since it is a slight optimization and also because the use of ‘+’, needed for incrementing the loop counter, is forbidden. It’s a bit thing While looking at the bits and reasoning about what is the meaning of increment, I noticed the following pattern – 0+1 1 01+1 10 011+1 100 0111+1 1000 01111+1 10000 That is, from left to right: • if the bit is 1, then it gets flipped to 0 and the operation continues, • if the bit is 0, then it gets flipped to 1 and the operation stops. In other words, bits to the left of the first 0, are not affected by the increment. That sounds like an algorithm // it's a bit thing. int f4( int n ) int mask = 1; while( (n & mask) == mask ) n ^= mask; mask <<= 1; return n | mask; This is faster, on average, than the previous solution since the loop lasts stops at the first least significant bit at 0 in the operand, while the former code looped through all the bits. Also, it is shorter and somewhat more readable. This is just a crazy solution, loosely inspired by the random sort algorithm. Since I know which relation must hold between the result and the operand (result – 1 == operand), I may look for numbers that satisfy this relationship. And where do I get these numbers from? Uh? Random generator, of course. Given an infinite time, for sure, I’ll get the right number #include <random> int f5( int n ) std::random_device dev; std::mt19937 generator( dev() ); std::uniform_int_distribution<std::mt19937::result_type> dist( std::numeric_limits<int>::min(), std::numeric_limits<int>::max()); while( true ) { int result = dist(generator); if( result-1 == n ) return result; This is more hideous than what is needed because of the Uglification Principle that inspires C++ Evolution (given two solutions yielding the same result, pick the uglier one). On the other end, it is true that you can choose different RNG algorithms. An improvement that I left out for sake of brevity (and some sadism). is to further limit the range of the random numbers. Since you can’t add, you could double the number and search the result in the (n, 2n] range. Of course in the argument is 0, then it must be treated as a special case. Also, negative numbers require some care. But … trusting in luck must be blind as the luck herself 🙂 Algebra class After looking into representation, let’s have a look at semantics. I remember an optimization for computing multiplications from just square lookup tables, sum, and subtraction, and halving (axb = ((a+b)^2-(a-b)^2)/4. Therefore I looked for some formula that involved somehow n, resulting in n+1. The formula is from basic algebra: (a^2-b^2) = (a+b)(a-b), which means that a+1=(a^2-1)/(a-1). int f6( int n ) return (n*n-1)/(n-1); That’s it, elegant, neat, tidy. Regretfully you must ensure that n*n does not overflow the integer type you are using. Also may have some trouble if n equals 1. Pointers and arrays The next exploring direction was to look if the language provides, i.e. is there any operation in the syntactic sugar of the language that I can exploit to get a sum or increment? Yes indeed. The weird C equivalence between array and pointers requires that a[n] == *(a+n). So item accessing hides a sum (and a multiplication if you look under the hood). Let’s use it int f7( int n ) auto const* ptr = reinterpret_cast<char const*>(n); auto result = reinterpret_cast<uintptr_t>( &ptr[1] ); return result; Ugly as it can be it is just turning integers into pointers and back. Strictly speaking, this is UB. Even if the pointer is not dereferenced, pointer math could rely on the fact that the pointer is well defined and the integer argument may not be such a value. I found more info in this here. I don’t like this solution very much, but it is still a hack. Don’t look down Talking about lookup tables… a lookup table where at index n, n+1 is stored could easily solve the problem. The drawback is that this is really expensive for integers beyond 8 bits. int16_t and uint16_t require 2*2^16 = 128k (and that’s could be still acceptable), while int32_t and uint32_t require 4*2^32=16G. Also, you need to create the table… So I decided to just show the proof of concept using an uint8_t and a table computed outside the code. // Don't look down int f8( uint8_t n ) static uint8_t const next[] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 0 }; return next[n]; (I didn’t even try to format the code O:-) ). The Crab Move This solution stems from the random search one, i.e. searching from a solution in the solution space, but this time with a better strategy – linear search from the highest integer number. #include <limits> // The crab move int f9( uint8_t n ) int result = std::numeric_limits<uint8_t>::max(); while( result-1 != n ) result -= 1; return result; Now that I read this solution again, I think that many improvements can be done both to restrict the search space (according to the same thoughts on the random search) and to step from the linear search to a binary search. A binary search would take just 32 attempts in the worst case to figure out the next integer for int32_t. Half right Being right half of the time is better than never being right. That sounds a bit pretentious (and it is), but sometimes you just need to look at your problem from a different perspective to find that not all the constraints you think you have, are there for real. // correct half of the times int f10( int n ) return n | 1; Yes if n is odd, then the answer is wrong, but it is always right when n is even. Recursion is Cool With all the fuzz on functional programming, I couldn’t miss a recursive solution. It took a while to figure out what could be the best approach for applying recursion, eventually, I came out with the following – if the number is even, then just set the least significant bit to one (see the previous solution) and we are done. If the number is odd, then look for an even number in the most significant bits and pads with 0 the right of the result. int f11( int n ) if( n % 2 == 0 ) return n | 1; return f11( n >> 1 ) << 1; Although this function has a certain elegance I wouldn’t pick this as the most readable, I’m not that recursion-addicted. Also, note that this is not a tail recursion, but in the worst case it recurs just 32 times. Likely no problems with the stack. Pass the test Last solution devised by me is dubbed “Pass the test” and the idea is that it is better to be right at least once than never. So, if you know you are testing with a specific number, why not just check that input? int f12( int n ) if( n == 41 ) return 42; return 0; Although I wouldn’t advise using code like this in production, it may be useful to force a test passing or to implement mock-ups. What a ride! Very interesting stuff from an apparently simple challenge. Although my intention was just to explore the solution space, it turned out I won, so it is my time to propose the next C++ riddle of the week. This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://www.maxpagani.org/2022/07/08/mom-my-plus-key-is-broken/","timestamp":"2024-11-11T16:46:53Z","content_type":"text/html","content_length":"143742","record_id":"<urn:uuid:03ac76bf-d9dc-45a6-92cf-2af6410f0a5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00687.warc.gz"}
radmixture documentation br Block Relaxation for parameters estimation em Do ancestry analysis with EM algorithm fFixBr Block relaxation when f is fixed fFixEm EM when f is fixed fFixQN quasi-Newton when f is fixed generateG Transfer ped file to genotype matrix initQF Initialize Q and F qn quasi-Newton algorithm for ancestry analysis radmixture radmixture tfrdpub Transfer personal genotype raw data according public dataset For more information on customizing the embed code, read Embedding Snippets.
{"url":"https://rdrr.io/cran/radmixture/man/","timestamp":"2024-11-13T15:29:04Z","content_type":"text/html","content_length":"16104","record_id":"<urn:uuid:ab08767d-273d-4f30-91cf-cc3569ebcaad>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00235.warc.gz"}
Amps to kVA Amps (A) to kilovolt-amps (kVA) conversion Single phase amps to kVA conversion 1 kilovolt-ampere is equal to 1,000 volts-amperes. As such, to convert the apparent power from VA to kVA, you should divide the VA by 1,000. Therefore, the apparent power S, in kVA, is calculated by multiplying the RMS voltage, in volts V by the phase current I and then dividing the result by 1,000. S[(kVA)] = I[(A)] × V[(V)] / 100 Kilovolt-amps = volts times the amps divided by 1,000 Kilovolt-amps = amps × volts / 1000 kVA = A × V / 1000 Work out the apparent power, given that the RMS voltage is 110V and the phase current is 10A. S = 10A × 110V / 1000 = 1.1kVA 3 phase amps to kVA conversion Calculation with line to line voltage Assuming that all the loads in the system are balanced, the apparent power S in kilovolt-amps is calculated by multiplying the phase current I, in amps (A) multiplied by the line in line RMS voltage, in volts V[L-L] (V) multiplied by the square root of 3 and then dividing the result by 1,000. S[(kVA)] = √3 × I[(A)] × V[L-L(V)] / 1000 Kilovolt-amps is equals to the square root of 3 times the volts times the amps divided by 1,000 Kilovolt-amps = √3 × amps × volts / 1000 kVA = √3 × A × V / 1000 Given that the line in line RMS voltage across an electrical circuit is 110V and the phase current is 10A, calculate the apparent power, in kVA. S = √3 × 10A × 110V / 1000 = 1.905kVA Conversion with line to neutral voltage Again, the assumption here will be that all the loads in the system are balanced. As such, the apparent poser S, in kVA is calculated by multiplying the line to neutral RMS V[L-N] voltage, in volts (V) by the phase current I, in amps (A), multiplied by 3 and the dividing the result by 1,000. S[(kVA)] = 3 × I[(A)] × V[L-N(V)] / 1000 kVA is equal to 3 times the volts times amps divided by 1,000. Kilovolt-amps = 3 × amps × volts / 1000 kVA = 3 × A × V / 1000 What is the apparent current in kVA in an electrical circuit, whereby the line to neutral RMS voltage is 110V and the phase current is 11A? S = 3 × 10A × 110V / 1000 = 3.3kVA
{"url":"https://rapidcalculation.com/electricity-calculators/amps-to-kva.php","timestamp":"2024-11-12T16:52:03Z","content_type":"text/html","content_length":"46919","record_id":"<urn:uuid:e0bf1ffb-3a5b-4476-8b93-4f92ebc985af>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00447.warc.gz"}
Implications for Mathematics and Its Foundations: A New Kind of Science | Online by Stephen Wolfram [Page 777] particular string, then at each successive step one applies all possible transformations, so that in the end one builds up a whole network of connections between strings, as in the pictures below. In a sense such a network can then be thought of as representing the whole field of mathematics that can be derived from whatever set of axioms one is using—with every connection between strings corresponding to a theorem, and every possible path to a proof. But can networks like the ones below really reflect mathematics as it is actually practiced? For certainly the usual axioms in every traditional area of mathematics are significantly more complicated than any of the multiway system rules used below. But just like in so many other cases in this book, it seems that even systems whose underlying rules are remarkably simple are already able to capture many of the essential features of mathematics. An obvious observation in mathematics is that proofs can be difficult to do. One might at first assume that any theorem that is easy The result of applying the same transformations as on the facing page—but in all possible ways, corresponding to the evolution of a multiway system that represents all possible theorems that can be derived from the axioms. With the axioms used here, the total number of strings grows by a factor of roughly 1.7 at each step; on the last steps shown there are altogether 237 and 973 strings
{"url":"https://www.wolframscience.com/nks/p777--implications-for-mathematics-and-its-foundations/","timestamp":"2024-11-15T03:06:09Z","content_type":"text/html","content_length":"86066","record_id":"<urn:uuid:1afe9849-6e5d-4e48-9c84-04c1be2a2dc8>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00433.warc.gz"}
Digital Math Resources Join the hundreds of thousands of math educators who have used Media4Math resources in their classroom to engage their students. Our new integrated Library/Classroom product has everything you need! About Media4Math Our Mission Media4Math's mission is to educate 21st-century students in real-world applications of math with digital technology. We bring math to life in your classroom with a rich blend of resources to inspire your students to learn. Our philosophy is that the procedural side of math is a prerequisite to using it, but we also find that real-world math applications can provide motivation and even inspiration for math students. Math is its own language and it has important stories to tell. While many of our resources are for procedural skills, there are many resources that are real-world applications of math. Some of these resources rely on partnerships with other educational publishers. About Media4Math Library Media4Math Library contains over 15,000 high-quality resources designed for classroom or home use. This includes instructional, remediation, and assessment resources. You'll find truly innovative resources that bring math to life. Resources in Media4Math Library include: • Videos • Math Clip Art • Math Examples • Quizzes • Tutorials • PowerPoint and Google Slide presentations • GoogleEarth Voyager Stories • Algebra Applications • Geometry Explorations • Quizlet Flash Cards • Desmos Resources • Texas Instrument Resources • Games and Simulations About Media4Math Classroom Media4Math Classroom provides ready-to-use interactive math lessons that teach, assess, and provide real-world applications of topics in Pre-Algebra, Algebra, and Geometry. Assign these modules to your students and capture assessment scores in an easy-to-use Dashboard. This is a growing library of instructional modules. Topics include: • Arithmetic • Pre-Algebra • Algebra • Geometry • SAT Math Prep Media4Math Classroom modules provide real world applications of math that will motivate your students. Here are some examples: • Construction Site Math. Apply ratios and proportions to mixtures of cement and concrete. This module includes video resources and your students will get a real-world application of ratios and • Counting Bison. Apply place value concepts to the real-world application of the bison population. Because the bison population has gone through dramatic changes in population, this become an opportunity to use and apply place value. • Wildlife Refuge. This study of area and perimeter centers on the mustang population in the Nevada area. Students explore the relationship between area and perimeter of rectangular shapes in the context of designing a wildlife refuge. The Media4Math Bundle A subscription to the Media4Math bundle gives teachers access to all the resources listed above. Specifically: • Access to all the Media4Math Library resources and tools. • Access to all the Media4Math Classroom instructional modules. To learn more about our subscription packages, contact us at admin@media4math.com. Partnering with Media4Math Media4Math prides itself on its strategic partnerships with other educational organizations. Our partnerships include the following partners: • Google Earth. Media4Math has partnered with GoogleEarth to create a comprehensive library of GoogleEarth Voyager Stories. These map-based explorations of geometry, geography, and culture will literally bring the math to life. See our collection of Voyager Stories by clicking on this link to the Google Earth resources. • Texas Instruments. TI is the leading provider of graphing calculators used in the classroom. Media4Math has partnered with TI to create a library of digital resources to support the use of these graphing calculators. These resources include videos, presentations, and related tutorials. See our collection of TI resources by clicking on this link to the TI resources. • Desmos. This free online resource that includes a graphing calculator and geometry tools. Media4Math has created an extensive library of resources that support the use of these Desmos resources. See our collection of Desmos resources by clicking on this link to the Desmos resources. • Quizlet. We have partnered with the leading provider of interactive Flash Cards and is used by millions of teachers and students around the world. Media4Math has developed an extensive library of Quizlet resources. See our collection of Quizlet resources by clicking on this link to the Quizlet resources. Link to the Media4Math Study Sets on Quizlet and search for "Media4Math." • The Princeton Review. The Princeton Review is the leading provider of SAT prep and other test preparation courses and tutorials. Media4Math is an affiliate of Princeton Review and we have created a set of free SAT math resources, sponsored by Princeton Review. See this collection of SAT resources by clicking on this link to the Princeton Review resources. If you would like to partner with Media4Math, please reach out to us at admin@media4math.com.
{"url":"https://www.media4math.com/OER-Resources","timestamp":"2024-11-15T01:30:31Z","content_type":"text/html","content_length":"77270","record_id":"<urn:uuid:6d1e7a9a-f084-4c37-9fc3-647a3cb3a504>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00683.warc.gz"}
An improved approximation algorithm for vertex cover with hard capacities (extended abstract) In this paper we study the capacitated vertex cover problem, a generalization of the well-known vertex cover problem. Given a graph G = (V, E), the goal is to cover all the edges by picking a minimum cover using the vertices. When we pick a vertex, we can cover up to a pre-specified number of edges incident on this vertex (its capacity). The problem is clearly NP-hard as it generalizes the well-known vertex cover problem. Previously, 2-approximation algorithms were developed with the assumption that multiple copies of a vertex may be chosen in the cover. If we are allowed to pick at most a given number of copies of each vertex, then the problem is significantly harder to solve. Chuzhoy and Naor (Proc. IEEE Symposium on Foundations of Computer Science, 481-489, 2002) have recently shown that the weighted version of this problem is at least as hard as set cover; they have also developed a 3-approximation algorithm for the unweighted version. We give a 2-approximation algorithm for the unweighted version, improving the Chuzhoy-Naor bound of 3 and matching (up to lower-order terms) the best approximation ratio known for the vertex cover problem. Original language English Title of host publication Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Editors Jos C. M. Baeten, Jan Karel Lenstra, Joachim Parrow, Gerhard J. Woeginger Publisher Springer Verlag Pages 164-175 Number of pages 12 ISBN (Print) 3540404937, 9783540404934 State Published - 2003 Externally published Yes Publication series Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Volume 2719 ISSN (Print) 0302-9743 ISSN (Electronic) 1611-3349 • Approximation algorithms • Capacitated covering • Linear programming • Randomized rounding • Set cover • Vertex cover Dive into the research topics of 'An improved approximation algorithm for vertex cover with hard capacities (extended abstract)'. Together they form a unique fingerprint.
{"url":"https://cris.tau.ac.il/en/publications/an-improved-approximation-algorithm-for-vertex-cover-with-hard-ca","timestamp":"2024-11-03T09:58:33Z","content_type":"text/html","content_length":"54284","record_id":"<urn:uuid:f2c5e5d4-9e49-4f83-bba7-ac4dc9e53c92>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00750.warc.gz"}
“Green” barrier coverage with mobile sensors Mobile sensors are located on a barrier represented by a line segment. Each sensor has a single energy source that can be used for both moving and sensing. A sensor consumes energy in movement in proportion to distance traveled, and it expends energy per time unit for sensing in direct proportion to its radius raised to a constant exponent. We address the problem of energy efficient coverage. The input consists of the initial locations of the sensors and a coverage time requirement t. A feasible solution consists of an assignment of destinations and coverage radii to all sensors such that the barrier is covered. We consider two variants of the problem that are distinguished by whether the radii are given as part of the input. In the fixed radii case, we are also given a radii vector ρ, and the radii assignment r must satisfy r[i] ∈ {0, ρ[i]}, for every i, while in the variable radii case the radii assignment is unrestricted. We consider two objective functions. In the first the goal is to minimize the sum of the energy spent by all sensors and in the second the goal is to minimize the maximum energy used by any sensor. We present FPTASs for the problem of minimizing the energy sum with variable radii and for the problem of minimizing the maximum energy with variable radii. We also show that the latter can be approximated within any additive constant ε > 0. We show that the problem of minimizing the energy sum with fixed radii cannot be approximated within a factor of O(n^c), for any constant c, unless P=NP. The problem of minimizing the maximum energy with fixed radii is shown to be strongly NP-hard. Additional results are given for three special cases: (i) sensors are stationary, (ii) free movement, and (iii) uniform fixed radii. Original language English Title of host publication Algorithms and Complexity - 9th International Conference, CIAC 2015, Proceedings Editors Peter Widmayer, Vangelis Th. Paschos Publisher Springer Verlag Pages 33-46 Number of pages 14 ISBN (Print) 9783319181721 State Published - 2015 Event 9th International Conference on Algorithms and Complexity, CIAC 2015 - Paris, France Duration: 20 May 2015 → 22 May 2015 Publication series Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) Volume 9079 Conference 9th International Conference on Algorithms and Complexity, CIAC 2015 Country/Territory France City Paris Period 20/05/15 → 22/05/15 All Science Journal Classification (ASJC) codes • Theoretical Computer Science • General Computer Science Dive into the research topics of '“Green” barrier coverage with mobile sensors'. Together they form a unique fingerprint.
{"url":"https://cris.iucc.ac.il/en/publications/green-barrier-coverage-with-mobile-sensors-3","timestamp":"2024-11-13T08:24:28Z","content_type":"text/html","content_length":"54909","record_id":"<urn:uuid:5103a94b-e5b2-4678-bea7-09e06ea648f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00121.warc.gz"}
ase SI units Base SI units metre, m The metre is the length of the path travelled by light in vacuum during a time interval of 1/299 792 458 of a second. kilogram, kg The kilogram is the unit of mass; it is equal to the mass of the international prototype of the kilogram. second, s The second is the duration of 9 192 631 770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium 133 atom. ampere, A The ampere is that constant current which, if maintained in two straight parallel conductors of infinite length, of negligible circular cross-section, and placed 1 m apart in vacuum, would produce between these conductors a force equal to 2 x 10–7 newton per metre of length. kelvin, K The kelvin, unit of thermodynamic temperature, is the fraction 1/273.16 of the thermodynamic temperature of the triple point of water. mole, mol The mole is the amount of substance of a system which contains as many elementary entities as there are atoms in 0.012 kilogram of carbon 12. When the mole is used, the elementary entities must be specified and may be atoms, molecules, ions, electrons, other particles, or specified groups of such particles. candela, cd The candela is the luminous intensity, in a given direction, of a source that emits monochromatic radiation of frequency 540 x 1012 hertz and that has a radiant intensity in that direction of 1/683 watt per steradian. All other SI units can be derived from these, by multiplying together different powers of the base units. You must be logged in to post a comment.
{"url":"http://www.mbstudent.com/physics/base-si-units/","timestamp":"2024-11-12T06:59:00Z","content_type":"application/xhtml+xml","content_length":"41233","record_id":"<urn:uuid:bbc45d7d-1181-4b43-ae65-41d9fa4b9185>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00045.warc.gz"}
Advent of Racket 2023/06 - Wait For It A really quick one today. The example input looks like this: Time: 7 15 30 Distance: 9 40 200 Every pair of rows represents a race, where the distance is the record distance so far. By pausing at the beginning of the race we gain one distance unit per time unit paused, but lose that time unit in the process. We're to determine the distinct durations we could have paused for during every race in order to beat the record distance, then multiply the results. We can read the races as a list of pairs: (define (read-integers in start) (map string->number (substring (read-line in) start)))) (define races (call-with-input-file "day06.txt" (lambda (in) (define times (read-integers in (string-length "Time:"))) (define distances (read-integers in (string-length "Distance:"))) (map cons times distances)))) And write a procedure to determine whether or not a given hold time would beat the record distance: (define (win? r hold-time) (match-define (cons race-time distance) r) (define travel-time (- race-time hold-time)) (> (* hold-time travel-time) distance)) Then all we have to do is count the number of times a race could be won within its allotted time: (define (winning-hold-times r) (for/sum ([i (in-range (add1 (car r)))] #:when (win? r i)) And multiply those counds for every race together: (for/fold ([res 1]) ([r (in-list races)]) (* res (winning-hold-times r))) For part two, we need to append all the race times and durations together into one long race. So, instead of interpreting our example input as three separate races, we need to interpret it as if it were written without any spaces: Time: 71530 Distance: 940200 Let's append the races together into our input for part two: (define one-race (let ([m (λ (n) (expt 10 (exact-ceiling (log n 10))))]) (for/fold ([t 0] [d 0] #:result (cons t d)) ([r (in-list races)]) (match-define (cons r-t r-d) r) (values (+ (* t (m r-t)) r-t) (+ (* d (m r-d)) r-d))))) Finally, we can just call our winning-hold-times procedure on the one-race value to find the solution for part two. The input is small enough to brute force in a couple hundred milliseconds. If the input for part two were larger, we could use a closed form solution. We've already expressed a race's solution as: x(t - x) > d Where x is the hold time required to beat the record d. We can expand that expression to: -x^2 + tx - d > 0 And we can solve for x using the quadratic formula and get all possible values of x for any given distance: (define (winning-hold-times* r) (match-define (cons t d) r) (define discriminant (sqrt (- (* t t) (* 4 d)))) (define hi (exact-ceiling (/ (+ t discriminant) 2))) (define lo (exact-floor (/ (- t discriminant) 2))) (- hi lo 1))
{"url":"https://defn.io/2023/12/06/advent-of-racket-2023-day-06/","timestamp":"2024-11-06T16:44:45Z","content_type":"text/html","content_length":"5681","record_id":"<urn:uuid:312ae4d2-1fc6-47ba-92f4-480dedcc6187>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00213.warc.gz"}
Amber Kuang - dummies Articles From Amber Kuang Filter Results 41 results Geometry Geometry: 1001 Practice Problems For Dummies Cheat Sheet Cheat Sheet / Updated 03-09-2022 Geometry is full of formulas, properties, and theorems. You can become successful in geometry by remembering the most important ones and learning how to apply them. Use this reference sheet as you practice various geometry problems to grow your knowledge and skills. View Cheat Sheet Geometry Geometry Practice Problems with Triangles and Polygons Article / Updated 03-26-2016 A polygon is a geometric figure that has at least three sides. The triangle is the most basic polygon. You will find the following formulas and properties useful when answering questions involving triangle inequalities, right triangles, relationships between the angles and sides of triangles, and interior and exterior angles of polygons. All triangles The sum of the three interior angles of a triangle is 180°. The largest side of a triangle is opposite the largest angle of the triangle. The sum of the two shorter sides of a triangle must be greater than the longest side of the triangle. The exterior angle of a triangle is equal to the sum of the two nonadjacent interior angles of the triangle. The centroid of a triangle divides each median of the triangle into segments with a 2:1 ratio. Right triangles The Pythagorean theorem states that a2 + b2 = c2, where a and b represent the legs of the right triangle and c represents the hypotenuse. When you draw an altitude to the hypotenuse of a right triangle, you form two right triangles that are similar to each other and also similar to the original right triangle. Because these triangles are similar, you can set up the following proportions: The altitude to the hypotenuse of a right triangle is the mean proportional between the two segments that the hypotenuse is divided into: The leg of a right triangle is the mean proportional between the hypotenuse and the projection of the leg on the hypotenuse: Here are the trigonometric ratios in a right triangle: Polygons The sum of the degree measure of the interior angles of a polygon equals 180(n – 2), where n represents the number of sides. The sum of the exterior angles of a polygon is 360°. The area of a regular polygon equals The apothem is the line segment from the center of the polygon to the midpoint of one of the sides. View Article Geometry Coordinate Geometry Formulas Article / Updated 03-26-2016 Coordinate geometry is the study of geometric figures graphed on a coordinate plane. The slope formula can be used to determine whether lines are parallel or perpendicular. The midpoint can be used to determine if segments are bisected and also can be used to find the center of a circle. The distance formula can be used to determine the lengths of sides of geometric figures. Distance formula: Midpoint formula: Slope formula: Slope-intercept form of a line: Point-slope form of a line: View Article Geometry Circle Basics for Geometry Problems Article / Updated 03-26-2016 To solve geometry problems about circles, you will need to know the following circle theorems involving tangents, secants, and chords. These theorems can be used to find information about angles, intercepted arcs, and length of segments of a circle. In addition, you find the standard and general form of a circle, the formulas for area and circumference, and the area of a sector of a circle. Circle formulas The circumference of a circle equals The area of a circle equals The area of a sector equals Standard form of a circle: General form of a circle: Circle theorems involving angles The central angle equals the intercepted arc. An inscribed angle equals The interior vertical angles formed by two intersecting chords equal An exterior angle equals A line tangent to a circle is perpendicular to the radius drawn to the point of tangency. Circle theorems involving lengths of segments When a tangent and secant are drawn from the same exterior point, When two secants are drawn from the same exterior point, View Article Geometry Formulas for Geometric Solids Problems Article / Updated 03-26-2016 Many formulas are associated with the study of three-dimensional shapes in geometry. Here, you find formulas for calculating the volume, surface area, and lateral area of cylinders, cones, spheres, pyramids, cube, and rectangular prisms. Cylinders The lateral area of a cylinder equals Cones The lateral area equals Spheres Square pyramids Cubes Rectangular prisms View Article Geometry Transformation Rules for Geometry Problems Article / Updated 03-26-2016 In coordinate geometry problems, there are special rules for certain types of transformations. To determine the image point when performing reflections, rotations, translations and dilations, use the following rules: Reflections: Rotations: Translations: Dilations: View Article Geometry Alternate Interior and Exterior Angles —Practice Geometry Questions Article / Updated 03-26-2016 When two parallel lines are intersected by a third line (called a transversal), congruent pairs of angles are formed, including alternate interior angles and alternate exterior angles. The following practice questions ask you to use this information to find a missing angle, and then to apply some algebra to calculate a missing variable. Practice questions Use the diagram and the given information to solve the problem. Parallel lines are cut by a transversal. If is represented by is represented by 4x – 45, find the value of x. Parallel lines are cut by a transversal. If are represented by respectively, find the degree measure of Answers and explanations 20 are alternate interior angles. Alternate interior angles are congruent, so set their measures equal to each other and solve for x: 140 degrees are alternate exterior angles. Alternate exterior angles are congruent, so set their measures equal to each other and solve for x: which means they add up to 180 degrees. Use this info to solve for View Article Geometry Angles That Form Linear Pairs—Practice Geometry Questions Article / Updated 03-26-2016 Angles that form a linear pair combine to form a straight angle. (A straight angle measures 180 degrees.) The following practice questions ask you to solve problems based on linear pairs. Practice questions In the following figure, at E. In the following questions, fill in the blank to make the statement true. If you know that are represented by 2a, 2a + b, and 3a – 20, respectively then b = ———? If you know that Answers and explanations 20 Start with the given information: form a linear pair, which means their sum is 180 degrees. Set up the following equation and solve for a: Plug in the value of a to find because they’re vertical angles. Set them equal to each other, plug in the value of a, and solve for b: 124 degrees Angles that form a linear pair add up to 180 degrees. Set the sum of equal to 180 and solve for x: Now plug in the value of x to solve for View Article Geometry Area of Regular Polygons — Practice Geometry Questions Article / Updated 03-26-2016 If you are asked to find the area of a regular polygon, you can do so by using a formula that includes the perimeter of the polygon and a measurement called the apothem. The apothem is the line segment from the center of the polygon to the midpoint of one of the sides, and is perpendicular to that side. The perimeter is the total distance around the polygon. The formula for the area of a regular polygon is Practice questions Find the area of a regular pentagon whose perimeter is 40 units and whose apothem is 5 units. Find the exact area of a regular hexagon that has a perimeter of 60 units. Answers and explanations 100 units2 The formula for the area of a regular polygon is The apothem is 5 and the perimeter is 40, so the area is The formula for the area of a regular polygon is A regular hexagon is a polygon with six equal sides. You're given that the perimeter of the hexagon is 60 units, which means each side is 10. The apothem is joined to the midpoint of one of the sides and is also perpendicular to the side, forming a The side opposite the 30-degree angle is x, the side opposite the 60-degree angle is and the side opposite the 90-degree angle is 2x. The apothem is opposite the 60-degree angle, so the apothem equals When you plug everything into the formula, you get View Article Geometry Congruent Angle Constructions — Practice Geometry Questions Article / Updated 03-26-2016 You can use your knowledge of geometric constructions (as well as a compass and straight edge) to create congruent angles. The following practice questions test your construction skills. If you're drawing two arcs for a construction, make sure you keep the width of the compass (or radii of the circles) consistent. Practice questions Use the above figure to construct an angle congruent to angle A. Use the above figure to construct a triangle congruent to triangle BCA. Answers and explanations 1.Here is the solution: Use a straight edge to draw a ray with endpoint D. Place the compass point on A, and with any width, draw an arc intersecting the angle at two points. Using the same width, place the compass point at D and make an arc. Using the compass, measure the distance between B and C. Keeping that compass width, place the compass point at E and draw an arc. Connect the point where the arcs intersect to D. 2.Here is the solution: Place the compass point at B and measure the length of Draw Point D on your paper. Keeping the length of place the compass point on D and draw an arc. Place a point on the arc and label it E. Use your compass to measure the length of Keeping that compass width, place the compass point at D and draw an arc where the third vertex would be located. Use your compass to measure the length of Keeping that compass width, place the compass point at E and draw an arc where the third vertex would be. Name the point where the arcs intersect F. Connect the three vertices of the triangle. View Article
{"url":"https://www.dummies.com/author/amber-kuang-8987/","timestamp":"2024-11-02T04:35:18Z","content_type":"text/html","content_length":"249377","record_id":"<urn:uuid:23d82116-6d3f-4177-881b-07bd4dc269e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00533.warc.gz"}
Null Hypothesis (2 of 4) It should be stressed that researchers very frequently put forward a null hypothesis in the hope that they can discredit it. For a second example, consider an educational researcher who designed a new way to teach a particular concept in science, and wanted to test experimentally whether this new method worked better than the existing method. The researcher would design an experiment comparing the two methods. Since the null hypothesis would be that there is no difference between the two methods, the researcher would be hoping to reject the null hypothesis and conclude that the method he or she developed is the better of the two. The symbol H is used to indicate the null hypothesis. For the example just given, the null hypothesis would be designated by the following symbols: : µ [1 ] - µ [2 ] = 0 or by : μ [1 ] = μ The null hypothesis is typically a hypothesis of no difference as in this example where it is the hypothesis of no difference between means. That is why the word "null" in "null hypothesis" is used -- it is the hypothesis of no difference.
{"url":"https://davidmlane.com/hyperstat/A73664.html","timestamp":"2024-11-07T00:53:03Z","content_type":"application/xhtml+xml","content_length":"4351","record_id":"<urn:uuid:7ef38a64-b04b-4ab2-aa72-5ed9504b7991>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00740.warc.gz"}
Solved Example Problems - Properties of Matter | Physics Physics : Properties of Matter Solved Example Problems EXAMPLE 7.1 Within the elastic limit, the stretching strain produced in wires A, B, and C due to stress is shown in the figure. Assume the load applied are the same and discuss the elastic property of the Write down the elastic modulus in ascending order. Here, the elastic modulus is Young modulus and due to stretching, stress is tensile stress and strain is tensile strain. Within the elastic limit, stress is proportional to strain (obey Hooke’s law). Therefore, it shows a straight line behavior. So, the modulus of elasticity (here, Young modulus) can be computed by taking slope from this straight line. Hence, calculating the slope for the straight line, we get Slope of A > Slope of B > Slope of C Which implies, Young modulus of C < Young modulus of B < Young modulus of A Notice that larger the slope, lesser the strain (fractional change in length). So, the material is much stiffer. Hence, the elasticity of wire A is greater than wire B which is greater than C. From this example, we have understood that Young’s modulus measures the resistance of solid to a change in its length. EXAMPLE 7.2 A wire 10 m long has a cross-sectional area 1.25 x 10-4 m2. It is subjected to a load of 5 kg. If Young’s modulus of the material is 4 x 1010 Nm-2, calculate the elongation produced in the wire. Take g = 10 ms-2. EXAMPLE 7.3 A metallic cube of side 100 cm is subjected to a uniform force acting normal to the whole surface of the cube. The pressure is 106 pascal. If the volume changes by 1.5 x 10-5 m3, calculate the bulk modulus of the material. EXAMPLE 7.4 A metal cube of side 0.20 m is subjected to a shearing force of 4000 N. The top surface is displaced through 0.50 cm with respect to the bottom. Calculate the shear modulus of elasticity of the Here, L = 0.20 m, F = 4000 N, x = 0.50 cm = 0.005 m and Area A = L2 = 0.04 m2 Therefore, Shear modulus EXAMPLE 7.5 A wire of length 2 m with the area of cross-section 10-6m2 is used to suspend a load of 980 N. Calculate i) the stress developed in the wire ii) the strain and iii) the energy stored. Given: Y = 12 × 1010Nm−2. EXAMPLE 7.6 A solid sphere has a radius of 1.5 cm and a mass of 0.038 kg. Calculate the specific gravity or relative density of the sphere. Radius of the sphere R = 1.5 cm mass m = 0.038 kg Solved Example Problems for Pascal’s law EXAMPLE 7.7 Two pistons of a hydraulic lift have diameters of 60 cm and 5 cm. What is the force exerted by the larger piston when 50 N is placed on the smaller piston? Since, the diameter of the pistons are given, we can calculate the radius of the piston This means, with the force of 50 N, the force of 7200 N can be lifted. Solved Example Problems for Buoyancy EXAMPLE 7.8 A cube of wood floating in water supports a 300 g mass at the centre of its top face. When the mass is removed, the cube rises by 3 cm. Determine the volume of the cube. Let each side of the cube be l. The volume occupied by 3 cm depth of cube, V=(3cm) × l2 = 3l2cm According to the principle of floatation, we have Vρg = mg ⇒ Vρ = m ρ is density of water = 1000 kg m-3 (3l2 × 10-2m) × (1000 kgm-3)=300 × 10-3kg l = 10 × 10-2m = 10 cm Therefore, volume of cube V = l3 = 1000 cm3 EXAMPLE 7.9 A metal plate of area 2.5×10-4m2 is placed on a 0.25×10-3m thick layer of castor oil. If a force of 2.5 N is needed to move the plate with a velocity 3×10-2m s-1, calculate the coefficient of viscosity of castor oil. Given: A=2.5×10-4 m2, dx = 0.25×10-3m, F=2.5N and dv = 3×10-2 m s-1 EXAMPLE 7.10 Let 2 .4×10−4 J of work is done to increase the area of a film of soap bubble from 50 cm2 to 100 cm2. Calculate the value of surface tension of soap solution. A soap bubble has two free surfaces, therefore increase in surface area ∆A = A2−A1 = 2(100-50) × 10-4m2 = 100 × 10-4m2. Since, work done W = T ×ΔA ⇒T = EXAMPLE 7.11 If excess pressure is balanced by a column of oil (with specific gravity 0.8) 4mm high, where R = 2.0cm, find the surface tension of the soap bubble. The excess of pressure inside the soap bubble is T = 15.68 ×10− 2 N m−1 EXAMPLE 7.12 Water rises in a capillary tube to a height of 2.0cm. How much will the water rise through another capillary tube whose radius is one-third of the first tube? From equation (7.34), we have h ∝ 1/r ⇒hr =constant Consider two capillary tubes with radius r1 and r2 which on placing in a liquid, capillary rises to height h1 and h2, respectively. Then, EXAMPLE 7.13 Mercury has an angle of contact equal to 140° with soda lime glass. A narrow tube of radius 2mm, made of this glass is dipped in a trough containing mercury. By what amount does the mercury dip down in the tube relative to the liquid surface outside?. Surface tension of mercury T=0.456 Nm-1; Density of mercury ρ = 13.6 × 10^3 kg m^-3 Capillary descent, where, negative sign indicates that there is fall of mercury (mercury is depressed) in glass tube. EXAMPLE 7.14 In a normal adult, the average speed of the blood through the aorta (radius r = 0.8 cm) is 0.33 ms-1. From the aorta, the blood goes into major arteries, which are 30 in number, each of radius 0.4 cm . Calculate the speed of the blood through the arteries. a1v1 = 30 a2 v2 ⇒ π r12v1 = 30 π r22v2 Properties of Matter | Physics Numerical Problems 1. A capillary of diameter dmm is dipped in water such that the water rises to a height of 30mm. If the radius of the capillary is made (2/3) of its previous value, then compute the height up to which water will rise in the new capillary? (Answer: 45 mm) 2. A cylinder of length 1.5m and diameter 4 cm is fixed at one end. A tangential force of 4 × 105 N is applied at the other end. If the rigidity modulus of the cylinder is 6 × 1010 Nm-2 then, calculate the twist produced in the cylinder. Length of a cylinder = 1.5 m Diameter = 4 cm; Tangential force F - 4 × 105N Rigidity modulus η = 6 × 1010 Nm-2 Twist produced θ = ? θ = 0.053 × 10-1 = 53 × 10-4 (Answer: 45.60) 3. A spherical soap bubble A of radius 2 cm is formed inside another bubble B of radius 4 cm. Show that the radius of a single soap bubble which maintains the same pressure difference as inside the smaller and outside the larger soap bubble is lesser than radius of both soap bubbles A and B. Excess pressure create with S.T of spherical surface of the liquid = ΔP = 2T/R T - surface tension In case of soap bubbles, The excess pressure of air inside them is double due to the presence of two interfaces are inside and one outside. Excess pressure of air inside the bigger bubble Excess pressure of air inside the smaller bubble 4s 4T Air pressure difference between the smaller bubble and the atmosphere will be equal toll sum of excess pressure inside the bigger smaller bubbles. Pressure different ΔP = ΔPb + ΔPs = T + 2T = 3T Excess pressure inside a single soap bubble = 4T/R = 4T/4 = T Pressure difference of single soap bubble less than radius of both T < 3T 4. A block of Ag of mass x kg hanging from a string is immersed in a liquid of relative density 0.72. If the relative density of Ag is 10 and tension in the string is 37.12 N then compute the mass of Ag block. Relative density of liquid ρliquid = 0.72 Relative density of Ag ρAg = 10 Mass of the Ag block = ? Tension in the string T = 37.12 N Apparent weight Wapp = pAg - pliquid = 10-0.72 = 9.28 m = TAg / g m = 37.12 / 9.28 = 4 Mass x = 4 kg (Answer: x = 4 kg) 5. The reading of pressure meter attached with a closed pipe is 5 × 105 Nm-2. On opening the valve of the pipe, the reading of the pressure meter is 4.5 × 105 Nm-2. Calculate the speed of the water flowing in the pipe. (Answer: 10 ms-1)
{"url":"https://www.brainkart.com/article/Solved-Example-Problems_42574/","timestamp":"2024-11-02T13:50:48Z","content_type":"text/html","content_length":"100584","record_id":"<urn:uuid:694b66a4-83ab-4620-9b8e-77143c9cf8db>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00017.warc.gz"}
World Famous Mathematician MCQs with Answers - Youth For Pakistan Welcome to the World Famous Mathematician MCQs with Answers. In this post, we are sharing World Famous Mathematician Multiple Choice Questions and Answers in World General Knowledge section for various competitive exams in Pakistan. Find practice World Famous Mathematician practice test with answers here. Each question offers a chance to enhance your knowledge regarding World Famous Mathematician MCQs Online Test. Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton This Indian mathematician made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function. Who is he? A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I Which mathematician developed the fundamental theorems of calculus and is considered one of the most influential figures in the history of mathematics? A) Isaac Newton B) Albert Einstein C) Carl Friedrich Gauss D) Leonhard Euler Who is the French mathematician known for Fermat’s Last Theorem and contributions to number theory? A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat This Greek mathematician is often called the “Father of Mathematics” and made significant contributions to various fields, including mathematics, physics, and engineering. Who is he? A) Pythagoras B) Euclid C) Archimedes D) Aristotle Who is known for the famous equation “E=mc²” and is celebrated for his theory of relativity? A) Blaise Pascal B) Galileo Galilei C) Albert Einstein D) Max Planck C) Albert Einstein This French mathematician is known for the Pascal’s Triangle and his contributions to probability theory. Who is he? A) Blaise Pascal B) René Descartes C) Évariste Galois D) Pierre-Simon Laplace Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler This Persian mathematician and scholar made significant contributions to algebra and is known for the book “The Compendious Book on Calculation by Completion and Balancing.” Who is he? A) Al-Khwarizmi B) Ibn al-Haytham C) Al-Kindi D) Al-Biruni Who is the Italian mathematician and astronomer known for his work on the heliocentric model of the solar system? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. Who is he? A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who is the German mathematician known for the prime number theorem and the Riemann hypothesis? A) Georg Cantor B) David Hilbert C) Bernhard Riemann D) Carl Friedrich Gauss C) Bernhard Riemann Which Greek mathematician, known for his work on the Pythagorean theorem, is considered one of the most influential figures in the history of mathematics? A) Euclid B) Pythagoras C) Archimedes D) Euclid Who is known for his contributions to number theory, modular forms, and elliptic curves, and his famous “Last Theorem” remained unsolved for centuries until it was proven in the 1990s? A) Pierre-Simon Laplace B) Andrew Wiles C) Carl Friedrich Gauss D) Leonhard Euler This French mathematician and physicist is known for the discovery of the wave nature of light and his principle of least action. Who is he? A) René Descartes B) Blaise Pascal C) Pierre-Simon Laplace D) Évariste Galois Who is the French mathematician known for the Laplace transform and significant contributions to celestial mechanics? A) René Descartes B) Blaise Pascal C) Pierre-Simon Laplace D) Évariste Galois C) Pierre-Simon Laplace This German mathematician is known for his work on infinite sets, including the concept of different sizes of infinity. Who is he? A) Georg Cantor B) David Hilbert C) Karl Weierstrass D) Carl Friedrich Gauss Who is known for the famous equation “E=mc²” and is celebrated for his theory of relativity? A) Blaise Pascal B) Galileo Galilei C) Albert Einstein D) Max Planck C) Albert Einstein This French mathematician is known for Fermat’s Last Theorem and contributions to number theory. A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is known for his significant contributions to the field of probability and is famous for the “Pascal’s Wager”? A) Blaise Pascal B) René Descartes C) Évariste Galois D) Pierre-Simon Laplace This Greek mathematician is often called the “Father of Mathematics” and made significant contributions to various fields, including mathematics, physics, and engineering. A) Pythagoras B) Euclid C) Archimedes D) Aristotle Who is known for the famous equation “E=mc²” and is celebrated for his theory of relativity? A) Blaise Pascal B) Galileo Galilei C) Albert Einstein D) Max Planck C) Albert Einstein This Indian mathematician made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function. A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I Which mathematician developed the fundamental theorems of calculus and is considered one of the most influential figures in the history of mathematics? A) Isaac Newton B) Albert Einstein C) Carl Friedrich Gauss D) Leonhard Euler Who is the French mathematician known for Fermat’s Last Theorem and contributions to number theory? A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat This Greek mathematician is often called the “Father of Geometry.” A) Euclid B) Pythagoras C) Archimedes D) Newton Who is the Italian mathematician and astronomer known for his work on the heliocentric model of the solar system? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler This Persian mathematician and scholar made significant contributions to algebra and is known for the book “The Compendious Book on Calculation by Completion and Balancing.” A) Al-Khwarizmi B) Ibn al-Haytham C) Al-Kindi D) Al-Biruni Who is known for his contributions to number theory, modular forms, and elliptic curves, and his famous “Last Theorem” remained unsolved for centuries until it was proven in the 1990s? A) Pierre-Simon Laplace B) Andrew Wiles C) Carl Friedrich Gauss D) Leonhard Euler The German mathematician known for the prime number theorem and the Riemann hypothesis is: A) Georg Cantor B) David Hilbert C) Bernhard Riemann D) Carl Friedrich Gauss C) Bernhard Riemann This French mathematician and physicist is known for the discovery of the wave nature of light and his principle of least action. A) René Descartes B) Blaise Pascal C) Pierre-Simon Laplace D) Évariste Galois Who is the French mathematician known for the Laplace transform and significant contributions to celestial mechanics? A) René Descartes B) Blaise Pascal C) Pierre-Simon Laplace D) Évariste Galois C) Pierre-Simon Laplace This Greek mathematician is often called the “Father of Mathematics.” A) Pythagoras B) Euclid C) Archimedes D) Aristotle Who is known for his significant contributions to the field of probability and is famous for the “Pascal’s Wager”? A) Blaise Pascal B) René Descartes C) Évariste Galois D) Pierre-Simon Laplace The Indian mathematician who made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function is: A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I This Greek mathematician is often called the “Father of Geometry.” A) Euclid B) Pythagoras C) Archimedes D) Newton Who is known for the famous equation “E=mc²” and is celebrated for his theory of relativity? A) Blaise Pascal B) Galileo Galilei C) Albert Einstein D) Max Planck C) Albert Einstein The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler The French mathematician known for Fermat’s Last Theorem and contributions to number theory is: A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function. Who is he? A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I C) Ramanujan Which mathematician developed the fundamental theorems of calculus and is considered one of the most influential figures in the history of mathematics? A) Isaac Newton B) Albert Einstein C) Carl Friedrich Gauss D) Leonhard Euler Who is the French mathematician known for Fermat’s Last Theorem and contributions to number theory? A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat This Greek mathematician is often called the “Father of Mathematics” and made significant contributions to various fields, including mathematics, physics, and engineering. Who is he? A) Pythagoras B) Euclid C) Archimedes D) Aristotle Who is known for his significant contributions to the field of probability and is famous for the “Pascal’s Wager”? A) Blaise Pascal B) René Descartes C) Évariste Galois D) Pierre-Simon Laplace The German mathematician known for the prime number theorem and the Riemann hypothesis is: A) Georg Cantor B) David Hilbert C) Bernhard Riemann D) Carl Friedrich Gauss C) Bernhard Riemann Who is known for his contributions to number theory, modular forms, and elliptic curves, and his famous “Last Theorem” remained unsolved for centuries until it was proven in the 1990s? A) Pierre-Simon Laplace B) Andrew Wiles C) Carl Friedrich Gauss D) Leonhard Euler The French mathematician known for Fermat’s Last Theorem and contributions to number theory is: A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler This Persian mathematician and scholar made significant contributions to algebra and is known for the book “The Compendious Book on Calculation by Completion and Balancing.” Who is he? A) Al-Khwarizmi B) Ibn al-Haytham C) Al-Kindi D) Al-Biruni Who is known for the famous equation “E=mc²” and is celebrated for his theory of relativity? A) Blaise Pascal B) Galileo Galilei C) Albert Einstein D) Max Planck C) Albert Einstein The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler The French mathematician known for Fermat’s Last Theorem and contributions to number theory is: A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function. Who is he? A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I Which mathematician developed the fundamental theorems of calculus and is considered one of the most influential figures in the history of mathematics? A) Isaac Newton B) Albert Einstein C) Carl Friedrich Gauss D) Leonhard Euler Who is known for his significant contributions to the field of probability and is famous for the “Pascal’s Wager”? A) Blaise Pascal B) René Descartes C) Évariste Galois D) Pierre-Simon Laplace The German mathematician known for the prime number theorem and the Riemann hypothesis is: A) Georg Cantor B) David Hilbert C) Bernhard Riemann D) Carl Friedrich Gauss C) Bernhard Riemann Who is known for his contributions to number theory, modular forms, and elliptic curves, and his famous “Last Theorem” remained unsolved for centuries until it was proven in the 1990s? A) Pierre-Simon Laplace B) Andrew Wiles C) Carl Friedrich Gauss D) Leonhard Euler The French mathematician known for Fermat’s Last Theorem and contributions to number theory is: A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician, who lived in the 6th century, is known for his work on algebra and for writing the first text to use symbolic algebra. A) Aryabhata B) Brahmagupta C) Bhaskara I D) Ramanujan Who formulated the laws of planetary motion and is considered a key figure in the scientific revolution? A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus A) Johannes Kepler The French mathematician known for Fermat’s Last Theorem and contributions to number theory is: A) René Descartes B) Pierre-Simon Laplace C) Évariste Galois D) Pierre de Fermat D) Pierre de Fermat Who is often referred to as the “Father of Geometry”? A) Euclid B) Pythagoras C) Archimedes D) Newton The Italian mathematician and astronomer known for his work on the heliocentric model of the solar system is: A) Johannes Kepler B) Isaac Newton C) Galileo Galilei D) Nicolaus Copernicus D) Nicolaus Copernicus This Indian mathematician made significant contributions to number theory and is known for the Ramanujan prime and the Ramanujan theta function. Who is he? A) Aryabhata B) Brahmagupta C) Ramanujan D) Bhaskara I Which mathematician developed the fundamental theorems of calculus and is considered one of the most influential figures in the history of mathematics? A) Isaac Newton B) Albert Einstein C) Carl Friedrich Gauss D) Leonhard Euler If you are interested to enhance your knowledge regarding Physics, Chemistry, Computer, and Biology please click on the link of each category, you will be redirected to dedicated website for each Was this article helpful?
{"url":"https://youthforpakistan.org/world-famous-mathematician-mcqs/","timestamp":"2024-11-02T01:46:59Z","content_type":"text/html","content_length":"258053","record_id":"<urn:uuid:e8e33616-390e-4055-8e03-eb71454c6d69>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00598.warc.gz"}
Triangle tips and inspiration So, this triangle quilt came about because when I was at the Ann Arbor Modern Quilt Guild retreat this summer, finishing up my scrappy trip along , basting my drunkard's daisy quilt , and beginning to piece the Lotta Jansdotter quilt was piecing this In the meantime, I have noticed many amazing equilateral triangle quilts appear on blogs and on instagram (Emily made another, too, and it's only been, what? 6 weeks since the retreat?) In case it's becoming a thing, I am going to present my thoughts on it here. The triangles shrink left to right when you sew them together, but only lose 1/4" height top to bottom. And when you join rows (losing 3/4" total). Mine are 7.5" high (which makes the sides 8 2/3" each - based on geometry), and I went with 18/row and 13 rows - which is a little over 6 feet wide, and 7.3 feet tall. How many should you cut? Well, what kind of quilt do you want? How about an equation? quilt length = row # x (triangle height - 0.75) -Or, if you prefer not to do the algebra- triangle height = quilt length/row # + 0.75 (The 0.75 is your seam allowances - 0.25" lost in row piecing + 0.5" lost in row joining.) So, if you wanted a baby quilt, which to me is 3' x 4', and thought 8 rows would be cute, you would write down: triangle height = 48"/8 + 0.75 = 6.75" Half an equilateral triangle 6.75" high is a right triangle, and you can solve for the long side using geometry: side length = 2*height / √ 3 in our example: side length = 2 * 6.75" / 1.732 = 7.8" So, to find out how many triangle in a row, you actually have to figure out how many pairs, because the upper half of a triangle is tiny, while its base is huge, but they average out if you pair number of triangle pairs = quilt width / (triangle side length - 0.5) (The 0.5 is your seam allowances, again.) number of triangle pairs = 36" / (7.8"-.5") .... roughly 5 pairs, so about 10 triangles/ row, or 80 triangles for this 8 row quilt. *alternately, if you want to solve for the number of triangles, you can use: number of triangles = 2 * quilt width/ (triangle side length - 0.5) Cutting: I decided to use patterns from some of my Spoonflower fabrics for the demos. Cut your height first! And then use the 60 degree guide on your ruler to cut the triangles (if you need more details, Faith from Fresh Lemons has a great tutorial on this). In a 22" wide cut, you can get 3-4 7.5" high triangles - 18", only three, and 7 in 44" one. (At least that was my experience.) You want to have your triangle height running along the grain of the fabric, either the crosswise or the lengthwise grain is fine - depending either on convenience or on the pattern of the fabric. But the bottom side should be aligned on the grain, because, well, it is a triangle, and triangle edges are notoriously stretchy except where they run along the grain. You'll notice that I didn't start my triangle cuts right at the edge, but left a margin. That is so I can save those pieces for row ends. However, you need not do this for every fabric you cut - with 60 fabrics, I could have up to 120 of these end caps, but with only 13 rows, I only need 26. (Although, one must note that unless you are using only solids, they are chiral; not every end cap will fit on every triangle. And it becomes even worse if the fabric is directional.) step 1: make sure the grain is going in the right direction! The first blocks are easy, just line up the points and sew. Press the seams, then pick up another triangle. You may be thinking that this is a good time to chain piece- well, don't. Your best chance to line up triangles (unless you have clipped all of them) is to leave a corner free. On that note, I found it helpful not to clip dog ears, too. Line up your new triangle with free corner and the folded under dog ear. Although I did chain piece them in a fashion - I started all the rows at once! I sewed 13 triangles to 13 rows, then pressed them open, then started over. And when the row is finished, I still don't trim the dog ears, but use them for a guide to line the row up with its neighbor: These seams do get very bulky, though. This is a fairly quick and dirty way of powering through triangle piecing. If you want a more elegant approach, I would suggest this tutorial from Adrianne at On the Windy Side
{"url":"http://www.blotchandthrum.com/2013/08/triangle-tips-and-inspiration.html","timestamp":"2024-11-14T23:50:49Z","content_type":"application/xhtml+xml","content_length":"79045","record_id":"<urn:uuid:648c95d9-ac5b-4ff1-936c-bd454652cb0c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00126.warc.gz"}
Two perspectives on high-dimensional estimation problems: posterior contraction and median-of-means This thesis investigates Bayesian and frequentist procedures for challenging high-dimensional estimation problems. In a Gaussian sequence model, we study the Bayesian approach to estimate the common variance of the observations. A fraction of the means is known to be zero, whereas the non-zero means are treated as nuisance parameters. This model is non-standard in the sense that it induces inconsistent maximum likelihood. We show a general inconsistency result: the posterior distribution does not contract around the true variance as long as the nuisance parameters are drawn from an i.i.d. proper distribution. We also show that consistency is retained by a hierarchical Gaussian mixture prior. For the latter, we recover the asymptotic shape of the posterior in the Bernstein-von Mises sense and show it is non-Gaussian in the case of small means. In the nonparametric regression model, we study the Bayesian approach to the estimation of a regression function that is characterized by some underlying composition structure, parametrized by a graph and a smoothness index. This model is inspired by deep learning methods, which work well when complex objects have to be built from simpler features. In previous work, a frequentist estimator based on deep neural networks has been shown to be adaptive with respect to the underlying structure and achieve minimax estimation rates. We characterize the contraction rates of the posterior distribution arising from priors induced by the composition of Gaussian processes. With a suitable model selection prior, we show that the posterior achieves the minimax rates of estimation. In the nonparametric least-squares regression model, we study a frequentist approach to estimate the regression function and the standard deviation of the residuals. The dataset consists of i.i.d. observations contaminated by a small number of outliers, and heavy-tailed residuals. For the case of known standard deviation, robust median-of-means procedures are available, and we extend them to the case of unknown standard deviation. In the sparse linear regression case, the median-of-means estimator yields a robust version of the Lasso, whereas our method yields a robust version of the square-root Lasso thanks to a scale-invariance argument. We also provide an aggregated estimator achieving minimax convergence rates while being adaptive to the unknown sparsity level. Original language English Qualification Doctor of Philosophy Awarding Institution • University of Twente • Schmidt-Hieber, Anselm Johannes, Supervisor Supervisors/Advisors • Proksch, Katharina, Co-Supervisor • Derumigny, A., Co-Supervisor Award date 13 Oct 2021 Place of Publication Enschede Publisher • University of Twente Print ISBNs 978-90-365-5235-6 Publication status Published - 13 Oct 2021 Dive into the research topics of 'Two perspectives on high-dimensional estimation problems: posterior contraction and median-of-means'. Together they form a unique fingerprint.
{"url":"https://research.utwente.nl/en/publications/two-perspectives-on-high-dimensional-estimation-problems-posterio","timestamp":"2024-11-03T00:09:16Z","content_type":"text/html","content_length":"59142","record_id":"<urn:uuid:d3893972-e445-4837-8e52-c30aa86f1b85>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00248.warc.gz"}
Compare And Contrast Michael Armstrong's Total Reward Model WithTower Perrin's Total Reward Model. Conclude Michael Armstrong's total reward model and Tower Perrin's total reward model are both effective in terms of employee retention, attraction, and motivation. However, both of them are distinct models with different strategies, features, and characteristics. In this answer, I will compare and contrast both models to provide you with an overview of both models and which one is more effective for employee retention, attraction, and motivation. Michael Armstrong's Total Reward Model: The total reward model proposed by Michael Armstrong aims to bring together various monetary and non-monetary rewards, including benefits and opportunities to develop and advance in the organization. This model suggests that organizations should develop a reward system that comprises different elements that support the organization's goals, values, and strategies. The key components of Michael Armstrong's total reward model Tower Perrin's Total Reward Model. The total reward model proposed by Tower Perrin is also focused on integrating monetary and non-monetary rewards and benefits. This model suggests that organizations should provide a comprehensive rewards package that includes different elements, such as a competitive salary, benefits, and recognition. The key components of Tower Perrin's total reward model are: Compensation, Benefits Work-life balance, Career development opportunities, Performance and recognition; Effective communication. Comparison between Michael Armstrong's and Tower Perrin's Total Reward Models Both models are effective in attracting, retaining, and motivating employees. However, Armstrong's model is focused on developing a comprehensive reward system that combines different rewards elements, while Tower Perrin's model emphasizes creating an effective communication strategy to ensure employees understand their rewards package. Armstrong's model is more suitable for organizations that have a structured approach towards rewards and want to develop a comprehensive and integrated system that aligns with their overall objectives. On the other hand, Tower Perrin's model is more suitable for organizations that want to communicate their rewards package more effectively to employees to ensure they understand their value to the company. In conclusion, both models are effective in their ways, and the choice between them depends on the organization's requirements. However, Armstrong's model may be more effective in employee retention, attraction, and motivation as it provides a comprehensive and integrated reward system that aligns with the organization's objectives. To know more about employee visit: You can buy 833 shares of XYZ stock. The lowest price that XYZ can reach one year from now so that your margin is still above the required maintenance margin is $0.01 (rounded to the nearest cent). Stock is currently traded at $20 per share and you have $10,000 cash available. What is the maximum amount of XYZ stock you can buy? Margin requirement = 60% Therefore, you can borrow the remaining 40% of the price of the stock from the broker to purchase more shares. To find how many shares of XYZ stock you can buy, let's first find the total cost of purchasing it. Number of shares of XYZ stock you can buy = [(Amount available for investment) + (Amount borrowed)] / Price per share Amount available for investment = $10,000 Amount borrowed = 40% of total cost = 0.40 × (Price per share × Number of shares) Total cost = (Amount available for investment) + (Amount borrowed) Price per share = $20 per share Number of shares = ? We can plug in the given data to the formula above: $(10,000 + 0.4(20 × n))/20 = n$ Simplifying this equation:$(10,000 + 8n)/20 = n$ Multiplying both sides by 20: $10,000 + 8n = 20n$ Subtracting 8n from both sides: $10,000 = 12n$ Dividing both sides by 12, we get: $n ≈ 833$ So, you can buy 833 shares of XYZ stock. Now let's move to the next part. Suppose you purchased the maximum amount of XYZ stock that you are allowed to in question 7. What is the lowest price that XYZ can reach one year from now so that your margin is still above the required maintenance margin? We can use the formula for Maintenance Margin: 35% (in this case) × (Price per share - Loan value per share) / Price per share Margin call = (Amount borrowed) – (Market value of stock) Suppose the lowest price that XYZ can reach one year from now be P, and the current price per share is $20 per share. The market value of the stock will be P × 833 = 833P The loan value per share is (1 – Margin requirement) × P = 0.4P The amount borrowed will be (0.4P) × 833 = 333.2P When the margin call occurs, you will receive a margin call of: 0.35 × (P - 0.4P) / P × 833 = 0.35 × 0.6 / P × 833 = 0.21 / P × 833 This margin call must be less than or equal to the amount you borrowed:0.21 / P × 833 ≤ 333.2P Simplifying this inequality: 0.000252 / P ≤ 333.2P Dividing both sides by P:0.000252 ≤ 333.2P²Solving for P, we get:P = √(0.000252 / 333.2) ≈ $0.006 So, the lowest price that XYZ can reach one year from now so that your margin is still above the required maintenance margin is $0.01 (rounded to the nearest cent). You can buy 833 shares of XYZ stock. Suppose the lowest price that XYZ can reach one year from now be P, and the current price per share is $20 per share. The market value of the stock will be P × 833 = 833P. The loan value per share is (1 – Margin requirement) × P = 0.4P. The amount borrowed will be (0.4P) × 833 = 333.2P. When the margin call occurs, you will receive a margin call of: 0.21 / P × 833. This margin call must be less than or equal to the amount you borrowed: 0.21 / P × 833 ≤ 333.2P. Solving for P, we get: P = √(0.000252 / 333.2) ≈ $0.006. Hence, the lowest price that XYZ can reach one year from now so that your margin is still above the required maintenance margin is $0.01 (rounded to the nearest cent). Learn more about a margin call: https://brainly.com/question/903996
{"url":"https://community.carbonfields.net/question-handbook/compare-and-contrast-michael-armstrongs-total-reward-model-w-rwyo","timestamp":"2024-11-13T02:02:03Z","content_type":"text/html","content_length":"129585","record_id":"<urn:uuid:9014aa8e-4090-4881-af80-5e141ebe3b7c>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00195.warc.gz"}
Markov Decision Process A Markov Decision Process (MDP) is a mathematical framework for modeling decision-making in situations where outcomes are partly random and partly under the control of a decision-maker. It provides a formalism for modeling the environment in reinforcement learning. An MDP is defined by: 1. States ((S)): A finite set of states in the environment. 2. Actions ((A)): A finite set of actions available to the agent. 3. Transition Function ((P)): The probability ( P(s’, r | s, a) ) of moving from state ( s ) to state ( s’ ) and receiving reward ( r ) after taking action ( a ). 4. Reward Function ((R)): The immediate reward received after transitioning from state ( s ) to state ( s’ ) due to action ( a ). 5. Policy ((\pi)): A strategy that specifies the action ( \pi(a|s) ) to take in each state ( s ). The goal in an MDP is to find a policy that maximizes the expected sum of rewards over time, often discounted by a factor ( \gamma ) (discount factor) to account for the preference for immediate rewards over future rewards. Key Concepts: • Value Function: Measures the expected return (sum of rewards) from a given state under a specific policy. • State Value Function ((V(s))): Expected return starting from state ( s ) and following policy ( \pi ). • Action Value Function ((Q(s, a))): Expected return starting from state ( s ), taking action ( a ), and then following policy ( \pi ). • Bellman Equations: Fundamental recursive relationships that express the value of a state or state-action pair in terms of the values of successor states. MDPs are widely used in various fields, including robotics, economics, and artificial intelligence, for solving problems where decision-making is essential under uncertainty and dynamic conditions. this is probebility destribution on outcoms that we can get there are uncertaintly in around us در کل در باره. ی.عدم قطعیت صحبت میکند که چگونه باعث تصمیمات ما میشود that was about decidions that by In a Markov Decision Process (MDP), decisions are indeed based on uncertainty. The decision-making process in an MDP involves dealing with randomness in both transitions between states and the rewards received this is that reward that we get base on that randomless we have some actions in that hear we wana maximize this reward Certainly! I’ll walk you through the steps of the provided MDP implementation, explaining each part in detail. 1. Initialization First, the MDP class is initialized with the states, actions, transition probabilities, rewards, and a discount factor (gamma). import numpy as np class MDP: def __init__(self, states, actions, transition_prob, rewards, gamma=0.9): self.states = states self.actions = actions self.transition_prob = transition_prob self.rewards = rewards self.gamma = gamma self.V = np.zeros(len(states)) • states: A list of states. • actions: A list of actions. • transition_prob: A 3D list where transition_prob[s][a][s_prime] is the probability of transitioning from state s to state s_prime given action a. • rewards: A 3D list where rewards[s][a][s_prime] is the reward for transitioning from state s to state s_prime given action a. • gamma: The discount factor for future rewards. • V: An array to store the value function for each state. 2. Value Iteration The value_iteration method updates the value function V until it converges to within a specified threshold (epsilon). def value_iteration(self, epsilon=0.01): while True: delta = 0 for s in range(len(self.states)): v = self.V[s] self.V[s] = max([sum([self.transition_prob[s][a][s_prime] * (self.rewards[s][a][s_prime] + self.gamma * self.V[s_prime]) for s_prime in range(len(self.states))]) for a in range(len(self.actions))]) delta = max(delta, abs(v - self.V[s])) if delta < epsilon: • delta: Tracks the maximum change in value function for any state in each iteration. • For each state s, calculate the value function for all possible actions and update V[s] with the maximum value. • The nested loop calculates the expected value for each action a by summing the product of the transition probability, immediate reward, and the discounted value of the next state. • The process continues until the change in value (delta) is less than epsilon. 3. Extracting the Policy The get_policy method derives the optimal policy from the computed value function V. def get_policy(self): policy = np.zeros(len(self.states), dtype=int) for s in range(len(self.states)): policy[s] = np.argmax([sum([self.transition_prob[s][a][s_prime] * (self.rewards[s][a][s_prime] + self.gamma * self.V[s_prime]) for s_prime in range(len(self.states))]) for a in range(len(self.actions))]) return policy • For each state s, find the action a that maximizes the expected value. • np.argmax returns the action with the highest value for each state. 4. Define the MDP Components Define the states, actions, transition probabilities, and rewards. states = [0, 1, 2] actions = [0, 1] transition_prob = [ [[0.8, 0.2, 0.0], [0.0, 0.9, 0.1]], [[0.9, 0.1, 0.0], [0.0, 0.8, 0.2]], [[0.7, 0.3, 0.0], [0.0, 0.6, 0.4]] rewards = [ [[10, 0, 0], [0, 5, 5]], [[0, 2, 0], [0, 5, 10]], [[1, 0, 0], [0, 4, 7]] 5. Initialize and Solve the MDP Create an instance of the MDP class, perform value iteration, and extract the policy. # Initialize MDP mdp = MDP(states, actions, transition_prob, rewards) # Perform value iteration # Extract policy policy = mdp.get_policy() print("Optimal Value Function:", mdp.V) print("Optimal Policy:", policy) When you run the code, it will perform value iteration to find the optimal value function and then derive the optimal policy. The Optimal Value Function and Optimal Policy will be printed as the Optimal Value Function: [Expected values for each state] Optimal Policy: [Optimal actions for each state] This step-by-step approach allows you to understand how this is that how we can get that solutions by that polocy we can get that solutions just we need to define this policys این که کجا برویم و چه تصمیمی در هر لححضه بگیریممشخص میشود value is expected utility مقدار ولیو در هر لحظه ثابت میماند اما یوتیلیتی تغییر میکنند چون که Expected ان است بخش دوم برای زمانی است که حال برای ما اهمیت دارد یعنی تصمیم میگیرد که اینده بهتر است یالان for each of policy we have some value
{"url":"https://yourcharge.ir/2024/05/10/markov-decision-process/","timestamp":"2024-11-03T23:27:44Z","content_type":"text/html","content_length":"90943","record_id":"<urn:uuid:c006a594-882f-4ce1-8bf1-6fbcad24842d>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00098.warc.gz"}
Theoretical Physicist; Professor of Physics at the Center for Gravitational Physics and Geometry at Pennsylvania State University The most important invention, I believe, was a mathematical idea, which is the notion of representation: that one system of relationships, whether mathematical or physical, can be captured faithfully by another. The first full use of the idea of a representation was the analytic geometry of Descartes, which is based on the discovery of a precise relationship between two different kinds of mathematical objects, in this case, numbers and geometry. This correspondence made it possible to formulate general questions about geometrical figures in terms of numbers and functions, and when people had learned to answer these questions they had invented the calculus. By now we have understood that it is nothing other than the existence of such relationships between systems of relations that gives mathematics its real power. Many of the most important mathematical developments of the present century, such as algebraic topology, differential geometry, representation theory and algebraic geometry come from the discovery of such relationships, of which Descartes analytic geometry was only the first example. The most profound developments in present mathematics and theoretical physics are all based on the notion of a representation, which is the general term we use for a way to code one set of mathematical relationships in terms of another. There is even a branch of mathematics whose subject is the study of correspondences between different mathematical systems, which is called category theory. According to some of its developers, mathematics is at its root nothing but the study of such relationships, and for many working mathematics, category theory has replaced set theory as the foundational language within which all mathematics is expressed. Moreover, once it was understood that one mathematical system can represent another, the door was open to wondering if a mathematical system could represent a physical system, or vise versa. It was Kepler who first understood that the paths of the planets in the sky might form closed orbits, when looked at from the right reference point. This discovery of a correspondence between motion and geometry was far more profound than the Ptolemaic notion that the orbits were formed by the motion of circles on circles. Before Kepler, geometry may have played a role in the generation of motion, but only with Kepler do we have an attempt to represent the orbits themselves as geometrical figures. At the same time Galileo, by slowing motion down through the use of devices such as the pendulum and the inclined plane, realized that the motions of ordinary bodies could be represented by geometry. When combined with Descartes correspondence between geometry and number this made possible the spatialization of time, that is the representation of time and motion purely in terms of geometry. This not only made Newtonian physics possible, it is of course what we do every time we graph the motion of a body or the change of some quantity in time. It also made it possible, for the first time, to build clocks accurate enough to capture the motion of terrestrial, rather than celestial, The next step in the discovery of correspondences between mathematical and physical systems of relations came with devices for representing logical operations in terms of physical motions. This idea was realized early in mechanical calculators and logic engines, but of course came into its own with the invention of the modern computer. But the final step in the process began by Descartes analytic geometry was the discovery that if a physical system could represent a mathematical system, then one physical system might represent another. Thus, sequences of electrical pulses can represent sound waves, or pictures, and all of these can be represented by electromagnetic waves. Thus we have telecommunications, certainly among the most important inventions in its own right, which cannot even be conceived of without some notion of the representation of one system by another. Telecommunications also gave rise to a question, which is what is it that remains the same when a signal is translated from sound waves to electrical impulses or electromagnetic waves. We have a name for the answer, it is information, but I do not have the impression that we really understand its implications. For example, using this concept some people are claiming that not only is it the case that some physical or mathematical systems can be represented in terms of another but that, there is some coding that would permit every sufficiently complicated physical or mathematical system to be represented in terms of any other. This of course, brings us back to Descartes, who wanted to understand the relationship between the mind and the brain. Certainly the concept of information is not the whole answer, but it does gives us a language in which to ask the question that was not available to Descartes. Nevertheless, without his first discovery of a correspondence between two systems of relations, we would not only lack the possibility of talking about information, we would not have most of mathematics, we would not have telecommunications and we would not have the computer. This the notion of a representation is not only the most important mathematical invention, it is the idea that made it possible to conceive of many of the other important inventions of the last few
{"url":"https://www.edge.org/response-detail/11611","timestamp":"2024-11-04T12:34:46Z","content_type":"application/xhtml+xml","content_length":"53088","record_id":"<urn:uuid:e1f091be-f48e-4aac-b463-1e06c25882b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00067.warc.gz"}
How Many Batteries are Needed for a 100W, 500W and 1000W Solar Panel? - Energy Theory Different wattages of solar panels cater to a wide variety of energy requirements. Deep cycle batteries are most commonly used for their durability in solar installations. So, today, let us explore answers to how many batteries are needed for 100W, 500 W and 1000W solar panel. How many Batteries are Needed for a 100W, 500W and 1000W Solar Panel The number of batteries required for a 100W, 500W and 1000W solar panel system depends on different factors, such as: • Devices connected to the system • Battery capacity • Voltage of the battery • Time for which you need backup • Efficiency of solar panel system If you utilize a larger battery or more batteries, you will most likely need to enlarge your solar array as well. Moreover, charging larger or more batteries may take a long time, which is why you would need to increase your solar setup. Deep cycle batteries are commonly used in solar installations since they are built for longer and repeatedly charging and discharging cycles in solar installations. Now let’s find out how many batteries are needed for 100W 500 W and 1000W solar panel in the upcoming segment. How Many Batteries are Needed for a 100Watt Solar Panel? One 100Ah 12V battery will power one 100-watt 12V solar panel. You may determine that you require a larger battery or two batteries for your solar setup after assessing your power requirements. How Many Batteries Needed for a 400 Watt Solar Panel? You will need one 100Ah 12V battery to power a 400W solar panel system. However, you will require a good amount of sunlight for around 5 hours for optimal function. Also See: How Many Batteries for 5000 Watt Inverter? How Many Batteries Needed for a 500Watt Solar Panel? A 500-watt Solar panel will require a 150Ah battery or a larger battery bank. You can also verify the size of the battery and find out the amperage using this formula: Solar power watts/volts = amp hours Amp hour x sun hours = size of a battery easily charged The number of batteries along with the charge controller size for 500W solar panel are some important aspects to consider. For example, in a 500W 12V solar array, 12V is the standard voltage, but it reaches up to 18V. As per the formula: 500/18 = 27.7 amps Rounding this up to 27 amps per hour is the output from the solar system. Now, the total amperage is, 6 sunlight hours × 27 amps per hour = 162. Hence, a 12V 500-watt solar panel can produce 162 amps with 6 hours of sunlight. This much power is enough to charge a 150Ah battery. Note: This calculation is an approximation. Other battery options and combinations are also possible depending upon the system. Also Read: How Many Batteries for 10000 Watt Inverter? How Many Batteries Needed for an 800Watt Solar Panel? For an 800W solar system, two batteries of 12V are required because this would ensure the proper functioning of the system with 300Ah to 360Ah battery capacity. To determine the number of batteries needed for an 800-watt solar panel system, you should consider the size of the batteries and the power requirements of the system. Also See: How Many Solar Panels and Batteries to Power a House How Many Batteries Needed for a 1000Watt Solar Panel? Two 300Ah batteries can efficiently run a 1000 watt solar system for around 7 hours. The number of batteries needed for a 1000W solar panel system depends on the capacity of the batteries and the amount of energy storage required. However, to calculate how many batteries are needed for 100W, 500 W and 1000W solar panel, you can use the following formula: Number of batteries = Total Watt-Hours / (Battery Capacity x Battery Voltage) Apart from this, the voltage of the battery system also plays a huge role. For instance, a 300Ah battery with a 50% DOD can provide 1800 usable watts, which is good for around 2 hours. Lithium batteries can be fully discharged without affecting their capacity, making them suitable for heavy loads. But they are more expensive than lead-acid batteries. So, based on a number of factors, how many batteries needed for a 100W, 500W and 1000W Solar Panel ranges from a 100Ah battery to two 300Ah batteries. But it is recommended to get an expert in the loop before adding or removing batteries to your solar system. Recommended: How to Charge a 6V Battery With a 12V Charger
{"url":"https://energytheory.com/how-many-batteries-are-needed-for-a-100w-500w-and-1000w-solar-panel/","timestamp":"2024-11-06T08:56:42Z","content_type":"text/html","content_length":"166650","record_id":"<urn:uuid:f965bcd8-ee05-43f4-a406-f2d813388f92>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00673.warc.gz"}
Slide Rules Were The Original Personal Computers Unless you are above a certain age, the only time you may have seen a slide rule (or a slip stick, as we sometimes called them) is in the movies. You might have missed it, but slide rules show up in Titanic, This Island Earth, and Apollo 13. If you are a fan of the original Star Trek, Mr. Spock was seen using Jeppesen CSG-1 and B-1 slide rules in several episodes. But there was a time that it was common to see an engineer with a stick hanging from his belt, instead of a calculator or a cell phone. A Pickett brand slide rule flew to the moon with the astronauts and a K&E made the atomic bomb possible. Slide rules are a neat piece of math and history. They aren’t prone to destruction by EMP in the upcoming apocalypse (which may or may not include zombies). Like a lot of things in life, when it comes to slide rules bigger is definitely better, but before I tell you about the 5 foot slide rule in my collection, let’s talk about slide rules in general. History of the Slide Rule The Reverend [William Oughtred] (who probably introduced X as the multiplication symbol) developed the slide rule in the 17th century, so they’ve been around for a while and were standard issue for people serious about math until the early 1970s. Actually, doing math on a ruler wasn’t a new idea even then. [Edmund Gunter] devised a sector with slide rule-like scales, but you needed a separate set of dividers to solve problems with it. [Oughtred’s] device was a circular slide rule and one of his students, [Richard Delamain], also claimed to have invented the device. Both men accused the other of stealing it. Scholars now think that both men developed the circular slide rule independently. [Delamain] was the first to publish although [Oughtred] apparently finished his device first. However, [Oughtred] definitely developed the straight slide rule around 1650. The Theory Behind the Rule Slide rules depend on Napier’s discovery of logarithms. Besides being a strange key on your scientific calculator, Logarithms (or logs) were very important in the pre-computer math world. Let’s consider base 10 logs. If you square 10 (that is, take 10 to the second power) you get 100. So the log of 100 is 2. If you raise 10 to the fifth power you get 100000 so the log of 100000 is 5. The numbers don’t have to be integers. So the log of 200, for example is about 2.3. Table of Logs If you spent a lot of time calculating, you could create a table of numbers and their logs. The question is: why would you want to? The answer is simple. Suppose you wanted to multiply two numbers. I’ll do 200 and 100 first, even though they are easy enough to multiply without any tricks. If you don’t use any tricks (or heuristics, if you prefer) then you’d write 200 over 100 and then multiply each digit. With logarithms, however, we can do it much more easily. The log of 200 is 2.301 and the log of 100 is 2. So the log of result we want is the sum of the logarithms (2.301+2=4.301). if you compute 10 to 4.3 power, you’ll see it isn’t quite the right answer (19998.6) because I rounded the log of 200, but it is pretty close. Clearly, the more digits you have in your log table, the That seems like a dumb example, but if you wanted to multiply 7329 and 8115, it is a lot easier if you knew that the logs were 3.8650 and 3.9093, respectively. Adding those gives you 7.7743 which is the log of the result. Just look that up in your handy log table to find out the answer is 59470282 (well, it is actually 59474835, but, again, pretty close). Movable Tables What’s this have to do with a slide rule? The slide rule is effectively a log table on two pieces of wood, plastic, or metal (bamboo rules were especially prized because they were self-lubricating, felt good to handle, and were very stable). The marks are put down based on the log of the number, but the marks are labeled by the actual numbers. So the distance between 0 and 1, for example, is much greater than the distance between 8 and 9. Let’s consider a very simple example: 2 times 3. If you move the slide (the C scale; see right) so the one is over the 2 on the fixed scale (the D scale), you can then count over to the 3 mark on the sliding part. This is the same as adding log(2) and log(3). Now you only have to look down from the 3 mark to the fixed scale to see the answer (6). This is very easy to understand when you have one in your hand. For those that don’t, try this web simulator. A screenshot of the calculation is found at the bottom of this section. In some cases, moving the slide may put the answer “off the scale.” In that case, you can use the right hand side of the sliding scale (which is often marked 1, but really represents 10). Then you move to the right and remember to scale the result up by 10 (some rules offer other ways to deal with this, too). If you want to do bigger numbers, you first scale them down and then mentally scale the result back up. So computing 20 times 30 or 2 times 30 is the exact same procedure, but you know you have to scale the answer up by the number of places you shifted. Same thing if you wanted to do, say, 25 times 3.1. You’d actually multiply 2.5 times 3.1 and then scale the result. Division and Other Operations Division works almost the same way but relies on subtraction. If you line up the 3 on the sliding part with the 6 on the fixed part, you can look under the 1 on the sliding part and see the answer is 2. To help you read the numbers accurately, there is a little plastic cursor with a hairline for lining up the numbers. Some rules even had a tiny magnifying bubble there to help you read more Fancy rules will have other scales. For example, the A scale does squares and square roots and trig scales were also common. If you want a visual demo of how it works, check out the video below. Getting the Right Answer Unlike a calculator, a slide rule does require you to have some idea what the answer is (in scale) to interpret the result. It also relies on you being able to eyeball the difference between, say, 7.3, 7.35 and 7.351. That’s why bigger is better. A “normal” slide rule is usually about 10 inches long. There were pocket rules that were shorter and even one on a tie tac that wasn’t very practical to use. On the other side of the spectrum were giant slide rules meant for use in a classroom (some were 7 feet long). For very precise work, engineers would use rules shaped like a cylinder. With the scales wrapped around the circumference of the cylinder, your desk could hold the equivalent of a 30 foot slide rule. However, there was another odd sort of slide rule that was the same a 66 inch rule, but would fit in your pocket: the Otis King (also known as the Geniac; see picture to the left). At first glance, you’d think this slide rule was a telescope. But it is actually a slide rule with the scales wrapped in a spiral around the instrument. By telescoping the scales out, it was–in theory–possible to read more digits off than a normal slide rule. However, due to inaccuracy in the scale markings, it wasn’t always as accurate as it should have been. Where to Find Out More The Oughtred Society is a wealth of information about slide rules, including tutorials, history, and pictures of common and rare instruments and links to other interesting places. If you’d rather be hands on with virtual slide rules, there are plenty of those and a search for “slide rule emulator” or “slide rule simulator” will turn up lots of sites like this one. Faber-Castell made a lot of European slide rules and they have an interesting page about their history in the slide rule market. Where to Find Collectible Slide Rules You’d think it would be hard to collect slide rules, but it is actually fairly easy and can be inexpensive. The trick is that they were so widespread and disappeared so abruptly that there are plenty of used and new old stock rules out there if you can find them. Faber-Castell even mentions they still have some that they will sell you at the bottom of their history page. eBay is a prime source for slide rules (a quick search showed over 3000 listing related to slide rules). You may find you can get them cheaper by combing local antique stores. Often people don’t really know what they are and are glad to get rid of them. Also, if people find out you are a collector, they will often give you slide rules that belonged to some long gone relative and will be glad that someone will have it who will appreciate it and take care of it. I have several of these. If you do want to buy a rule, there are a few things to look for. First, make sure it has the cursor and that it isn’t fogged up. Repairing or replacing a cursor is often a big ordeal. Watch out for corrosion or color leaching from a leather holster. These rules can often be saved, but it is a lot of work. The Sphere site has some good model-specific instructions for cleaning different rules, as does another Canadian site. Irregular gaps between the slide and the fixed part can spell trouble and if you can handle the slide rule, make sure the slide and cursor both operate. It is also useful to see if the rule is warped, which is probably impossible to fix adequately. Keep Sliding If you do acquire a slide rule, you’ll need to remember a few things about taking care of it. Although bamboo is self-lubricating, other slide rules may need a little help sliding. Pledge (the furniture polish spray) works well on wood rules. Petroleum jelly was what most people used on metal rules. A little bit goes a long way. It is important to keep a slide rule clean so that grime doesn’t get under the cursor where it is more difficult to clean and interferes with the operation of the rule. Be careful putting your slide rule in direct sunlight. Depending on the type of rule you have, you should be careful using water, soaps, or solvents that might damage it. Be sure to read the cleaning sites I mentioned above and try to test things a bit if you can find an inconspicuous spot before you start washing the whole rule down with something. I collect a lot of old gear and, to me, these were the computers of their day. Not everyone wants to learn Morse code, or know how to bias a tube, or be proficient at driving a stick shift. But a lot of people enjoy keeping this kind of old knowledge alive and–you never know–when the apocalypse occurs, the slide rule jockeys left might be the best computers you have for a while. 67 thoughts on “Slide Rules Were The Original Personal Computers” 1. “the distance between 0 and 1″…. Hmmm, there is no 0 on a slide rule ;) 1. well, there would be a zero on an infinitely long slide rule ;) 2. Maybe someone can do a write up on the E-6b flight computer. 1. Thanks. That was just what I was about to say. I still use these flight computers both for flying and in everyday life (bezel on my wrist watch). There is just no easier way to find out things like “it took me 12 mins to fly 17 miles, I need to go 102 miles, how long will that take”. 1. Line up 12 with 17 and read off from 10.2 ish? 2. Great suggestion, thanks! If you have more we’d love to hear them in the tips line. 3. Oh boy! I haven’t used my E-6B in flight since getting Forefight. I should whip out that bad boy one of these days and do a cross country flight with it. 4. I hated the e6b when I had to use it, but I loved it as soon as no one forced me to use it. 3. I have a slide rule bouncing around on my desk mostly as a desk toy. But a few weeks ago, I needed to know the cube root of some number or other and saw the slide rule before I managed to find the calculator. (Yes it’s a messy desk.) It did the job. 4. The history of calculating devices at the beginning of this article is great, but overlooks my favorite (really only for the name, which makes a great old-timey exclaimation) Napier’s Bones – 1. +1 (You beat me to it…) 5. A circular slide rule got me through my college years. It was easier to carry than a straight one, since it fit right on top of my books. I didn’t see anyone else using a circular one. 1. Dr. Strangelove 2. Somewhere I have a link to a printable circular slide rule that fits in a CD case… 6. We’ll be discussing log tables next. At least I still have my slide rule from school, but I think I used log tables more. By the time I started university, Uncle Clive Sinclair had come out with the Sinclair Cambridge and the Sinclair Scientific calculators. Log tables were still more accurate ;-) 1. The day my calculator batteries gave out just as I was starting a physics exam I went out and bought a reference with a set of log tables in the back. I think I still have it somewhere with my old textbooks. 2. Still, the Sinclair scientific was genius, as they crammed it into the same ROM as a four function TI calculator: It was also the first calculator my family bought, in the mid-1970s. However, being 7 years old I hadn’t heard of RPN and so I couldn’t figure out how to do calculations with it. So we took it back and swapped it for a four-function Commodore calculator. What a missed opportunity! 7. I have a “Thachers Calculating Instrument” that belonged to my uncle, who was a professor of engineering and metallurgy. It’s in great shape, and I am toying with the idea about selling it on ebay. 8. If it matters, the CSG-1 and CSG-2 are Jeppesen’s designations for the small and large E6B originally developed by Dalton and Weems and still used in flight schools everywhere. A more compact device (though less intuitive for crosswind calculations) is the CR3 that can be a lot of fun to give to students who forget their calculators in an exam. 9. I still have the slide rule I used in high school, only getting an electronic calculator my Senior year. Using a slide rule is faster than pushing buttons, schools even had speed competitions. 1. When it’s within reach I find it a lot quicker than calc.exe, Google or my phone just by the number of steps it takes (as long as it’s just multiplication). 10. Have several slide rulers, but more scientific and graphic calculators I can recommend installing a nice calculator app on your mobiles/smartphones. I prefer Wabbitemu with Ti83+ rom since Im used to my real Ti83+, and really handy to have with you most of the time on the phone. Quite the irony to use a powerfull computor with HD touchscreen to emulate a slow 25 year old computor with a bad screen. 1. Slightly off topic, sorry, but has anyone else noticed that the high end calculators (TI, HP, etc.) are worse, yet they seem to be the only device not following Moore’s Law? I use a ~30 year old HP 11C because the buttons are far better than anything made since, how many people still use RPN? 1. I am probably a bit weird in that I have a home made DIY circular slide rule in a leg pocket and a HP 35s (not HP-35) in my backpack and another one close to my desk at home. The circular slide rule have gotten a bit worn though (again). I need to come up with something more robust than discs of laminated heavy office paper (I think I used 120 grams/square meter paper and 120 micrometer laminate for heat laminating office machines). Do a Google image search for “johan g diy circular slide rule” to see it. ;-) A key programmable scientific RPN calculator really makes a difference when I want to work out some algorithm or want to do some repetitive calculations without starting up the computer. Best way to document the programs? Forth stack diagrams! 2. I have 10 year old Casio that is ok but the buttons really don’t feel that robust. The TI-85 I used in high school however was built like a tank, I’m surprised they don’t ban them from schools as a potential weapon. 3. Once you go RPN, you never go back. Ah, I still remember my first time… Summer of freshman year of college, had a job working in a plywood plant. Blew over a weeks pay on a HP-41CX. Never looked back. Still remember my first program (it actually ties into the slide rule article). I’d read an article about one of the cable scrambling systems that stated there were 480! (factorial) possible combinations of scanlines in that particular scheme. Wrote a loop to estimate that value (by summing logarithms, of course… see it ties in). Yeah, pretty sad. But I can make it sadder. A few years later the 48SX came out. I was co-oping at the time and was able to attend a presentation held at the HP office in Huntsville, AL. I went with another HP enthusiast. We sat through the demo (they had modified a 48 display so it could sit on an overhead projector, really neat) and fought to urge to go to the back of the room where the local campus bookstore had set up a table with boxes of the calculators and a credit card machine. Later that night, I sat down and tried to figure if I could afford to get one by the end of the co-op term. After plugging in numbers for a while I finally looked at the display and noticed that the total didn’t look right. After pushing some buttons I realized that the 9 key had become flaky. I took it as a Sign. The next morning I went in to work and ran into the guy I went to the demo with. He just looked at me. “You’re getting one, right?” “Yup.” “…I’ll drive.” 4. I got into RPN in 1986, with the HP-41cx. I still use it every day, and a few programs I use all the time have been in it continuously for over 25 years, never having to re-load them. There are a few engineers who are fanatical about this machine who are still introducing new modules for it. One also sells the 41CL transplant board that’s 50 times the speed of the original and comes with over 230 modules pre-loaded. 2. Please mention that website much requires you supply a valid hp room! 11. A friend of mine in college had a Curta: It was, he said, more accurate than a slide rule. I think it was slower, also. He wouldn’t let me use it. 1. I sodl one of those on ebay a couple of years ago. It made £400 or so, I was gobsmacked. 12. Whenever I design the geometry for a new recumbent bike project, I use one of my slide rules strictly for the fun of it. Keeps me in practice just in case there is a zombie apocalypse. As always, the hardest part is remembering where to put the decimal point. 13. I used an expensive bamboo slide rule once to fix a table that was a bit wobbly. I moved the table around a bit and found which leg was the shorter one and wedged the slide rule under it. Hasn’t wobbled since! 14. Check shopgoodwill.com…they usually have a couple, and your money goes to a good cause. 1. Some of your money goes to a good cause, the rest goes to the CEOs six-figure salary. 1. who then promptly goes and buys Goodwill furniture for his home, if the story I read in the local newspaper is true. 15. Studying engineering in the age of a slide rule, as I did, had a nice benefit. Since accuracy degrades if you must make multiple calculations, test results only had to be accurate to two digits (i.e. 3.4). The downside is that slide rules don’t manage your exponents well, leaving the possibility hat the answer is 0.34, 3.4 or 34. You found what was correct by doing rough calculations in your head. Multiplying 2.31 by 38.5 became multiply 2 by 40, so the result needed to be roughly 80. My calculator says it is 88.935. A small slide rule would probably get you close to 89 for the answer, so 89 would be what you would put down. On the other hand, there were ways to get highly accurate results. I’ve been told that the lunar missions (i.e. Apollo 13) used very large slide rules for calculations. The result would be available in seconds. It’d take too long too program a mainframe. 1. That’s exactly why I keep an abacus around. You never know when you’re going to need precision out to a few beads. 1. Lol! 2. “You found what was correct by doing rough calculations in your head” Which is exactly what you *should* be doing. Kids nowadays. // uphill both ways 1. One should do that even with a calculator, because wildly differing answers alert you to the fact that you made an entry mistake. 16. LOL, When I was in 6th grade I was told to leave the class cause I was cheating using a slide rule. Looking back maybe I was but hey ! 1. I used one in Jr Hi and other kids said I was cheating but the teacher, seeing my engineering bent, not only tolerated it, but seemed to encourage it. 17. Some old slide rules are worth quite a bit. I got my grandpa’s surveying slide rule and looked it up and they sell for over $400. Yikes… 1. Faber-Castel is still selling them, brand new, up to 20″ rules. See http://service.de.faber-castell-shop.com/epages/es117781.sf/de_DE/?ViewAction=View&ObjectPath=/Shops/es117781_ServiceParts/ Categories/Faber-Castell/FaberCastell_Rechenstaebe&PageSize=50&Page=1 . 18. I got the last slide rule they passed out in my high school when I started 10th grade. The other kids were so jealous. I still have it in my den. I also have the tie-tac slide rule the author 1. It the tie-clasp one this one? http://wilsonminesco.com/SlideRules/SlideRules.html#Tie 19. More likely than not dist/ dust and a tool to make scratches in either is likely to survive any apocolypse, the truly prepared will keep brains exercised to to perform calculations with only those two simple tools. :) I’m reminded of a scene in a movie about some early exploring expedition. The navigator of the expedition was pissing and moaning that he couldn’t to his work because hi book of log and other tables was lost in a boat capsize in a river. The captain said tough shit do your job by making the calculations with out aids. 20. I have the engineering standard, the Post Versalog made of bamboo, with the leather belt case and manual. I picked it all up via ebay about 15 or 20 years ago. I went to high school in the 70s right at the time electronic calculators first became widely available. I remember seeing seniors walking around school with slide rules hanging from their belts. I bought a calculator that had a green fluorescent display and a pi key. I went through a series of calculators over the ensuing years, but my favorite of all is my HP-11C, still working perfectly, and has only had the batteries replaced twice in 20+ years of daily use. I bought an HP48GX and the keyboard lasted about 1 year. One of these days I’m going to get a Curta. 1. Rockwell Calculators, they really can’t be beat, they’ve got big green numbers and little rubber feet! 21. The foundations of slide rules are discussed in a nice whitepaper I found (pdf!): https://cseweb.ucsd.edu/~pasquale/Papers/IM11.pdf The main takeaway is that you can have a function of four variables, set any three and get the fourth. The function needs to be linear, or transformable to linear with respect to three variables. This gives quite remarkable computing power for a device as simple as a slide rule. I designed a special application slide rule based on this, and it is quite handy for its job of instrument setup: https://sites.google.com/site/dmacalculator/ 22. Why a slide rule is better than a PC 1. A Slide Rule doesn’t shut down abruptly when it gets too hot. 2. One hundred people all using Slide Rules and Paper Pads do not start wailing and screaming due to a single-point failure. 3. A Slide Rule doesn’t smoke whenever the power supply hiccups. 4. A Slide Rule doesn’t care if you eat or drink while using it. 5. You can spill coffee or a soft drink on a Slide Rule and keep on computing. 6. A Slide Rule never sends you snarky system messages about upgrades, reboots, and damaged files. 7. A Slide Rule and Paper Pad fit in a briefcase with space left over for lunch or a change of underwear. 8. You don’t get junk mail offering pricey upgrades for your Slide Rule that fix current old errors while introducing new ones. 9. A Slide Rule doesn’t need scheduled hardware maintenance, an IT staff, and a 24/7 help desk outsourced to a team of geeks in Carjackistan who barely speak English. 10. A Paper Pad supports text and graphics images easily, and can be easily upgraded from monochrome to color. 11. Slide Rules are designed to a standardized, open architecture. 12. You can use a Slide Rule to hit the obnoxious person in the next cubicle. 13. Nobody can steal your identity by hacking your Slide Rule. 14. You can upgrade your memory without limits by simply adding additional Paper Pads. No need to reconfigure anything, change any settings, or do any backups. “Backing up your data” consists of putting the old Paper Pad away in a drawer. 15. Nobody will make you feel bad by introducing a smaller, faster, cheaper Slide Rule next month. 23. Could a digital slide rule be made from some digital calipers? 1. No, but just for the fun of it, I had considered making a hexadecimal slide rule. 2. Yes, digital calipers could be used, if they could be made to read out the log value of the distance. I would not be suprised it that could be done… 24. When my son went to Field Artillery school (about 6 years ago now), they issued him slide rules for calculating artillery fire. Because you need to know how to do it by hand when your big fancy artillery computer stops working. 25. I have 2 slide rules, and I still use my HP-11C calculator. 26. I have a very small slide rule keychain that has year, month, day and name of the weekdays as parameters. Pretty cute thing. 27. Not that this thread needs more comments, but: A slide rule is really only useful for doing engineering homework and taking tests. In the real world one needs more than 3 significant digits, as most calculations involve taking a difference of prior calculations which makes the dubious 3rd significant digit even more dubious. The HP35 RPN calculator was the killer app that blew away the slide rule. For $5 you can download an HP42s app from the Apple store which is as good a calculator as was ever made. If you need more computational power, you probably should be using a spreadsheet. If the price is too high, I believe there is a free version also. 1. The Empire State building, Golden Gate bridge, aircraft and early spacecraft, even the F-16, were designed using slide rules; so they definitely were a serious tool engineers depended on heavily. I used a slide rule a lot in electronic circuit design until calculators really took hold and the price of the ones that could do at least the trig and log functions came down within reach of the common man. Tolerances of resistors used in common products was 5%, and capacitors were usually 10% or 20%. Transistors’ gains, breakdown voltages, etc. had ranges of around 2:1. So yes a slide rule’s accuracy and precision were plenty more than enough for real work. The 3rd digit should not be dubious at all with a well made 10″ rule and decent vision (in fact, checking with my calculator, I find I can nearly always get the 3rd digit correct with my 4″ rule); and chained calculations’ errors should be some plus and some minus, mostly canceling out. 28. Curta’s (https://en.wikipedia.org/wiki/Curta) were the original personal computer. The slide-rule was the faster approximation to computing. 29. The video says slide rules were used until 1972 when the HP-35 calculator came out. Actually, the HP-35’s price tag put it out of reach of most of those who would use a slide rule. I remember calculator ads through the 1970’s touting “slide-rule calculator!” meaning it had all the functions of a slide rule, which they usually did not. The 3rd sig fig was not a guess at all, to someone who was skilled at using the slide rule. In multiplication and division, I can always get the 3rd sig fig off by no more than 1 even on my 4″ slide rule, and I can quite often get the 4th sig fig on my 10″ rules. In chained calculations, the errors will be random, not always high or always low, so they tend to cancel, not accumulate. The man in the video obviously did not know how to quickly micro-adjust the settings by rolling his fingers, so he got very coarse settings. In common log (ie, base e) operations, it is not uncommon to get five digits. In electronics (my field) though, most parts’ tolerance is between 1% and 10%, so three digits is usually more than enough in circuit design. In high-school physics in 1977, I was the only one in the class still using a slide rule. One of the problems we had to solve was the orbit time of a satellite orbiting the earth. The others thought slide rules were for hardly more than an estimate, so they were surprised when my answer was only four seconds different from the teacher’s which he got with his calculator. If you use the slide rule all the time, it is not really any slower than a calculator. It adds the benefit of sharpening your mind (which is why I still go back to it once in a while), and helps understand number relations better. An engineer I was working with a few years ago commented after I quickly did several logarithms in my head in the course of our conversation about the product design, “How do you do that?!” I replied, “You’re just a little too young to have used a slide rule, aren’t you?” My slide rules, and a lot of slide-rule links, are at http://wilsonminesco.com/SlideRules/SlideRules.html 1. The early programmable and scientific calculators were horrible power hogs. Users were terrified at the thought of losing power in the middle of an exam. They also lost their program contents. At the university, I obtained an HP25C, the C for Continuous memory, it retained the program contents after it lost power. 1. I got my first calculator in ’76 or ’77 which had a very short battery life. It had four functions plus square root and percent, and that’s all. I mostly used the slide rule until I decided I needed something programmable to handle iterative operations which would loop thousands of times. The slide rule was not programmable. In Dec ’81, I took the plunge for a programmable, a TI-58c. The earlier version, without the “c,” seemed somewhat pointless because it did not have continuous memory and you could not load programs in from cards like you could the 59, and battery life was only about three hours. 30. You will notice cryptic marks on the cursor of a slide-rule, and elsewhere, called gauge points. I had a Staedtler slide-rule in the 70s but the instructions didn’t explain those marks. The center line has kW written on it, that’s kilowatts. There is a short line on the side of the A and B scales that’s HP, so you can do kilowatt to horsepower conversion (1 horsepower = 0.7457 kW). Put the HP line on a 1 on the A-scale and you’ll see the main (kW) line lines up to this value. Also there are short, unmarked lines to the top left and bottom right of the main line. These are for converting the diameter of a circle to its area and vice-versa. Put the main line on the 2 on the D-scale. The A-scale will land on the tiny tick used to represent the value of pi. A circle’s area is pi times the square of the radius, so a circle with diameter of 2 (radius of 1) has an area of exactly pi. To go in reverse, line up the value for the area on the D-scale, and the bottom right line on the A-scale will give the diameter. If you are sharp-eyed, you will notice two tiny ticks on the C-scale: ” and ‘. These refer to degrees, minutes (“) and seconds (‘). “(rho)’ at 3438, and (rho)” at 206255, in scale C, give the numbers of minutes and seconds in a radian respectively. These gauge points may be used for finding the values of trigonometrical functions of small angles. For any small angle, say, less than 2°, the sin and the tan may be taken as identical. If we set the r’ mark to the graduation in scale D, representing one-tenth of the number of minutes in the angle, the sin or tan may be read in D under the 1C or l0C.” 31. One of the beauties of a slide rule that no-one seems to have mentioned is that when using it, you have the scales before your eyes. You can see the relationships between the scales not just the number(s) that you are interested in, whereas with a calculator, you pop in a number or a couple of numbers and out comes a single number. Any relationship between what you put in and what comes out is simply not in view. I used a slide rule regularly to demonstrate this to undergraduates who had relatively poor maths skills but could quite happily ‘use’ an electronic calculator. A slide rule is a little work of visual art that you can learn a lot from simply by just looking at it. While you may admire the design of an electronic calculator, it teaches you little about maths and at best only the answer to a single problem each time you use it. 32. Howard Speegle of Diva Automation game me his permission to re-tell his story: With your background, you might enjoy some aspects of my work on the Nimbus weather satellite. It had a 250 mW transmitter and our 85-foot dish with Maser amplifier could achieve autolock at -150 dbm at a range of 3000 miles. However, the launch vehicle suffered an early burnout and the orbit was degraded, causing the satellite to be lost to the free world for three days. No one at NASA Goddard or the DEW line or any tracking stations around the world were able to locate any evidence that it existed. We scanned the skies continuously in every sort of random and geometric pattern for days, but no cigar. Finally, I whipped out my trusty circular pocket slide rule…and came up with a reasonable approximation of what the orbit would look like with a 10 second premature shutoff and suggested to my boss that they point the antenna in a certain direction at a certain time. And lo!, it came to pass. He wanted to know where I had studied astrophysics and I didn’t think he would appreciate knowing about my plastic slide rule so I simply shrugged it off. A few weeks later, the ham club at the University of Alaska, down the road, timidly asked if we might have any interest in the telemetry tapes they had made of the first pass. A bunch of kids using a hand-pointed chicken wire parabola had made beautiful recordings of the entire pass and subsequent passes. They had outperformed the combined might of NASA and the millions upon millions of dollars of state of the art equipment we had. Kudos to them. I don’t know whether we ever thanked them. Perhaps I should send this story to the university so they can add it to their list of 33. I recollect struggling with long division and multiplication at junior school. The maths teacher couldn’t understand why my answers were often out by a few decimals. She watched me and realised why… I had been using log tables and then adding / subtracting logs rather than multiply or divide. Errors in the log table book meant my approach didn’t have reliably wrong answers. Please be kind and respectful to help make the comments section excellent. (Comment Policy) This site uses Akismet to reduce spam. Learn how your comment data is processed.
{"url":"https://hackaday.com/2015/11/05/slide-rules-were-the-original-personal-computers/","timestamp":"2024-11-06T23:32:31Z","content_type":"text/html","content_length":"237109","record_id":"<urn:uuid:e8a5f118-7a4a-4a1d-8d03-b74ef59a24ef>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00862.warc.gz"}
bytes and stuff One function most games need is the ability to display information to the user using text and numbers. It’s not uncommon for graphics libraries to have support for TTF fonts (see SFML and SDL). However, given the style of game I’m working on, what I would prefer is a bitmap font. Just type “video game bitmap fonts” into Google image search and you can see how different they are compared to TTF. If you’re making a low resolution or pixel style game then bitmap fonts really fit the theme. I’ve found libraries that deal with TTF fonts. I’ve also found programs that will supposedly turn TTF fonts into bitmap fonts. But the fonts I want are built from the ground up as a bitmap image, and I haven’t really found any way of using bitmap fonts (with SFML .NET specifically) within a project that satisfies that. It seemed like a decent sized task to work on myself, so I took a stab at it. Detecting Line Crossings I’ve been working on a small game recently, the goal of which is to change your player’s color to match the colors of incoming lines so that you can pass through them. Detecting when the player crosses a line is at the core of the game. Detecting which side of a 2D line a point is on is a pretty simple task. I’d wager most people have encountered the problem relatively early in their game development efforts (not allowing the player to move outside of the screen, for example). Nonetheless it’s probably something that people (including me) have solved in an inefficient or needlessly complex way. So I’m going to go over the problem and solution implemented in my game. Generating a Dungeon (part 3) When I last left off I had the rooms (or data equivalent of rooms) created, the last thing that needed to be done was add in cycles so that the dungeon layout is not a tree. That’s what I’ll be tackling in this post. I’ve updated the generateMap() function and added a call to the new function fixTree() at the end: void GeneratedMap::fixTree() //get all dead ends std::vector<Room*> deadEnds; for (unsigned int i = 0; i < rooms.size(); i++) if (rooms[i]->isDeadEnd()) std::random_shuffle(deadEnds.begin(), deadEnds.end()); unsigned int i = 0; int addedCount = 0; int cyclesToAdd = (deadEnds.size() * CYCLE_PERCENT) / 100; //keep adding cycles until the quantity is met or we run out of //dead ends to try while (i < deadEnds.size() && addedCount < cyclesToAdd) Room* de = deadEnds[i++]; std::vector<Room*> adjacentRooms = getAdjacentRooms(de); if (adjacentRooms.size() <= 0) std::random_shuffle(adjacentRooms.begin(), adjacentRooms.end()); unsigned int j = 0; Room* aj = nullptr; //find first non-connected room if (!de->areConnected(adjacentRooms[j])) aj = adjacentRooms[j]; } while (j < adjacentRooms.size() && aj == nullptr); //if no non-connected room is found try the next dead-end if (aj == nullptr) aj->hasSecondEntrance = true; //rooms are adjacent, but not connected, now we connect them... with int directionOfOther = de->directionOfOtherRoom(aj); int miny = 0, minx = 0, shift = 0; if (directionOfOther == Room::LEFT || directionOfOther == Room::RIGHT) miny = std::max(de->topLeft.y, aj->topLeft.y); int maxy = std::min(de->bottomRight.y, aj->bottomRight.y); int dy = std::abs(maxy - miny); shift = dy <= 0 ? 0 : rand() % (dy + 1); else if (directionOfOther == Room::UP || directionOfOther == Room::DOWN) minx = std::max(de->topLeft.x, aj->topLeft.x); int maxx = std::min(de->bottomRight.x, aj->bottomRight.x); int dx = std::abs(maxx - minx); shift = dx <= 0 ? 0 : rand() % (dx + 1); switch (directionOfOther) case Room::LEFT: aj->startingCell2 = Vector2i(aj->bottomRight.x, miny + shift); aj->previousCell2 = Vector2i(de->topLeft.x, miny + shift); case Room::RIGHT: aj->startingCell2 = Vector2i(aj->topLeft.x, miny + shift); aj->previousCell2 = Vector2i(de->bottomRight.x, miny + shift); case Room::UP: aj->startingCell2 = Vector2i(minx + shift, aj->bottomRight.y); aj->previousCell2 = Vector2i(minx + shift, de->topLeft.y); case Room::DOWN: aj->startingCell2 = Vector2i(minx + shift, aj->topLeft.y); aj->previousCell2 = Vector2i(minx + shift, de->bottomRight.y); Generating a Dungeon (part 2) In my last post I started working on creating a randomly generated dungeon. At the end of the post there were cells that were marked as being reserved empty, ie: no rooms will be put there. In this post I’ll be adding in the rooms. Generating a Dungeon (part 1) I’ve been wanting to create a game that uses randomly generated dungeons for a while now but didn’t have a good idea about what kind of game to make. Recently though I decided “so what if I don’t have an idea for what to DO with the dungeons, I can figure that out after I make them.” So I set out to make some code that will generate random dungeons.
{"url":"https://trlewis.net/category/game-dev/","timestamp":"2024-11-14T09:00:21Z","content_type":"text/html","content_length":"41565","record_id":"<urn:uuid:f40b265a-0a7d-457b-87d7-31c6996df14d>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00758.warc.gz"}
MSOMI BORA : PHYSICS: FORM ONE: Topic 5 - ARCHIMEDES' PRINCIPLE AND LAW OF FLOTATION Join Our Groups TELEGRAM | WHATSAPP Archimedes' principle indicates that the upward buoyant force that is exerted on a body immersed in a fluid, whether fully or partially submerged, is equal to the weight of the fluid that the body The Concept of Upthrust Explain the concept of upthrust If a heavy object is lifted while immersed in water, it may be raised more easily than when outside the water. This is due to the reason that when anything is placed in liquid, it receives an upward force called upthrust. A body appears to have lost some weight when immersed in water due to the upthrust exerted on the body by the water. • By definition upthrust is referred as an upward force exerted by the body when it’s partially or totally immersed in water. • Consider the experiment below to verity the concept of upthrust. From this experiment, it will be observed that W[1]>W[2]. This is because: When a body is partially or totally immersed in any liquid, the liquid exerts an upward force. A weight recorded on the spring balance of a body that is totally or partially immersed in any liquid is called apparent weight. E.g. W[2] and the force, which temporally reduces the weight of the body, are called upthrust (u). Verification of the Archimedes' Principle Verify the archimedes principle The Archimedes’ principle describes about two important things. • A body immersed in a fluid experiences forces one of them is up-thrust • The weight of the fluid displaced is equal to the up-thrust. The forces acting on a body in air and when immersed in a liquid can be shown in following diagrams; When a body is immersed in a liquid it experiences an upward force from the liquid. This force is known as up-thrust. Up-thrust opposes the weight of the body. Since the weight of the body is opposed by the up-thrust, it follows that the weight of the body when in the liquid (W[1]) is decreased and become smaller than the weight of the body in air (W). The differences between weight of the body in air and the weight of the body in the liquid (Apparent weight) is known as Apparent loss in weight (W[2]). This apparent loss in weight (W[2]) is equal to the weight of the liquid which is displaced after immersing the body. Hence it can be shown that, Activity 1 Aim; Aim of this activity is to verify Archimedes’ principle. • Spring balance • Digital /beam balance • 1 beaker • Eureka can • water • 1 piece of small stone • A cotton thread a string • Measure the weight of empty beaker (W[b]) using beam/ digital balance. • Fill the eureka can with water until water to starts to fall out of its beak. • Attach the stone to the spring balance to measure its weight by using a cotton thread /string. • Record the weight of stone in air as W. • Place the empty beaker under the beak of the Eureka can. • Slowly immerse the stone in the water and observe the changes the readings of the spring balance. • Water will be displaced and collected by the beaker. • Record the apparent weight of the stone (weight of the stone in water) as W[1.] • Put the beaker with displaced water on the digital balance to measure the weight of the beaker +water =W[bw .] • Weight of the water displaced (W[w]) can be obtained by taking (weight of beaker + weight of displaced water) –weight of empty beaker=W[bw]W[b.] W[w]=W[bw]-W[b] • Apparent loss in weight (W[2]) of the store can be obtained by taking weight of store in air –Apparent weight =W-W[1] 1. Are the values W[w] and W[2] equal? 2. What is the Upthrust experienced by the stone? Example 1 Weight of a body in air = 10.0N Weight of a body when immersed in water = 9.2N find the upthrust. Weight of a body in air (W[1])= 10.0N Weight of a body when in water (W[2]) = 9.2N Upthrust = Loss of weight in water = W[1]-W[2] Example 2 The weight of a body when totally immersed in liquid is 4.2N. if the weight of the liquid displaced is 2.5N, find the weight of the body in the air. Apparent weight (W[2]) = 4.2N Weight of liquid displaced (u) = 2.5N Weight of body in air is 6.7N The Archimedes' Principle in Determining Relative Density Apply the archimedes principle to determine relative density Relative density (R.D) of a substance can be defined as the ratio of the mass of a substance to the mass of an equal volume of water. Relative density can also be defined as the ratio of the weight of the substance to the weight of an equal volume of water. According to Archimedes’ principle mass of equal volume of water is the mass of the water displaced when the substance is immersed in water. Note that, the volume of water displaced after immersing a substance in water is equal to the volume of the substance. Example 3 A piece of glass weigh 1.2N in air and 0.7N when completely immersed water. Calculate its: 1. Relative density 2. Density of a glass Given that density of water = 1000kg/cm^3 And acceleration due to gravity = 10N/kg Weight of the glass in air (W[1]) = 1.2N Weight of the glass in water (W[2]) = 0.7N R.D = Density of glass/Density of water Density of a glass = R.D x Density of water NB: Relative density has no SI unit
{"url":"https://www.msomibora.com/2018/07/physics-form-one-topic-5-archimedes.html","timestamp":"2024-11-02T03:02:49Z","content_type":"text/html","content_length":"256079","record_id":"<urn:uuid:d09744cb-debb-4754-a083-16b7bc318a22>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00202.warc.gz"}
microlinear space That is just weird. A little experimenting shows that there seems to be a maximum length for subscripts. x_{finite ope} works but x_{finite open} doesn't: $x_{finite ope}$ and $x_{finite open}$. Unfortunately, as we currently ship LaTeX processing off somewhere else, there's not a lot I can about that! I don't get the dollar signs to work. $x = x$$x = x$ I strenghened the proposition about microlinear loci, claiming that also in the two sheaf toposes $\mathcal{Z} = Sh(\mathbb{L})_{finite open covers}$ $\mathcal{B} = Sh(\mathbb{L})_{finite open covers and projections}$ all representables are microlinear. Either I am mixed up or this is essentially obvious. But maybe somebody feelss like checking. Use double dollar signs, or <latex> and </latex> if you want to see a preview. Also check the Instructions when you forget. oops. I forget how math works here... created microlinear space One thing I might be mixed up above: in the literature I have seen it seems to say that $ X^D x_X X^D \simeq X^{D(2)}$ $ D(2) = { (x_1,x_2) \in R \times R | x_i x_j = 0} $. But shouldn't it be $ D(2)' = { (x_1,x_2) \in R \times R | x_i^2 = 0} $. okay, thanks Andrew, at least that tells me what's going on. Was a bad idea to put that novel-like text in a subscript anyway! :-) So here what I wanted to typeset: $\mathbb{L} = (C^\infty Ring^{fin})^{fin}$ the category of smooth loci, consider the Grothendieck topology given by a) covers are finite open covers b) covers are finite open covers and projections. Then sheaves wrt the first yield the smooth topos $\mathcal{Z}$, sheaves with respect to the second the smooth topos $\mathcal{B}$, with notation as in Models for Smooth Infinitesimal Analyis (see list in appendix 2). I tried to add to microlinear space a (the supposedly obvious) proof that all representable objects in these toposes are microlinear. Just write it out without dollar signs. We can read TeX. Re #1: The first definition of D(2) is correct; it is used in Moerdijk–Reyes and other sources. It gives the first-order infinitesimal neighborhood of 0. The definition of D(2) currently in the article is not correct; it gives a certain second-order neighborhood of 0, and it is not invariant under rotation of the x,y-plane. • F. Bergeron, Objet infinitésimal en géométrie différentielle synthétique, Exposé 10 in Rapport de Recherches du Dépt. de Math. et de Stat. 80-11 and 80-12, Université de Montréal, 1980. diff, v12, current
{"url":"https://nforum.ncatlab.org/discussion/123/microlinear-space/?Focus=1124","timestamp":"2024-11-10T21:12:56Z","content_type":"application/xhtml+xml","content_length":"53909","record_id":"<urn:uuid:fc27a107-551a-42bd-9dd6-d5574660882b>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00020.warc.gz"}
Convert Ton (short/US) per second (tn/s) (Mass flow rate) Convert Ton (short/US) per second (tn/s) Direct link to this calculator: Convert Ton (short/US) per second (tn/s) (Mass flow rate) 1. Choose the right category from the selection list, in this case 'Mass flow rate'. 2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and π (pi) are all permitted at this point. 3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'Ton (short/US) per second [tn/s]'. 4. The value will then be converted into all units of measurement the calculator is familiar with. 5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so. Utilize the full range of performance for this units calculator With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '254 Ton (short/US) per second'. In so doing, either the full name of the unit or its abbreviation can be usedas an example, either 'Ton (short/US) per second' or 'tn/s'. Then, the calculator determines the category of the measurement unit of measure that is to be converted, in this case 'Mass flow rate'. After that, it converts the entered value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Regardless which of these possibilities one uses, it saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it gets the job done in a fraction of a second. Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(93 * 62) tn/s'. But different units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '56 Ton (short/US) per second + 25 Ton (short/US) per second' or '31mm x 99cm x 68dm = ? cm^3'. The units of measure combined in this way naturally have to fit together and make sense in the combination in question. The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4). If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 2.103 733 314 189 4×1022. For this form of presentation, the number will be segmented into an exponent, here 22, and the actual number, here 2.103 733 314 189 4. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket calculators, one also finds the way of writing numbers as 2.103 733 314 189 4E+22. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 21 037 333 141 894 000 000 000. Independent of the presentation of the results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications.
{"url":"https://www.convert-measurement-units.com/convert+Ton+short+US+per+second.php","timestamp":"2024-11-08T01:41:40Z","content_type":"text/html","content_length":"54788","record_id":"<urn:uuid:cbb89a96-48ad-469a-b9d3-abfb5816e2b6>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00067.warc.gz"}
The function template <class V> void assign_large_int(V a) inside boost::math::ntl::RR contains the lines t = static_cast<double>(a & 0xffff); m_value += ldexp(RR(t), exp).value(); My compiler (powerpc-apple-darwin8-g++-4.0.1) complains about the second line: error: cannot convert 'boost::math::ntl::RR' to 'double' for argument '1' to 'double ldexp(double, int)' I assume this is because it cannot see the function definition for RR ldexp(RR r, int exp) defined further down in RR.hpp. This trivial patch fixes the error for me: math_toolkit/boost/math/bindings$ svn diff rr.hpp Index: rr.hpp --- rr.hpp (revision 39435) +++ rr.hpp (working copy) @@ -25,6 +25,9 @@ namespace ntl +class RR; +RR ldexp(RR r, int exp); class RR but there may (probably?) be similar errors for the other functions. Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net
{"url":"https://lists.boost.org/boost-users/2007/09/30984.php","timestamp":"2024-11-07T18:27:53Z","content_type":"text/html","content_length":"11311","record_id":"<urn:uuid:f43ccbbd-f13f-4746-9d37-5e917d5cae69>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00730.warc.gz"}
2232kdc-0006kdc-0006.xmlNondeterministic dynamical systems and crossing boundaries2024413Kevin CarlsonManuel observed that the two kinds of subobject of discrete possibilitic dynamical systems (dpds, ) have some semantic content directly relevant to boundaries. To wit, a subsystem in the tight category of dpds being a subset of states which can never be escaped, in cases where is like an agent in a world, this sounds like a purely impermeable boundary. But can we make sense of the semantics here? We've been imagining has a set (or whatever) of states, so the boundary of seems, here, like a boundary the world can never cross. Could I think about this as if, if I'm , then I can never reach beyond by own boundary into parts of the world outside of me, and vice versa if I'm , a lax subobject, then I have the option to stay within my own boundary but the option (if I'm a proper subobject) to reach outside as well, in certain states?OK, let's consider as the states of the world in which I most certainly continue to exist. It seems like we can produce an ultrametric on by counting how long it would take the world to reach a state from . Thus states not in but reachable directly from could be thought of as states in which I've been just moderately deformed--perhaps I've lost a finger, or converted to a different religion. A step further away, perhaps I'm dedicating my life to advocacy for those with less than ten fingers, and I'm further away from existing, in some sense, as the self I previously was. This is a somewhat satisfying story! I wonder if it could be useful for a multi-agent case.I guess one case you could consider: two lax subobjects ,, and a third lax subobject representing those states of the world in which a certain dyad of agents certainly continues to exist. If the dyad is, say, a happy marriage, then states one step away from might include those in which the marriage is strained, and states two steps away might include those in which the marriage has ended, all of which still lie within . I'm interested in the potential of this story to incorporate the creation and destruction of things-with-boundaries. It seems to handle a notion of boundary crossing as an agent modifying itself or being modified, but there's not enough structure yet to see two agents interacting in a way that can be described in detail. That seems workable, though: if we define the agents in more detail then we'll be able to construct the state spaces discussed above as a result of properties of those agents.Related2234kdc-0005kdc-0005.xmlSubobjects of possibilistic dynamical systems2024413Kevin CarlsonA discrete possibilistic dynamical system (dpds) is a Kleisli morphism for the nonempty powerset monad , that is, a morphism where is the set of nonempty subsets of . Note that is at least a topos, here--indeed, for the nonempty powerset monad to be well behaved we may need to be in a Boolean topos. I'm not sure how necessary of a restriction this is. For possibilistic systems, think of something like one agent's model of the behavior of another agent with a comparable complexity. Especially if I don't know you personally, there's a range of things I imagine you might do while reading this paragraph: get bored and give up, go make a cup of tea, look at an email on your phone, or keep reading, and while in some sense I could probably assign probabilities to these outcomes, I think it is actually bad modeling to insist on pretending I'm doing so, when that's just not actually how I think about you: there are plausible actions, as well as things I'm substantially sure you won't do, and that's it.Strict morphisms of dpds are the obvious thing: a map of state spaces such that A key point I have to make is what this means for subobjects: they're just the maps of dpds whose underlying map is mono in , and what this means for the dynamics is a subsystem of a dpds, in the most obvious sense, is a subobject of the state space from which it is impossible to ever escape.This question of subsystems arose at the Boundaries workshop in the context of Nathaniel Osgood, Manuel Baltieri, Martin Biehl, and Matteo Capucci's work on a control theoretic approach to agent-environment boundaries. Here there is a specified "good" subspace of state space, which might be the subspace where the agent stays alive, and we look for a subobject of the system contained in this good space--this means a region of state space within which, having once entered, the agent will always remain. In generalizing from deterministic dynamical systems to possibilistic ones, then, the most natural move produces the notion of a set of states from which the agent can never depart, no matter what decision it makes. This is a natural and useful concept, but it's certainly not the only one we might propose. If we think the agent is smart and makes good decisions (hopefully), then it's enough for survival if the agent can always stay in the viable region.To this end, consider a less obvious category whose objects are still dpds. For this, it's critical that the algebras for actually form a 2-category, because the powerset is of course a poset, not just a set. Therefore we could just as well define a lax morphism of dpds using the square below; since powersets are just posets, there is a unique choice of the arrow filling the square, and all it says in elements is that for all . This condition on says that is equipped with its own dynamics, and that every state reachable from in is also so reachable in . In particular, since we're using , there is always a move from to another point in , which means we've successfully formalized a notion of a possibilistic subsystem in which it's possible, rather than unavoidable, to remain permanently. Note that we've mildly modified the intuitive notion in that comes with its own dynamics that might not be maximal with respect to making a lax subobject--there is always a maximal such choice, if it's nonempty, given by intersecting each with . This is a kind of "induced" subobject in analogy with the concept of induced subgraph; in categorical terms these are the extremal monomorphisms of the category of dpds and lax maps. You might also note that this category of dpds and lax maps looks like it should be a 2-category; but I don't think there's any reasonable way to add 2-morphisms (which would be modifications) since the state spaces are discrete.
{"url":"https://forest.localcharts.org/kdc-0006.xml","timestamp":"2024-11-06T08:06:39Z","content_type":"application/xml","content_length":"11325","record_id":"<urn:uuid:7f0f2ebe-a8f3-4cf3-8263-b4de2543345f>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00604.warc.gz"}
Intersecting with an invertible sheaf and rational equivalence Lemma 82.21.1. In Situation 82.2.1 let $X/B$ be good. Assume $X$ integral and $\dim _\delta (X) = n$. Let $\mathcal{L}$, $\mathcal{N}$ be invertible on $X$. Choose a nonzero meromorphic section $s$ of $\mathcal{L}$ and a nonzero meromorphic section $t$ of $\mathcal{N}$. Set $\alpha = \text{div}_\mathcal {L}(s)$ and $\beta = \text{div}_\mathcal {N}(t)$. Then \[ c_1(\mathcal{N}) \cap \alpha = c_1(\mathcal{L}) \cap \beta \] in $\mathop{\mathrm{CH}}\nolimits _{n - 2}(X)$. Comments (0) Post a comment Your email address will not be published. Required fields are marked. In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). All contributions are licensed under the GNU Free Documentation License. In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 0EQW. Beware of the difference between the letter 'O' and the digit '0'. The tag you filled in for the captcha is wrong. You need to write 0EQW, in case you are confused.
{"url":"https://stacks.math.columbia.edu/tag/0EQW","timestamp":"2024-11-11T16:01:32Z","content_type":"text/html","content_length":"18854","record_id":"<urn:uuid:a5d60f6a-565e-44af-afcc-dfef0a76c137>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00530.warc.gz"}
Proportion of Power Let $x$ and $y$ be proportional. This article, or a section of it, needs explaining. In particular: Establish what types of object $x$ and $y$ are. As it stands here, they could be anything. You can help $\mathsf{Pr} \infty \mathsf{fWiki}$ by explaining it. To discuss this page in more detail, feel free to use the talk page. When this work has been completed, you may remove this instance of {{Explain}} from the code. Let $n \in \Z$. Then $x^n \propto y^n$. Let $x \propto y$. Then $\exists k \ne 0: x = k \times y$ by the definition of proportion. Raising both sides of this equation to the $n$th power: \(\ds x^n\) \(=\) \(\ds \paren {k \times y}^n\) \(\ds \) \(=\) \(\ds k^n \times y^n\) so $k^n$ is the desired constant of proportion. The result follows from the definition of proportion.
{"url":"https://proofwiki.org/wiki/Proportion_of_Power","timestamp":"2024-11-03T10:38:06Z","content_type":"text/html","content_length":"42112","record_id":"<urn:uuid:df799f76-4152-4ba9-bfc6-04cf7d0bd818>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00008.warc.gz"}
Refractive Changes after Removal of Anterior IOLs in Temporary Piggyback IOL Implantation for Congenital Cataracts Article information Korean J Ophthalmol. 2013;27(2):93-97 Received 2011 December 19; Accepted 2012 June 27. To assess the refractive change and prediction error after temporary intraocular lens (IOL) removal in temporary polypseudophakic eyes using IOL power calculation formulas and Gills' formula. Four consecutive patients (7 eyes) who underwent temporary IOL explantation were enrolled. Postoperative refractions calculated using IOL power calculation formulas (SRK-II, SRK-T, Hoffer-Q, Holladay, and the modified Gills' formula for residual myopia and residual hyperopia) were compared to the manifest spherical equivalents checked at 1 month postoperatively. The mean ages of temporary piggyback IOL implantation and IOL removal were 6.71 ± 3.68 months (range, 3 to 12 months) and 51.14 ± 18.38 months (range, 29 to 74 months), respectively. The average refractive error was -13.11 ± 3.10 diopters (D) just before IOL removal, and improved to -1.99 ± 1.04 D after surgery. SRK-T showed the best prediction error of 1.17 ± 1.00 D. The modified Gills' formula for myopia yielded a relatively good result of 1.47 ± 1.27 D, with only the variable being axial length. Formulas to predict refractive change after temporary IOL removal in pediatric polypseudophakia were not as accurate as those used for single IOL implantation in adult eyes. Nonetheless, this study will be helpful in predicting postoperative refraction after temporary IOL removal. In the treatment of congenital cataracts, visual rehabilitation after surgery is as important as the timing of the surgery. To correct postoperative refraction errors, spectacles and contact lenses are available. However, correction with spectacles induces aniseikonia and resultant amblyopia, particularly in severe anisometropic conditions. Moreover, glasses with heavy lenses are associated with optical problems, such as pin cushion effect, peripheral scotoma, and the jack-in-the-box phenomenon. As an alternative solution, contact lenses can provide optical advantages over spectacles. However, the physician should consider certain matters such as the difficulty associated with proper management of contact lenses, limited compliance in younger people, and the increased risk of corneal injury. Temporary polypseudophakia, initially described by Wilson et al. in 2001 [1], consists of permanent intraocular lens (IOL) implantation in the capsular bag and temporary IOL insertion into the ciliary sulcus in order to achieve emmetropia. The temporary IOL can be removed or exchanged according to subsequent refractive changes. Thus, during the critical period of visual development patients can overcome the adverse effects of thick spectacles or contact lenses, and acquire a constant image similar to that of normal eyes. When a patient's eye is in need of surgical correction for high myopia after cataract surgery the physician must estimate the refractive change after removal or exchange of the temporary IOL. In this study, we assessed the difference between the expected refractive error and the true manifest refraction after removal of the IOL from the ciliary sulcus. Materials and Methods This prospective study included 7 eyes from 4 patients who received temporary IOL removal between June 2008 and February 2009. All patients had undergone cataract extraction and temporary piggyback IOL implantation as infants at the Samsung Medical Center in Seoul, Korea. Data included the power of the implanted IOL, cycloplegic refraction prior to IOL removal, and cycloplegic refraction at 1 month postoperatively. Average keratometry readings, axial length (AXL), and anterior chamber depth (ACD) were measured preoperatively using the IOL Master (Carl Zeiss Meditec, Dublin, CA, USA). To calculate the predicted change in refraction (PCR) after IOL removal, we used nomograms described by Gills and IOL power calculation formulas including: SRK-II, SRK-T, Hoffer-Q, and Holladay. The Gills' nomograms were modified as shown in Table 1. Although the original Gills' formula for residual myopia is used for the calculation of the negative-diopter IOL which is implanted secondarily, the PCR was calculated using both equations for residual hyperopia and residual myopia, respectively. The power of the permanent IOL, keratometric reading, A-constant of the IOL, and AXL were used as variables in the IOL power calculation formulas. We then evaluated the prediction error, the difference between the results acquired from each of the formulas, and the actual refractive error after IOL removal. If the patient could not tolerate wearing glasses and the predicted refraction after anterior IOL explantation was so hyperopic that the eye was could be at risk for amblyopia, we planned to insert a temporary IOL in the interim. However, we encountered no cases requiring IOL exchange. Removal of the temporary IOL was performed when the predicted refraction was plano or mildly myopic. All eye operations were performed under general anesthesia by the same surgeon (ESC). The surgeon fragmented the temporary IOL using an IOL cutter via a 3-mm temporal clear corneal incision after injection of viscoelastic material into the anterior chamber. The pieces were then extracted using lens forceps through the incision; sutures were made with 10-0 nylon at the end of the surgery. Patients underwent temporary piggyback IOL implantation at a mean age of 6.71 ± 3.68 months (range, 3 to 12 months). All surgeries were performed as soon as possible following diagnosis of the cataract. During the follow-up period, patients experienced a myopic shift from -0.14 ± 2.14 diopters (D; range, -2 to 4 D) initially to -13.11 ± 3.10 D (range, -9.3 to -16.8 D) just prior to removal of the temporary IOL. The temporary IOL was removed at 51.14 ± 18.38 months (range, 29 to 74 months) after piggyback IOL implantation. Among the biometric results shown in Table 2, three ACDs (both eyes of patient 1 and the right eye of patient 3) and one AXL (the right eye of patient 3) fell beyond one standard deviation (SD) from the mean. The mean power of the removed IOLs was 13.5 ± 2.36 D, resulting in 11.18 ± 2.96 D of actual refractive change (Table 3). We calculated the SD of the prediction error and the mean of the absolute prediction error because prediction errors with opposite signs do not cancel out. As seen in Table 4, the SD and mean absolute prediction error of the SRK-T were smaller than those of the other formulas, but the difference was not statistically significant. In the case of Gills' formula, the mean for residual myopia showed a better result than that for residual hyperopia. Among the results of the IOL power calculating formulas, the right eye of patient 2, whose keratometry reading was farthest from the mean, had the largest prediction error in the results of each IOL calculating formula. Additionally, the values were at least two to six times greater than those of Gills' formula for myopia. On the other hand, the right eye of patient 1, with a prediction error that fell beyond one SD from the mean, had the shallowest ACD and the largest prediction errors in the results from Gills' formula for residual myopia and hyperopia. As axial myopia after piggyback IOL implantation progresses, the surgeon needs to consider when the temporary IOL should be removed or exchanged. Therefore, exact prediction of refractive error after IOL explantation is mandatory. However, to the best of our knowledge, there is no published data that describes the visual outcomes after temporary IOL removal. Therefore, we performed this study to evaluate the refractive outcomes after temporary IOL removal, and the predictability of several formulas used to calculate refractive change. Although the predictability of SRK-T was the best among the formulas, the modified Gills' formula for myopia had relatively good predictability. This formula in particular yielded more accurate results when the keratometry reading showed an extreme deviation from the mean. This may be due to the fact that Gills' formula is an empiric method of predictive modeling that does not use keratometric readings as a variable. Therefore, this formula can be used if the patient's keratometric reading is unavailable or falls beyond one SD from the mean. On the other hand, there are some limitations to this formula. First, this formula assumes that the IOL position is fixed according to the AXL. Hence, when the postoperative ACD differs from the mean, this formula does not work well. This was observed in the case of patient 1, in whom the ACD was unusually shallow. As such, a myopic error would be produced in an eye with a shallow ACD and a hyperopic error would be created in an eye with a deep anterior chamber. A 1-mm error of postoperative ACD corresponds to 1.5 D of refractive difference [2,3]. Second, this formula sets no limit on the angle of the haptics or the thickness of the optics. Although it was not proven in this study, posterior-angled haptics and thick optics in an anterior IOL may push the permanent IOL backward. Thus, after removal of the anterior IOL, if the permanent IOL does not move enough anteriorly due to irreversible anatomical changes caused by the temporary IOL, the resultant refraction error may become more hyperopic than expected. This could potentially explain why the mean of the formula for residual myopia is closer to emmetropia than that of the formula for residual hyperopia. Third, there are no references that can be accepted as standard data. The reasons for inaccuracy in refractive prediction might be due to the limited anterior segment development prior to the first surgery or astigmatism development after the second surgery caused by the corneal incision. In this study, the IOL power calculation formulas had a mean absolute prediction error ranging from 1.17 to 1.87 D. Significant variability did occur with outcomes ranging from 0.2 to 2.7 D. Compared with several reports on the prediction error after single IOL implantation in children, the accuracy of the IOL calculating formula has been reported as an absolute prediction error between 1 and 2 D with a wide range over 3 D [4-7]. Therefore, the predictability in our series appears similar to that observed in other series for single IOL implantation. However, these formulas were derived from data on an adult eye. It is well known that refractive error predictability in cases of pediatric cataracts is more difficult. There are several potential sources for error in IOL power selection in children. One source is inaccuracy of the AXL and keratometry measurements. A lack of cooperation in children who have biometry measurements not performed under general anesthesia may lead to a greater magnitude in errors of keratometry and AXL, which could lead to a resultant refractive surprise [8,9]. Although the AXL, ACD, and keratometry readings in this study were obtained using the IOL Master, which is a non-contact system that can calculate exact AXL and ACD measurement, the accuracy of this system may decrease in an uncooperative child without precise central fixation [10,11 The patients in this study were cooperative with multiple preoperative check-ups so it can be assumed that the results from the IOL Master were precise and reliable. However, the predictability of the postoperative refraction was not better than that seen in adults. This is likely the reason that ACD and keratometry readings deviated from the mean in some cases. Another reason is an unexpected change in ACD after removal of the anterior IOL. Using the IOL calculation formulas, it was estimated that the outcome would be closer to myopia rather than the actual refraction after surgery in both eyes of patient 2 and in the right eye of patient 3. After removal of the anterior IOL, the permanent IOL settled in a more posterior position than anticipated by the formulas. Put another way, the permanent IOL either shifted anteriorly less than expected or not at all. There is a possibility that the temporary IOL in the ciliary sulcus affected both the ciliary body and its related anatomical structures in an irreversible manner. No consensus has been reached regarding either a proper procedure for infantile cataract extraction or a refractive goal after surgery. Even in the auctorial clinic, although aphakic correction or single IOL implantation has been applied after cataract extraction, temporary polypseudophakia has resulted in the best visual outcomes thus far. Furthermore, the frequency of glaucoma, with the greatest risk for potential complications, has been noted to be relatively low. For that reason, we have used this method in most cataract patients under age 2 since 2002. The current study includes the first patient who underwent this procedure and subsequent consecutive cases including the initial IOL removal. This study is not without limitations. The small number of cases in this study yielded results that can hardly be considered representative. No concrete cause of predictive error has been found, and advanced changes have not been identified in all aspects of the surgery. However, it is meaningful that even without a keratometry reading predictions made using Gill's formula and SRK-T show relatively precise results compared with a variety of IOL calculation formulas. No potential conflict of interest relevant to this article was reported. 1. Wilson ME, Peterseim MW, Englert JA, et al. Pseudophakia and polypseudophakia in the first year of life. J AAPOS 2001;5:238–245. 11507583. 2. Olsen T. Calculation of intraocular lens power: a review. Acta Ophthalmol Scand 2007;85:472–485. 17403024. 3. Jin H, Rabsilber T, Ehmer A, et al. Comparison of ray-tracing method and thin-lens formula in intraocular lens power calculations. J Cataract Refract Surg 2009;35:650–662. 19304085. 4. Neely DE, Plager DA, Borger SM, Golub RL. Accuracy of intraocular lens calculations in infants and children undergoing cataract surgery. J AAPOS 2005;9:160–165. 15838444. 5. Andreo LK, Wilson ME, Saunders RA. Predictive value of regression and theoretical IOL formulas in pediatric intraocular lens implantation. J Pediatr Ophthalmol Strabismus 1997;34:240–243. 9253739. 6. Tromans C, Haigh PM, Biswas S, Lloyd IC. Accuracy of intraocular lens power calculation in paediatric cataract surgery. Br J Ophthalmol 2001;85:939–941. 11466250. 7. Mezer E, Rootman DS, Abdolell M, Levin AV. Early postoperative refractive outcomes of pediatric intraocular lens implantation. J Cataract Refract Surg 2004;30:603–610. 15050256. 8. Eibschitz-Tsimhoni M, Tsimhoni O, Archer SM, Del Monte MA. Discrepancies between intraocular lens implant power prediction formulas in pediatric patients. Ophthalmology 2007;114:383–386. 17270686. 9. Eibschitz-Tsimhoni M, Tsimhoni O, Archer SM, Del Monte MA. Effect of axial length and keratometry measurement error on intraocular lens implant power prediction formulas in pediatric patients. J AAPOS 2008;12:173–176. 18423341. 10. Quinn GE, Francis EL, Nipper KS, et al. Highly precise eye length measurements in children aged 3 through 12 years. Arch Ophthalmol 2003;121:985–990. 12860802. 11. Hussin HM, Spry PG, Majid MA, Gouws P. Reliability and validity of the partial coherence interferometry for measurement of ocular axial length in children. Eye (Lond) 2006;20:1021–1024. 16096655. Article information Continued © 2013 The Korean Ophthalmological Society
{"url":"https://ekjo.org/journal/view.php?number=1009&viewtype=pubreader","timestamp":"2024-11-12T23:26:12Z","content_type":"application/xhtml+xml","content_length":"46371","record_id":"<urn:uuid:3888b686-8ac7-4c50-9fc1-32a48fbbc89b>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00194.warc.gz"}
How To Write An Equation In Matlab (Resolved) MATLAB is a powerful mathematical software package used for scientific calculating and engineering applications. Writing equations in MATLAB is an essential skill for anyone wanting to make the most of this powerful numerical computing language. With MATLAB, you can create scripts and functions to solve equations and perform a variety of other calculations. This step-by-step guide provides an overview of some of the key concepts needed to write equations in MATLAB. Before you start writing equations in MATLAB, it’s important to have knowledge of MATLAB syntax and functions. If you’re new to MATLAB it is recommended to have basic knowledge of the programming language and the working environment. If you need more help to get started, the following resources provide a good introduction to MATLAB: Writing Equations Writing equations in MATLAB is relatively straightforward and follows the same syntax as other programming languages. The basic structure of a MATLAB equation is shown below: y = expression Here, y is a single symbol that represents the output of the equation and expression is a combination of MATLAB commands, functions, and constants that describe the equation. For example, the following equation will calculate the volume of a cube with side length a: V = a^3 Here, V is the output and a^3 is the expression. In addition to writing basic equations, you can also write equations using multiple variables. In this case, the output of the equation will be an array containing the solution for each of the variables. For example, the following equation will calculate the area of a triangle with sides a, b and c: A = (a+b+c) / 2 Here, A is the output array and (a+b+c) / 2 is the expression. Finally, you can also write equations that involve conditional logic such as the if-else statement. For example, the following equation will check if an integer x is even or odd: y = (x mod 2 == 0) ? 'Even' : 'Odd' Here, y is the output of the equation. If x is even, then y will be set to 'Even' and if x is odd, then y will be set to 'Odd'. What is the syntax for writing equations in MATLAB? The basic syntax for writing equations in MATLAB is as follows: y = expression Here, y is a single symbol that represents the output of the equation and expression is a combination of MATLAB commands, functions, and constants that describe the equation. How can I use multiple variables in an equation? When writing equations with multiple variables, the output of the equation will be an array containing the solution for each of the variables. The syntax for writing equations with multiple variables is the same as for writing equations with a single variable. Can I use conditional logic in equations? Yes, you can use conditional logic such as the if-else statement in equations. The syntax for writing equations with conditional logic is similar to the syntax for writing equations with a single What is the mod operator? The mod operator (mod) is a mathematical operator that returns the remainder of a division operation. For example, the expression 7 mod 5 will return 2 since 5 divides into 7 evenly twice (with a remainder of 2). Can I save a MATLAB equation to a file? Yes, you can save a MATLAB equation to a file as an M-file. An M-file is a text document which contains MATLAB code. To save an equation to an M-file, open the MATLAB editor and type in your equation. Once you’re finished, go to File > Save As and give the file a name. Then press the Save button to finish.
{"url":"https://lxadm.com/how-to-write-an-equation-in-matlab/","timestamp":"2024-11-08T09:33:20Z","content_type":"text/html","content_length":"56870","record_id":"<urn:uuid:5785563c-f1a2-4440-b163-84569b73538d>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00492.warc.gz"}
compute R' from the following FSM + General Questions (11) For the following FSM state var: p,q inp. var: a init q trans a&p&!q&next(p)&next(q) | a&q&!p&next(p)&next(q) | a&q&!next(p)&next(q) which looks as follows we find this associated Kripke structure: There are deadend states and unreachable states that we would like to remove from the set of transitions. The deadend states s0,s1,s2,s4,s6 are described by !p&!q&!a | !p&!q&a | !p&q&!a | p&!q&!a | p&q&!a and the unreachable states s0,s1,s4,s5 by !p&!q&!a | !p&!q&a | p&!q&!a | p&!q&a so that our updated transition relation should be R' := & !(!p&!q&!a | !p&!q&a | p&!q&!a | p&!q&a) & !(!next(p)&!next(q)&!next(a) | !next(p)&!next(q)& next(a) | !next(p)& next(q)&!next(a) | next(p)&!next(q)&!next(a) | next(p)& next(q)&!next(a) and we also remove the deadends from the initial states I' := I & !(!p&!q&!a | !p&!q&a | !p&q&!a | p&!q&!a | p&q&!a) which is the following structure So, your result is missing a transition. thanks updated.how to write for below question this part is confusing because my solution is incorrect : c) R1 is obtained by removing the transitions to deadends in R′. Construct a propositional logic formula for R1.Submit your solution in the form:a&p&q&next(p)&!next(q) | a&!p&!q&next(p)&next(q) my solution: after removing dead ends: p&a&!q&next(p)&next(q)&next(a)| p&a&q&next(q)&!next(p)&!next(a) | q&a&!p&next(q)&!next(p)&next(a) |!p&a&q&next(q)&next(p)&next(a) after removing unreachable nodes: p&a&q&next(q)&!next(p)&!next(a) | q&a&!p&next(q)&!next(p)&next(a) |!p&a&q&next(q)&next(p)&next(a)
{"url":"https://q2a.cs.uni-kl.de/876/compute-r-from-the-following-fsm","timestamp":"2024-11-15T01:11:30Z","content_type":"text/html","content_length":"58243","record_id":"<urn:uuid:3c51ff52-c52f-4365-8a25-88f9be7becb2>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00660.warc.gz"}
The Exercises 1 The Exercises 1.1 On Commutators and Derived Subgroups 1.1-1 Exercise The derived subgroup G' of a group G is defined as the group generated by all commutators [a,b] := a^-1b^-1ab, where a,b ∈ G. In general, not all elements of the derived subgroup of a group are actually commutators themselves. Find a group G of smallest possible order such that the set of all commutators of elements of G does not form a group! 1.1-2 Hints Commutators can be computed by the operation Comm, and the derived subgroup can be computed by the operation DerivedSubgroup. The function Cartesian and the operations Set and AsList can be helpful in this context as well. If you are lazy but patient, you may simply use CommutatorLength. Otherwise, this exercise requires writing a couple of lines of GAP code. The wanted group has order greater than 50, but less than 100. For a solution, see Section 2.1. 1.2 On Outer Automorphisms Fixing Conjugacy Classes 1.2-1 Exercise Is there a finite group which has an automorphism which is not inner, but which nevertheless fixes all conjugacy classes setwise? -- If so, then find an example of least possible order! 1.2-2 Hints This exercise is slightly more difficult than that given in Section 1.1. Useful functions and operations are AllGroups, AutomorphismGroup, IsInnerAutomorphism, IsConjugate, Image and AsList. It is a good exercise to try to find a way to write down the group and the automorphism in a nice and human-readable form. One possible way to achieve this is to determine a "nice" permutation representation of the group, and to choose an automorphism with the desired property which fixes all but one of the generators. For a solution, see Section 2.2. 1.3 Drawing the Ulam Spiral 1.3-1 Exercise Write a GAP function which draws the Ulam spiral and saves the picture in a file! The arguments of the function should be the size (width / height) of the picture to be drawn and the name of the output file. 1.3-2 Hints This is more or less just an easy programming exercise, which does not require particular mathematical knowledge. Use the function NullMat to create a zero matrix over the field with two elements, and use this matrix as a grid to draw the spiral. The RCWA package [Koh07b] provides a function SaveAsBitmapPicture, which can be used to produce a picture file from the matrix. For a solution, see Section 2.3. 1.4 Automorphism Group of the Smallest Projective Plane 1.4-1 Exercise The automorphisms of a finite projective plane are determined by the permutations of the set of points which move lines to lines. Thus, labelling the points with integers 1, 2, dots, n, the automorphism group can be described as a subgroup of the symmetric group S_n. The smallest projective plane has 7 points and 7 lines. Any point is incident with 3 lines, and any line is incident with 3 points. We label the points with integers 1, 2, dots, 7. Then the lines are given by the sets g_1 := {1,2,3}, g_2 := {1,4,7}, g_3 := {1,5,6}, g_4 := {2,4,6}, g_5 := {2,5,7}, g_6 := {3,4,5} and g_7 := {3,6,7} of points which are incident with them. Compute the automorphism group of the smallest projective plane! 1.4-2 Hints A useful function in this context is SubgroupProperty. The actual determination of the group can be done in one statement. Can you figure out the isomorphism type of the group by theoretical means? For a solution, see Section 2.4. 1.5 Installing a Missing Method 1.5-1 Exercise What happens when you enter the command Centre(AlternatingGroup(100)); in GAP? Are you satisfied with the performance, given that the centre of a nonabelian simple group is always trivial and that GAP knows that the alternating group of degree 100 is simple? Probably not. -- Thus improve the performance radically by implementing a method for Centre for simple groups, which returns either the group itself or the trivial subgroup, depending on whether the group is abelian or not! 1.5-2 Hints Methods are installed with InstallMethod. The exercise is easy -- basically all you need to do is to look up in the documentation how this function is used. If your solution is correct, then Centre (AlternatingGroup(100)); returns the trivial subgroup immediately, and Centre(G) still computes the centre of a non-simple group G using methods implemented in the GAP Library. For a solution, see Section 2.5. 1.6 Finding Good abc Triples 1.6-1 Exercise Given a positive integer n, let rad(n) denote the product of distinct prime divisors of n. The abc conjecture states that for any ϵ > 0 there is a constant K_ϵ such that for any triple (a,b,c) of coprime positive integers satisfying the equation a + b = c we have c < K_ϵ rad(abc)^1 + ϵ. A triple (a,b,c) of coprime integers satisfying a + b = c is called an abc triple if rad(abc) is less than c. An abc triple (a,b,c) is called a good abc triple if it satisfies even ln(c)/ln( rad (abc)) > 1.4. The left-hand side of the inequality is sometimes called the ratio of the abc triple. It can be shown easily that there are infinitely many abc triples, but if the abc conjecture holds, there are only finitely many good abc triples. Write a GAP function which finds all good abc triples (a,b,c) with given radical rad(abc) and with c less than a given bound! Can you find a new triple, which is not yet on Abderrahmane Nitaj's list of known good abc triples? 1.6-2 Hints You can start by determining all positive integers less than the given bound all of whose prime factors divide the given radical. This can be done much more efficiently than by the Sieve of Eratosthenes, or even by looping over all integers in the range and factoring -- it is easy and very elementary to find out how. Then you can loop over all pairs of these integers, test whether they are coprime and compute the ratio if they are. To compute the ratio, you need the operation Float which converts integers and rationals to floating point numbers, and the function LOG_FLOAT which computes the natural logarithm of a floating point number. For a solution, see Section 2.6. 1.7 Automorphism Groups of Finite Graphs 1.7-1 Exercise Determine all undirected graphs with 6 vertices up to isomorphism! -- How many isomorphism types of such graphs are there? The graphs should be represented as sets of edges, where the edges should be written as sets of two vertices, each. Example: In this representation, the graphs [[1,2],[1,6],[2,3],[3,4],[4,5],[5,6]] and [[1,5],[1,6],[2,4],[2,6],[3,4],[3,5]] are both isomorphic to the regular hexagon. Write a function GraphAutomorphismGroup(Gamma,n) which computes the automorphism group of the graph Gamma with n vertices. -- The automorphisms are precisely the permutations of the set of vertices which move edges to edges. Note that the cardinality n of the set of vertices needs to be specified, as there may be isolated vertices. Find out which of the (up to conjugation in S_6) 16 transitive permutation groups of degree 6 occur as automorphism groups of graphs with 6 vertices! -- How do the corresponding graphs look like? 1.7-2 Hints ad a) You can obtain the set of all graphs with n vertices in the suggested notation by Combinations(Combinations([1..n],2));. Then you need to find a suitable group action on this set such that two graphs are isomorphic if and only if they lie in the same orbit. Useful operations are Orbits and Representative. ad b) In principle you can implement a fancy algorithm here, but for our purposes a very basic one is perfectly sufficient. First find out how to check whether a given permutation of the vertices induces a graph automorphism. Then you can use SubgroupProperty to determine the group formed by all such permutations. ad c) Given Part a) and b), this is more or less straightforward. Useful functions / operations are AllTransitiveGroups, NrMovedPoints and TransitiveIdentification. For a solution, see Section 2.7. 1.8 Enumerating Paths 1.8-1 Exercise Answer the following questions using GAP: In how many ways can 1 ∈ S_4 be written as a product of exactly 100 transpositions? In how many ways can a horse cross the chess board from the upper left to the lower right corner with exactly 100 moves? 1.8-2 Hints Construct a matrix x ∈ Z^n × n with ones at suitable positions and zeros everywhere else and compute powers. For Part a), choose n := 24, and for Part b), choose n := 8^2 = 64. Useful functions are e.g. NullMat, SymmetricGroup, Combinations, Cartesian and Position. For a solution, see Section 2.8. 1.9 Wieferich Primes 1.9-1 Exercise By Fermat's little theorem, for any prime number p we have p|2^p-1-1. A prime number p is called a Wieferich prime if it satisfies even p^2|2^p-1-1. Write a GAP function which checks whether a given positive integer is a Wieferich prime, and try to find as many Wieferich primes as you can! 1.9-2 Hints Writing the GAP function is easy. Use PowerModInt instead of first computing 2^p-1 and then reducing modulo p^2. Two Wieferich primes can be found easily, but finding a third one is at least hard. For a solution, see Section 2.9. 1.10 Counting Words in a File 1.10-1 Exercise Write a GAP function which, given a filename, returns a word distribution statistics. The function should return a list with entries of the form [ "word", 237 ] for any word in the file, indicating the word and the number of times it occurs. A "word" in our sense is a sequence of letters with no non-letter characters in between, i.e. it does not need to be a dictionary word. Can you put your function into one line with no more than 80 characters? 1.10-2 Hints You will probably need the operation Collected. Further, the functions StringFile and WordsString from the GAPDoc package [LN06] will be useful. For a solution, see Section 2.10. 1.11 Non-Metabelian p-Groups 1.11-1 Exercise A group G is called metabelian if it has an abelian normal subgroup N such that the quotient G/N is abelian as well. Further, a group is called a p-group if its order is a power of a prime. Find a non-metabelian p-group of least possible order! 1.11-2 Hints First determine conceivable orders of non-metabelian groups by means of theory. Then just run a brute-force search over the groups of the smallest few of these orders in the Small Groups Library [BEO07]. Useful functions / operations are AllGroups, DerivedSubgroup and IsAbelian. For a solution, see Section 2.11. 1.12 The Growth of the Sum-of-Divisors Function 1.12-1 Exercise Let σ denote the sum-of-divisors function. Given a positive integer n, let H(n) := ∑_k=1^n 1/k be the nth harmonic number, and put B(n) := H(n) + ln(H(n)) ⋅ e^H(n). Examples are σ(24) = 60 and B(24) ≈ 61.7575, as well as σ(60) = 168 and B(60) ≈ 170.977. Can you find an integer n > 1 such that σ(n) is larger than B(n)? 1.12-2 Hints In GAP, σ(n) can be computed by Sigma(n). Use the operation Float to convert integers and rationals to floating point numbers, and use the functions LOG_FLOAT and EXP_FLOAT to compute logarithms and to evaluate the exponential function, respectively. Be prepared that finding an integer n > 1 such that σ(n) is larger than B(n) is not easy. For a solution, see Section 2.12. 1.13 Pell's Equation 1.13-1 Exercise Pell's equation is any Diophantine equation of the form x^2 - ny^2 = 1, where n is a nonsquare integer. Write a GAP function which takes an argument n and returns the smallest solution of the equation x^2 - ny^2 = 1 in positive integers! This solution is called the fundamental solution. Your function should return for example the answer for n = 421 very quickly. 1.13-2 Hints Useful functions in this context are ContinuedFractionExpansionOfRoot and ContinuedFractionApproximationOfRoot. For a solution, see Section 2.13. 1.14 Automorphism Groups of Odd Order 1.14-1 Exercise Obviously, the automorphism groups of both the trivial group and the cyclic group of order 2 are trivial, and have therefore odd order. Find the smallest group of order greater than 2 whose automorphism group has odd order! 1.14-2 Hints First try to exclude the groups of even order by means of theory. Then run a brute-force search over the Small Groups Library [BEO07]. Useful functions / operations in this context are AllGroups and For a solution, see Section 2.14. 1.15 Composite Sums 1.15-1 Exercise Find an odd positive integer n such that n+2^k is composite for any positive integer k! -- Can you find the least possible such n? 1.15-2 Hints First perform a brute-force search to find candidates. Then try to verify whether these candidates indeed have the desired property. A useful function in this context is OrderMod. In addition, it may be worth to have a look at the ResClasses package [Koh07c], and there in particular at the function ResidueClass. Finding a good candidate for the least possible number with the given property is reasonably easy, but the verification that it is indeed the smallest one is computationally difficult. For a solution, see Section 2.15. 1.16 Rational Points on the Unit Sphere 1.16-1 Exercise Write a GAP function which computes all rational points on the unit sphere x^2+y^2+z^2=1 which correspond to solutions of the diophantine equation a^2+b^2+c^2=d^2 with a, b and c not exceeding a given bound. Further, your function should draw a picture showing the projection of one octant of the sphere to the x-y-plane, where the rational points are marked by black pixels. 1.16-2 Hints Just write a nested loop to determine the solutions. Note that the variables can be permuted, thus you can assume a ≥ b ≥ c and generate the solutions not satisfying this inequality by permuting a, b and c. This saves almost 5/6 of the time. Maybe a useful function in this context is Arrangements. Create an empty grid over GF(2) by the function NullMat, invert it (i.e. replace zeros by ones) and mark the solutions by zeros there. Finally, the RCWA package [Koh07b] provides a function SaveAsBitmapPicture, which can be used to write the picture to a file. For a solution, see Section 2.16. 1.17 Aliquot Sequences 1.17-1 Exercise Given a positive integer n, the Aliquot sequence n = a_1, a_2, a_3, a_4, dots starting at n is defined by a_i+1 = σ(a_i) - a_i, where σ denotes the sum-of-divisors function. We say that the Aliquot sequence starting at n stops if there is an index i such that a_i = 1, and we say that it runs into a cycle if there are distinct indices i and j such that a_i = a_j. Find out whether all Aliquot sequences starting at integers n < 100 either stop or run into cycles! -- Can you do the same for all Aliquot sequences starting at integers n < 200 or n < 300? Do you see algorithmic problems, and of which kind are they? 1.17-2 Hints In GAP, σ(n) can be computed by Sigma(n). Computing σ(n) requires factoring n. For this, the GAP Library method for the operation Sigma calls the GAP Library function FactorsInt directly. This works for small n, but for larger n, FactorsInt will often give up and raise an error message which suggests to use the FactInt package [Koh07a]. If you have loaded FactInt, you may find this strange. However, this has nothing to do with FactInt, as this package does not get a chance to help with factoring. You can make Sigma benefit from FactInt if you fetch the method for Sigma from lib/ numtheor.gi, put it into a separate file, replace FactorsInt by Factors, increase the method rank to something like SUM_FLAGS and read this file into GAP. Alternatively you can make the change directly in the GAP Library. Then you do not need to increase the method rank, but (as after every Library change) you need to rebuild the completion files. For this, start GAP with option -N and enter CreateCompletionFiles();. For a solution, see Section 2.17. 1.18 The Q Sequence 1.18-1 Exercise Hofstadter's Q sequence is defined by Q_1 = Q_2 = 1 and Q_n = Q_n-Q_n-1} + Q_n-Q_n-2} for n > 2. Write a GAP function which takes an integer argument l and computes the first l terms of the Q sequence. Write a GAP function which plots the graph of the Q sequence. 1.18-2 Hints ad a) The Q sequence is defined recursively. Ask yourself the question why a recursive implementation is not a particularly good idea in this case, anyway. ad b) Use the function NullMat to create a zero matrix over the field with two elements, turn the zeros into ones if you prefer a black graph on a white background to a white graph on a black background, and use this matrix as a grid to draw the graph. The RCWA package [Koh07b] provides a function SaveAsBitmapPicture, which can be used to produce a picture file from the matrix. For a solution, see Section 2.18. 1.19 A Quickly Growing Function 1.19-1 Exercise Have a look at the following function, which takes as arguments three nonnegative integers and which returns a positive integer: f := function ( i, j, k ) if i = 0 then if j = 0 then if k = 0 then return 2; else return 2^f(i,j,k-1); fi; else return f(i,j-1,f(i,j-1,k)); fi; return f(i-1,f(i-1,j,k),f(i-1,j,k)); Try to evaluate f for small values of i, j and k! -- How far can you get? Can you evaluate f(1,1,1) or f(2,2,2), or can you perhaps write down these values as non-recursive expressions? The function f is still a computable function -- recall however that there are functions which grow faster than any computable function! 1.19-2 Hints Some values this function takes: f(0,0,0) is 2, f(0,0,1) is 4, f(0,0,2) and f(0,1,0) are both 16, f(0,0,3) is 65536, f(0,0,4) and f(0,1,1) are both 2^65536, f(0,0,5) is 2^2^65536}, and already f (1,1,1) is basically too large to be written down in a non-recursive way. The exercise asks just for some experimentation -- thus there is no solution given. 1.20 The 3n+1 Conjecture 1.20-1 Exercise The 3n+1 conjecture, also known as Collatz conjecture, asserts that iterated application of the Collatz mapping to any given positive integer eventually yields 1. This problem has been posed by Lothar Collatz in the 1930's, and it is still open today. Investigate Collatz' conjecture by means of computation with GAP, and try to find a proof or a counterexample! 1.20-2 Hints The 3n+1 conjecture is generally believed to be true, but if it is false, this may be for two reasons: firstly, there may be unbounded sequences, and secondly, there may be sequences which run into cycles not containing 1. Jeffrey C. Lagarias has compiled a comprehensive annotated bibliography [Lag07] on this conjecture. The GAP package RCWA [Koh07b] provides a large variety of methods to compute with mappings like the Collatz mapping. We show just how to enter the Collatz mapping and how to compute the sequences we are interested in: gap> T := RcwaMapping([[1,0,2],[3,1,2]]); <rcwa mapping of Z with modulus 2> gap> SetName(T,"T"); gap> Display(T); Rcwa mapping of Z with modulus 2 n mod 2 | n^T 0 | n/2 1 | (3n + 1)/2 gap> Trajectory(T,15,[1]); [ 15, 23, 35, 53, 80, 40, 20, 10, 5, 8, 4, 2, 1 ] gap> Trajectory(T,27,[1]); [ 27, 41, 62, 31, 47, 71, 107, 161, 242, 121, 182, 91, 137, 206, 103, 155, 233, 350, 175, 263, 395, 593, 890, 445, 668, 334, 167, 251, 377, 566, 283, 425, 638, 319, 479, 719, 1079, 1619, 2429, 3644, 1822, 911, 1367, 2051, 3077, 4616, 2308, 1154, 577, 866, 433, 650, 325, 488, 244, 122, 61, 92, 46, 23, 35, 53, 80, 40, 20, 10, 5, 8, 4, 2, 1 ] There is (of course!) no solution given for this exercise.
{"url":"https://stefan-kohl.github.io/gap-exercises/chap1.html","timestamp":"2024-11-03T18:26:06Z","content_type":"application/xhtml+xml","content_length":"50590","record_id":"<urn:uuid:c20c9b4f-260b-4443-8600-01b39b6318db>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00593.warc.gz"}
Overview: MODECLUS Procedure The MODECLUS procedure clusters observations in a SAS data set by using any of several algorithms based on nonparametric density estimates. The data can be numeric coordinates or distances. PROC MODECLUS can perform approximate significance tests for the number of clusters and can hierarchically join nonsignificant clusters. The significance tests are empirically validated by simulations with sample sizes ranging from 20 to 2000. PROC MODECLUS produces output data sets containing density estimates and cluster membership, various cluster statistics including approximate Most clustering methods are biased toward finding clusters possessing certain characteristics related to size (number of members), shape, or dispersion. Methods based on the least squares criterion (Sarle; 1982), such as The CLUSTER Procedure ) is somewhat biased toward finding clusters of equal variance. Many clustering methods tend to produce compact, roughly hyperspherical clusters and are incapable of detecting clusters with highly elongated or irregular shapes. The methods with the least bias are those based on nonparametric density estimation (Silverman 1986, pp. 130–146; Scott 1992 , pp. 125–190) such as density linkage (see Chapter 29, The CLUSTER Procedure ), Wong and Lane (1983), and Wong and Schaack (1982). The biases of many commonly used clustering methods are discussed in Chapter 11, Introduction to Clustering Procedures. PROC MODECLUS implements several clustering methods by using nonparametric density estimation. Such clustering methods are referred to hereafter as nonparametric clustering methods. The methods in PROC MODECLUS are related to, but not identical to, methods developed by Gitman (1973), Huizinga (1978), Koontz and Fukunaga (1972a, 1972b), Koontz, Narendra, and Fukunaga (1976), Mizoguchi and Shimura (1980), Wong and Lane (1983). Details of the algorithms are provided in the section Clustering Methods. For nonparametric clustering methods, a cluster is loosely defined as a region surrounding a local maximum of the probability density function (see the section Significance Tests for a more rigorous definition). Given a sufficiently large sample, nonparametric clustering methods are capable of detecting clusters of unequal size and dispersion and with highly irregular shapes. Nonparametric methods can also obtain good results for compact clusters of equal size and dispersion, but they naturally require larger sample sizes for good recovery than clustering methods that are biased toward finding such "nice" clusters. For coordinate data, nonparametric clustering methods are less sensitive to changes in scale of the variables or to affine transformations of the variables than are most other commonly used clustering methods. Nevertheless, it is necessary to consider questions of scaling and transformation, since variables with large variances tend to have more of an effect on the resulting clusters than those with small variances. If two or more variables are not measured in comparable units, some type of standardization or scaling is necessary; otherwise, the distances used by the procedure might be based on inappropriate apples-and-oranges computations. For variables with comparable units of measurement, standardization or scaling might still be desirable if the scale estimates of the variables are not related to their expected importance for defining clusters. If you want two variables to have equal importance in the analysis, they should have roughly equal scale estimates. If you want one variable to have more of an effect than another, the former should be scaled to have a greater scale estimate than the latter. The STD option in the PROC MODECLUS statement scales all variables to equal variance. However, the variance is not necessarily the most appropriate scale estimate for cluster analysis. In particular, outliers should be removed before using PROC MODECLUS with the STD option. A variety of scale estimators including robust estimators are provided in the STDIZE procedure (for detailed information, see Chapter 82, The STDIZE Procedure ). Additionally, the ACECLUS procedure provides another way to transform the variables to try to improve the separation of clusters. Since clusters are defined in terms of local maxima of the probability density function, nonlinear transformations of the data can change the number of population clusters. The variables should be transformed so that equal differences are of equal practical importance. An interval scale of measurement is required. Ordinal or ranked data are generally inappropriate, since monotone transformations can produce any arbitrary number of modes. Unlike the methods in the CLUSTER procedure, the methods in the MODECLUS procedure are not inherently hierarchical. However, PROC MODECLUS can do approximate nonparametric significance tests for the number of clusters by obtaining an approximate Another important difference between the MODECLUS procedure and many other clustering methods is that you do not tell PROC MODECLUS how many clusters you want. Instead, you specify a smoothing parameter (see the section Density Estimation) and, optionally, a significance level, and PROC MODECLUS determines the number of clusters. You can specify a list of smoothing parameters, and PROC MODECLUS performs a separate cluster analysis for each value in the list.
{"url":"http://support.sas.com/documentation/cdl/en/statug/63347/HTML/default/statug_modeclus_sect001.htm","timestamp":"2024-11-08T12:48:50Z","content_type":"application/xhtml+xml","content_length":"17154","record_id":"<urn:uuid:7bbdbbb2-8765-4672-bf64-da1fc93d8627>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00232.warc.gz"}
Simulating Wind on Procedural Terrain using GPU Accelerated Lattice Boltzmann Method Note: The full source code for this article can be found here (summarized version). Individual sections contain references to specific code snippets below. For a long time I have been working on generating terrain by simulating geomorphological processes. This primarily includes various forms of erosion, i.e. hydraulic, thermal and wind erosion processes. I believe that in order to generate interesting terrain, you have to simulate processes which are often as complex as they are subtle. I have always been intrigued by the fact that the atmosphere is a major driving force behind geomorphological processes. From climatological patterns, to wind-shadow effects and the coupling to vegetation and erosion, the movement of mass and energy through wind is an important contributor. Initially, I was motivated by the idea of simulating cloud patterns with realistic coupling to the terrain. While highly complex, it always seemed like a fascinating goal. While I made some previous attempts at simulating clouds, climate and wind patterns, these were all in 2.5D, which I found insufficient to simulate a real micro-climate. Note: Many interesting atmospheric phenomena, such as updrafts, require a 3D or layered atmospheric model. Some of my previous attempts at atmosphere simulation included Finite Volume Methods and Cellular Automata. I decided to overcome this by stepping into 3D simulation of wind on terrain using the Lattice Boltzmann Method. I chose to focus on wind simulation first, because a good wind model can act as the base for future atmospheric simulation. In this article, I want to give context to the relation between the physical and mathematical modelling of fluid flow problems and explain the Lattice Boltzmann Method. Then I will show how I apply this method to wind on terrain, and present an implementation (with code samples) in GPU accelerated form using compute shaders in OpenGL4 / C++. Real-Time Wind on Terrain Visualization. The terrain was generated using my SimpleHydrology system (code, article). Wind as a Fluid Flow Problem Note: This section assumes familiarity with calculus of integrals and derivatives, in order to relate mathematical and physical models. Feel free to skip it if you are familiar with computational fluid dynamics or are only interested in the implementation. What is a Fluid? While we know intuitively that water is a fluid, it surprised me to learn in my first fluid mechanics lecture that the definition of a fluid has nothing to do with gaseous or liquid states of matter. In fact the definition is much more precise. So what is a fluid? According to Wikipedia, a fluid is a medium which cannot resist any shear stress applied to it. This means that it does not resist deformation by an external force. In contrast, a solid resists “deformation” (e.g. by generating an elastic restoring force proportional to the shear force), while a fluid does not and simply deforms. Note: In general, different types of medium (fluid, solid, viscoelastic fluid, non-newtonian etc.) can be classified by the relation between applied shear-force and internal rate of deformation (a.k.a. “strain”)! A chart showing the relationship between shear stress and strain for various mediums. A newtonian fluid has a linear relationship, while a non-newtonian fluid like Ooblek is shear thickening. In this graph, a solid would be represented by a vertical line from the origin. Source: Wikimedia Commons, Dhollm By this definition, air is a fluid, since it behaves like a fluid as defined by its deformation behavior, and thus we can apply the laws of fluid mechanics to it. Discrete and Continuous Mechanics While Classical Mechanics models how discrete particles with mass behave under the application of different forces, Continuum Mechanics applies the same principles to a mass where properties change continuously in space. One well-known result from classical mechanics is the elastic collision (remember physics class?), which results from the application of the conservation of mass, momentum and energy to discrete, colliding masses. A GPU implementation of N-Ball elastic collisions with boundary conditions. Note that each particle has discrete properties, such as velocity. Applying a constant right-ward body force to all particles gives the appearance of flow around the cylinders. NParticles = 20’000. Source Code Similarly, a well-known result from fluid mechanics are the Navier-Stokes Equations, which fundamentally encode the principles of conservation of mass, momentum and energy (“conservation laws“); as applied to a continuum. A similar simulation, where flow around cylinders is shown. Note that we compute a continuous velocity field over the entire domain, instead of at specific locations. This is implemented as a 2D Lattice Boltzmann simulation on the GPU. Source Code Note: The Navier-Stokes Equations (for newtonian fluids) are derived by combining the differential forms of the conservation laws with a linear relationship for the shear-stress strain relationship, proportional to the fluid viscosity. The mathematical tool we use to express conservation laws “continuously” are partial differential equations (PDE). Note: Fluid mechanics is a discipline of continuum mechanics. Continuum mechanics also apply to solid mechanics. The Navier-Stokes equations are partial differential equations. The (Differential) Conservation Law A partial differential equation relates the partial derivatives of an unknown multivariable function to each other, thereby describing its behavior in various dimensions (commonly space, time) and implicitly defining the function. A PDE can be interpreted as a constraint, and solving a PDE means finding functions which satisfy the constraint. Note: A function which satisfies a given PDE is generally not known, and is generally not guaranteed to exist or be unique, e.g. a PDE can be satisfied by a family of functions. A differential conservation law is a PDE which describes the behavior of a continuous function of a conserved quantity. We construct the differential conservation law in such a way that it enforces conservation as a constraint (see below). Writing any physically conserved quantity as a function of time and space, this function must satisfy the differential conservation law PDE constraint in order to be conserved. How can we express a conservation law as a PDE? Note: There are different types of PDE, and conservation laws are typically “hyperbolic“. This implies that disturbances in time and space move with finite speed. The other types of PDE are elliptic and parabolic, and exhibit different properties. The Advection-Diffusion Equation One of the most basic differential conservation laws is the “Advection-Diffusion Equation“, which describes how mass is conserved for a diffusing, dilute solute in a moving fluid. This states that the derivative of the solute concentration in time (at a fixed location) is given by the sum of three terms: The diffusion term, the advection term and the source term. Note: The diffusion term derives from Fick’s Law, which states that solute flux is proportional to the concentration gradient. The advective flux is proportional to the concentration times the fluid velocity. Taking the gradient of the flux then yields the accumulation, equivalent to (what goes in) – (what goes out). The source term R is the only “non-conservative” term, and describes the generation or destruction of c in the volume. Setting the source R to 0 means that the only way for mass to enter or leave a volume is by diffusion or advecting across the differential volume boundary, thus conserving mass. More generally, conservation laws express the constraint: what goes in minus what goes out is what accumulates inside For mass, this means mass is never generated anywhere, but has to come from across the boundary (i.e. somewhere else). Example of the Advection-Diffusion-Equation acting on a 2D function. As the PDE evolves the function time, the integral of the function (i.e. mass / volume) is conserved. Source: Shiyu Ji, Wikimedia Numerical Solution of PDEs Given a PDE and sufficient boundary and initial conditions (BC, IC), we wish to find a function which satisfies the PDE. Integrating PDEs analytically, and in particular the Navier-Stokes Equations, is typically only possible for a small number of well-known problem statements with simple boundary and initial conditions (e.g. Hagen-Poiseuille-Flow). Solving problems with more complex initial- and boundary- conditions requires making approximations in the form of numerical methods. Even with numerical methods, solution of PDEs for various problem statements defies a one-size-fits-all approach. This has spawned a large ecosystem of methods and tools for the solution of PDEs, but most of them involve some form of discretization and subsequent numerical integration of the continuous equations. Each method makes different assumptions and approximations, resulting in varying efficiency, accuracy and stability. Computational Fluid Dynamics (CFD) therefore concerns itself with the numerical solution of fluid flow problems, and a large area of research is focused on improving the speed, accuracy and stability of these methods for solving PDEs. One such method is the Lattice-Boltzmann Method. While the Boltzmann Equation is generally more difficult to solve analytically, it has computational advantages as we will see. Note: The three most common “direct” solution methods for the NSE are the Finite Difference (FD), Finite Volume (FV) and Finite Element (FE) methods. Each uses a different approach to discretising, integrating and boundary condition handling. – FD: Approximates continuum through point values, approximates derivatives through finite-differences. – FV: Approximates continuum through cell averages – FE: Approximates continuum as weighted sum of based functions for interpolating values A number of more exotic methods exist which solve fluid flow problems not by modelling the continuum directly but by modelling e.g. particles (SPH) or velocity distributions (LBM) in such a way to still fulfil the underlying constraints of the NSE. Both of these methods are more suitable to multi-phase flow. The Lattice Boltzmann Method Note: The main source for the following description is the book “The Lattice Boltzmann Method, Principles and Practice” (2017) by Timm Krueger. I recommend reading it for details. The Lattice Boltzmann Method (LBM) is a numerical method for fluid flow simulation, which similar to other methods aims to satisfy the fundamental conservation laws. The key difference is that the Lattice Boltzmann Method does not solve for the macroscopic properties of the fluid (density, energy, momentum) directly, but uses a different variable (the Particle Distribution Function) and differential conservation law PDE (the Boltzmann Equation), which together indirectly satisfy the conservation constraints. Computationally, this change of problem framing represents a new trade-off between parallelism vs. memory efficiency of the problem, which lends itself to GPU acceleration. In the following chapter, I will present the Boltzmann Equation, its variable the Particle Distribution Function, and how these two concepts relate to our differential conservation laws. Finally, I will show how the Boltzmann Equation is solved numerically on a lattice as the LBM, and how we can exploit the parallelism of LBM for efficient GPU acceleration. The Particle Distribution Function While the Navier-Stokes Equations (NSE) explicitly model the evolution of macroscopic properties (density, energy, momentum) through partial differential equations (PDE), the Boltzmann-Equation is a single PDE which models the evolution of the particle distribution function f, from which these macroscopic properties can be derived. The particle distribution function f is a probability density, which gives the probability of finding a particle with position x and velocity v at time t. In 2D and 3D, the particle distribution function is a function of 5 and 7 parameters respectively. Note: One can therefore say that the LBM models the evolution and interaction of a group of particles as described by f. Deriving Macroscopic Properties The particle distribution function allows us to derive the original, locally conserved macroscopic fluid properties. In fact, the macroscopic fluid properties can be derived as the moments of the particle distribution function. The macroscopic density is given by the zeroth moment of f in v, defined as the integral of f over the space of velocities: Similarly, the macroscopic momentum density is given by the first order moment of f in v: and the macroscopic total energy density is given by the second order moment of f in v, expressed as: Note: The quantities shown here are macroscopic densities. Dividing by the density (i.e. zeroth moment or distribution cardinality), gives us our normalized macroscopic quantities. For a given particle distribution function over space, time and velocity we can thus compute the following quantities: 1. macroscopic density 2. macroscopic momentum (from 1) 3. macroscopic total energy (from 1) 4. macroscopic bulk velocity (from 1, 2) 5. macroscopic internal energy (from 4) 6. temperature, pressure (from 5) Note: The macroscopic total energy density includes contributions from the internal energy and bulk motion. We isolate the internal energy by computing the second moment with the relative velocity (i.e. subtract the bulk velocity). The Boltzmann Equation The Boltzmann-Equation is a partial differential equation which describes the evolution of the particle distribution function f. Taking the total derivative of f wrt. time we get: which can be simplified to the Boltzmann Equation: by expressing the change of position as proportional to velocity and change of velocity as proportional to force. Note: The total derivative of a function wrt. to its arguments captures all contributing terms to the change of a quantity. This resembles a differential conservation law similar to the advection-diffusion equation. The first two terms relate the change of the distribution at a position due to advection of the distribution with velocity v. The third term represents changes in the distribution due to some force F. Finally, the entire equation is equal to a source term Ω, which represents the redistribution of f in time due to particle collisions, and is thus known as the collision operator. Choosing the collision operator such that the conservation law constraints are fulfilled is what allows the Boltzmann-Equation to model the macroscopic behavior of a fluid and thus the Navier-Stokes Note: The Boltzmann-Equation provides a second order accurate approximation of the Navier-Stokes Equations. For a detailed derivation on how and to what accuracy the Boltzmann-Equation approximates the NSE, I refer you to the book “The Lattice Boltzmann Method – Principles and Practice” by Tim Krüger or Chapman-Enskog theory. The Collision Operator A valid collision operator should enforce the conservation of mass, momentum and energy as well as the relaxation of the distribution to equilibrium. How do we enforce this? We know that the collision operator Ω is equal to the total derivative of our particle distribution function f: Using the fact that the total derivative of our conserved macroscopic quantities is zero (by definition), we can apply the expressions for these quantities as the moments of f to directly enforce the conservation constraints on Ω. For mass conservation, we can write: and similarly for momentum and energy conservation: A valid collision operator must satisfy these constraints! The simplest collision operator which satisfies the stated constraints is the BGK collision operator, named after Bhatnagar, Gross and Krook. Others include the two-relaxation-time or multi-relaxation time operators. The BGK Collision Operator The BGK collision operator explicitly forces the relaxation of the distribution towards its equilibrium distribution: where tau represents the relaxation time. Finally, the equilibrium distribution is given by the well-known Maxwell-Boltzmann Distribution, which describes the speeds of particles for an idealised gas, given by: The choice of relaxation time and time-step results in the kinematic viscosity being given by: where cs is the speed of sound. Note: It appears that the relation between our choice of collision operator with the relaxation time and the definition of our equilibrium distribution constrains the kinematic viscosity to this expression, which can be derived via Chapman-Enskog Theory. I’m sorry for not deriving the equilibrium distribution for you, but I am sure you can find more qualified explanations! The Lattice Boltzmann Equation “Solving” the Boltzmann-Equation means finding a function f which satisfies this differential relation for our collision operator Ω and appropriate initial and boundary conditions. To solve the Boltzmann-Equation numerically, we need to discretize the particle distribution function in its arguments: space, time and in particular velocity, and finally integrate. The resulting equation is the Lattice-Boltzmann-Equation. We can utilize the common method of having discrete time-steps on a lattice / grid for our space and time discretization. Discretizing velocity space is a unique feature of the Lattice-Boltzmann method, and is described below. Discretizing Velocity Space Instead of modeling a distribution f of particles over all possible velocities in 2D or 3D, we choose a discrete set of “base velocities” v_i and assume that all particles in the system can only move with one of these velocities. With this assumption, we can rewrite f as a distribution in time and space for each individual velocity component: A natural choice of velocity set is a set of velocities where after one time-step dt, a particle moves exactly to the position of a neighboring lattice element. Thus the velocity set is fully defined by choice of “neighborhood” and dimension of the problem. In LBM, the velocity sets are therefore commonly named DdQq, where d is the dimension of space and q is the size of neighborhood. This discretization of the velocity space is what is most commonly associated with the Lattice-Boltzmann method and yields the iconic “velocity set distribution” graphics. The D2Q9 LBM unit velocity set. For 2D problems, a common choice is the D2Q9 velocity set, while for 3D problems a common choice is the D3Q19 set. Note that the zero-velocity is also a component of the set. In practical terms, the velocity discretization means: We must store an array of values of f at every position in space x_i, for every element of the velocity set v_i at every time-step t_i. This is why the LBM is associated with a larger memory footprint than direct macroscopic property solvers. Deriving Macroscopic Properties Using our newly discretized velocity space, the integral expressions for our macroscopic properties as moments of the particle distribution function simplify to summations: Note: In an implementation, f_i is strictly only defined at discrete time-steps and lattice locations. Solving the Lattice-Boltzmann Equation To solve the Lattice-Boltzmann Equation, we can discretize the Boltzmann-Equation in space, time and velocity space: where our new discretized collision operator is given by: and our new discretized equilibrium distribution is given by: Solving this equation sequentially in time is very straightforward! We can iteratively compute the next time-step using the so-called collision and streaming steps. Note: While the Lattice-Boltzmann Equation has a larger memory footprint, it is easy to solve and parallelize because it is a hyperbolic differential equation, which is easy to time-step and which only depends on local values. The Collision Step The collision (or relaxation step) is an intermediary step which consists of computing the distribution function after collisions, but before particles have moved: The Streaming Step The streaming step consists of moving our intermediate values to their next locations on the lattice. This can either be by “pushing” or “pulling” depending on implementation: Boundary Conditions Finally, to conclude our theoretical background on the Lattice-Boltzmann Method, we will take a look at how we can introduce basic boundary conditions into a simulation. Note: For our situation, we will only consider straight boundary conditions which align with the borders between lattice nodes. Boundary conditions for the LBM are not applied to the macroscopic properties, but are instead applied to the population distribution function f. The simplest “link-wise” boundary condition is the so-called bounce-back (BB) boundary condition. The idea is that when a particle meets a rigid boundary, it is reflected back to its original location with its velocity reversed. where the hat notation ^ refers to the reverse direction: and the B subscript refers to any node with at least one boundary connecting to a “solid” node. The BB boundary condition effectively implements a stable macroscopic no-slip boundary condition for a resting wall. It also guarantees strict mass conservation and is very straightforward to Source: The Lattice Boltzmann Method, Principles and Practice by Tim Krueger, Modified Visualization Note: An alternative commonly used boundary condition are the so-called wet-node schemes, which instead explicitly define the populations f at the boundary nodes. The key difference is that link-wise schemes consider the boundary to be between nodes, while the wet-node schemes considers the boundary to be on the nodes. GPU Accelerated Implementation The key steps to solve the Lattice-Boltzmann Equation are: 1. Define the lattice and the velocity set 2. Define the initial and boundary conditions 3. Compute the local macroscopic fluid properties 4. Perform the collision step 5. Perform the streaming step 6. Apply the boundary conditions and any forces 7. Start next time-step from step 3 This algorithm was implemented in C++ with TinyEngine as the base for implementing the GPU acceleration code. The provided implementation uses OpenGL4 compute shaders operating on shader storage buffer objects (SSBOs). The entire code is implemented using 4 compute shaders: lbm.cs, init.cs, collide.cs, stream.cs, with lbm.cs acting as an “include” shader for shared definitions. In the following section, I will detail implementations for both 2D and 3D. Memory Layout and Allocation After defining the size of our domain (i.e. NX, NY and NZ), we choose a velocity set depending on the dimensionality. A common choice for 2D is D2Q9, and a common choice for 3D is D3Q19. This defines the full GPU memory footprint. Our main GPU storage buffers are the particle distribution function array, a propagation array, macroscopic quantities which we store for efficiency and our boundary condition. Note that for every lattice node in our domain, we have 1 value for each macroscopic property, but Q values of the particle distribution function (one for each velocity comp.). The particle distribution function and propagation buffers therefore have size N^d*Q, while the macroscopic quantities and boundary conditions have size N^d. For our implementation, we simply declare our buffers with the correct size on the CPU. Note that we index these buffers with a flattened index in the shader code. // Initialize our Arrays (NX*NY*Q) Buffer f(NX*NY*Q, (float*)NULL); //Raw F Buffer Buffer fprop(NX*NY*Q, (float*)NULL); //Raw FProp Buffer // Computed Quantities (For Efficiency, NX*NY) Buffer rho(NX*NY, (float*)NULL); Buffer v(NX*NY, (glm::vec4*)NULL); // Boundary Condition (NX*NY) Buffer b(NX*NY, (float*)NULL); //Boundary Condition We then bind these buffers as SSBOs to our main shaders: // main.cpp // Initialization Compute Shader Compute init("shader/init.cs", {"f", "fprop", "b", "rho", "v"}); init.bind<float>("f", &f); init.bind<float>("fprop", &fprop); init.bind<float>("b", &b); init.bind<float>("rho", &rho); init.bind<glm::vec4>("v", &dirbuf); // Collision and Streaming Compute Shaders Compute collide("shader/collide.cs", {"f", "fprop", "b", "rho", "v"}); collide.bind<float>("f", &f); collide.bind<float>("fprop", &fprop); collide.bind<float>("b", &b); collide.bind<float>("rho", &rho); collide.bind<glm::vec4>("v", &dirbuf); Compute stream("shader/stream.cs", {"f", "fprop", "b"}); stream.bind<float>("f", &f); stream.bind<float>("fprop", &fprop); stream.bind<float>("b", &b); Note: Because of the GLSL std430 storage layout specifier specification, the velocity vector has to be bound bound as a vector of vec4 in the 3D case. and access them in the shader using: // Main Buffers and Parameters layout (std430, binding = 0) buffer f { float F[]; layout (std430, binding = 1) buffer fprop { float FPROP[]; layout (std430, binding = 2) buffer b { float B[]; //init.cs, collide.cs #version 460 core layout(local_size_x = 16, local_size_y = 1, local_size_z = 16) in; #include lbm.cs layout (std430, binding = 3) buffer rho { float RHO[]; layout (std430, binding = 4) buffer v { vec4 V[]; #version 460 core layout(local_size_x = 16, local_size_y = 1, local_size_z = 16) in; #include lbm.cs layout (std430, binding = 3) buffer rho { float RHO[]; Note: Not every shader needs access to every buffer. 2D and 3D Velocity Sets It is convenient to define our velocity set, directions and weights in a small shader code for use throughout in lbm.cs. D2Q9 Velocity Set Shader Code // 2D LBM uniform int NX = 256; uniform int NY = 128; const int Q = 9; // Velocity Set // Weights const float w[Q] = {4.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/9.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0}; // Complementary Direction Index const int cp[Q] = {0, 3, 4, 1, 2, 7, 8, 5, 6}; // Directions const ivec2 c[Q] = { ivec2(0, 0), ivec2(1, 0), ivec2(0, 1), ivec2(-1, 0), ivec2(0, -1), ivec2(1, 1), ivec2(-1, 1), ivec2(-1, -1), ivec2(1, -1) D3Q19 Velocity Set Shader Code // 3D LBM uniform int NX = 64; uniform int NY = 64; uniform int NZ = 64; const int Q = 19; const float w[Q] = { 1.0/18.0, 1.0/18.0, 1.0/18.0, 1.0/18.0, 1.0/18.0, 1.0/18.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0, 1.0/36.0 const ivec3 c[Q] = { ivec3( 0, 0, 0), ivec3( 1, 0, 0), ivec3(-1, 0, 0), ivec3( 0, 1, 0), ivec3( 0, -1, 0), ivec3( 0, 0, 1), ivec3( 0, 0, -1), ivec3( 1, 1, 0), ivec3(-1, -1, 0), ivec3( 1, 0, 1), ivec3(-1, 0, -1), ivec3( 0, 1, 1), ivec3( 0, -1, -1), ivec3( 1, -1, 0), ivec3(-1, 1, 0), ivec3( 1, 0, -1), ivec3(-1, 0, 1), ivec3( 0, 1, -1), ivec3( 0, -1, 1) const int cp[Q] = { 2, 1, 4, 3, 6, 5, 8, 7, 10, 9, 12, 11, 14, 13, 16, 15, 18, 17 Equilibrium and Macroscopic Quantities Using our velocity set, we can define an equilibrium function and functions to compute the macroscopic velocities and densities for a given lattice node. const float cs = 1.0/sqrt(3.0); const float cs2 = 1.0/cs/cs; const float cs4 = 1.0/cs/cs/cs/cs; // Parameters const vec2 init_velocity = vec2(0.0, 0); const float init_density = 1.0; // Compute the Equilibrium Boltzmann Distribution float equilibrium(int q, float rho, vec2 v){ float eq = w[q]*rho; eq += w[q]*rho*(dot(v, c[q]))*cs2; eq += w[q]*rho*(dot(v, c[q])*dot(v, c[q]))*0.5*cs4; eq -= w[q]*rho*(dot(v, v))*0.5*cs2; return eq; // Compute the Density at a Position float getRho(uint ind){ float rho = 0.0; for(int q = 0; q < 9; q++) rho += F[q*NX*NY + ind]; return rho; // Compute the Momentum at a Position vec2 getV(uint ind){ vec2 v = vec2(0); for(int q = 0; q < 9; q++) v += F[q*NX*NY + ind]*c[q]; return v; The Collision Step The collision step begins by computing our macroscopic density and velocity and storing these values in their respective buffers. It then writes the propagated values / post collision values of the particle distribution function to fprop for each node using the BGK collision operator. const float tau = 0.6; const float dt = 1.0; void main(){ const uint ind = gl_GlobalInvocationID.x*NY + gl_GlobalInvocationID.y; // Macroscopic Quantities const float _rho = getRho(ind); const vec2 _v = getV(ind)/_rho; RHO[ind] = _rho; V[ind] = _v; // BGK Method: Compute next Distribution Values! for(int q = 0; q < Q; q++){ FPROP[q*NX*NY + ind] = (1.0 - dt/tau)*F[q*NX*NY + ind] + dt/tau*equilibrium(q, _rho, _v); The Streaming Step The streaming step subsequently checks the boundary condition, bounces back if it encounters a wall, and otherwise uses a push scheme to propagate the particle distribution function values along their velocity direction. void main(){ const uint ind = gl_GlobalInvocationID.x*NY + gl_GlobalInvocationID.y; for(int q = 0; q < Q; q++){ // Stream-To Position (Push Scheme) ivec2 n = ivec2(gl_GlobalInvocationID.xy) + c[q]; if(n.x < 0 || n.x >= NX) continue; if(n.y < 0 || n.y >= NY) continue; const int nind = n.x*NY + n.y; // Bounce-Back or Push if(B[nind] == 0.0) F[q*NX*NY+nind] = FPROP[q*NX*NY+ind]; F[cp[q]*NX*NY+ind] = FPROP[q*NX*NY+ind]; Initial and Force Boundary Conditions We can use this equilibrium distribution function to initialize our particle distribution function directly in a shader init.cs. This shader is dispatched once at the beginning. #version 460 core layout(local_size_x = 32, local_size_y = 32) in; #include lbm.cs void main(){ uint ind = gl_GlobalInvocationID.x*NY + gl_GlobalInvocationID.y; // Initialize the Boltzmann Distribution for(int q = 0; q < Q; q++) F[q*NX*NY + ind] = equilibrium(q, init_density, init_velocity); Additionally, a Dirichlet-style force boundary condition can be imposed by setting the particle distribution function as equal to the equilibrium distribution that satisfies the BC. // Optional: Force Boundary Condition vec2 force = 0.2f*vec2(1, 0); gl_GlobalInvocationID.x == 0 || gl_GlobalInvocationID.x == NX-1 || gl_GlobalInvocationID.y == 0 || gl_GlobalInvocationID.y == NY-1 for(int q = 0; q < Q; q++) F[q*NX*NY + ind] = equilibrium(q, 1.0, force); With proper initialization and alternating between collision and streaming steps, our computed and stored macroscopic quantities can be used by other shaders directly to generate visualizations of the fluid simulation in real time. 2D Lattice Boltzmann Method The first test I implemented to check the implementation is to check if vortex shedding occurs in 2D. Here, I am visualizing the x and y components of the velocity as RG. The implementation successfully worked for more complex and arbitrary boundary conditions. We can also visualize the flow’s vorticity using a shader-based implementation of the curl operator: Note: The artefacts on the boundary are a result of the method of computing the curl, which requires accessing neighbouring values. This fails at the boundary. 3D Lattice-Boltzmann Method In the 2D version, an image is a natural choice for visualizing the macroscopic properties. Visualizing the 3D method is less straight forward. I found an acceptable method to be using short streamlines, which can visualize velocity by their direction and length. Here is a visualization of flow around a sphere, with the streamlines colored using their velocity vectors, and transparency added to streamlines with a direction close to the equilibrium velocity. Take care when implementing streamlines to make sure that vertex positions are updated correctly. If you update each vertex position for every streamline at each time-step, then your streamline is only valid for a static velocity field. Once your velocity field changes, the entire streamline has to be recomputed to be consistent with the velocity field. Note: This is because for a given line x_i at time-step t_0, we know that x_1 = x_0 + dt*v(x_0) and x_2 = x_1 + dt*v(x_1). If we move x_0 to x_1 and x_1 to x_2, then at time-step t_1 we won’t have x_2 = x_1 + dt*v(x_1) because the velocity changed. Therefore, I found that the best method for implementing streamlines was using geometry shaders. A particle swarm can move through the velocity field, each spawning a stream line generated from the velocity field at each time step. Terrain as Boundary Condition Finally, implementing terrain as a boundary condition is as simple as generating a heightmap with your tool of choice, and then determining which lattice nodes are considered boundary nodes and which ones aren’t. This creates very nice visualizations in 3D: Note: This video was sped up 4x because otherwise the video file would be too large. Here is the same method implemented on top of SoilMachine, my unified geomorphology simulator. This video is sped up 4x as well. A video of wind blowing over dunes in real time, simulated in SoilMachine. With the availability of the surface normals of the terrain, in theory it would be possible to incorporate a more complex boundary condition which doesn’t discretely “voxelize” the terrain (e.g. a wet-node boundary condition or a distributive bounce-back boundary condition). This is beyond the scope of this article though. Final Words It took me a very long time to publish this article, because I was very distracted in the last months with real life, and writing it alone took a lot of time. I tried to take care and provide a sufficiently detailed theoretical background into why the Lattice Boltzmann Method simulates fluids. Still, I am glad that it is finally out and that others can see the code! One aspect which I struggled with while developing this system was controlling the trade-off between accuracy, speed and stability of the simulation without affecting the effective viscosity of the fluid. I believe that I need to reexamine the dimensionality of the domain and the relation between effective viscosity and simulation parameters to gain better control over this. There are many other aspects which I would have liked to explore, like transport systems (for moving humidity and temperature), but I couldn’t get sufficient results to make it worthwhile showcasing in this article (which is already long). In the future, I think it would be interesting to see how the wind simulation can be more tightly coupled to the terrain generation itself. Currently, the wind simulation is a mere one-way coupling, and assumes a static terrain. Ultimately, deriving a dynamic rainfall density map coupled to the terrain would be my goal. Additionally, performance improvements could have the potential to increase the resolution, which I believe are required for a proper mass / energy transport (i.e. cloud) simulation. I believe that I have pushed my integrated GPU to the limit on this one though. As always, if you have any questions about the system or the code, feel free to reach out to me.
{"url":"https://nickmcd.me/2022/10/01/procedural-wind-and-clouds-using-gpu-accelerated-lattice-boltzmann-method/","timestamp":"2024-11-10T12:48:11Z","content_type":"text/html","content_length":"101055","record_id":"<urn:uuid:656e95db-77f6-4c4a-a587-12643e591203>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00550.warc.gz"}
Consistency and Limit Distributions of Estimators of Parameters in Explosive Stochastic Difference Equations Ann. Math. Statist. 32(1): 195-218 (March, 1961). DOI: 10.1214/aoms/1177705151 Let $\{X_t, t \geq 1\}$ be a stochastic process which satisfies the following set of assumptions: ASSUMPTION 1: For every $t, X_t$ satisfies \begin{equation*}\tag{1}X_t = \alpha_1X_{t - 1} + \ alpha_2X_{t - 2} + \cdots + \alpha_kX_{t - k} + u_t,\end{equation*} where $\alpha_1, \cdots, \alpha_k$ are $k$ finite real numbers (unknown parameters) and $u_t, t$ positive, are independent, identically distributed random variables with mean zero and a finite positive variance $\sigma^2$. ASSUMPTION 2: The distribution of $u_t$ is continuous. (Actually $\mathrm{Pr}\{u_t = 0\} = 0$ suffices.) ASSUMPTION 3: The roots $m_1, \cdots, m_k$ of the characteristic equation \begin{equation*}\tag{2}m^k - \alpha_1m^{k - 1} - \alpha_2m^{k - 2} - \cdots - \alpha_k = 0,\end{equation*} of (1), are distinct. ASSUMPTION 4: There is a unique root $\rho$ of (2) such that $|\rho| > 1$, and $|\rho| > \max_{j = 2, \cdots, k} |m_j|$. Here $\rho$ is identified with $m_1$ for convenience. Since complex roots enter in pairs, it follows from this assumption that $\rho$ is real. Note that there can be $m_j, j > 1$, such that $|m_j| > 1$. ASSUMPTION 5: For $t$ non-positive, $u_t = 0$. If Assumption 4 holds, the process $\{X_t, t \geqq 1\}$ is said to be (strongly) explosive, and the corresponding difference equation (1) is called an explosive (linear homogeneous) stochastic difference equation; this is the subject of the present paper. Under the assumptions above, it follows (cf., C. Jordan [5], p. 564, Mann and Wald [8], p. 178, and also the footnote on p. 22 of [10]) that $X_t = \sum^t_{r = 1}\sum^k_{q = 1} \lambda_qm^{t - r}_qu_r,$ $t$ positive, and that $\lambda_q$ satisfy the relations \begin{equation*}\tag{3}\delta_{1t} = \sum^k_{q = 1} \lambda_qm^{t - 1}_q,\ quad t = 1, 0, -1, \cdots, - (k - 2),\end{equation*} where $\delta_{1t} = 1$ if $t = 1$ and 0 otherwise. (Note that $\sum^k_{q = 1}\lambda_q = 1$.) For convenience, define the random variables \begin {equation*}\tag{4}X_{i,t} = \sum^t_{r = 1} m^{t - r}_iu_r,\quad i = 1, 2, \cdots, k, (m_1 = \rho),\end{equation*} so that $X_{i,t} = 0$ for $t$ non-positive. Thus one may write $X_t$ as follows: \ begin{equation*}\tag{5}X_t = \lambda_1X_{1,t} + \lambda_2X_{2,t} + \cdots + \lambda_kX_{k,t}.\end{equation*} The first part of this paper is devoted to finding a consistent estimator of $\rho$ and its limit distribution. Consequently, in Section 3 some lemmas will be proved for use in the consistency proof (Theorem I). Similarly, in Section 5, some lemmas leading to the proof of the limit distribution of the estimator (Theorem II) will be given. In the second part, the consistency of the Least Squares (L.S.) or Maximum Likelihood (M.L.) estimators of the "structural parameters" $\ alpha_i$ of (1) will be considered (Theorem III). The procedure becomes much more involved because the direct application of the usual limit theorems is not possible, since the process under consideration is explosive. It is noteworthy that Lemmas 9, 10, 14-16, and Theorem I are rather general, in that they hold under the only global Assumptions 1-5 above, and the further requirement $| m_j| < 1, j = 2, \cdots, k$, so essential for the rest of the analysis of this paper, is unnecessary for them. The corresponding problem, in the case $|\rho| < 1$, has been completely solved by Mann and Wald [8]. If $k = 1$ in (1), the results of this paper reduce to those obtained by Rubin [13], White [14], and T. W. Anderson [1]. The vector case has also been treated by Anderson in [1], but a comparison of the results in this case with those of the present paper shows that they do not imply each other except in the first order. In the latter case, however, both reduce to Rubin's [13] result. The available results on stochastic difference equations are summarized in a table at the end of the paper. Some of the details and computations omitted in this paper may be found in [10]. In the following section, some known lemmas related to stochastic convergence are collected and stated in a convenient form, as they will be constantly referred to in both parts of the paper. (For proofs, see [2], [3], [4], [6] and [9].) Download Citation M. M. Rao. "Consistency and Limit Distributions of Estimators of Parameters in Explosive Stochastic Difference Equations." Ann. Math. Statist. 32 (1) 195 - 218, March, 1961. https://doi.org/10.1214/ Published: March, 1961 First available in Project Euclid: 27 April 2007 Digital Object Identifier: 10.1214/aoms/1177705151 Rights: Copyright © 1961 Institute of Mathematical Statistics Vol.32 • No. 1 • March, 1961
{"url":"https://projecteuclid.org/journals/annals-of-mathematical-statistics/volume-32/issue-1/Consistency-and-Limit-Distributions-of-Estimators-of-Parameters-in-Explosive/10.1214/aoms/1177705151.full","timestamp":"2024-11-11T12:02:12Z","content_type":"text/html","content_length":"150635","record_id":"<urn:uuid:c0eab6a9-d47c-4b45-91c2-d53ec42cc2d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00838.warc.gz"}
Detailed Course Information MTH 252 is a calculus course covering definite and indefinite integrals. Specific topics include conceptual development of the definite integral, properties of the definite integral, the first and second Fundamental Theorems of Calculus, constructing antiderivatives, techniques of indefinite integration, approximating definite integrals, and applications. Analytical, graphical, and numerical methods are used to support one another in developing the course material. Opportunities are provided for students to work in groups, verbalize concepts with one another, and explore concepts and applications using technology. 5.000 Credit hours 50.000 TO 60.000 Lecture hours Syllabus Available Levels: Credit Schedule Types: Lecture Mathematics Division Mathematics Department Course Attributes: Tuition, Science/Math/Computer Science Must be enrolled in one of the following Levels: Skills Development May not be enrolled in one of the following Colleges: College Now General Requirements: ( Course or Test: MTH 251 May not be taken concurrently. ) ( Course or Test: TEST 5M51 May be taken concurrently. ) ( Course or Test: TEST 6APB3 to 6APB4 May be taken concurrently. ) ( Course or Test: TEST 6APA3 to 6APA4 May be taken concurrently. ) ( Course or Test: TEST 6FMH4 May be taken concurrently. ) ( Course or Test: MTH 252 Minimum Grade of C- May not be taken concurrently. )
{"url":"https://crater.lanecc.edu/banp/bwckctlg.p_disp_course_detail?cat_term_in=202440&subj_code_in=MTH&crse_numb_in=252","timestamp":"2024-11-08T02:50:58Z","content_type":"text/html","content_length":"10627","record_id":"<urn:uuid:02ee2b84-dbcd-473e-87eb-34a83e227683>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00473.warc.gz"}
Relationship Between Weight and Mass Author: admintanbourit The terms “weight” and “mass” are often used interchangeably, leading many people to believe they are the same thing. However, in the realm of physics, these two terms have very distinct meanings and understanding the difference between them is crucial to understanding how the world works. First, let’s define what weight and mass actually are. Mass is a measurement of the amount of matter an object contains, while weight is a measurement of the force of gravity acting on an object. In other words, mass is an intrinsic property of an object that remains the same regardless of where it is, while weight can vary depending on the gravitational pull of the planet or body it is on. To better understand the relationship between weight and mass, let’s take a closer look at Newton’s second law of motion, which states that force (F) is equal to mass (m) times acceleration (a). Mathematically, this can be expressed as F=ma. This means that the force acting on an object is directly proportional to its mass and the acceleration it experiences. So, how does this relate to weight? Well, when an object is on Earth, it experiences a constant acceleration due to the force of gravity. This acceleration is known as the gravitational acceleration, denoted by the letter “g”. On Earth, the value of g is approximately 9.8 meters per second squared (m/s^2). This means that for every kilogram of mass an object has, it will experience a weight of 9.8 newtons (N) due to the force of gravity. To illustrate this further, let’s consider a 5-kilogram object. Using the formula F=ma, we can determine that the weight of this object on Earth would be 5kg x 9.8m/s^2, which equals 49N. This means that the 5kg object would experience a weight of 49N due to the Earth’s gravitational pull. Now, let’s take this same 5-kilogram object and imagine it on the moon. The moon has a weaker gravitational pull than Earth, with a gravitational acceleration of only 1.62m/s^2. This means that the same 5kg object, when on the moon, would experience a weight of only 5kg x 1.62m/s^2, which equals 8.1N. This is significantly less than the weight it would experience on Earth, even though the object’s mass remains the same. This example clearly demonstrates the relationship between weight and mass. Weight is directly proportional to mass but is also influenced by the strength of the gravitational force acting on the Furthermore, it is important to note that weight is a force, while mass is a measure of the amount of matter an object contains. In physics, force is defined as any influence that causes an object to undergo acceleration. This means that when an object is at rest, its weight is balanced by an equal and opposite force, known as the normal force. Our understanding of the relationship between weight and mass has a significant impact on many practical applications. For example, in the field of sports, weightlifting is all about overcoming the weight of an object, not its mass. This is why athletes in different weight categories can lift similar weights, even though they may have different masses. In conclusion, while weight and mass may seem like interchangeable terms in everyday language, they have distinct meanings in the realm of physics. Mass is a measure of the amount of matter an object contains, while weight is a measure of the force of gravity acting on an object. The relationship between the two is directly proportional, and understanding this relationship is crucial in understanding how objects behave in different gravitational environments.
{"url":"https://tanbourit.com/relationship-between-weight-and-mass/","timestamp":"2024-11-08T15:12:12Z","content_type":"text/html","content_length":"112272","record_id":"<urn:uuid:ee3b45ae-3be4-47dd-ab9b-b38729bf6054>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00068.warc.gz"}
Bioequivalence and Bioavailability Forum SHAM(e) math [NCA / SHAM] Dear Helmut! ❝ Reach for the stars, even if you have to stand on a cactus. Susan Longacre Uno puede estar mirando las estrellas y al mismo tiempo verse la punta de las pestañas (Julio Cortázar) I am very grateful for your answers and the references provided! I've found some of them and will search for others although for some reason I have strong doubts that our local library has books on pharmacokinetics on german printed in 50th ❝ Not only that. As a rule of thumb at \(\small{MRT}\) ~⅔ of the drug is eliminated. It is very useful comparing PK models with different compartments. The slowest t[½] might be misleading (see there, slides 24–28). There is a big problem with it. To get a reliable estimate of AUC one has to cover 95% (!) of AUC[0–∞] (note that I’m not taking about BE but hard-core PK). For AUMC is should be 99%. I’m quoting Les Benet. Don’t blame me. I was wondering from where such a rule of thumb was going and integrated the area for simple exponential elimination. It turns out that at MRT (1-exp(-1))~0,632 of the drug is eliminated for IV and slightly lower for EV (so the rule of pinky is 0,632 versus the rule of thumb (2/3=0,(6)) As for physics there exists inaccuracy in the considerations on the slide "Excursion to Hydrodynamics" . "Same proportions is emptied in the same time interval" is true only when you are solving school problems with a pool. Exactly the unexpired volume leaked depends on the form of the vessel. For the cylindric vessel for example water height and thus the volume is proportional to t . If you want to have a constant proportion you need a vessel with a form of parabola x that is or consider Mariotte’s bottle ❝ What I learned: The variability of VRT sucks. Not surprising cause we have \(\small{t^2}\) and \(\small{C^2}\) in it. I've calculated C for several real studies according to simple linear trapezoidal rule: $$C_c=\frac{1}{3}\frac{\sum\limits_{i}(t_{i+1}-t_i)(C^{2}_{i}+C^{2}_{i+1}+C_{i}\cdot C_{i+1})}{\sum\limits_{i}(t_{i+1}-t_{i})(C_{i+1}+C_i)}\tag{7}.$$ Although it has C in it, it's variability was always lower than that of C , but I should've check it more carefully. Dear ElMaestro! ❝ I think F may be in its own right also included on your list of crackpot ideas from the odd sock drawer? PMDA have a sentence about it in their guidance. "If F can be calculated by deconvolution, F may be used instead of AUC" Thank you! I will definitely add it to my collection of weird PK parameters! Need to know more about deconvolution... Dear mittyri! ❝ By the way I couldn't follow Yamaoka's logic regarding that magic cut-off errors. How did they find it? I am puzzled with the same question. How did they calculated the time to reach 5% of C I slightly modified the Helmut's on the article of Scheerans et al. (2008) Let us consider a one-compartment model with first-order absorbtion of the form: $$C=\frac{A}{k_a-k_e}(\textrm{e}^{-k_et}-\textrm{e}^{-k_at}) \tag{6}$$ then residual area (1-AUC ) should be as follows: $$AUC_{resid}(x,t)=\frac{x\textrm{e}^{-t\cdot k_e}-\textrm{e}^{-x\cdot k_e*t}}{x-1},\quad \textrm{where}\qquad x=\frac{k_a}{k_e}.$$ Let n define the ratio of t to T , then $$AUC_{resid}(x,n)=\frac{x\cdot2^{-n}-2^{-nx}}{x-1}\sim \frac{x\cdot 2^{-n}}{x-1}\qquad for\qquad nx>>1. \tag{8}$$ In order to estimate the duration of sampling to achieve specific AUC we can use the simplifyed formula $$n=\textrm{log}_2\left(\frac{x}{(x-1)AUC_{resid}}\right) \tag{9},$$ for example for x=2 and AUC =1% the duration should be n=7.64 T , for AUC =20% the duration should be n=3.32 T (the exact value is In particular, $$AUC_{resid}(T_{1/2},x)=\frac{x-2^{1-x}}{2(x-1)};\quad AUC_{resid}(T_{max},x)=\frac{x^{\frac{2-x}{1-x}}-x^{\frac{x}{1-x}}}{(x-1)};\quad AUC_{resid}(2T_{max},x)=\frac{x^{\frac{3-x} {1-x}}-x^{\frac{2x}{1-x}}}{(x-1)}. $$ ,x) is a monotone function of x limited from 2/e (0.736) to 1; ,x) is a monotone function of x limited from 3/e (0.406) to 1. "Being in minority, even a minority of one, did not make you mad" Complete thread:
{"url":"https://forum.bebac.at/forum_entry.php?id=21565&order=time","timestamp":"2024-11-13T14:45:43Z","content_type":"text/html","content_length":"20188","record_id":"<urn:uuid:b7239034-1a75-4029-81b1-6aa2a764f219>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00008.warc.gz"}
S 2-D Software Reproducing, Evaluating and Visualizing Schwartz’s 2-Dimensional Value Space What does S2-D do? S2-D is a computer progamme that helps researchers to reproduce, evaluate and visualise Schwartz’s 2-dimensional value space using empirical data in a Goodness-of-Fit test procedure. The automation of this procedure almost completely replaces the manual methods used previously by researchers working with Schwartz’s 2-dimensional model and performing confirmatory Smallest Space Analysis (SSA). SSA is a specific multi-dimensional scaling technique that plots variables as points on a multi-dimensional spatial map and into the smallest possible geometric space. The points or co-ordinates are located on the spatial map according to the similarity of the coefficients (e.g. Pearson Correlation Coefficients) among the variables: closely related variables are plotted close together and dissimilar variables further apart. S2-D is faster, more accurate and standardises the Goodness-of-Fit test procedure in reproducing Schwartz’s two-dimensional model based on SSA.
{"url":"https://s2-dsoftware.fh-linz.at/","timestamp":"2024-11-08T23:27:16Z","content_type":"application/xhtml+xml","content_length":"18935","record_id":"<urn:uuid:a7ef283f-9dbc-469b-98f7-b712159d83c8>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00521.warc.gz"}
Removing the Wigner bound in non-perturbative effective field theory The Wigner bound, setting an upper limit on the scattering effective range, is examined at different orders of contact effective field theory. Using cutoff regulator we show that the bound loosens when higher orders of the theory are considered. For a sharp and Gaussian regulators, we conjecture an analytic formula for the dependence of the Wigner bound on the theory's order. It follows that the bound vanishes in the limit of infinite order. Using a concrete numerical example we demonstrate that the above surmise still holds after renormalization at finite cutoff. Studying the 3-body system with this example, we have found that limiting the permissible range of cutoffs by the Wigner bound, we avoid the Thomas collapse, and don't need to promote the 3-body force to leading order. Bibliographical note Publisher Copyright: © 2020 The Authors Dive into the research topics of 'Removing the Wigner bound in non-perturbative effective field theory'. Together they form a unique fingerprint.
{"url":"https://cris.huji.ac.il/en/publications/removing-the-wigner-bound-in-non-perturbative-effective-field-the","timestamp":"2024-11-04T05:08:29Z","content_type":"text/html","content_length":"48502","record_id":"<urn:uuid:9f723d3b-9784-4ca4-a545-490e04013076>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00453.warc.gz"}
SAS : Find Variable with the Max or Min in a row This tutorial demonstrates how to find a variable with the maximum or minimum value for each row (observation) in SAS. It's pretty straightforward to calculate max or min value but a little problematic to identify the variable name against the value. Let's create Sample Data data readin; input y1-y6; Find Variable Name Containing Max Value in a row data out; set readin; array values y1-y6; largest = max(of values[*]); index = whichn(largest, of values[*]); name = vname(values[index]); proc print; Explanation : 1. array values y1-y6 : Lists all the variables for calculating max value 2. max() function calculates maximum value of all the variables listed in step 1 across rows 3. whichn() function returns the column index number of the matching value. In this case, it is searching the maximum value across rows and returns the column position in the listed variables. For example, it returns 6 in row 1 as 87 is the maximum value in row 1 and it is placed at 6th column of y1-y6 4. vname() function returns the variable name. In this case, it calculates variable name of the largest value. To Find Variable Name containing minimum value Use min() function instead of max() in the code above and the remaining code would be same. Find Top 3 Variables Suppose you are asked to identify top 3 variables across rows. You can use largest function in SAS. LARGEST Function : Syntax largest(k, variables) k : kth value you want (2 for the second largest value) data want; set readin; array values[*] y1-y6; array large[3]; array names[3] $32; do i = 1 to dim(large); large[i] = largest(i,of values[*]); index = whichn(large[i],of values[*]); names[i] = vname(values[index]); drop i index; proc print; Explanation : 1. array values[*] y1-y6 - Specify all the variables from which you want to calculate top 3 variables 2. array large[3] - Top 3 large values 3. array names[3] $32 - Names of top 3 variables 4. do i = 1 to dim(large) - 3 iterations for calculating first, second and third largest values 5. large[i] = largest(i,of values[*]) - largest value when i =1, second largest when i =2 and so on. 6. index = whichn(large[i],of values[*]) : Column index for kth largest values 7. names[i] = vname(values[index]) : Extract the variable name of the largest using largest index 8. drop i index; : Dropping Irrelevant Variables Find Bottom 3 Variables Refer the code above and change largest() function to smallest() function. Post Comment 3 Responses to "SAS : Find Variable with the Max or Min in a row" 1. I found this very useful 2. Your post is interesting - thank you for publishing it. Unfortunately, the algorithm is not very robust. Let's suppose that some of your values are missing or that two or more values are identical within an observation. You'll see it is not working properly. I tried to find a correction but unfortunately, I cannot. Best regards, Jean Hardy SAS Consultant and trainer 1. Did you find a solution to this ?
{"url":"https://www.listendata.com/2016/09/sas-variable-with-maximum-value-in-row.html","timestamp":"2024-11-04T23:28:06Z","content_type":"application/xhtml+xml","content_length":"112446","record_id":"<urn:uuid:b651d046-8ae5-43c0-957c-4b4b3af256bb>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00658.warc.gz"}
The Bode Plot When we perform the preceding experiment the output we measure is a voltage Thus our steady state output for an input We now wish to make a Bode plot of the magnitude (in decibels) of the output voltage corresponding to the cart position versus the frequency (in radians) of the input. The magnitude in decibels of The Bode plot thus consists of three parts. First there is a constant gain term of
{"url":"https://daviddeley.com/pendulum/page15.htm","timestamp":"2024-11-10T01:18:29Z","content_type":"text/html","content_length":"4358","record_id":"<urn:uuid:855b47e3-5cc1-4f4e-a60f-d2248a76ea71>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00722.warc.gz"}
[R-sig-ME] interpreting significance from lmer results for dummies (like me) [R-sig-ME] interpreting significance from lmer results for dummies (like me) Andrew Robinson A.Robinson at ms.unimelb.edu.au Sat Apr 26 09:59:21 CEST 2008 Hi Mark, On Fri, Apr 25, 2008 at 11:53:24PM -0400, Mark Kimpel wrote: > I am a bioinformatistician, with my strongest background in molecular > biology. I have been trying to learn about mixed-effects to improve the > analysis of my experiments, which certainly contain random effects. I will > admit to being totally lost in the discussions regarding lack of p-value > reporting in the current versions of lmer. Furthermore, I suspect those that > need to publish to non-statistical journals will face reviewers who are > equally in the dark. Where can I find a biologist-level explanation of the > current controversy, I'll take a stab. 1) the traditional, Fisher-style test of a null hypothesis is based on computing the probability of observing a test statistic as extreme or more extreme than the one actually observed, assuming that the null hypothesis is true. This probability is called the p-value. If the p-value is less than some cut-off, e.g. 0.01, then the null hypothesis is rejected. 2) in order to compute that p-value, we need to know the cumulative distribution function of the test statistic when the null hypothesis is true. In simple cases this is easy: for example, we use the t-distribution for the comparison of two normal means (with assumed equal variances etc). 3) in (many) hierarchical models the cumulative distribution function of the test statistic when the null hypothesis is true is simply not known. So, we can't compute the p-value. 3a) in a limited range of hierarchical models that have historically dominated analysis of variance, e.g. split-plot designs, the reference distribution is known (it's F). 3b) Numerous experts have (quite reasonably) built up a bulwark of intuitive knowledge about the analysis of such designs. 3c) the intuition does not necessarily pertain to the analysis of any arbitrary hierarchical design, which might be unbalanced, and have crossed random effects. That is, the intuition might be applied, but inappropriately. 4) in any case, the distribution that is intuitively or otherwise assumed is the F, because it works in the cases mentioned in 3a. All that remains is to define the degrees of freedom. The numerator degrees of freedom are obvious, but the denominator degrees of freedom are not known. 4a) numerous other packages supply approximations to the denominator degrees of freedom, eg Satterthwaite, and KR (which is related). They have been subjected to a modest degree of scrutiny by 5) however, it is not clear that the reference distribution is really F at all, and therefore it is not clear that correcting the denominator degrees of freedom is what is needed. Confusion reigns on how the p-values should be computed. And because of this confusion, Doug Bates declines to provide p-values. > how can I learn how to properly judge significance from my lmer > results, There are numerous approximations, but no way to properly judge significance as far as I am aware. Try the R-wiki for algorithms, and be conservative. Or, use lme, report the p-values computed therein, and be aware that they are not necessarily telling you exactly what you want to know. > and what peer-reviewed references can I steer reviewers > towards? Not sure about that one. I'm working on some simulations with Doug but it's slow going, mainly because I'm chronically disorganised. > I understand, from other threads, that some believe a paradigm shift > away from p-values may be necessary, but I it is not clear to me > what paradigm will replace this entrenced view. I can appreciate the > fact that there may be conflicting opinions about the best > equations/algorithms for determining significance, but is there any > agreement on the goal we are heading towards? The conflict is not about p-values per se, but about the way that they are calculated. I would bet that the joint goal is to find an algorithm that provides robust, reasonable inference in a sufficiently wide variety of cases that its implementation proves to be worthwhile. I hope that this was helpful. Andrew Robinson Department of Mathematics and Statistics Tel: +61-3-8344-6410 University of Melbourne, VIC 3010 Australia Fax: +61-3-8344-4599 More information about the R-sig-mixed-models mailing list
{"url":"https://stat.ethz.ch/pipermail/r-sig-mixed-models/2008q2/000904.html","timestamp":"2024-11-09T00:10:03Z","content_type":"text/html","content_length":"7894","record_id":"<urn:uuid:78cc611e-5339-44e1-aa40-49287381de4a>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00421.warc.gz"}
Algebra, algebraic topology, and their interactionsLecture notes in mathematics (Springer-Verlag) proceedings of a conference held in Stockholm, Aug. 3-13, 1983, and later developments Roos, J.-E 1935- Berlin New York Springer-Verlag c1986 x, 396 p ill 25 cm edited by J.-E. Roos Cuprinde bibliografie pe capitole Algebra Congresses Algebraic topology Congresses 510 s 512 0387164537 (U.S. : pbk.) 86006428
{"url":"http://library.imar.ro/cgi-bin/koha/opac-export.pl?op=export&bib=6188&format=mods","timestamp":"2024-11-05T09:14:50Z","content_type":"application/xml","content_length":"1927","record_id":"<urn:uuid:8001acb8-889f-4d5a-b11f-21c1e50b67fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00218.warc.gz"}
5WA907530Q gateway Requested by user online chat to support this gateway. I have it in hand now. I need to know the pinout to connect power and ground so I can start to development. The CPU is SPC58. Re: 5WA907530Q gateway T32 connector Pin 1 +12v constant Pin 16 - GND Pin 32.- GND Re: 5WA907530Q gateway Thx. I will be back in two days Re: 5WA907530Q gateway I checked that pin 16 is GND. However I cannot see pin 32 is also a GND. Re: 5WA907530Q gateway It could be that it isn't but it's what diagram shows. Re: 5WA907530Q gateway Is there a watch dog? I tried to deter it but the power seems to get to 0 amps. Prior to detect,the power looks normal. Re: 5WA907530Q gateway usbbdm wrote: ↑Sun Sep 17, 2023 7:17 am Is there a watch dog? I tried to deter it but the power seems to get to 0 amps. Prior to detect,the power looks normal. All newer VAG modules need IGN via canbus. If this is missing probably unit is in deep-sleep. I have no idea regarding the internals of PCB. Re: 5WA907530Q gateway What is IGN? Re: 5WA907530Q gateway usbbdm wrote: ↑Mon Sep 18, 2023 3:59 amWhat is IGN? Re: 5WA907530Q gateway Got it sorted out. I can detect CPU. Need a little bit time to see if it is censored.
{"url":"https://usbjtag.com/phpbb3/viewtopic.php?f=10&t=10009&p=63896&sid=b0adc9d90886677973691602999f17a2","timestamp":"2024-11-01T22:36:15Z","content_type":"text/html","content_length":"51313","record_id":"<urn:uuid:a0cfd2a9-ea6b-4ddb-8e43-0199e1a40376>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00187.warc.gz"}
The Engineer's Guide to Turbulence and Laminar Flow: What's the Difference? Fluid dynamics is a complex field of study that deals with the behavior of liquids and gases in motion. Understanding the difference between turbulence and laminar flow is crucial for engineers as it affects the design and operation of various systems, from pipelines to aircraft. This guide delves into the nuances of these flow types, the physics governing them, and their implications in engineering applications, offering insights into managing and utilizing these flows effectively. Key Takeaways • Turbulence and laminar flow are distinct states of fluid motion, with turbulence characterized by chaotic changes in pressure and velocity, while laminar flow features smooth, orderly layers of • The Reynolds number is a critical dimensionless quantity that helps predict whether a fluid flow will be laminar or turbulent, based on flow velocity, characteristic length, and viscosity. • Laminar flow is desirable in many engineering applications due to its lower frictional losses and predictable behavior, but achieving and maintaining it can be challenging. • Engineers can employ various turbulence control techniques to reduce the negative impacts of turbulent flow, such as drag and vibration, in systems like pipelines and aircraft. • Advancements in computational fluid dynamics (CFD) and turbulence modeling are essential for the accurate prediction and analysis of fluid flow, leading to more efficient and innovative engineering designs. Understanding Fluid Dynamics: Turbulence vs. Laminar Flow Defining Fluid Dynamics Fluid dynamics is the branch of physics concerned with the study of fluids (liquids and gases) in motion. Understanding the behavior of fluid flow is crucial for various engineering disciplines, as it impacts design, efficiency, and functionality of systems involving fluid transport. Fluid flow can be broadly categorized into two types: laminar and turbulent. Laminar flow is smooth and orderly, often visualized as layers sliding past one another with minimal mixing. In contrast, turbulent flow is chaotic and characterized by eddies, swirls, and unpredictability. The distinction between these flow types is not just academic; it has practical implications in fields ranging from aerospace to biomedical engineering. Understanding these flow characteristics enables engineers to design more effective systems, whether they're aiming for the predictability of laminar flow or managing the complexities of turbulence. The following list outlines the key aspects of fluid dynamics that are essential for engineers: • The properties of the fluid, such as density and viscosity • The geometry and scale of the flow system • The speed and pressure conditions of the fluid • The influence of external forces, like gravity or magnetic fields Characteristics of Laminar Flow In the realm of fluid dynamics, laminar flow represents a highly ordered fluid motion with layers that glide smoothly past one another. Fluid particles move in parallel paths, maintaining a consistent flow rate and direction. This type of flow is characterized by its predictability and the absence of cross-currents, eddies, or swirls. • Fluid velocity is consistent across any cross-section perpendicular to the flow direction. • There is a minimal mixing of fluid layers, which helps in maintaining a uniform composition. • The flow is silent or generates very low noise levels due to the lack of disturbances. The occurrence of laminar flow is often associated with lower velocities and higher fluid viscosity. It is the preferred state in many engineering applications due to its predictable nature and ease of mathematical modeling. Characteristics of Turbulent Flow Turbulent flow is a complex phenomenon in fluid dynamics that stands in stark contrast to the orderly layers of laminar flow. In turbulence, the flow is characterized by chaotic changes in pressure and velocity in both time and space. Fluid particles move in an irregular manner, often resulting in eddies and vortices that vary in size and shape. • Turbulent flow is highly mixed and has greater momentum diffusion. • It is associated with higher energy dissipation due to the constant formation and decay of vortices. • The flow has a non-uniform velocity distribution across the cross-section of a pipe or channel. Transition Between Laminar and Turbulent Flow The transition from laminar to turbulent flow is not an abrupt change but rather a gradual process influenced by several factors. At low velocities, fluid flows in smooth, orderly layers, known as laminar flow. As the velocity increases, these layers can become unstable, and at a certain point, the flow becomes chaotic and random, marking the onset of turbulence. The critical threshold at which this transition occurs is determined by the Reynolds number, a dimensionless quantity that predicts the flow regime based on the fluid's properties and flow conditions. Factors such as surface roughness, obstacles in the flow path, and fluctuations in velocity can induce the transition even at lower Reynolds numbers. • Surface roughness • Obstacles in the flow • Velocity fluctuations The Physics Behind Turbulence The Concept of Viscosity Viscosity is a fundamental property of fluids that describes their resistance to flow. The higher the viscosity, the thicker the fluid, and the more it resists deformation. This property is crucial in determining whether a fluid will exhibit laminar or turbulent flow under certain conditions. In the context of fluid dynamics, viscosity can be thought of as the internal friction within a fluid. It's the force that must be overcome to allow one layer of fluid to move relative to another. Water, for example, has a low viscosity and flows easily, while honey has a high viscosity and flows much more slowly. • Newtonian fluids: Maintain constant viscosity regardless of the applied stress. • Non-Newtonian fluids: Viscosity changes when under stress or over time. Understanding viscosity is essential for engineers as it affects the energy required to pump fluids through pipes, the efficiency of mixing processes, and the behavior of fluids under different flow Reynolds Number: The Key to Predicting Flow Type The Reynolds number is a dimensionless quantity used in fluid mechanics to predict the flow regime in a fluid system. It is defined as the ratio of inertial forces to viscous forces and is used to determine whether a flow will be laminar or turbulent. The higher the Reynolds number, the more likely the flow is to be turbulent. Reynolds number is calculated using the formula: (Re = \frac{\rho V L}{\mu}) where (\rho) is the fluid density, (V) is the velocity of the fluid, (L) is a characteristic linear dimension (such as diameter of a pipe), and (\mu) is the dynamic viscosity of the fluid. Typically, for flows in pipes, a Reynolds number less than 2000 indicates laminar flow, while a value greater than 4000 suggests turbulent flow. The region between these two values is the transitional flow regime. Here's a simple table summarizing these regimes: Understanding the Reynolds number is crucial for engineers as it helps in designing systems that can either take advantage of laminar flow's predictability or manage the chaotic nature of turbulence. Energy Dissipation in Turbulent Flows In turbulent flows, energy dissipation is a critical phenomenon that engineers must account for. Energy is dissipated primarily through viscous action, where the fluid's kinetic energy is converted into heat. This process is more intense in turbulent flow due to the chaotic and irregular motion of the fluid particles. Viscosity plays a pivotal role in the rate of energy dissipation. In engineering applications, understanding how energy is dissipated can inform the design of systems to ensure they are efficient and safe. For instance, in heat exchangers, the dissipation rate can affect the overall heat transfer efficiency. The following table summarizes key variables that influence energy dissipation in turbulent flows: Understanding these variables helps engineers predict and control the dissipation of energy in turbulent systems, which is essential for optimizing performance and longevity of fluid-handling Factors Influencing Turbulence in Fluids Several factors play a pivotal role in determining whether a fluid flow will be turbulent or laminar. Fluid velocity is a primary factor; as velocity increases, the likelihood of turbulence also rises. The viscosity of the fluid is another critical element, with lower viscosity fluids more prone to turbulent flow. • Surface Roughness: Rough surfaces can disrupt the flow, making it more susceptible to turbulence. • Flow Obstacles: Objects in the flow path can create eddies and increase turbulence. • Temperature: Temperature changes can affect fluid viscosity and density, influencing flow characteristics. The geometry of the flow channel also has a significant impact. For instance, a sudden expansion or contraction in pipe diameter can lead to separation of the flow and increased turbulence. By carefully considering these factors, engineers can design systems that either minimize turbulence or harness it effectively, depending on the application requirements. Laminar Flow in Engineering Applications Ideal Conditions for Laminar Flow Achieving laminar flow in engineering systems requires specific conditions that minimize disturbances and maintain a steady, uniform flow of fluid. Uniform velocity at the inlet and smooth, straight piping or channels are fundamental to sustaining laminar flow. The fluid should be free of particulates and bubbles, as these can introduce fluctuations leading to turbulence. Temperature and pressure also play a critical role in maintaining laminar conditions. They must be kept constant to prevent changes in fluid density and viscosity, which can disrupt the flow regime. In applications where precision is paramount, such as in the services provided by Ian Coll McEachern, maintaining laminar flow is essential. The following list outlines the ideal conditions for laminar flow: • Steady inlet conditions • Smooth, straight conduits • Absence of particulates and bubbles • Constant temperature and pressure • Minimal vibrations and external disturbances Benefits of Laminar Flow in Engineering Laminar flow, characterized by smooth and orderly fluid motion, offers several advantages in engineering applications. One of the primary benefits is the reduction in frictional forces, which leads to lower energy consumption and increased efficiency in systems such as pipelines and ducts. Predictability is another significant advantage of laminar flow. The orderly movement of fluid makes it easier to model and predict flow behavior, which is crucial for the design and optimization of engineering systems. This predictability also enhances the reliability and consistency of processes that depend on precise fluid control. The following list outlines additional benefits of laminar flow: • Improved heat transfer due to consistent fluid contact with surfaces • Reduced noise levels as a result of less turbulence • Enhanced control over material coatings and sprays, leading to better product quality Understanding these benefits allows engineers to leverage laminar flow in various applications, from aerospace to biomedical engineering, where minimal turbulence is desired. Challenges and Limitations of Laminar Flow While laminar flow is often idealized in engineering applications for its predictability and smoothness, it comes with its own set of challenges and limitations. Maintaining laminar flow can be difficult, especially in practical scenarios where environmental and physical conditions are not constant. The requirement for smooth surfaces and controlled conditions can lead to increased costs and complexity in design and maintenance. Scalability is another limitation. As systems scale up, the likelihood of disturbances that can transition the flow to turbulence increases. This is particularly problematic in large-scale operations where even minor disruptions can have significant consequences. • Sensitivity to disturbances • Requirement for precise control • Increased cost for maintaining ideal conditions • Difficulty in scaling up without transitioning to turbulence Case Studies: Laminar Flow in Action In the realm of engineering, laminar flow is often a desirable condition due to its predictable and orderly nature. One notable case study is the use of laminar flow in the aerospace industry, where it significantly reduces drag on aircraft surfaces, leading to improved fuel efficiency. Microfluidics technology, which is pivotal in biomedical devices, relies on laminar flow to manipulate small volumes of fluids. Here, the precise control of flow allows for accurate testing and • Medical Equipment: Laminar flow hoods are used to create sterile environments for pharmaceutical manufacturing and surgical procedures. • Coating Processes: Uniform application of paints and coatings in the automotive industry is achieved through laminar flow systems. • Semiconductor Manufacturing: Cleanrooms maintain a laminar flow to prevent contamination of sensitive electronic components. Managing Turbulence in Engineering Systems Turbulence Control Techniques Controlling turbulence is crucial in many engineering systems to reduce wear and tear, improve efficiency, and maintain stability. Effective turbulence management can lead to significant improvements in system performance and longevity. One common approach is the use of flow straighteners or honeycomb structures to streamline the flow and dampen turbulence. Vortex generators are another technique employed to delay or prevent the transition from laminar to turbulent flow. These small, wing-like devices are strategically placed to create counter-rotating vortices that energize the boundary layer, which can help in maintaining a laminar flow over a greater distance. The following list outlines some of the key techniques used in turbulence control: • Flow straighteners and honeycomb structures • Vortex generators • Boundary layer suction or blowing • Streamwise riblets • Active flow control using sensors and actuators Designing for Turbulence: Practical Considerations When engineering systems are expected to operate under turbulent conditions, thoughtful design can mitigate negative impacts and enhance performance. Designers must account for the dynamic nature of turbulence while ensuring that structures can withstand the associated stresses. Material selection, shape optimization, and flow path design are critical factors in this process. Flow path design is particularly crucial as it directly influences the turbulence characteristics within a system. Engineers often use streamlined shapes to reduce resistance and prevent flow separation, which can lead to increased turbulence. Additionally, the incorporation of features such as vortex generators can be beneficial in managing flow behavior. • Material Selection: Choosing materials with appropriate strength and fatigue resistance. • Shape Optimization: Streamlining structures to minimize resistance and flow separation. • Flow Path Design: Incorporating features to manage turbulence, like vortex generators. Mitigating Turbulent Effects in Pipelines and Channels In the realm of fluid transport, mitigating turbulent effects in pipelines and channels is crucial for enhancing efficiency and reducing wear and tear on infrastructure. One effective strategy is to reduce the flow velocity, which can significantly diminish the intensity of turbulence. Smooth surfaces also play a vital role, as they offer less resistance to the flow of fluids, thereby minimizing the generation of turbulent eddies. Another approach involves altering the fluid's properties. By increasing the fluid's viscosity, engineers can suppress turbulent fluctuations and promote a more stable flow. This can be achieved through the addition of certain polymers or other substances that increase the cohesive forces within the fluid. While these methods are beneficial, they must be applied judiciously to balance the trade-offs between turbulence suppression and the potential for increased pumping costs or reduced flow rates. Innovations in Turbulence Management The field of turbulence management is witnessing significant advancements, driven by the need to improve efficiency and performance in various engineering systems. Innovative techniques are being developed to better control and mitigate the effects of turbulent flows. One such innovation is the use of active flow control systems, which adjust in real-time to changes in flow conditions. These systems employ sensors and actuators to modify the flow, reducing drag and preventing flow separation. • Smart materials that change shape or properties in response to flow conditions • Biomimicry-inspired designs that emulate the natural flow control seen in marine animals • Advanced algorithms for real-time flow adjustment and control These advancements not only enhance the performance of engineering systems but also contribute to the sustainability of operations by reducing energy consumption and material wear. Advanced Topics in Fluid Flow Computational Fluid Dynamics (CFD) and Flow Simulation Computational Fluid Dynamics (CFD) is a crucial tool in the engineer's arsenal for analyzing complex fluid flows. By leveraging numerical methods and algorithms, CFD allows for the simulation of fluid movement and interaction with surfaces under a variety of conditions. The accuracy of CFD simulations is paramount for predicting flow behavior and designing efficient systems. Simulation fidelity depends on the resolution of the computational mesh and the physical models employed. Below is a list of key factors that influence the quality of a CFD simulation: • Mesh size and refinement • Turbulence modeling accuracy • Boundary condition precision • Solver stability and convergence CFD has become an indispensable part of the design process, enabling engineers to visualize and optimize flow patterns before physical prototypes are constructed. This not only saves time and resources but also opens up new possibilities for innovation in fluid dynamics. Turbulence Modelling: From Theory to Practice Turbulence modelling is a critical aspect of computational fluid dynamics (CFD) that allows engineers to predict and analyze the complex behavior of turbulent flows. Accurate models are essential for designing systems that can withstand the chaotic nature of turbulence. The transition from theoretical models to practical applications involves several steps, including model selection, calibration, and validation. Each model has its own set of assumptions and limitations, which must be carefully considered: • Selection of the appropriate turbulence model based on flow conditions • Calibration of model constants using experimental data • Validation against real-world scenarios to ensure reliability SOMA Design Lab in San Francisco is at the forefront of applying these principles, offering facilities for innovation that complement the theoretical aspects of turbulence modelling. The Role of Non-Newtonian Fluids in Flow Dynamics Non-Newtonian fluids exhibit unique flow characteristics that significantly differ from those of Newtonian fluids. The viscosity of non-Newtonian fluids is not constant and changes in response to applied stress, which profoundly affects their flow dynamics. For instance, shear-thinning liquids, like ketchup, become less viscous as shear stress increases, leading to a more turbulent flow under certain conditions. Shear-thickening fluids, on the other hand, such as cornstarch mixed with water, become more viscous when subjected to stress, which can induce a transition to laminar flow or create highly complex flow patterns. Understanding these behaviors is crucial for engineers who design systems involving non-Newtonian fluids, as the flow type can dramatically impact performance and efficiency. The following list outlines some of the key considerations when dealing with non-Newtonian fluids in engineering applications: • Recognizing the type of non-Newtonian behavior (shear-thinning, shear-thickening, thixotropic, etc.) • Assessing the impact of temperature and other environmental factors on viscosity • Designing equipment and systems to accommodate variable flow characteristics • Implementing precise control mechanisms to manage flow behavior Future Trends in Fluid Flow Research As the field of fluid dynamics continues to evolve, innovative technologies and methodologies are shaping the future of fluid flow research. The integration of advanced materials, such as smart fluids, and the development of more sophisticated sensors are expected to enhance the precision of flow control and measurement. • Emergence of nano-scale flow manipulation techniques • Increased use of machine learning for predictive analysis • Development of environmentally sustainable flow systems • Greater emphasis on multidisciplinary approaches The pursuit of energy efficiency and reduction of carbon footprint will drive research towards optimizing flow systems for minimal energy consumption and maximal output. The exploration of flow behavior in extreme conditions, such as microgravity or high-pressure environments, will likely yield insights applicable to a broad range of engineering challenges. Understanding the differences between turbulence and laminar flow is crucial for engineers in various fields, from aerospace to civil engineering. Turbulent flow, characterized by chaotic fluid motion, is often associated with higher energy losses but can enhance mixing and heat transfer. On the other hand, laminar flow, with its orderly layers of fluid motion, offers minimal resistance and is preferred in systems where a smooth flow is essential. By grasping the fundamentals of these flow regimes, engineers can better design and optimize systems for efficiency, safety, and performance. This guide has aimed to elucidate the key characteristics, applications, and implications of turbulence and laminar flow, providing a solid foundation for those looking to navigate the complexities of fluid dynamics in the engineering world. Frequently Asked Questions What is the difference between turbulence and laminar flow? Turbulence refers to a chaotic, irregular flow pattern with eddies and vortices, while laminar flow is characterized by smooth, parallel layers of fluid that move in orderly paths with minimal mixing between them. How does the Reynolds number predict flow type? The Reynolds number is a dimensionless quantity that helps predict whether a flow will be laminar or turbulent. It is calculated based on the fluid's velocity, characteristic length, and viscosity. Lower values typically indicate laminar flow, while higher values suggest turbulent flow. What are the benefits of laminar flow in engineering? Laminar flow offers benefits such as reduced frictional resistance, lower energy loss, and more predictable fluid behavior, which are advantageous in applications like microfluidics, coating processes, and in the design of streamlined objects. What techniques are used to control turbulence in engineering systems? Turbulence can be controlled using various techniques such as flow straighteners, boundary layer manipulation, vortex generators, and by designing system components that promote smooth flow or absorb turbulent energy. How does computational fluid dynamics (CFD) help in understanding fluid flow? CFD is a branch of fluid mechanics that uses algorithms and numerical analysis to simulate and analyze fluid flows. It allows engineers to model complex flow scenarios, optimize designs, and predict the performance of fluid-related systems before physical prototypes are made. Why is understanding non-Newtonian fluids important in flow dynamics? Non-Newtonian fluids have viscosity that changes with the rate of shear strain, which affects their flow characteristics. Understanding these fluids is crucial for designing systems that handle them, such as in the food, cosmetics, and biomedical industries.
{"url":"https://www.iancollmceachern.com/single-post/the-engineer-s-guide-to-turbulence-and-laminar-flow-what-s-the-difference","timestamp":"2024-11-13T18:21:25Z","content_type":"text/html","content_length":"1050487","record_id":"<urn:uuid:f8be71b3-fe94-4308-bbf8-1988138539a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00668.warc.gz"}
Such distinctions are however important in the produc- tion of limit cycles close to the foci in perturbations of the systems ISSN: 1072-6691. URL: http://ejde.math.txstate.edu or http://ejde.math.unt.edu ftp ejde.math.txstate.edu TOTAL FINITE MULTIPLICITY mf = 2 JOAN C. ART ´ES, JAUME LLIBRE, DANA SCHLOMIUK, NICOLAE VULPE Abstract. In this work we consider the problem of classifying all configura- tions of singularities, both finite and infinite of quadratic differential systems, with respect to thegeometric equivalence relationdefined in [3]. This relation is deeper than thetopological equivalence relation which does not distinguish between a focus and a node or between a strong and a weak focus or between foci of different orders. Such distinctions are however important in the produc- tion of limit cycles close to the foci in perturbations of the systems. The notion ofgeometric equivalence relationof configurations of singularities allows to in- corporates all these important geometric features which can be expressed in purely algebraic terms. This equivalence relation is also deeper than thequal- itative equivalence relation introduced in [17]. Thegeometric classification of all configurations of singularities, finite and infinite, of quadratic systems was initiated in [4] where the classification was done for systems with total multiplicitymf of finite singularities less than or equal to one. In this article we continue the work initiated in [4] and obtain thegeometric classification of singularities, finite and infinite, for the subclass of quadratic differential systems possessing finite singularities of total multiplicitymf= 2. We obtain 197geometrically distinct configurations of singularities for this family. We also give here the global bifurcation diagram of configurations of singularities, both finite and infinite, with respect to thegeometric equivalence relation, for this class of systems. The bifurcation set of this diagram is algebraic. The bifurcation diagram is done in the 12-dimensional space of parameters and it is expressed in terms of polynomial invariants. The results can therefore be applied for any family of quadratic systems in this class, given in any normal form. Determining the geometric configurations of singularities for any such family, becomes thus a simple task using computer algebra calculations. 1. Introduction and statement of main results 2 2. Compactifications associated to planar polynomial differential systems 7 2.1. Compactification on the sphere and on the Poincar´e disk 7 2.2. Compactification on the projective plane 11 2000Mathematics Subject Classification. 58K45, 34C05, 34A34. Key words and phrases. Quadratic vector fields; infinite and finite singularities; affine invariant polynomials; Poincar´e compactification; configuration of singularities; geometric equivalence relation. 2014 Texas State University - San Marcos. Submitted November 25, 2013. Published July 18, 2014. 2.3. Assembling data on infinite singularities in divisors of the line at infinity 13 3. Some geometric concepts 16 4. Notation for singularities of polynomial differential systems 27 Semi–elemental points: 28 Nilpotent points: 28 Intricate points: 28 Line at infinity filled up with singularities: 29 5. Invariant polynomials and preliminary results 30 5.1. Affine invariant polynomials associated with infinite singularities 30 5.2. Affine invariant polynomials associated to finite singularities 32 6. Proof of the main theorem 37 6.1. The family of quadratic differential systems with only two distinct complex finite singularities 38 A. Systems withm=h= 1. 40 B. Systems withm= 1, h= 0. 40 6.2. The family of quadratic differential systems with two real distinct finite singularities which in additional are elemental 43 A. Systems (6.29) 60 B. Systems (6.30) 69 6.3. The family of quadratic differential systems with only one finite singularity which in addition is of multiplicity two 71 Acknowledgments 77 References 77 1. Introduction and statement of main results We consider here differential systems of the form dt =p(x, y), dy dt =q(x, y), (1.1) where p, q ∈ R[x, y], i.e. p, q are polynomials inx, y over R. We call degree of a system (1.1) the integer m= max(degp,degq). In particular we callquadratic a differential system (1.1) withm= 2. We denote here by QS the whole class of real quadratic differential systems. The study of the class QS has proved to be quite a challenge since hard problems formulated more than a century ago, are still open for this class. It is expected that we have a finite number of phase portraits in QS. Although we have phase portraits for several subclasses of QS, the complete list of phase portraits of this class is not known and attempting to topologically classify these systems, which occur rather often in applications, is a very complex task. This is partly due to the elusive nature of limit cycles and partly to the rather large number of parameters involved. This family of systems depends on twelve parameters but due to the group action of real affine transformations and time homotheties, the class ultimately depends on five parameters which is still a rather large number of parameters. For the moment only subclasses depending on at most three parameters were studied globally, including global bifurcation diagrams (for example [2]). On the other hand we can restrict the study of the whole quadratic class by focusing on specific global features of the systems in this family. We may thus focus on the global study of singularities and their bifurcation diagram. The singularities are of two kinds: finite and infinite. The infinite singularities are obtained by compactifying the differential systems on the sphere or on the Poincar´e disk as defined in Section 2 (see also [14]). The global study of quadratic vector fields in the neighborhood of infinity was initiated by Coll in [13] where he characterized all the possible phase portraits in a neighborhood of infinity. Later on Nikolaev and Vulpe in [20] classified topologically the singularities at infinity in terms of invariant polynomials. Schlomiuk and Vulpe used geometric concepts defined in [25], and also introduced some new geometric concepts in [26] in order to simplify the invariant polynomials and the classification. To reduce the number of phase portraits in half, in both cases thetopological equiv- alence relation was taken to mean the existence of a homeomorphism of the plane carrying orbits to orbits andpreserving or reversing the orientation. In [5] the au- thors classified topologically (adding also the distinction between nodes and foci) the whole quadratic class, according to configurations of their finite singularities. In the topological classification no distinction was made among the various types of foci or saddles, strong or weak of various orders. However these distinctions of an algebraic nature are very important in the study of perturbations of systems possessing such singularities. Indeed, the maximum number of limit cycles which can be produced close to the weak foci in perturbations depends on the orders of the foci. There are also three kinds of simple nodes as we can see in Figure 1 below where the local phase portraits around the singularities are given. Figure 1. Different types of nodes. In the three phase portraits of Figure 1 the corresponding three singularities are stable nodes. These portraits are topologically equivalent but the solution curves do not arrive at the nodes in the same way. In the first case, any two distinct non- trivial phase curves arrive at the node with distinct slopes. Such a node is called a star node. In the second picture all non-trivial solution curves excepting two of them arrive at the node with the same slope but the two exception curves arrive at the node with a different slope. This is the generic node with two directions. In the third phase portrait all phase curves arrive at the node with the same slope. Here algebraic distinction means that the linearization matrices at these nodes and their eigenvalues, distinguish the nodes in Figure 1, see [27]. We recall that the first and the third types of nodes could produce foci in per- turbations and the first type of nodes is also involved in the existence of invariant straight lines of differential systems. For example it can easily be shown that if a quadratic differential system has two finite star nodes then necessarily the system possesses invariant straight lines of total multiplicity 6. Furthermore, a generic node at infinity may or may not have the two exceptional curves lying on the line at infinity. This leads to two different situations for the phase portraits. For this reason we split the generic nodes at infinity in two types. The distinctions among the nilpotent and linearly zero singularities finite or infinite can also be refined, as done in [4, Section 4]. Thegeometric equivalence relation for finite or infinite singularities, introduced in [3] and used in [4], takes into account such distinctions. The concept ofgeometric equivalence of configurations of singularities was defined and discussed in detail in a full section (Section 4) of our paper [4], also in [3]. This concept involves several notions such as “tangent equivalence”, “order equivalence of weak singularities” and “blow-up equivalence”. This last notion is subtle and cannot be described briefly. Therefore we advise the interested reader to consult Section 4 of [4] or of [3]. This equivalence relation is deeper than the qualitative equivalence relation in- troduced by Jiang and Llibre in [17] because it distinguishes among the foci (or saddles) of different orders and among the various types of nodes. This equivalence relation also induces a deeper distinction among the more complicated degenerate singularities. To distinguish among the foci (or saddles) of various orders we use the algebraic concept of Poincar´e-Lyapunov constants. We call strong focus (or strong saddle) a focus (or a saddle) with non–zero trace of the linearization matrix at this point. Such a focus (or saddle) will be considered to have the order zero. A focus (or saddle) with trace zero is called a weak focus (weak saddle). For details on Poincar´e- Lyapunov constants and weak foci we refer to [24], [18]. Algebraic information may not be significant for the local (topological) phase portrait around a singularity. For example, topologically there is no distinction between a focus and a node or between a weak and a strong focus. However, as indicated before, algebraic information plays a fundamental role in the study of perturbations of systems possessing such singularities. The following is a legitimate question: How far can we go in the global theory of quadratic (or more gen- erally polynomial) vector fields by using mainly algebraic means? For certain subclasses of quadratic vector fields the full description of the phase portraits as well as of the bifurcation diagrams can be obtained using only algebraic tools. Examples of such classes are: • the quadratic vector fields possessing a center [34, 23, 37, 21]; • the quadratic Hamiltonian vector fields [1, 6]; • the quadratic vector fields with invariant straight lines of total multiplicity at least four [27, 28]; • the planar quadratic differential systems possessing a line of singularities at infinity [29]; • the quadratic vector fields possessing an integrable saddle [7]. • the family of Lotka-Volterra systems [30, 31], once we assume Bautin’s analytic result saying that such systems have no limit cycles; In the case of other subclasses of the quadratic class QS, such as the subclass of systems with a weak focus of order 3 or 2 (see [18, 2]) the bifurcation diagrams were obtained by using an interplay of algebraic, analytic and numerical methods. These subclasses were of dimensions 2 and 3 modulo the action of the affine group and time rescaling. So far no 4-dimensional subclasses of QS were studied globally so as to also produce bifurcation diagrams and such problems are very difficult due to the number of parameters as well as the increased complexities of these classes. Although we now know that in trying to understand these systems, there is a limit to the power of algebraic methods, these methods have not been used far enough. For example the global classification of singularities, finite and infinite, using thegeometric equivalence relation,can be done by using only algebraic meth- ods. The first step in this direction was done in [3] where the study of the whole class QS, according to the configurations of the singularities at infinity was ob- tained by using only algebraic methods. This classification was done with respect to the geometric equivalence relation of configurations of singularities. Our work in [3] can be extended so as to also include the finite singularities for the whole class QS. To obtain the globalgeometric classificationof all possible configurations of singularities, finite and infinite, of the class QS, by purely algebraic means is a long term goal since we expect to finally obtain over 1000 distinct configurations of singularities. In [4] we initiated the work on this project by studying the configu- rations of singularities for the subclass of QS for which the total multiplicitymf of finite singularities is less than or equal to one. Our goal here is to continue this work by geometrically classifying the config- urations of all singularities with total finite multiplicity m[f] = 2 for systems in QS. We recall here below the notion ofgeometric configuration of singularitiesdefined in [4] for both finite and infinite singularities. We distinguish two cases: (1) If we have a finite number of infinite singular points and a finite number of finite singularities we callgeometric configuration of singularities, finite and infinite, the set of all these singularities each endowed with its own multiplicity together with their local phase portraits endowed with additional geometric structure involving the concepts of tangent, order and blow–up equivalences defined in Section 4 of [4] and using the notations described here in Section 4. (2) If the line at infinity Z = 0 is filled up with singularities, in each one of the charts at infinityX 6= 0 andY 6= 0, the corresponding system in the Poincar´e compactification (see Section 2) is degenerate and we need to do a rescaling of an appropriate degree of the system, so that the degeneracy be removed. The resulting systems have only a finite number of singularities on the lineZ= 0. In this case we callgeometric configuration of singularities, finite and infinite, the union of the set of all points at infinity (they are all singularities) with the set of finite singularities - taking care to single out the singularities at infinity of the “reduced” system, taken together with the local phase portraits of finite singularities endowed with additional geometric structure as above and the local phase portraits of the infinite singularities of the reduced system. We define the following affine invariants: Let ΣCbe the sum of the finite orders of weak singularities (foci or weak saddles) in a configurationCof a quadratic system and letMCbe the maximum finite order of a weak singularity in a configurationCof a quadratic system. Clearly ΣC andMC are affine invariants. Let Σ2 (respectively M2) be the maximum of all ΣC (respectively MC) for the subclass of QS with mf = 2. In stating our theorem we take care to include the results about the configu- rations containing centers and integrable saddles or containing weak singularities which are foci or saddles, since these singularities are especially important having the potential of producing limit cycles in perturbations. We use the notation in- troduced in [4] denoting byf^(i),s^(i), the weak foci and the weak saddles of orderi and bycand$the centers and integrable saddles. Our results are stated in the following theorem. Theorem 1.1. (A) We consider here all configurations of singularities, finite and infinite, of quadratic vector fields with finite singularities of total multiplicitym[f] = 2. These configurations are classified in Diagrams 1–3 according to the geometric equivalence relation. We have 197 geometric distinct configurations of singularities, finite and infinite. More precisely 16 configurations with two distinct complex finite singularities; 151 configurations with two distinct real finite singularities and 30 with one real finite singularity of multiplicity2. (B) For the subclass of QS withm[f] = 2we haveΣ[2]= 2 =M[2]. There are only 6 configurations of singularities with finite weak singular points withΣ[C]= 2. These have the following combinations of finite singularities: f^(1), f^(1);s^(1), s^(1); s^(2), n; s^(2), n^d;s^(2), f;f^(2), s. There are 7 configurations of singularities with finite weak singular points with ΣC = 1. These have the following combinations of finite singularities: f^(1), n; f^(1), n^d;f^(1), s;f^(1), f;s^(1), n;s^(1), n^d;s^(1), f. There are 19 configurations containing a center or an integrable saddle, only 6 of them with a center. There are 8 distinct couples of finite singularities occurring in these configurations. They are: c,$;c, s;$,$;$,s;$,n;$,n^∗;$,n^d;$,f. (C) Necessary and sufficient conditions for each one of the 197 different equiv- alence classes can be assembled from these diagrams in terms of 31 invariant poly- nomials with respect to the action of the affine group and time rescaling, given in Section 5. (D) The Diagrams 1–3 actually contain the global bifurcation diagram in the 12- dimensional space of parameters, of the global configurations of singularities, finite and infinite, of this family of quadratic differential systems. (E) Of all the phase portraits in the neighborhood of the line at infinity, which are here given in Figure 2, six are not realized in the family of systems withm[f] = 2. They are Configs 17; 19; 30; 32; 43; 44. (see Figure 2). Remark 1.2. The diagrams are constructed using the invariant polynomials µ0, µ1,. . . which are defined in Section 5. In the diagrams conditions on these invariant polynomials are listed on the left side of the diagrams, while the specific geometric configurations appear on the right side of the diagram. These configurations are expressed using the notation described in Section 4. The invariants and comitants of differential equations used for proving our main results are obtained following the theory of algebraic invariants of polynomial dif- ferential systems, developed by Sibirsky and his disciples (see for instance [32, 35, 22, 8, 12]). Remark 1.3. We note that the geometric equivalence relation for configurations is much deeper than the topological equivalence. Indeed, for example the topo- logical equivalence does not distinguish between the following three configurations Diagram 1. Global configurations: the caseµ0=µ1= 0,µ26= 0, U<0. which are geometrically non-equivalent: n,f, SN, c, c;n,f^(1), SN, c, cand n^d, f^(1), SN, c, c where n means a singularity which is a node, capital letters indicate points at infinity, cin case of a complex point andSN a saddle–node at infinity. 2. Compactifications associated to planar polynomial differential systems 2.1. Compactification on the sphere and on the Poincar´e disk. Planar polynomial differential systems (1.1) can be compactified on the sphere. For this we consider the affine plane of coordinates (x, y) as being the plane Z = 1 inR^3 with the origin located at (0,0,1), thex–axis parallel with theX–axis inR^3, and they–axis parallel to theY–axis. We use a central projection to project this plane on the sphere as follows: for each point (x, y,1) we consider the line joining the Diagram 2. Global configurations: the caseµ0=µ1= 0,µ26= 0, U>0. origin with (x, y,1). This line intersects the sphere in two pointsP1= (X, Y, Z) and P2= (−X,−Y,−Z) where (X, Y, Z) = (1/p x^2+y^2+ 1)(x, y,1). The applications (x, y)7→ P1 and (x, y) 7→P2 are bianalytic and associate to a vector field on the plane (x, y) an analytic vector field Ψ on the upper hemisphere and also an analytic vector field Ψ^0on the lower hemisphere. A theorem stated by Poincar´e and proved Diagram2 (continued). Global configurations: the case µ0=µ1= 0,µ26= 0, U>0. in [15] says that there exists an analytic vector field Θ on the whole sphere which simultaneously extends the vector fields on the two hemispheres. By thePoincar´e compactification on the sphere of a planar polynomial vector field we mean the restriction ¯Ψ of the vector field Θ to the union of the upper hemisphere with the equator. For more details we refer to [14]. The vertical projection of ¯Ψ on the plane Z = 0 gives rise to an analytic vector field Φ on the unit disk of this plane. By Diagram2 (continued). Global configurations: the case µ0=µ1= 0,µ26= 0, U>0. the compactification on the Poincar´e disk of a planar polynomial vector field we understand the vector field Φ. By asingular point at infinityof a planar polynomial vector field we mean a singular point of the vector field ¯Ψ which is located on the equator of the sphere, respectively a singular point of the vector field Φ located on the boundary circle of the Poincar´e disk. Diagram2 (continued). Global configurations: the case µ0=µ1= 0,µ26= 0, U>0. 2.2. Compactification on the projective plane. To polynomial system (1.1) we can associate a differential equation ω1 =q(x, y)dx−p(x, y)dy= 0. Since the differential system (1.1) is with real coefficients, we may associate to it a foliation with singularities on the real, respectively complex, projective plane as indicated below. The equation ω[1] = 0 defines a foliation with singularities on the real or complex plane depending if we consider the equation as being defined over the Diagram2 (continued). Global configurations: the case µ0=µ1= 0,µ26= 0, U>0. real or complex affine plane. It is known that we can compactify these foliations with singularities on the real respectively complex projective plane. In the study of real planar polynomial vector fields, their associated complex vector fields and their singularities play an important role. In particular such a vector field could have complex, non-real singularities, by this meaning singularities of the associated Diagram2 (continued). Global configurations: the case µ[0]=µ[1]= 0,µ[2]6= 0, U>0. complex vector field. We briefly recall below how these foliations with singularities are defined. The application Υ :K^2 −→P2(K) defined by (x, y)7→[x:y : 1] is an injection of the plane K^2 over the field Kinto the projective plane P2(K) whose image is the set of [X :Y : Z] with Z 6= 0. If K isR or C this application is an analytic injection. If Z 6= 0 then (Υ)^−1([X : Y :Z]) = (x, y) where (x, y) = (X/Z, Y /Z). We obtain a mapi:K^3\ {Z = 0} −→K^2defined by [X:Y :Z]7→(X/Z, Y /Z). Considering thatdx=d(X/Z) = (ZdX−XdZ)/Z^2anddy= (ZdY−Y dZ)/Z^2, the pull-back of the formω1 via the mapiyields the form i∗(ω1) =q(X/Z, Y /Z)(ZdX−XdZ)/Z^2−p(X/Z, Y /Z)(ZdY −Y dZ)/Z^2 which has poles on Z = 0. Then the formω =Z^m+2i[∗](ω1) on K^3\ {Z = 0}, K beingRorCandmbeing the degree of systems (1.1) yields the equationω= 0: A(X, Y, Z)dX+B(X, Y, Z)dY +C(X, Y, Z)dZ= 0 (2.1) onK^3\ {Z= 0} whereA,B,Care homogeneous polynomials over K with A(X, Y, Z) =ZQ(X, Y, Z), Q(X, Y, Z) =Z^mq(X/Z, Y /Z), B(X, Y, Z) =ZP(X, Y, Z), P(X, Y, Z) =Z^mp(X/Z, Y /Z), C(X, Y, Z) =Y P(X, Y, Z)−XQ(X, Y, Z). The equationAdX +BdY +CdZ = 0 defines a foliation F with singularities on the projective plane over K with K either R or C. The points at infinity of the foliation defined by ω[1] = 0 on the affine plane are the points [X :Y : 0] and the lineZ = 0 is called theline at infinity of the foliation with singularities generated byω1= 0. The singular points of the foliation F are the solutions of the three equations A = 0, B = 0, C = 0. In view of the definitions of A, B, C it is clear that the singular points at infinity are the points of intersection ofZ = 0 withC= 0. 2.3. Assembling data on infinite singularities in divisors of the line at infinity. In the previous sections we have seen that there are two types of multi- plicities for a singular point pat infinity: one expresses the maximum number m of infinite singularities which can split fromp, in small perturbations of the system and the other expresses the maximum number m^0 of finite singularities which can Diagram 3. Global configurations: the caseµ0=µ1= 0,µ26= 0, U= 0. split fromp, in small perturbations of the system. We shall use a column (m^0, m)^t to indicate this situation. We are interested in the global picture which includesall singularities at infinity. Therefore we need to assemble the data for individual singularities in a convenient, Figure 2. Topologically distinct local configurations of ISPs ([26],[29]). precise way. To do this we use for this situation the notion ofcycleon an algebraic variety as indicated in [21] and which was used in [18] as well as in [26]. We briefly recall here the definition of cycle. LetV be an irreducible algebraic variety over a fieldK. A cycle of dimension r or r−cycleon V is a formal sum P Wn[W]W, whereW is a subvariety of V of dimension rwhich is not contained in the singular locus ofV,n[W] ∈Z, and only a finite number of the coefficientsn[W] are non-zero. Thedegreedeg(J) of a cycleJ is defined byP WnW. An (n−1)-cycle is called adivisoronV. These notions were used for classification purposes of planar quadratic differential systems in [21, 18, 26]. To system (1.1) we can associate two divisors on the line at infinity Z = 0 of the complex projective plane: D[S](P, Q;Z) = P wI[w](P, Q)w and D[S](C, Z) = P wI[w](C, Z)wwherew∈ {Z= 0}and where byI[w](F, G) we mean the intersection multiplicity atwof the curves F(X, Y, Z) = 0 and G(X, Y, Z) = 0, withF andG homogeneous polynomials inX, Y, Z overC. For more details see [18]. Following [26] we assemble the above two divisors on the line at infinity into just one but with values in the ringZ^2: D[S] = X Iw(P, Q) Iw(C, Z) This divisor encodes the total number of singularities at infinity of a system (1.1) as well as the two kinds of multiplicities which each singularity has. The meaning of these two kinds of multiplicities are described in the definition of the two divisors DS(P, Q;Z) andDS(C, Z) on the line at infinity. 3. Some geometric concepts Firstly we recall some terminology. We callelemental a singular point with its both eigenvalues not zero. We callsemi–elementala singular point with exactly one of its eigenvalues equal to zero. We callnilpotent a singular point with both its eigenvalues zero but with its Jacobian matrix at this point not identically zero. We callintricate a singular point with its Jacobian matrix identically zero. Theintricate singularities are usually called in the literature linearly zero. We use here the term intricate to indicate the rather complicated behavior of phase curves around such a singularity. In this section we use the same concepts we considered in [3] and [4] such as orbitγ tangent to a semi–lineLatp,well defined angle atp,characteristic orbit at a singular point p, characteristic angle at a singular point, characteristic direction atp. Since these are basic concepts for the notion ofgeometric equivalence relation we recall here these notions as well as a few others. We assume that we have an isolated singularityp. Suppose that in a neighbor- hood U of p there is no other singularity. Consider an orbit γ in U defined by a solution Γ(t) = (x(t), y(t)) such that lim[t→+∞]Γ(t) =p(or lim[t→−∞]Γ(t) = p). For a fixed t consider the unit vector C(t) = (−−−−−→ Γ(t)−pk. Let L be a semi–line ending atp. We shall say thatthe orbitγ is tangent to a semi–lineLat pif lim[t→+∞]C(t) (or lim[t→−∞]C(t)) exists andLcontains this limit point on the unit circle centered atp. In this case we callwell defined angle of Γatpthe angle between the positivex–axis and the semi–lineL measured in the counterclockwise sense. We may also say thatthe solution curveΓ(t) tends topwith a well defined angle. A characteristic orbit at a singular point pis the orbit of a solution curve Γ(t) which tends topwith a well defined angle. We callcharacteristic angle at the singular point pa well defined angle of a solution curve Γ(t). The line through p extending the semi-lineLis called acharacteristic direction. Assume the singularity is placed at (0,0). Then the polynomial P CD(x, y) = ypm(x, y)−xqm(x, y), wherem is the starting degree of a polynomial differential system of the form (1.1), is called the Polynomial of Characteristic Directions of (1.1). In fact in case P CD(x, y) 6≡ 0 the factorization of P CD(x, y) gives the characteristic directions at the origin. If a singular point has an infinite number of characteristic directions, we will call it astar–likepoint. It is known that the neighborhood of any isolated singular point of a polynomial vector field, which is not a focus or a center, is formed by a finite number of sectors which could only be of three types: parabolic, hyperbolic and elliptic (see [14]). It is also known that any degenerate singular point (nilpotent or intricate) can be desingularized by means of a finite number of changes of variables, called blow–up’s, into elementary singular points (for more details see the Section on blow–up in [3] or [14]). Consider the three singular points given in Figure 3. All three are topologically equivalent and their neighborhoods can be described as having two elliptic sectors and two parabolic ones. But we can easily detect some geometric features which distinguish them. For example (a) and (b) have three characteristic directions and (c) has only two. Moreover in (a) the solution curves of the parabolic sectors are tangent to only one characteristic direction and in (b) they are tangent to two characteristic directions. All these properties can be determined algebraically. Figure 3. Some topologically equivalent singular points. The usual definition of a sector is of topological nature and it is local with respect to a neighborhood around the singular point. We work with a new notion, namely ofgeometric local sector, introduced in [3] which distinguishes the phase portraits of Figure 3. As we shall later see this notion is characterized in algebraic terms. We begin with the elemental singular points having characteristic directions. These are either two-directions nodes, one-direction nodes, star nodes or saddles. The first three cases are distinguished algebraically using their eigenvalues (see Figure 1). In the case of saddles the notion of geometric local sector coincides with usual notion of topological We consider now the semi–elemental singular points. These could be saddles, nodes or saddle–nodes. Each saddle has four separatrices and four hyperbolic sec- tors. Here again we call geometric local sector any one of these hyperbolic sectors and we call borsec (contraction of border with sector) any one of the four separa- trices. A semi–elemental node has two characteristic directions generating four half lines. For each one of these half lines there exists at least one orbit tangent to that half line and we pick an orbit tangent to that half line. Removing these four orbits together with the singular point, we are left with four sectors which we call geometric local sectors and we callborsecs these four orbits. Consider now a semi–elemental saddle–node. Such a singular point has three separatrices and three topological sectors, two hyperbolic ones and one parabolic sector. Such a singular point has four characteristic half lines and one of them separates the parabolic sector in two. By removing an orbit tangent to a half line for each one of the half lines as well as the singular point we obtain four sectors which we call geometric local sectors. We call borsecs these four orbits. We now proceed to extend the notion of geometric local sector and of borsec for nilpotent and intricate singular points. The introduction of the concept of borsec in the general case will play a role in distinguishing a semi–elemental saddle–node from an intricate saddle–node such as the one indicate in Figure 4. In the semi–elemental saddle–node all orbits inside the parabolic sector are tangent to the same half–line but in the saddle-node of Figure 4 the orbits in the parabolic sector are not all tangent to the same half–line. The orbits in this parabolic sector are of three kinds: the ones tangent to separatrix (a), the ones tangent to separatrix (c) and a single orbit which is tangent to other half–line of the characteristic direction defined by separatrix (b). In this case this last orbit is called the borsec. The other three borsecs are separatrices as in the case of the semi–elemental saddle–node. Figure 4. Local phase portrait of a non semi–elemental saddle–node. To extend the notion of geometric local sector and of borsec for nilpotent and intricate singular points we start by introducing some terminology. Let δ be the border of a sufficiently small open disc D centered at point p so that δintersects all the elliptic, parabolic and hyperbolic sectors of a nilpotent or intricate singular pointp. Consider a solution Γ : (a, b)→ R^2 where (a, b) is its maximal interval of defini- tion and letγbe the orbit of Γ, i.e. γ={Γ(t)|t∈(a, b)}. We callhalf orbitofγat a singular pointpa subsetγ^0 ⊆γsuch that there existst[1]∈(a, b) for which we have either γ^0 ={Γ(t)|t∈(a, t[1])} in which case we have a =−∞, limt→−∞Γ(t) =p, Γ(t1) ∈ δ and Γ(t) ∈ D for t ∈ (−∞, t1), or γ^0 = {Γ(t)|t∈(t1, b)}, b = +∞, limt→+∞Γ(t) =p, Γ(t1)∈δand Γ(t)∈Dfort∈(t1,∞). We note that in the case of elliptic sectors there may exist orbits which are divided exactly in two half orbits. Let Ω[p]={γ^0 :γ^0 is a half orbit atp}. We shall define a relation of equivalence on Ω[p] by using the complete desingu- larization of the singular pointpin case this point is nilpotent or intricate. There are two ways to desingularize such a singular point: by passing to polar coordinates or by using rational changes of coordinates. The first has the inconvenience of us- ing trigonometrical functions, and this becomes a serious problem when a chain of blow–ups are needed in order to complete the desingularization of the degenerate point. The second uses rational changes of coordinates, convenient for our polyno- mial systems. In such a case two blow–ups in different directions are needed and information from both must be glued together to obtain the desired portrait. Here for desingularization we use the second possibility, namely with rational changes of coordinates at each stage of the process. Two rational changes are needed, one for each direction of the blow–up. If at a stage the coordinates are (x, y) and we do a blow–up of a singular point iny-direction, this means that we introduce a new variablez and consider the diffeomorphism of the (x, y) plane for x6= 0 defined by φ(x, y) = (x, y, z) where y =xz. This diffeomorphism transfers our vector field on the subset x 6= 0 of the plane (x, y) on the subset x 6= 0 of the algebraic surface y = zx. It can easily be checked that the projection (x, xz, z) 7→(x, z) of this surface on the (x, z) plane is a diffeomorphism. So our vector field on the plane (x, y) for x 6= 0 is diffeomeorphic to the vector field thus obtained on the (x, z) plane forx6= 0. The singular point (x0, y0) which we can assume to be placed at the origin (0,0), is then replaced by the straight line x= 0 = y in the 3-dimensional space of coordinatesx, y, z. This line is also the z-axis of the plane (x, z) and it is calledblow–up line. Analogously we can do a blow-up in the x-direction using the change (x, y)→ (zy, y) which is a diffeomorphism fory6= 0. The two directional blow–ups can be simplified in just one 1–direction blow–up if we make sure that the direction in which we do a blow–up is not a characteristic direction, so as to be sure that we are not going to lose information doing the blow– up in the chosen direction. This can be easily solved by a simple linear change of coordinates of the type (x, y)→(x+ky, y) where k is a constant (usually 1). It seems natural to call this linear change a k–twist as the y–axis gets twisted with some angle depending onk. It is obvious that the phase portrait of the degenerate point which is studied cannot depend on the set ofk’s used in the desingularization process. Since the complete desingularization of a nilpotent or an intricate singular point in general needs more than one blow–up, we have as many blow–up lines as we have blow–ups. As indicated above a blow–up line may be transformed by means of linear changes and through other blow–up’s in other straight lines. We will call such straight linesblow–up lines of higher order. We now introduce an equivalent relation on Ωp. We say that two half or- bits γ[1]^0, γ[2]^0 ∈ Ωp are equivalent if and only if (i) for both γ[1]^0 and γ[2]^0 we have limt→−∞Γ1(t) =p= limt→−∞Γ2(t) or limt→+∞Γ1(t) =p= limt→+∞Γ2(t), and (ii) after the complete desingularization, these orbits lifted to the final stage are tangent to the same half–line at the same singular point, or end as orbits of a star node on the same half–plane defined by the blown–up line, and (iii) both orbits must remain in the same half–plane in all the successive blow–up’s. We recall that after a complete desingularization all singular points are elemental or semi–elemental. We now single out two types of equivalence classes: (a) Suppose that an equivalence class C ∈ Ω[p]/ ∼ is such that its half orbits lifted to the last stage in the desingularization process lead to orbits which possess the following properties: i) they belong to an elemental two–directions node or to a semi–elemental saddle–node, and ii) they are all tangent to the same half–line which lies on the blow–up line. (b) Suppose that an equivalence class C ∈Ωp/∼is such that (i) its half orbits lifted to the final stage of the desingularization process, are tangent to a blow–up line of higher order, and (ii) its lifted orbits blown–down to the previous stage of the desingularization, form a part of an elliptic sector. Let Ω^0[p]/ ∼ be the set of all equivalence classes which are of type (a) or (b). Then consider the complement B[p] = (Ω[p]/ ∼)−(Ω^0[p]/ ∼) and consider a set of representatives ofBp. We callborsecanyone of these representatives. Note that the definition of borsec is independent of the choice of the discDwith boundaryδifD is sufficiently small. We callgeometric local sectorof a singular pointpwith respect to a neighborhood V, a region inV delimited by two consecutive borsecs. To illustrate the definitions of borsec and geometric local sector we will discuss the following example given in Figures 5, 6A and 6B. We have portrayed an intricate singular point pwhose desingularization needs a chain of two blow–ups and where all different kinds of elemental singular points and semi–elemental saddle–nodes appear in every possible position with respect of the blow–up line. We have taken a small enough neighborhood of the point pof boundaryδ. We split the boundary δ in different arcs and points which will correspond to the different equivalence classes of orbits. We have enumerated them from 1 to 24. The arcs of δ denoted with ∅[1] and ∅[2] correspond to hyperbolic sectors which are not considered in the equivalence classes since the orbits do not tend to p. Some of these equivalence classes have a unique orbit which is then a borsec (like 14^∗ or 4^∗). We add an asterisk superscript to denote these equivalence classes. Other equivalence classes are arcs, like 16^− or 12^−, and one representative of each one of them is taken as a borsec. We add a dash superscript to denote these equivalence classes. The remaining equivalence classes, just denoted by their number, are those which do not produce a borsec by the exceptions given in the definition. We have drawn the separatrices (which are always borsecs) with a bold continuous line. We have drawn the borsecs which are not separatrices with bold dashed lines. Other orbits are drawn as thin continuous lines. Finally, the vertical dashed line is the y-direction in which the first blow-up was done. We describe a little the blow–ups of the phase portrait of the intricate pointp given in Figure 5. Its first blow–up is given in Figure 6A. In it we see from the upper part of the figure to its lower part: q[1]) an elemental two–directions node with all but two orbits tangent to the blow–up line;q2) a semi–elemental saddle–node with direction associated to the non–zero eigenvalue being the blow–up line;q3) another intricate singular point which needs another blow–up portrayed in Figure 6B; q4) Figure 5. Local phase portrait of an intricate singular point. an elemental saddle; andq5) an elemental one–direction node which necessarily has its characteristic direction coinciding with the blow–up line. In order to make the vertical blow–up of the intricate pointq3 we must first do anε–twist since the vertical direction which corresponds to the previous blow–up line is a characteristic direction In this second blow–up given in Figure 6B we see going down from its upper part, the following elemental or semi–elemental singular points: r1) a two–directions node with only two orbits tangent to the blow–up line (this singular point corresponds to the characteristic direction given by the previous blow–up line);r[2]) a saddle;r[3]) a A B Figure 6. The two needed blow–ups for point of Figure 5. saddle–node with the direction associated to the zero eigenvalue being the blow–up line;r4) a star node. Now we describe all the classes of equivalence that we obtain in order to clarify the definitions of borsec and geometric local sector. We must move from the second blow–up to the first and after that to the original phase portrait. We enumerate the arcs in the boundary of Figure 6B (following the clockwise sense) which will correspond to the classes of equivalence of orbits in Figure 5 as follows. (1^−) The arc 1^−goes from the pointaon the vertical axis to the pointbwithout including any of them. (2) The arc 2 goes from the point b to the point 3^∗ without including any of them. The orbit that ends at pointbcorresponds to the blow–up line in the Figure 6A, and so does not survive in the original phase portrait. Thus the orbits associated to arc 1^− cannot belong to the same equivalence class as the orbits associated to arc 2 since in Figure 6A they are in different half–planes defined by the blow–up line. (3^∗) The point 3^∗ belongs to the orbit which is a separatrix of the saddler2. (∅1) The open arc∅1 goes between the points 3^∗ and 4^∗ and it is associated to a hyperbolic sector and plays no role. (4^∗) The point 4^∗ belongs to the orbit which is a separatrix of the saddle–node r3. (5) The arc 5 goes from the point 4^∗to the pointc including only the second. (6^−) The arc 6^−goes from the pointcto the pointdon the vertical axis, including the pointc. The pointc belongs to both arcs 5 and 6^−. In fact it is just a point of partition of the boundary, splitting the orbits that come from r[3] from the orbits that go to r4. Since the equivalence classes are defined regarding the half orbits there is no contradiction. (7^−) The arc 7^−goes from the pointdon the vertical axis to the pointeincluding the pointe(i.e. 7^−= (d, e] ). (8) The arc 8 goes from the pointeto the point 9^∗including the pointe. The same comment made for the pointcapplies to point e. (9^∗) The point 9^∗ belongs to the orbit which is a separatrix of the saddle–node r3. (∅2) The open arc∅2between the points 9^∗and 10^∗is associated to a hyperbolic sector and plays no role. (10^∗) The point 10^∗ belongs to the orbit which is a separatrix of the saddler[2]. (11) The arc 11 goes from the point 10^∗to the point f without including any of them. (12^−) The arc 12^− goes from the point f to the point a in the vertical axis without including any of them (i.e. 12^− = (d, e) ). The same comment done for the pointbapplies to pointf. Now we translate these notations to Figure 6A and complete the notation of the arcs on the boundary of this figure again following the clockwise sense. (13) The arc 13 goes from the point g on the vertical axis to the point 14^∗ without including any of them. (14^∗) The point 14^∗ belongs to the orbit which is tangent to the eigenvector associated to the greatest eigenvalue of the nodeq1. (15) The arc 15 goes from the point 14^∗to the pointhincluding only the second. (16^−) The arc 16^− goes from the point h to the point i including both (i.e. 16^−= [d, e] ). The following arcs and points from the point i to the point 17^∗ have already received their names when we did the blow–down from Figure 6B to Figure 6A. The arcs 6^− and 12^− of Figure 6B become adjacent in Figure 6A and the points a andd are glued together and correspond to the point which after the−ε–twist goes to the vertical axis. The region defined by these arcs forms now an elliptic sector. (17^∗) The point 17^∗ belongs to the orbit which is a separatrix of the saddleq[4]. (18^−) The arc 18^− goes from the point 17^∗ to the pointjwithout including any of them (i.e. 18^−= (17^∗, j)). (19^−) The arc 19^− goes from the pointj to the point 20^∗without including any of them (i.e. 19^−= (j,20^∗)). (20^∗) The point 20^∗ belongs to the orbit which is a separatrix of the saddleq4. The following arcs and points from the point 20^∗ to the point 21^∗ have already received their names when we have done the blow–down from Figure 6B to Figure 6A. (21^∗) The point 21^∗belongs to the orbit which is a separatrix of the saddle–node q2. (22) The arc 22 goes from the point 21^∗ to the point 23^∗without including any of them. (23^∗) The point 23^∗ belongs to the orbit which is tangent to the eigenvector associated to the greatest eigenvalue of the nodeq[1]. (24) The arc 24 goes from the point 23^∗to the pointgin the vertical axis without including any of them. Now we move to the original phase portrait in Figure 5. For clarity it is conve- nient to start the description with a hyperbolic sector. The orbit associated to the point 4^∗ defines an equivalent class with a single element and then, this element is a borsec. Moreover it is a global separatrix. The orbits associated to the points of the arc 5 form a class of equivalence but define no borsec since in the final desingularization (Figure 6B) these orbits end at a saddle–node tangent to the blow–up line and thus these orbits are in a class of equivalence of type (a) which does not produce borsec. The orbits associated to the points of the arc 6^− form a class of equivalence defining a borsec which splits the two local geometric elliptic sectors that we see in Figure 5. This borsec is not a The orbits associated to the points of the arc 12^− form a class of equivalence defining a borsec which splits a local elliptic sector from a parabolic local sector that we can see in Figure 5. Even though the class 12^− has been split from class 11 by the blow–up line of higher order (the straight line passing through pointr[1] and going from pointb to pointf in Figure 6B), we see that class 12 ^− corresponds to the part of an elliptic sector with its characteristic direction tangent to the blow–up line. So, this class of equivalence is not of type (b) and we must define a borsec there. The point (b) however will occur later on in our discussion, more precisely when we consider the arc 11. The orbit associated to the point 17^∗ defines an equivalent class with a single element and then, this element is a borsec. This borsec is not a separatrix. It is just part of a global parabolic sector but locally distinguishes the three different characteristic directions of the orbits in the arc ofδ going fromdtol. The orbits associated to the points of the arc 18^− form a class of equivalence defining a borsec which splits a local elliptic sector from a parabolic one that we can see in Figure 5. The orbits associated to the points of the arc 24 form a class of equivalence but this does not define a borsec because in the final desingularization, the correspond- ing orbits end at a two–directions node tangent to the blow–up line (this class of equivalence is of type (a)). The orbit associated to the point 23^∗ defines an equivalent class with a single element and then this element is a borsec which splits a local elliptic sector from a parabolic one that we can see in Figure 5. The orbits associated to the points of the arc 22^−form a class of equivalence but this does not define a borsec because in the final desingularization, the correspond- ing orbits end at a two–directions node tangent to the blow–up line (this class of equivalence is of type (a)). The orbit associated to the point 21^∗ defines an equivalent class with a single element and then, this element is a borsec. The orbits associated to the points of the arc 1^− form a class of equivalence defining a borsec which splits a local elliptic sector from a parabolic one that we can see in Figure 5. Even though the class 1^− has been split from class 2 by the blow–up line of higher order, in Figure 6B, we see that class 1^− corresponds to a part of an elliptic sector with its characteristic direction tangent to the blow–up line. So, this is not a class of equivalence of type (b) and we must define a borsec here. The orbits associated to the points of the arc 7^− form a class of equivalence defining a borsec which splits the two local elliptic sectors that we see in Figure 5. As in the case of arc 6^− this borsec is not a separatrix. The orbits associated to the points of the arc 8 form a class of equivalence but define no borsec since in the final desingularization (Figure 6B) these orbits end at a saddle–node tangent to the blow–up line (this equivalence class is of type (a)). The orbit associated to the point 9^∗ defines an equivalent class with a single element and then, this element is a borsec. Moreover it is a global separatrix. The orbits associated to the open arc ∅2 form a hyperbolic sector and are not associated to any equivalence class since they do not end at the singular point. The orbit associated to the point 10^∗ defines an equivalent class with a single element and then, this element is a borsec. Moreover it is a global separatrix. The orbits associated to the points of the arc 11 form a class of equivalence but define no borsec since class 11 is of type (b). In this case we are in a similar situation as with the arc 12^− but now, since the point r[2] is a saddle, the arc 11 in Figure 6A defines a parabolic sector and so there is no need of a borsec, which would otherwise be needed if the sector were elliptic. The orbit associated to the point 20^∗ defines an equivalent class with a single element and then, this element is a borsec. This is similar to the case 17^∗. The orbits associated to the points of the arc 19^− form a class of equivalence defining a borsec which splits a local elliptic sector from a parabolic one that we can see in Figure 5. This is similar to the case 18^−. The orbits associated to the points of the arc 13 form a class of equivalence but this does not define a borsec analogously with the case 24. The orbit associated to the point 14^∗ defines an equivalent class with a single element and then, this element is a borsec. The orbits associated to the points of the arc 15 form a class of equivalence which does not define a borsec analogously to the case 13. The orbits associated to the points of the arc 16^− form a class of equivalence defining a borsec which splits two local elliptic sectors. This is similar to the case 7^−. The orbits associated to the points of arc 2 form a class of equivalence but define no borsec by the same arguments used for the arc 11. The orbit associated to the point 3^∗ defines an equivalent class with a single element and then, this element is a borsec. Moreover it is a separatrix. Generically a geometric local sector is defined by two borsecs arriving at the singular point with two different well defined angles and which are consecutive. If this sector is parabolic, then the solutions can arrive at the singular point with one of the two characteristic angles, and this is a geometric information than can be revealed with the blow–up. There is also the possibility that two borsecs defining a geometric local sector tend to the singular point with the same well defined angle. Such a sector will be called a cusp–like sector which can either be hyperbolic, elliptic or parabolic denoted byH[f],E[f] andP[f] respectively. In the case of parabolic sectors we want to include the information as the orbits arrive tangent to one or to the other borsec. We distinguish the two cases writing by^xP if they arrive tangent to the borsec limiting the previous sector in clockwise sense or^yP if they arrive tangent to the borsec limiting the next sector. In the case of a cusp–like parabolic sector, all orbits must arrive with only one well determined angle, but the distinction between^xP and^yP is still valid because it occurs at some stage of the desingularization and this can be algebraically determined. Thus com- plicated intricate singular points like the two we see in Figure 7 may be described asP E^y ^xP HHH (case (a)) andEP^x[f]HH^yP[f]E (case (b)), respectively. Figure 7. Two phase portraits of degenerate singular points. The phase portrait of the intricate point of Figure 5 could be described as H[f]E[f]E[f]P^x^yP EP^x^yP E[f]E[f]H[f]P^x^yP EEE starting with the hyperbolic sector∅1 and going in the clockwise direction. A star–like point can either be a node or something much more complicated with elliptic and hyperbolic sectors included. In case there are hyperbolic sectors, they must be cusp–like. Elliptic sectors can either be cusp–like or star–like. We callspecial characteristic angleany well defined angle of a star-like point, in which either none or more than one solution curve tends to p within this well defined angle. We will callspecial characteristic directionany line such that at least one of the two angles defining it, is a special characteristic angle. 4. Notation for singularities of polynomial differential systems In [3] we introduced convenient notations which we also used in [4] and which we are also using here. These notations can easily be extended to general polynomial systems. We describe the finite and infinite singularities, denoting the first ones with lower case letters and the second with capital letters. When describing in a sequence both finite and infinite singular points, we will always place first the finite ones and only later the infinite ones, separating them by a semicolon‘;’. Elemental points: We use the letters ‘s’,‘S’ for “saddles”; ‘n’, ‘N’ for “nodes”; ‘f’ for “foci”; ‘c’ for “centers” and c (respectively c) for complex finite (respec- tively infinite) singularities. In order to augment the level of precision we distinguish the finite nodes as • ‘n’ for a node with two distinct eigenvalues (generic node); • ‘n^d’ (a one–direction node) for a node with two identical eigenvalues whose Jacobian matrix is not diagonal; • ‘n^∗’ (a star node) for a node with two identical eigenvalues whose Jacobian matrix is diagonal. In the case of an elemental infinite generic node, we want to distinguish whether the eigenvalue associated to the eigenvector directed towards the affine plane is, in absolute value, greater or lower than the eigenvalue associated to the eigenvector tangent to the line at infinity. This is relevant because this determines if all the orbits except one on the Poincar´e disk arrive at infinity tangent to the line at infinity or transversal to this line. We will denote them as ‘N^∞’ and ‘N^f’ respectively. Finite elemental foci and saddles are classified as strong or weak foci, respectively strong or weak saddles. When the trace of the Jacobian matrix evaluated at those singular points is not zero, we call them strong saddles and strong foci and we maintain the standard notations ‘s’ and ‘f’. But when the trace is zero, except for centers and saddles of infinite order (i.e. with all their Poincar´e-Lyapounov constants equal to zero), it is known that the foci and saddles, in the quadratic case, may have up to 3 orders. We denote them by ‘s^(i)’ and ‘f^(i)’ wherei= 1,2,3 is the order. In addition we have the centers which we denote by ‘c’ and saddles of infinite order (integrable saddles) which we denote by ‘$’. Foci and centers cannot appear as singular points at infinity and hence there is no need to introduce their order in this case. In case of saddles, we can have weak saddles at infinity but the maximum order of weak singularities in cubic systems is not yet known. For this reason, a complete study of weak saddles at infinity cannot be done at this stage. Due to this, in [3] and in [4] and here we chose not even to distinguish between a saddle and a weak saddle at infinity. All non–elemental singular points are multiple points, in the sense that there are perturbations which have at least two elemental singular points as close as we wish to the multiple point. For finite singular points we denote with a subindex their multiplicity as in ‘s(5)’ or in ‘esb(3)’ (the notation ‘ ’ indicates that the sad- dle is semi–elemental and ‘b’ indicates that the singular point is nilpotent). In order to describe the various kinds of multiplicity for infinite singular points we use the concepts and notations introduced in [26]. Thus we denote by ‘ ^a[b] . . .’ the maximum numbera(respectivelyb) of finite (respectively infinite) singularities which can be obtained by perturbation of the multiple point. For example ‘ ^1[1] SN’ means a saddle–node at infinity produced by the collision of one finite singularity with an infinite one; ‘ ^0[3] S’ means a saddle produced by the collision of 3 infinite singularities. Semi–elemental points: They can either be nodes, saddles or saddle–nodes, fi- nite or infinite. We will denote the semi–elemental ones always with an overline, for example ‘sn’, ‘s’ and ‘n’ with the corresponding multiplicity. In the case of infinite points we will put ‘ ’ on top of the parenthesis with multiplicities. Moreover, in cases that will be explained later (see the paragraph dedicated to intricate points), an infinite saddle–node may be denoted by ‘ ^1[1] N S’ instead of ‘ ^1[1] SN’. Semi–elemental nodes could never be ‘n^d’ or ‘n^∗’ since their eigenvalues are always different. In case of an infinite semi–elemental node, the type of collision determines whether the point is denoted by ‘N^f’ or by ‘N^∞’ where ‘ ^2[1] N’ is an ‘N^f’ and ‘ ^0[3] N’ is an ‘N^∞’. Nilpotent points: They can either be saddles, nodes, saddle–nodes, elliptic– saddles, cusps, foci or centers. The first four of these could be at infinity. We denote the nilpotent singular points with a hat ‘b’ as inesb[(3)] for a finite nilpotent elliptic–saddle of multiplicity 3 andcpb[(2)] for a finite nilpotent cusp point of multi- plicity 2. In the case of nilpotent infinite points, we will put the ‘b’ on top of the parenthesis with multiplicity, for example c^1 P EP−H (the meaning ofP EP−H will be explained in next paragraph). The relative position of the sectors of an in- finite nilpotent point, with respect to the line at infinity, can produce topologically different phase portraits. This forces to use a notation for these points similar to the notation which we will use for the intricate points. Intricate points: It is known that the neighborhood of any singular point of a polynomial vector field (except for foci and centers) is formed by a finite number of sectors which could only be of three types: parabolic, hyperbolic and elliptic (see [14]). Then, a reasonable way to describe intricate and nilpotent points is to use a sequence formed by the types of their sectors. The description we give is the one which appears in the clockwise direction (starting anywhere) once the blow–down of the desingularization is done. Thus innon-degenerate quadratic systems (that is, both components of the system are coprime), we have just seven possibilities for finite intricate singular points of multiplicity four (see [5]) which are the following ones: phpphp(4); phph(4);hh(4); hhhhhh(4);peppep(4);pepe(4); ee(4). The lower case letters used here indicate that we have finite singularities and subindex (4) indicates the multiplicity 4 of the singularities. For infinite intricate and nilpotent singular points, we insert a dash (hyphen) between the sectors to split those which appear on one side or the other of the equator of the sphere. In this way we will distinguish between ^2[2] P HP −P HP and ^2[2] P P H−P P H.
{"url":"https://123deta.com/document/z3d2j1nm-distinctions-important-produc-limit-cycles-close-perturbations-systems.html","timestamp":"2024-11-07T04:27:41Z","content_type":"text/html","content_length":"217346","record_id":"<urn:uuid:180ce5f3-c78e-4ae6-af3d-8a20ab572c28>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00654.warc.gz"}
How to round down numbers of a vector in julia? To round down numbers in a vector in Julia, you can use the floor() function. This function rounds each element of the vector down to the nearest integer that is less than or equal to the original Here is an example of rounding down numbers in a vector in Julia: 1 # Create a vector with some numbers 2 vec = [3.5, 2.7, -4.9, 6.2] 4 # Round down each element of the vector 5 rounded_vec = floor.(vec) 7 # Print the original and rounded vectors 8 println("Original vector: $vec") 9 println("Rounded down vector: $rounded_vec") This will output: 1 Original vector: [3.5, 2.7, -4.9, 6.2] 2 Rounded down vector: [3.0, 2.0, -5.0, 6.0] In this example, the floor() function is applied to each element of the vector vec using the broadcasting syntax .. This results in a new vector rounded_vec with each number rounded down.
{"url":"https://devhubby.com/thread/how-to-round-down-numbers-of-a-vector-in-julia","timestamp":"2024-11-09T14:23:06Z","content_type":"text/html","content_length":"114698","record_id":"<urn:uuid:bbef4e78-85ca-4f0f-a890-34b2e53753ca>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00443.warc.gz"}
Limitations of Fixed Length Fingerprints Next: Fingerprint Storage Up: Fingerprinting Previous: Selection Strategy An important measure of the reliability of a fingerprinting scheme is how closely its results correlate with those from full fingerprinting. For the fixed length selective fingerprinting approach we have chosen, the correlation is good for documents of similar size, but can become problematic for documents of significantly different size. Specifically, we can show that for documents of identical size, the expected match ratios for fixed length selective fingerprinting are identical to those for full fingerprinting. However for documents of different size, the results can vary by a ratio as high as the ratio of sizes of the two documents. To illustrate the problem, consider the extreme case of matching a document of 1000 words against a document of 100,000 words, and suppose that the smaller document appears once in the larger document. Now if the stored fingerprint of the larger document has size 100, then on average there will be one substring for every 1000 word piece of the larger document. In other words, the stored fingerprint of the larger document will have about one substring in common with the smaller document (i.e. about a 1% match ratio), compared with the 100% match given by full fingerprinting. We are currently investigating ways to address this issue, including using variable sized fingerprints and flagging low match ratios as significant if document sizes vary significantly. Nevin Heintze Thu Oct 3 20:48:58 EDT 1996
{"url":"https://www.cs.cmu.edu/afs/cs/user/nch/www/koala/node9.html","timestamp":"2024-11-02T14:28:41Z","content_type":"text/html","content_length":"3423","record_id":"<urn:uuid:b5547f18-2359-41c7-b2cb-9d3dbee67913>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00372.warc.gz"}
Let \(L: R^{3} \rightarrow R^{3}\) be defined by \(\mathrm{x}|\quad| \mathrm{x}+1 \mid\) \(\mathrm{L}|\mathrm{y}|=\mid 2 \mathrm{y}\) Show that \(L\) is not a linear transformation. Short Answer Expert verified The counterexample provided shows that L does not satisfy the additivity property, as L(x+y) = 0 ≠ L(x) + L(y) = 3. Thus, L is not a linear transformation. Step by step solution We need to check if the following equation holds for all x, y in R^3: L(x+y) = L(x) + L(y) To see if this property holds, let's first compute L(x+y) and L(x) + L(y): \(L(x+y) =|(x+y)+1|\) \(L(x)+L(y) = |x+1|+|y+1|\) Now, we need to find if there exist any x and y such that L(x+y) is not equal to L(x) + L(y). Consider the following vectors: \(x = \begin{bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}\) and \ (y = \begin{bmatrix} 0 \\ -1 \\ 0 \end{bmatrix}\) Now, let's compute L(x+y) and L(x)+L(y): \(L(x+y) = L(\begin{bmatrix} -1 \\ -1 \\ 0 \end{bmatrix}) = |(-1) + 1| = |0| = 0\) \(L(x) = |(-1) + 1| = |0| = 0\) \(L(y) = |(-1) + 1| = |0| = 0\) \(L(x) + L(y) = 0 + 0 = 0\) In this case, L(x+y) = L(x) + L(y). However, we need to check more examples to see if the additivity holds for other vectors as well. Consider the following vectors: \(x = \begin{bmatrix} -2 \\ 0 \\ 0 \end{bmatrix}\) and \(y = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}\) Now, let's compute L(x+y) and L(x) + L(y): \(L(x+y) = L(\begin {bmatrix} -1 \\ 0 \\ 0 \end{bmatrix}) = |(-1) + 1| = |0| = 0\) \(L(x) = |(-2) + 1| = |-1| = 1\) \(L(y) = |1 +1| = |2| = 2\) \(L(x) + L(y) = 1 + 2 = 3\) Here, we found a counterexample: L(x+y) = 0 ≠ L (x) + L(y) = 3 Since L doesn't satisfy the additivity property, we can conclude that L is not a linear transformation. There is no need to check the homogeneity property since the violation of any one property is enough to show that L is not linear. Key Concepts These are the key concepts you need to understand to accurately answer the question. Understanding the Additivity Property The additivity property is a fundamental component of a linear transformation. To elaborate, consider the function L that maps vectors from R^3 to R^3. The additivity property requires that for any two vectors x and y, the function's output should satisfy the condition L(x + y) = L(x) + L(y). In educational terms, this is akin to saying if you have two bags of apples, x and y, adding the apples together first and then using the function L (imagine it as a machine that maybe paints the apples) should yield the same result as operating on each bag individually with L and then combining the results. If this doesn't hold true for all possible combinations of x and y, then L simply cannot be considered a linear transformation. Examining through an Example Let's consider an example where the function L adds 1 to the first component of the vector. If we have a vector x = [-1, 0, 0] and another y = [0, -1, 0], applying the function to both and adding the results should be equivalent to adding the vectors first and then applying the function. However, the exercise showed that there are instances where L(x + y) ≠ L(x) + L(y), which clearly violates the additivity property. Therefore, the provided function L is not a linear transformation. Exploring the Homogeneity Property The homogeneity property, also known as scalar multiplication compatibility, is the second critical attribute required for a function to be considered a linear transformation. In simple terms, it dictates that if you were to scale a vector by any number (say, you double or triple the contents of your bag of apples), then applying the linear transformation should have the same effect as scaling the transformed vector by the same number. Mathematically, this means for any scalar a and vector x, L(ax) = aL(x). Think of it as a consistency requirement; the transformation should handle scaled up or down versions of your inputs in a predictable way. If a transformation can't pass this test for every scalar a and every vector x, it's not playing by the rules of linear algebra. Why Proper Verification Matters While the exercise indicated that the failure of the additivity property alone disqualifies the transformation from being linear, verifying homogeneity is equally important in other contexts. This property ensures the transformation respects the structure of vector space, and without it, the transformation could behave erratically when vectors are scaled. Although the exercise did not require verification of homogeneity due to the prior failure, it's an equally vital characteristic of a genuine linear transformation. The Role of Counterexamples in Linear Algebra Counterexamples are the 'smoking gun' in the detective work of linear algebra. They provide concrete evidence that a function does not exhibit the characteristics of a linear transformation. Essentially, they show us where the rules break down. A counterexample demonstrates that, at least in one situation, the function fails to comply with the rigorous definitions of additivity or homogeneity. This is like finding one apple that the machine fails to paint properly, thereby revealing a flaw in the machine's design as a whole. Navigating Through Counterexamples When dealing with linear transformations, finding a single instance where either additivity or homogeneity does not hold invalidates the transformation's linearity. In the given exercise, we examined vectors x and y and found that L(x + y) did not equal L(x) + L(y). This single counterexample was enough to disprove the assertion that L is linear, emphasizing the power of counterexamples in mathematical proofs. Counterexamples not only help in disproving but also in understanding the boundaries and limitations of mathematical concepts. By quarantining the dysfunction within a function, they steer us towards a deeper comprehension of the meticulous nature of linear algebra.
{"url":"https://www.vaia.com/en-us/textbooks/math/linear-algebra-1-edition/chapter-2/problem-51-let-l-r3-rightarrow-r3-be-defined-by-mathrmxquad-/","timestamp":"2024-11-11T23:40:24Z","content_type":"text/html","content_length":"260752","record_id":"<urn:uuid:c9f69c07-f08c-4812-bb75-5ac53cd9cf0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00259.warc.gz"}
C++ Handling Large Integer Values Iam new to Codechef . I don’t know how to handle very large integer values . The Problems given usually have large no’s like 10^10 . I have seen the use of Mod but couldn’t understand it well . u need to know modular arithmetic. if you use MOD cleverly u can get rid of integer over flow. First go through this LINK What is clever here?? => some time using (a**b)%MOD won’t work because system calculates aXb first if that leads to an interger overflow then we will end up in wrong answers so it will be better to use ((a%MOD)*(b%MOD))%MOD Same rule follows everywhere whenever there is a chance of integer overflow break it and apply MOD and then organize the equation accordingly.These rules can be found in the link. Most of this , you will get through practice. Happy Coding 1 Like No ,I don’t want to use any long long datatype . The Problems given will be given out of range of long long datatype . I want to use Modular Arithmetic for this . No ,I don’t want to use any long long datatype . The Problems given will be given out of range of long long datatype . I want to use Modular Arithmetic for this . Well, if you don’t want to use native language data-types, I am assuming that you are interested in solving problems such that the intermediate calculations are small, but, final results tend to be very large, like computing a factorial of a big number, or perform some huge exponentiation like 16^5000 or so. The “trick” behind these sort of problems is that you can use what is known as a modular value such that the value computed is actually the remainder of the answer when you divide it by the modulo So, to compute 16^5000 MOD 1000000007 is to compute the remainder of the answer 16^5000 / 1000000007. The idea is that the MOD operator is, in a sense, comulative, i.e., you can iteratively do: answer = 1 for i = 1 to 5000 answer = (answer * 16) MOD 1000000007 print answer MOD 1000000007 Like this, you ensure that the values involved in your intermediate calculations never exceed 1000000007 and also that the final result is well computed. This is a widely used trick, both to avoid computing large values and also to introduce people to how Modular Arithmetic works. Hope this helps you. 3 Likes updated my answer then Thanks for the Help .Understood it well. Thanks for the Help .Understood it well.
{"url":"https://discusstest.codechef.com/t/c-handling-large-integer-values/3821","timestamp":"2024-11-11T00:45:03Z","content_type":"text/html","content_length":"32887","record_id":"<urn:uuid:f98dac63-10e8-4a11-8842-93a0be141ff4>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00026.warc.gz"}
The nothing that is pdf a vector The vector points in the direction of flow see fig. No objects were imported when importing a pdf file into autocad. Best pdf editor and converter for windows and mac users while handling pdfs you need to have a tool that can do a number of pdf solutions without having to such for other softwares. The last line uses the print command and exports a vector pdf document as the output. Since vectors can be scaled, any vector can be rescaled b to be a unit vector. A pdf always contain several vector masks your 1 bitmap raster image is no exception. In the example below, the drawing is enlarged 400%. While figures drawn by the painters renderer are vector plots in the exported pdf, with opengl and zbuffer the pdf contains bitmap pictures of. Not all printersplotters will render a complex pdf with large amounts of mixed vector and raster content the same, especially when you add things like shadingopacity of rasters, and scaling into the mix. The last line uses the print command and exports a. Directional derivatives we know we can write the partial derivatives measure the rate of change of the function at a point in the direction of the xaxis or yaxis. We present arrays relation to pointers and consider the problems arising from their use. John deere logo vector in eps, ai, cdr free download. What gets us to a pragmatic solution for reallife problems, nothing more. If i open the new pdf file and open to 4% i expect smooth non pixelated lines. That is unless the pdf was created by photoshop with preserve photoshop editing capabilities on. You can export any of the above file types to be a pdf format, so these files could be either vector, raster, or a combination of. We can also declare a variable without an initial size. Indesign to create your content, download this issuu export preset to ensure during the export process, your pdf is issuuready. This mega bundle of tshirt designs contains 100 premium designs in vector format, easily scale these illustrations to whatever size you need. If you want least technical way, then zoom in graphic or othe. If you are asking for sure fire way to inspect then use adobe acrobat go to preflight browse internal pdf structure this will let you know if all components within pdf is vector or not. One way is using an online pdf to vector and converter and using a powerful pdf to vector software. The portable document format is built for the exchange of documents across platforms and is editable in adobe acrobatsvg. For such a function, say, yfx, the graph of the function f consists of the points x,y x,fx. We use vectors to represent entities which are described by magnitude and direction. Unit vectors a unit vector is any vector with unit length. Paths can be straight or curved, and then connected with more points to form longer paths or closed shapes. Though vectors saved in a pdf format can be opened in most vector applications, such as illustrator, coreldraw and inkscape from the opensource. As a demonstration this is fun, but not really practical. Raster pdfs will handle images and shaded modes coming from rhino, but limit the. Points p in the plane are described by pairs a,b of real numbers, where a and b stand for the x and y coordinates of. The vector pdf file will look clear and smooth at any size while the raster pdf will become blurry or grainier the more its zoomed. I have an image in a pdf file that is a raster image. It is hard to see on the page if there are detailed vector graphics. Built to provide stationary cold storage, the engineless vector 8100 allelectric trailer refrigeration unit consumes less electricity while delivering up to 4 percent higher capacity than the model it succeeds. I believe the ac convertor works very well on ac generated vector pdfs since its introduction the number of ac users looking for ways to destroy their pdf. Extract vector graphics from pdf in photoshop graphic. The invariance of the energymomentum fourvector is due to the fact that rest mass of a particle is invariant under coordinate transformations. Simply clicking on the file in the github repo viewer will render a usable version as markdown. Scanned cad drawings pose challenges because they are raster pdf files, not vector files. The next line configures the print paper size to fit the figure size. A symbol for what is not there, an emptiness that increases any number its added to, an inexhaustible and indispensable paradox. Everything you need to know about vector file formats. What is the best way to tell if my pdf file is a vector. What about the rates of change in the other directions. The vector u from q a 1,b 1 to p a 2,b 2 can be written as. A vector file is a file illustrator, corel draw that can be opened and changed repeatedly with ease and can be scaled without the loss of resolution. A read is counted each time someone views a publication summary such as the title, abstract, and list of authors, clicks on a figure, or views or downloads the fulltext. Say nothing business concept royalty free vector image. Everything you need to know about vector file formats shutterstock. Whether you know nothing about design or youre just getting started in the industry, you might struggle to figure out which file format you need for your project. In this course you will be expected to learn several things about vector spaces of course. These points lie in the euclidean plane, which, in the cartesian. Tips and tricks for common conversion problems issuu help center. While handling pdfs you need to have a tool that can do a number of pdf solutions without having to such for other softwares. Vector pdfs have the advantage in printing to larger paper sizes and getting sharp linework at any size. While if the svg graphic was cropped before converting to the pdf, it will collapse. Just as functions take values as parameters, some types like vector take other types as parameters. A pdf file too, like the eps is a compound format which may hold raster as well as vector graphics within them. Notice that the type of elements in the vector is written in angle brackets after the name vector. Ellen, whose drawings adorn and whose spirit informs this book. What is the best way to tell if my pdf file is a vector fromat. In 2010, it was listed as 107th in the fortune 500 ranking. If we also know a point on the plane, then, this plane is uniquely determined. Unfortunately, sometimes just viewing a design isnt enoughsometimes you need to edit it. Similarly in r3 the vectors i, j and k are the standard basis of r3. Basic concepts a vector v in the plane or in space is an arrow. Locking down a pdf is not the only reason to print to raster. When thirteenyearold pierre anthon leaves school to sit in a plum tree and train for becoming part of nothing, his seventh grade classmates set out on a desperate. Pdf to vector saving your designs in pdf format is a great way to ensure that anyone working on your project will be able to view and comment on them. The most we can do with scanned files, is create a copy to use as a tracing layer in your cad program. If i had made a di erent choice of coordinate system, the components of the vector would change, even though the original vector is still the same. Good advice t his chapter describes how vectors are copied and accessed through subscripting. Its useful for the web, where it can be indexed, searched, and. The scalable vector graphics format is based in xml a markup language used widely across the internet thats readable by both machines and humans. If i open to original pdf file and zoom to 400% i expect pixelated lines. First and foremost, the two lighthouses from which i take my bearings. The first two lines measure the size of your figure in inches. Whether you know nothing about design or youre just getting started in the. The best free alternatives to adobe illustrator 2020. Choose from over a million free vectors, clipart graphics, vector art images, design templates, and illustrations created by artists worldwide. As a side note, the following two code snippets are not the same thing. In other words, a plane can be determined by a point p0x0. In other words, reserve will grow the allocated storage of the vector, if necessary, but will never shrink it. The algebra of vectors we will content ourselves with vectors in the cartesian plane r2 or in three dimensional space r3. With xcode 6, apple introduced the ability to package vector images in. In this article, we will show you how to convert pdf to vector images. This book would have been nothing rather than about nothing had it not been for christopher doyle, eric simonoff and dick teresi. And it must be realized that this ability is nothing more than an extension of clear thinking. A nonzero vector is a directed line segment drawn from a point p called its initial point to a point q called its terminal point, with p and q being distinct points. To do that, we discuss copying in general and consider vectors relation to the lowerlevel notion of arrays. Jun 25, 2018 learn about the features of vector file formats and how they differ from raster files with this indepth guide. Save a figure as pdf matlab answers matlab central. One way to think of this is that we start at the beginning of the first vector. Two arrows represent the same vector if they have the same length and are parallel see. To convert pdf to vector format, it is necessary to convert a pdf to bitmap image firstly and then you can easily convert the images to vectors. Lets see how to open this format in these applications. With vector customer portal account you have fastest access to the best qualified support agent because case data are provided fully and structured. When printing a 2d pdf you need to determine which style you would like to print. Photoshop simply inherently includes vector data if it is present in the document. God nothing is impossible svg, pdf, digital file vector graphic. How to open pdf portable document vector format file. Photoshop rasterizes everything when you open a pdf. The discussion of fourvector in relativity continues but this time the focus is on the energymomentum of a particle. The magnitude of the vector heat flow at a point is the amount of thermal energy that passes, per unit time and per unit area, through an infinitesimal surface element at right angles to the direction of flow. Printing figure to pdf produces bitmap instead of vector. Danish cultural ministry prize for best childrensyouth book of 2001. It takes only a few seconds to convert a pdf to an svg, using nothing but free. If the vector already has room for the required number of elements, reserve does nothing. The file is a rasterbased pdf and pdfimport is set to import only vectorbased objects. This ignores the fact that we have to specify a coordinate system in order to determine the components of a vector. These are the basic unit vectors a unit vector is a vector of length 1. Scanned cad drawings pdf conversion programs pdf sdk. Learn about the features and advantages of vector file formats and how they differ. Please use our online request form for price quotations. Nothing had ever indicated that pierre anthon was the smartest among us, but suddenly we all knew he was. Previously it was necessary to produce three separate. Placing the vectors end to end, the vector from the start of the first vector to the end of the second vector is the sum of the vectors. Visually evaluate the difference between vector and raster pdf. Jan 17, 2020 pdf to vector saving your designs in pdf format is a great way to ensure that anyone working on your project will be able to view and comment on them. In my case, if i insert the svg graphics without cropping, the svg graphic works well in pdf. Working draft of the proposed riscv v vector extension. Do you have technical questions about our products or would you like to get technical advice for your project. The best free alternatives to adobe illustrator 2020 techradar. You can choose between a vector style pdf file and a raster type pdf file.
{"url":"https://credinovprag.web.app/696.html","timestamp":"2024-11-06T23:27:05Z","content_type":"text/html","content_length":"16558","record_id":"<urn:uuid:250f5410-898a-4ee8-b7da-86e05362c267>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00322.warc.gz"}
Nandagopal Manoj, Kevin Slagle, Wilbur Shirley, Xie Chen SciPost Phys. 10, 094 (2021) · published 29 April 2021 | Toggle abstract · pdf The X-cube model, a prototypical gapped fracton model, has been shown to have a foliation structure. That is, inside the 3+1D model, there are hidden layers of 2+1D gapped topological states. A screw dislocation in a 3+1D lattice can often reveal nontrivial features associated with a layered structure. In this paper, we study the X-cube model on lattices with screw dislocations. In particular, we find that a screw dislocation results in a finite change in the logarithm of the ground state degeneracy of the model. Part of the change can be traced back to the effect of screw dislocations in a simple stack of 2+1D topological states, hence corroborating the foliation structure in the model. The other part of the change comes from the induced motion of fractons or sub-dimensional excitations along the dislocation, a feature absent in the stack of 2+1D layers.
{"url":"https://scipost.org/contributor/1246","timestamp":"2024-11-06T09:09:33Z","content_type":"text/html","content_length":"47321","record_id":"<urn:uuid:7e436c4d-0d1f-43d5-8702-33575663187d>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00135.warc.gz"}
what if there was a prime number as the number of people who voted? This post is deleted! [Originally posted in the Discussions] Module 0 Week 3 Day 11 Challenge Explanation Part 1 I am wondering what if there was a prime number as the number of people who voted? In reality, you're right, the number of people might not be a nice number like 100. The percents might not be nice whole numbers like 60% either. However, the logic would be the same. This is because our answer is a percentage, so we don't care about the number of actual people, but only about their proportions. For example, if you eat half of a pizza and want to know what fraction is left, you don't care how big the pizza is. The answer is half. I hope this helps! Happy Learning! The Daily Challenge Team
{"url":"https://forum.poshenloh.com/topic/38/what-if-there-was-a-prime-number-as-the-number-of-people-who-voted/?","timestamp":"2024-11-06T07:58:35Z","content_type":"text/html","content_length":"58857","record_id":"<urn:uuid:540f122d-bfd6-4db9-8fbb-cfd198344b47>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00761.warc.gz"}
33 research outputs found In recent years, we have witnessed an increasing cross-fertilization between the fields of computer science, statistics, optimization and the statistical physics of learning. The area of machine learning is at the interface of these subjects. We start with an analysis in the statistical physics of learning, where we analyze some properties of the loss landscape of simple models of neural networks using the computer science formalism of Constraint Satisfaction Problems. Some of the techniques we employ are probabilistic, but others have their root in the studies of disorder systems in the statistical physics literature. After that, we focus mainly on online prediction problems, which were initially investigated in statistics but are now very active areas of research also in computer science and optimization, where they are studied in the adversarial case through the lens of (online) convex optimization. We are particularly interested in the cooperative setting, where we show that cooperation improves learning. More specifically, we give efficient algorithms and unify previous works under a simplified and more general framework The objective of this thesis is to study the moduli spaces of pairs of mirror theories in 3 dimensions with N = 4. The original conjecture of 3d mirror symmetry was motivated by the fact that in these pairs of theories the Higgs and Coulomb branches are swapped. After a brief introduction to supersymmetry we will first focus on the Higgs branch. This will be investigated through the Hilbert series and the plethystic program. The methods used for the Higgs branch are very well known in literature, more difficult is the case of the Coulomb branch since it receives quantum corrections. We will explain how it is parametrized in term of monopole operators and having both Higgs and Coulomb branches for theories with different gauge groups we will be able to show how mirror symmetry works in the case of ADE theories. We will show in which cases these Yang- Mills vacua are equivalent to one instanton moduli spaces.ope Endogeneity, i.e. the dependence of noise and covariates, is a common phenomenon in real data due to omitted variables, strategic behaviours, measurement errors etc. In contrast, the existing analyses of stochastic online linear regression with unbounded noise and linear bandits depend heavily on exogeneity, i.e. the independence of noise and covariates. Motivated by this gap, we study the over- and just-identified Instrumental Variable (IV) regression, specifically Two-Stage Least Squares, for stochastic online learning, and propose to use an online variant of Two-Stage Least Squares, namely O2SLS. We show that O2SLS achieves $\mathcal O(d_{x}d_{z}\log^2 T)$ identification and $\widetilde{\mathcal O}(\gamma \sqrt{d_{z} T})$ oracle regret after $T$ interactions, where $d_ {x}$ and $d_{z}$ are the dimensions of covariates and IVs, and $\gamma$ is the bias due to endogeneity. For $\gamma=0$, i.e. under exogeneity, O2SLS exhibits $\mathcal O(d_{x}^2 \log^2 T)$ oracle regret, which is of the same order as that of the stochastic online ridge. Then, we leverage O2SLS as an oracle to design OFUL-IV, a stochastic linear bandit algorithm to tackle endogeneity. OFUL-IV yields $\widetilde{\mathcal O}(\sqrt{d_{x}d_{z}T})$ regret that matches the regret lower bound under exogeneity. For different datasets with endogeneity, we experimentally show efficiencies of O2SLS and OFUL-IV In this preliminary (and unpolished) version of the paper, we study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated, requested to make a prediction, and pay the corresponding loss. Some feedback is then revealed to these agents and is later propagated through the network. We consider the case of full, bandit, and semi-bandit feedback. In particular, we construct a reduction to delayed single-agent learning that applies to both the full and the bandit feedback case and allows to obtain regret guarantees for both settings. We complement these results with a near-matching lower bound The geometrical features of the (non-convex) loss landscape of neural network models are crucial in ensuring successful optimization and, most importantly, the capability to generalize well. While minimizers' flatness consistently correlates with good generalization, there has been little rigorous work in exploring the condition of existence of such minimizers, even in toy models. Here we consider a simple neural network model, the symmetric perceptron, with binary weights. Phrasing the learning problem as a constraint satisfaction problem, the analogous of a flat minimizer becomes a large and dense cluster of solutions, while the narrowest minimizers are isolated solutions. We perform the first steps toward the rigorous proof of the existence of a dense cluster in certain regimes of the parameters, by computing the first and second moment upper bounds for the existence of pairs of arbitrarily close solutions. Moreover, we present a non rigorous derivation of the same bounds for sets of $y$ solutions at fixed pairwise distances Recently, the scientific community has questioned the statistical reproducibility of many empirical results, especially in the field of machine learning. To solve this reproducibility crisis, we propose a theoretically sound methodology to compare the overall performance of multiple algorithms with stochastic returns. We exemplify our methodology in Deep RL. Indeed, the performance of one execution of a Deep RL algorithm is random. Therefore, several independent executions are needed to accurately evaluate the overall performance. When comparing several RL algorithms, a major question is how many executions must be made and how can we ensure that the results of such a comparison are theoretically sound. When comparing several algorithms at once, the error of each comparison may accumulate and must be taken into account with a multiple tests procedure to preserve low error guarantees. We introduce AdaStop, a new statistical test based on multiple group sequential tests. When comparing algorithms, AdaStop adapts the number of executions to stop as early as possible while ensuring that we have enough information to distinguish algorithms that perform better than the others in a statistical significant way. We prove theoretically and empirically that AdaStop has a low probability of making a (family-wise) error. Finally, we illustrate the effectiveness of AdaStop in multiple Deep RL use-cases, including toy examples and challenging Mujoco environments. AdaStop is the first statistical test fitted to this sort of comparisons: AdaStop is both a significant contribution to statistics, and a major contribution to computational studies performed in reinforcement learning and in other domains. To summarize our contribution, we introduce AdaStop, a formally grounded statistical tool to let anyone answer the practical question: ``Is my algorithm the new state-of-the-art?''
{"url":"https://core.ac.uk/search/?q=author%3A(Della%20Vecchia%2C%20Riccardo)","timestamp":"2024-11-03T23:11:25Z","content_type":"text/html","content_length":"140751","record_id":"<urn:uuid:f4e1a4e4-69ff-492a-aec6-e790d98302f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00527.warc.gz"}
w (A) w is a postfix esoteric programming language invented by User:A based on wren designed for code golf. It has a very consistent syntax for stack operations, without involving the concept of blocks and infix operators, as well as avoiding the concept of modifyable accumulators! (W also only has a single stack, which prevents the program from being hard to read.) In addition, w doesn't apply the operations directly - every instruction chain is treated as an anonymous function. This makes the language very easy to learn and write code in. Project Euler 1 Project Euler 1: 1000 % Literal 1000 W % Filter out all numbers that fullfill % the following condition in the range % 1..100 a5m % The provided each input of F modulo 5 a3m % The provided each input of F modulo 3 & % And those values ! % Negation (i.e. remove those that don't) J % Summate the remaining list Due to W's shorthands, 1000 can be replaced with `3^`, i.e. 10 to the power of 3. An example Factorial finder There is an implicitly-given input R For every item in . the generated range from the implicitly given input 1 to the number 1: @ Roll down to exploit two operands R Reduce this range using this method: For the implicitly given 2 operands (the accumulator and the current item): * Multiply them Implicit output the value Or, since W now auto-maps in the range (1..input) inclusive: Do you make me up? Implicitly provide 2 inputs t "Trim" them (a.trim(b) is removing all characters in b that exist in the string a) = Check equality with "" the null string Implicit output There isn't a built-in, so the string is in its literal form. Hello, world!" Implicitly provide a quote Hello, world!" Push this string Implicit print " THe implicit quote p34CS+" This string. p Print & return the string. 34CS+ prepend it with a quote. Return this value. There's also a quote built-in in W. (7 bytes) Prime tester Works for 1. Also, no boring built-ins in W for this one! W % Generate a range from 1 to the input % Keep those items where the following condition is true: m % Find the remainder of the input and the current item of the range ! % Negate this result k2= % Is the length of the list 2? % Implicit output Mean of array This is an idea taken from the XENBLN page. Reference implementation Here is the implementation. This is a reimplementation in Python that makes sure all examples work. (Not in Wren, because I can't seem to get Wren to work...)
{"url":"http://esolangs.org/wiki/W_(A)","timestamp":"2024-11-15T01:14:04Z","content_type":"text/html","content_length":"22283","record_id":"<urn:uuid:0d43ac3e-971a-4f61-b7be-88335081b7c9>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00873.warc.gz"}
Translating Phrases to Expressions with Fractions Learning Outcomes • Translate word phrases to expressions with fractions Have you noticed that the examples in this section used the comparison words ratio of, to, per, in, for, on, and from? When you translate phrases that include these words, you should think either ratio or rate. If the units measure the same quantity (length, time, etc.), you have a ratio. If the units are different, you have a rate. In both cases, you write a fraction. Translate the word phrase into an algebraic expression: ⓐ [latex]427[/latex] miles per [latex]h[/latex] hours ⓑ [latex]x[/latex] students to [latex]3[/latex] teachers ⓒ [latex]y[/latex] dollars for [latex]18[/latex] hours [latex]\text{427 miles per }h\text{ hours}[/latex] Write as a rate. [latex]\dfrac{\text{427 miles }}{h\text{ hours}}[/latex] [latex]x\text{ students to 3 teachers}[/latex] Write as a rate. [latex]\dfrac{x\text{ students}}{\text{3 teachers}}[/latex] [latex]y\text{ dollars for 18 hours}[/latex] Write as a rate. [latex]\dfrac{y\text{ dollars}}{\text{18 hours}}[/latex] Translate the word phrase into an algebraic expression. a. [latex]689[/latex] miles per [latex]h[/latex] hours b. [latex]y[/latex] parents to [latex]22[/latex] students c. [latex]d[/latex] dollars for [latex]9[/latex] minutes Show Solution Translate the word phrase into an algebraic expression. a. [latex]m[/latex] miles per [latex]9[/latex] hours b. [latex]x[/latex] students to [latex]8[/latex] buses c. [latex]y[/latex] dollars for [latex]40[/latex] hours Show Solution Try It Applications of Ratios One real-world application of ratios that affects many people involves measuring cholesterol in blood. The ratio of total cholesterol to HDL cholesterol is one way doctors assess a person’s overall health. A ratio of less than [latex]5[/latex] to [latex]1[/latex] is considered good. Hector’s total cholesterol is [latex]249[/latex] mg/dl and his HDL cholesterol is [latex]39[/latex] mg/dl. ⓐ Find the ratio of his total cholesterol to his HDL cholesterol. ⓑ Assuming that a ratio less than [latex]5[/latex] to [latex]1[/latex] is considered good, what would you suggest to Hector? ⓐ First, write the words that express the ratio. We want to know the ratio of Hector’s total cholesterol to his HDL cholesterol. Write as a fraction. [latex]\dfrac{\text{total cholesterol}}{\text{HDL cholesterol}}[/latex] Substitute the values. [latex]\dfrac{249}{39}[/latex] Simplify. [latex]\dfrac{83}{13}[/latex] ⓑ Is Hector’s cholesterol ratio ok? If we divide [latex]83[/latex] by [latex]13[/latex] we obtain approximately [latex]6.4[/latex], so [latex]\dfrac{83}{13}\normalsize\approx\dfrac{6.4}{1}[/latex]. Hector’s cholesterol ratio is high! Hector should either lower his total cholesterol or raise his HDL cholesterol. 1. Find the patient’s ratio of total cholesterol to HDL cholesterol using the given information. Total cholesterol is [latex]185[/latex] mg/dL and HDL cholesterol is [latex]40[/latex] mg/dL. Show Solution 2. Find the patient’s ratio of total cholesterol to HDL cholesterol using the given information. Total cholesterol is [latex]204[/latex] mg/dL and HDL cholesterol is [latex]38[/latex] mg/dL. Show Solution Ratios of Two Measurements in Different Units To find the ratio of two measurements, we must make sure the quantities have been measured with the same unit. If the measurements are not in the same units, we must first convert them to the same We know that to simplify a fraction, we divide out common factors. Similarly in a ratio of measurements, we divide out the common unit. The Americans with Disabilities Act (ADA) Guidelines for wheel chair ramps require a maximum vertical rise of [latex]1[/latex] inch for every [latex]1[/latex] foot of horizontal run. What is the ratio of the rise to the run? In a ratio, the measurements must be in the same units. We can change feet to inches, or inches to feet. It is usually easier to convert to the smaller unit, since this avoids introducing more fractions into the problem. Write the words that express the ratio. Ratio of the rise to the run Write the ratio as a fraction. [latex]\dfrac{\text{rise}}{\text{run}}[/latex] Substitute in the given values. [latex]\dfrac{\text{1 inch}}{\text{1 foot}}[/latex] Convert [latex]1[/latex] foot to inches. [latex]\dfrac{\text{1 inch}}{\text{12 inches}}[/latex] Simplify, dividing out common factors and units. [latex]\dfrac{1}{12}[/latex] So the ratio of rise to run is [latex]1[/latex] to [latex]12[/latex]. This means that the ramp should rise [latex]1[/latex] inch for every [latex]12[/latex] inches of horizontal run to comply with the guidelines. 1. Find the ratio of the first length to the second length: [latex]32[/latex] inches to [latex]1[/latex] foot. Show Solution 2. Find the ratio of the first length to the second length: [latex]1[/latex] foot to [latex]54[/latex] inches. Show Solution
{"url":"https://courses.lumenlearning.com/wm-developmentalemporium/chapter/translating-phrases-to-expressions-with-fractions/","timestamp":"2024-11-10T14:18:09Z","content_type":"text/html","content_length":"24167","record_id":"<urn:uuid:568b355c-59c2-4436-aa81-348cc76b7bba>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00283.warc.gz"}
The value of (qc - rb) is-... | Filo Question asked by Filo student The value of (qc - rb) is- a. 0 d. depends on Not the question you're searching for? + Ask your question Filo tutor solution Learn from their 1-to-1 discussion with Filo tutors. Generate FREE solution for this question from our expert tutors in next 60 seconds Don't let anything interrupt your homework or exam prep with world’s only instant-tutoring, available 24x7 Found 6 tutors discussing this question Discuss this question LIVE for FREE 11 mins ago Practice more questions on Trigonometry View more Students who ask this question also asked View more Question Text The value of (qc - rb) is- Updated On Nov 7, 2022 Topic Trigonometry Subject Mathematics Class Class 11
{"url":"https://askfilo.com/user-question-answers-mathematics/the-value-of-qc-rb-is-32363536383733","timestamp":"2024-11-11T17:14:54Z","content_type":"text/html","content_length":"284082","record_id":"<urn:uuid:50b1e13b-a68a-44b8-92e8-a8aeacf074bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00838.warc.gz"}
How to use loop with vpasolve ?? (2024) Commented: Image Analyst on 1 Feb 2020 In the following code, I want to use loop with vpasolve. For different values of alpha, I want to solve the equation to get different values of r. syms alpha mio B r for i=1:alpha_length sol_positive(i) = vpasolve(eqn1(i),r(i),[0 Inf]); But, After running this code, it shows the following error. Error using subsref Index exceeds matrix dimensions. Error in sym/subsref (line 771) R_tilde = builtin('subsref',L_tilde,Idx); Error in tetra_v3 (line 10) So, my question is: What is the cause of these errors and how to solve them ?? and How to use loop with vpasolve ?? 1 Comment Show -1 older commentsHide -1 older comments Image Analyst on 1 Feb 2020 Direct link to this comment Original question by Erman: In the following code, I want to use loop with vpasolve. For different values of alpha, I want to solve the equation to get different values of r. syms alpha mio B r for i=1:alpha_length sol_positive(i) = vpasolve(eqn1(i),r(i),[0 Inf]); But, After running this code, it shows the following error. Error using subsref Index exceeds matrix dimensions. Error in sym/subsref (line 771) R_tilde = builtin('subsref',L_tilde,Idx); Error in tetra_v3 (line 10) So, my question is: What is the cause of these errors and how to solve them ?? and How to use loop with vpasolve ?? Sign in to comment. Sign in to answer this question. Answers (2) John D'Errico on 8 May 2018 Edited: John D'Errico on 8 May 2018 What is the cause? READ THE ERROR MESSAGE. "Index exceeds matrix dimensions." What are you indexing? What indexing is involved here? alpha seems to be a vector. How about r? r is a scalar. When i is greater than 1, r(i) will return an error, because r IS A SCALAR. What is the second element of a scalar? 5 Comments Show 3 older commentsHide 3 older comments Direct link to this comment Thanks for reply. How can I solve this error and how can I solve this equation using loop ?? John D'Errico on 8 May 2018 Direct link to this comment Edited: John D'Errico on 8 May 2018 How can you solve it? r is not a vector. You can't solve it, at least not with the code you have written. r has only one value. If r has only one value, why are you trying to use multiple values for r? Why do you need to index r at all? Direct link to this comment Edited: Eman S on 8 May 2018 I want to solve an equation for multiple times to get the value of r by varying the value of alpha in each time. The values of alpha are 1:0.5:6. With each value of alpha, i want to solve the equation to get the value of r. John D'Errico on 8 May 2018 Direct link to this comment Edited: John D'Errico on 8 May 2018 You are saving the solution in sol_positive(i), NOT in r(i). There is no need to index r. r is just the symbolic variable you are solving for in the equation. Direct link to this comment Edited: Eman S on 8 May 2018 OK. I modified the code as the following. I removed indexing from r. syms mio B r for i=1:alpha_length sol_positive(i) = vpasolve(eqn1(i),r,[0 Inf]); After running this code, it gives the following errors: Error using subsasgn In an assignment A(:) = B, the number of elements in A and B must be the same. Error in sym/privsubsasgn (line 997) L_tilde2 = builtin('subsasgn',L_tilde,struct('type','()','subs',{varargin}),R_tilde); Error in sym/subsasgn (line 834) C = privsubsasgn(L,R,inds{:}); Error in tetra_v3 (line 23) sol_positive(i) = vpasolve(eqn1(i),r,[0 Inf]); Sign in to comment. Walter Roberson on 8 May 2018 As we explored earlier, your system works out to be a polynomial and vpasolve() is going to return a list of all the solutions under the constraint you give, [0 inf]. You are trying to store that vector into a single location sol_positive(i) . If you were certain that there would always be the same number of results, you could store to sol_positive(i,:) instead, but I think you would be better off assuming that the number of positive roots might change, so I would recommend assigning to sol_positive{i} . Indeed, my test shows that most of your equations have no solution in that range. 1 Comment Show -1 older commentsHide -1 older comments Walter Roberson on 8 May 2018 Direct link to this comment Correction: this is a different system that is not polynomial. However, your system does not happen to have solutions at the alpha that end in 0.5 . There are solutions with fractional alpha, but those solutions do not happen be real valued for alpha ending in 1 /2 . For example for r = 1/5 there is a solution of alpha about 2.4 Also, because it is not polynomial, vpasolve() is only finding one solution. For alpha = 1, there are three positive solutions in the range 1 to 1.7 Sign in to comment. Sign in to answer this question. See Also MATLABLanguage FundamentalsLoops and Conditional Statements Find more on Loops and Conditional Statements in Help Center and File Exchange Community Treasure Hunt Find the treasures in MATLAB Central and discover how the community can help you! Start Hunting! An Error Occurred Unable to complete the action because of changes made to the page. Reload the page to see its updated state. Select a Web Site Choose a web site to get translated content where available and see local events and offers. Based on your location, we recommend that you select: . You can also select a web site from the following list • América Latina (Español) • Canada (English) • United States (English) • Belgium (English) • Denmark (English) • Deutschland (Deutsch) • España (Español) • Finland (English) • France (Français) • Ireland (English) • Italia (Italiano) • Luxembourg (English) • Netherlands (English) • Norway (English) • Österreich (Deutsch) • Portugal (English) • Sweden (English) • Switzerland • United Kingdom(English) Asia Pacific • Australia (English) • India (English) • New Zealand (English) • 中国 • 日本Japanese (日本語) • 한국Korean (한국어) Contact your local office
{"url":"https://robotfrank.com/article/how-to-use-loop-with-vpasolve","timestamp":"2024-11-03T05:47:41Z","content_type":"text/html","content_length":"138673","record_id":"<urn:uuid:ca6fd374-f09a-4075-b722-4a0700362505>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00872.warc.gz"}
Plane graphs, special alternating links, and contact geometry, Sydney Oct 2017, Daniel Mathews On Thursday October 5 2017 I gave a talk in the Geometry and Topology seminar at the University of Sydney. The slides from the talk are available here. Plane graphs, special alternating links, and contact geometry There is a beautiful theory of polytopes associated to bipartite plane graphs, due to Alexander Postnikov, Tamas Kalman, and others. Via a construction known as the median construction, this theory extends to knots and links — more specifically, minimal genus Seifert surfaces for special alternating links. The complements of these Seifert surfaces also have interesting geometry. The relationships between these objects provide many interesting connections between graphs, spanning trees, polytopes, knot and link polynomials, and even Floer homology. In recent work with Kalman we showed how these connections extend to contact geometry. I’ll try to explain something of these ideas. Plane graphs, special alternating links, and contact geometry, Sydney Oct 2017
{"url":"https://www.danielmathews.info/2017/10/05/plane-graphs-special-alternating-links-and-contact-geometry-sydney-oct-2017/","timestamp":"2024-11-07T18:31:16Z","content_type":"text/html","content_length":"40894","record_id":"<urn:uuid:35ef4367-4ad9-4348-91d2-b11f5144a148>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00559.warc.gz"}
Poisson approximation for non-backtracking random walks Random walks on expander graphs were thoroughly studied, with the important motivation that, under some natural conditions, these walks mix quickly and provide an efficient method of sampling the vertices of a graph. The authors of [3] studied non-backtracking random walks on regular graphs, and showed that their mixing rate may be up to twice as fast as that of the simple random walk. As an application, they showed that the maximal number of visits to a vertex, made by a non-backtracking random walk of length n on a high-girth n-vertex regular expander, is typically (1+o(1)))log n/log log n, as in the case of the balls and bins experiment. They further asked whether one can establish the precise distribution of the visits such a walk makes. In this work, we answer the above question by combining a generalized form of Brun's sieve with some extensions of the ideas in [3]. Let N[t] denote the number of vertices visited precisely t times by a non-backtracking random walk of length n on a regular n-vertex expander of fixed degree and girth g. We prove that if g = ω(1), then for any fixed t, N[t]/n is typically 1/et! + o(1). Furthermore, if g = Ω(log log n), then N[t]/ n is typically 1+o(1)/et! niformly on all t ≤ (1 - o(1)) log n/log log n and 0 for all t ≥ (1 + o(1)) log n/log log n. In particular, we obtain the above result on the typical maximal number of visits to a single vertex, with an improved threshold window. The essence of the proof lies in showing that variables counting the number of visits to a set of sufficiently distant vertices are asymptotically independent Poisson variables. ASJC Scopus subject areas Dive into the research topics of 'Poisson approximation for non-backtracking random walks'. Together they form a unique fingerprint.
{"url":"https://nyuscholars.nyu.edu/en/publications/poisson-approximation-for-non-backtracking-random-walks","timestamp":"2024-11-11T23:24:39Z","content_type":"text/html","content_length":"54708","record_id":"<urn:uuid:5d8fbcad-b3e4-4d38-938f-dfad95e9a77d>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00407.warc.gz"}
Estimation of the L function Lhat {dbmss} R Documentation Estimation of the L function Estimates the L function Lhat(X, r = NULL, ReferenceType = "", NeighborType = "", CheckArguments = TRUE) X A weighted, marked, planar point pattern (wmppp.object). r A vector of distances. If NULL, a sensible default value is chosen (512 intervals, from 0 to half the diameter of the window) following spatstat. ReferenceType One of the point types. Default is all point types. NeighborType One of the point types. Default is all point types. CheckArguments Logical; if TRUE, the function arguments are verified. Should be set to FALSE to save time in simulations for example, when the arguments have been checked elsewhere. L is the normalized version of K: L(r)=\sqrt{\frac{K}{\pi}}-r. An object of class fv, see fv.object, which can be plotted directly using plot.fv. L was originally defined as L(r)=\sqrt{\frac{K}{\pi}}. It has been used as L(r)=\sqrt{\frac{K}{\pi}}-r in a part of the literature because this normalization is easier to plot. Besag, J. E. (1977). Comments on Ripley's paper. Journal of the Royal Statistical Society B 39(2): 193-195. See Also Khat, LEnvelope # Calculate L r <- 0:30 (Paracou <- Lhat(paracou16, r)) # Plot version 2.9-0
{"url":"https://search.r-project.org/CRAN/refmans/dbmss/html/Lhat.html","timestamp":"2024-11-07T01:16:32Z","content_type":"text/html","content_length":"3877","record_id":"<urn:uuid:b413a901-7023-453c-9f7d-93fda12fb4fe>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00083.warc.gz"}
assignment problem solutions 1. Assignment Problem in Excel (In Easy Steps) 2. Lecture 19 Assignment problem : Unbalanced and maximal Assignment Problems 3. Assignment Problem Solution 4. Solution of Assignment Problems 5. Alternative Solution to Assignment problem & Solved Examples 6. How to Solve an Assignment Problem Using the Hungarian Method 2. Assignment Problem ( Brute force method) Design and Analysis of Algorithm 3. Introduction to Assignment Problem|Maximization|Linear Programming|Dream Maths 4. Introduction to Assignment Problem Multiple Solution Hungarian Method|Linear Programming|Dream Maths 5. Operation Research 16: Formulation of Assignment Problem 6. Mathematical formulation of Assignment problem and Assignment problem as a particular case of LPP
{"url":"https://blog10.website/essay/assignment-problem-solutions","timestamp":"2024-11-10T02:18:06Z","content_type":"text/html","content_length":"20570","record_id":"<urn:uuid:e5b59f64-01f0-4a50-ad0d-58a1d1accaef>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00275.warc.gz"}
Mastering the (a+b)^2 Formula: Practical Examples and Solutions The (a+b)^2 formula, also known as the square of a binomial formula, is a fundamental concept in algebra. This formula is used to find the square of the sum of two numbers, a and b. Understanding and mastering this formula is essential for solving various algebraic equations and factorizing trinomials. In this blog post, we will delve into the (a + b)2 formula, its geometric interpretation, practical applications, solved examples, and frequently asked questions. Understanding the (a+b)^2 Formula At its core, the (a+b)^2 formula exemplifies a fundamental principle in algebra, focusing on the square of the sum of two terms, a and b. This identity is expressed mathematically as (a + b) 2 = a2 + 2ab + b2. The derivation of this formula begins by multiplying the binomial (a + b) by itself, leading to an expansion that reveals the intricate relationship between the individual squares of a and b and their product. To fully appreciate the mechanics of this formula, let’s dissect its derivation step by step. Initially, we consider the multiplication of the binomial with itself: (a + b)(a + b). Applying the distributive property—where each term inside the first parentheses is multiplied by each term inside the second—we obtain a^2 + ab + ba + b^2. Observing the terms ab and ba as identical, we combine them to yield 2ab, arriving at the simplified and widely recognized form of the formula: a^2 + 2ab + b^2. This algebraic identity is not just a mere mathematical curiosity; it is a powerful tool in simplifying and solving algebraic expressions. Its significance is particularly evident when dealing with polynomial equations, where recognizing the pattern a^2 + 2ab + b^2 allows for quick factorization or expansion. Furthermore, this formula serves as a bridge connecting algebraic expressions with geometric interpretations, thereby enhancing our conceptual understanding of algebraic operations through visual means. In essence, the (a + b)^2 formula encapsulates a fundamental algebraic concept that transcends its apparent simplicity. By understanding how to derive and apply this formula, learners can unlock a deeper comprehension of algebraic expressions, laying the groundwork for more advanced mathematical explorations. Through the manipulation and application of this formula, we gain not only the ability to simplify and solve equations but also a richer, more connected understanding of the mathematical world. Geometric Interpretation of the (a+b)^2 Formula Geometric interpretation adds a vivid dimension to understanding mathematical concepts, and this is particularly true for the (a + b)^2 formula. The beauty of this algebraic identity can be seen through a simple yet illustrative geometric proof that bridges abstract algebra with tangible visualizations. Imagine constructing a large square, the sides of which measure (a + b) in length. This square is then divided into smaller geometric shapes: two smaller squares and two rectangles. The side of one small square measures ‘a’ units in length, and the other ‘b’ units correspond to the terms in the binomial we are squaring. These geometric shapes represent the individual components of the (a + b)^2 formula. The area of the first square, which is ‘a’ by ‘a’, gives us a^2, while the area of the second square, ‘b’ by ‘b’, gives us b^2. The rectangles, each sharing sides of lengths ‘a’ and ‘b’, contribute areas of ab each. When these areas are combined—adding up the areas of the two squares and the two rectangles—we obtain the total area of the large square, which is expressed algebraically as a^2 + 2ab + b^2. This geometric approach not only validates the algebraic formula but also provides a visual understanding of how the terms of the binomial square contribute to the total area. It elucidates the concept that the square of the sum of two numbers encompasses not only the individual squares of these numbers but also twice the product of the two numbers, represented by the area of the rectangles. By visualizing these geometric relationships, learners can more deeply comprehend and internalize the (a + b)^2 formula, making it a powerful tool in both algebraic and geometric Practical Applications of the (a+b)^2 Formula The practicality of the (a + b)^2 formula extends far beyond textbook exercises, impacting various fields and real-world scenarios. This algebraic identity plays a crucial role in simplifying complex algebraic expressions, thus facilitating smoother problem-solving in mathematics and its applications. One of the notable uses of this formula is in the realm of polynomial algebra, where it assists in the quick factorization of specific trinomial forms, namely a^2 + 2ab + b^2. By recognizing this pattern, mathematicians and students alike can efficiently decompose complex expressions into simpler, more manageable components. In addition to factorization, the (a + b)^2 formula is indispensable in the expansion of binomials. This application is particularly useful in higher-level algebra, where it becomes necessary to manipulate and expand expressions for further analysis or simplification. Through its application, learners develop a deeper understanding of the structure of algebraic expressions and enhance their problem-solving strategies. The formula’s utility is also evident in the domain of quadratic equations, where it can be employed to rewrite equations in a form that is more conducive to solving through methods such as completing the square or applying the quadratic formula. Furthermore, this algebraic identity finds its place in the proof of various mathematical theorems and identities, serving as a foundational element in the demonstration of more complex mathematical concepts. Beyond the confines of pure mathematics, the (a+b)^2 formula has applications in physics, engineering, and economics, where it aids in the modelling and solving of problems related to these disciplines. From calculating areas in geometry to analyzing profit functions in economics, the versatility and practicality of this formula underscore its importance across a broad spectrum of academic and professional fields. Its widespread application highlights not only the interconnectedness of mathematical concepts but also the formula’s integral role in fostering analytical thinking and problem-solving skills. Solved Examples Using the (a+b)^2 Formula Diving into the practical applications of the (a + b)^2 formula, let’s explore its versatility through a series of solved examples. These instances demonstrate not just the formula’s utility in algebraic manipulation but also its role in enhancing problem-solving techniques across various mathematical contexts. Example 1: Determine the value of (3x + 2y)^2. Applying the formula directly, we identify an as 3x and b as 2y. Substituting these values into the (a + b)^2 formula gives us: (3x + 2y)^2 = (3x)^2 + 2(3x)(2y) + (2y)^2 Simplifying the expression yields 9x^2 + 12xy + 4y^2. This illustrates how the formula can quickly expand binomials into trinomials. Example 2: Factorize x^2 + 4xy + 4y^2. In this scenario, we observe that the given expression fits the pattern a^2 + 2ab + b^2, signalling that it can be directly factorized using the (a + b)^2 formula. By comparing, we find a = x and b = 2y, leading to the factorized form: x^2 + 4xy + 4y^2 = (x + 2y)^2. This example underscores the formula’s ability to streamline the factorization process, converting complex trinomials back into simpler binomial squares. Example 3: Simplify (7x + 4y)^2. To simplify, we recognize an as 7x and b as 4y. Substituting these into our formula, we compute: (7x + 4y)^2 = (7x)^2 + 2(7x)(4y) + (4y)^2 Resulting in 49x^2 + 56xy + 16y^2. This computation demonstrates the formula’s effectiveness in expanding binomials, providing a clear path to simplification. Through these examples, the (a + b)^2 formula reveals itself as a powerful algebraic tool capable of simplifying complex expressions, factorizing trinomials, and expanding binomials with efficiency and precision. FAQs on the (a+b)^2 Formula Frequently asked questions about the (a+b)^2 formula highlight its significance and applications in mathematical problem-solving. This section addresses some of the common inquiries that arise when exploring this foundational algebraic identity. 1. Can the (a + b)^2 formula assist in algebra beyond simple squaring? Yes, beyond calculating the square of a binomial, this formula is instrumental in decomposing complex trinomials into simpler forms for easy factorization. It serves as a bridge in connecting different algebraic concepts and solving quadratic equations more efficiently. 2. Is there a visual method to understand the (a + b)^2 formula better? Absolutely. By constructing a large square divided into smaller geometric shapes—two squares and two rectangles—one can visually comprehend how the formula encapsulates the sum of their areas (a2 + 2ab + b2). This geometric representation not only validates the formula but also enhances our grasp of algebra through spatial understanding. 3. Where does the (a + b)^2 formula find utility outside of classroom algebra? This formula’s utility extends into various fields, including physics for calculating motion, engineering for structural analysis, and economics for modelling financial equations. Its application in simplifying expressions and solving equations proves invaluable in both academic studies and professional practices. 4. How does the (a + b)^2 formula aid in mathematical proofs? Mathematicians often employ this algebraic identity to prove more intricate theorems and identities. It lays the groundwork for a deeper exploration of algebra and geometry, demonstrating the interconnectedness of different mathematical areas. Read More: Demystifying Arithmetic Progression: The Essential Maths Sequence
{"url":"https://protechnews.co.uk/ab-2/","timestamp":"2024-11-14T01:15:20Z","content_type":"text/html","content_length":"142134","record_id":"<urn:uuid:0cb764e4-6042-4222-a9f8-d25dea6768cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00250.warc.gz"}
Selection sort SELECTION SORT is a comparison sorting algorithm that is used to sort a random list of items in ascending order. The comparison does not require a lot of extra space. It only requires one extra memory space for the temporal variable. By default, the sorted list is empty, and the unsorted list contains all the elements. The unsorted list is then scanned for the minimum value, which is then placed in the sorted list. This process is repeated until all the values have been compared and sorted. How does selection sort work? The first item in the unsorted partition is compared with all the values to the right-hand side to check if it is the minimum value. If it is not the minimum value, then its position is swapped with the minimum value. • For example, if the index of the minimum value is 3, then the value of the element with index 3 is placed at index 0 while the value that was at index 0 is placed at index 3. If the first element in the unsorted partition is the minimum value, then it returns its positions. • The element that has been determined as the minimum value is then moved to the partition on the left-hand side, which is the sorted list. • The partitioned side now has one element, while the unpartitioned side has (n – 1) elements where n is the total number of elements in the list. This process is repeated over and over until all items have been compared and sorted based on their values. Problem Definition A list of elements that are in random order needs to be sorted in ascending order. Consider the following list as an example. 5, 7, 2, 3, 8 The above list should be sorted to produce the following results 2, 3, 5, 7, 8. Solution (Algorithm) Step 1) Get the value of n which is the total size of the array Step 2) Partition the list into sorted and unsorted sections. The sorted section is initially empty while the unsorted section contains the entire list Step 3) Pick the minimum value from the unpartitioned section and placed it into the sorted section. Step 4) Repeat the process (n – 1) times until all of the elements in the list have been sorted. Visual Representation Given a list of five elements, the following images illustrate how the selection sort algorithm iterates through the values when sorting them. The following image shows the unsorted list Step 1) The first value 5 is compared with the rest of the values to check if it is the minimum value. 2 is the minimum value, so the positions of 5 and 2 are swapped. The green values represent the sorted partition of the list. Step 2) The value 7 which is the first element in the unsorted partition is compared with the rest of the values to find out if a lower value exists: 3 is the minimum value, so the positions of 7 and 3 are swapped: Step 3) The first element of the unsorted list with the value of 5 is compared with the rest of the values to check if it is the minimum value. Step 4) The value 7 is compared with the rest of the values. The value 7 is the minimum value, so it maintains its position in the sorted partition. Step 5) We only have one value left in the unpartitioned list. Therefore, it is already sorted. The final list is like the one shown in the above image. Selection Sort Program using Python The following code shows the selection sort implementation using Python Run the above code produces the following results Here is Code explanation: 1. Defines a function named selectionSort 2. Gets the total number of elements in the list. We need this to determine the number of passes to be made when comparing values. 3. Outer loop. Uses the loop to iterate through the values of the list. The number of iterations is (n – 1). The value of n is 5, so (5 – 1) gives us 4. This means the outer iterations will be performed 4 times. In each iteration, the value of the variable i is assigned to the variable minValueIndex 4. Inner loop. Uses the loop to compare the leftmost value to the other values on the right-hand side. However, the value for j does not start at index 0. It starts at (i + 1). This excludes the values that have already been sorted so that we focus on items that have not yet been sorted. 5. Finds the minimum value in the unsorted list and places it in its proper position 6. Updates the value of minValueIndex when the swapping condition is true 7. Compares the values of index numbers minValueIndex and i to see if they are not equal 8. The leftmost value is stored in a temporal variable 9. The lower value from the right-hand side takes the position first position 10. The value that was stored in the temporal value is stored in the position that was previously held by the minimum value 11. Returns the sorted list as the function result 12. Creates a list el that has random numbers 13. Print the sorted list after calling the selection Sort function passing in el as the parameter. The largest-so-far value, again initially the first number, must be compared to all the other numbers in the unsorted part of the list, which will require \(n-2\) comparisons. The number of comparisons keeps decreasing as the length of the unsorted section of the list gets smaller, until finally only one comparison is needed. The total number of comparisons is \((n-1)+(n-2)+(n-3)+ \ldots + 3+2+1= \frac{(n-1) n}{2}=\frac{n^2-n}{2} \) The selection sort algorithm not only does comparisons, it does exchanges. Even if the largest number in the unsorted section of the list is already at the end of the unsorted section, the algorithm exchanges this number with itself. Therefore, the algorithm does \(n\) exchanges, one for each position in the list to put the correct value in that position. However, the work contributed by exchanges and marker moving is so much less than the amount contributed by comparisons that it can be ignored. Therefore, the number of executions is \(n^2\), which can also be expressed as \(O\left(n^2\right)\). The selection sort has three categories of complexity namely: • Worst case – this is where the list provided is in descending order. The algorithm performs the maximum number of executions which is expressed as \(O\left(n^2\right)\) • Best case – this occurs when the provided list is already sorted. The algorithm performs the minimum number of executions which is expressed as \(\Omega\left(n^2\right)\) • Average case – this occurs when the list is in random order. The average complexity is expressed as \(\Theta\left(n^2\right)\) { 0 comments… add one }
{"url":"http://www.raucci.net/2021/12/17/selection-sort/","timestamp":"2024-11-10T11:57:35Z","content_type":"text/html","content_length":"82272","record_id":"<urn:uuid:ecc4dda3-71d7-4bb0-8e05-ecc264a91a0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00720.warc.gz"}
Centripetal force in context of turn rate 31 Aug 2024 Journal of Physics and Engineering Volume 12, Issue 3, 2023 Centripetal Force and Turn Rate: A Theoretical Analysis This article provides a theoretical analysis of the relationship between centripetal force and turn rate in circular motion. We derive the formula for centripetal force as a function of turn rate and discuss its implications for understanding the dynamics of rotating systems. Centripetal force is a fundamental concept in physics that describes the force required to keep an object moving in a circular path. It is a crucial component in the study of rotational motion, and its relationship with turn rate has significant implications for various fields, including engineering, astronomy, and sports science. In this article, we provide a theoretical analysis of centripetal force in the context of turn rate. The centripetal force (F_c) required to keep an object moving in a circular path is given by: F_c = m * v^2 / r where m is the mass of the object, v is its velocity, and r is the radius of the circle. In terms of turn rate (ω), which is defined as the angular velocity of the object, we can rewrite the formula for centripetal force as: F_c = m * r * ω^2 This equation shows that the centripetal force required to keep an object moving in a circular path is directly proportional to the mass of the object, the radius of the circle, and the square of the turn rate. The relationship between centripetal force and turn rate has significant implications for understanding the dynamics of rotating systems. For example, in the context of sports science, the formula for centripetal force can be used to analyze the forces experienced by athletes during high-speed turns, such as those encountered in cycling or figure skating. In engineering, the relationship between centripetal force and turn rate is crucial for designing safe and efficient rotating systems, such as centrifuges or amusement park rides. By understanding the forces involved, engineers can design systems that minimize the risk of accidents while maximizing performance. In conclusion, this article has provided a theoretical analysis of the relationship between centripetal force and turn rate in circular motion. The formula for centripetal force as a function of turn rate has significant implications for various fields, including engineering, astronomy, and sports science. Further research is needed to explore the practical applications of this concept. • [1] Halliday, D., Resnick, R., & Walker, J. (2013). Fundamentals of physics. John Wiley & Sons. • [2] Serway, R. A., & Jewett, J. W. (2018). Physics for scientists and engineers. Cengage Learning. Note: The references provided are a selection of popular physics textbooks that cover the topic of centripetal force and turn rate. Related articles for ‘turn rate’ : • Reading: Centripetal force in context of turn rate Calculators for ‘turn rate’
{"url":"https://blog.truegeometry.com/tutorials/education/3def33f3a741b5a91fcacab560779937/JSON_TO_ARTCL_Centripetal_force_in_context_of_turn_rate.html","timestamp":"2024-11-02T11:45:49Z","content_type":"text/html","content_length":"16128","record_id":"<urn:uuid:e4852dac-bfe7-4369-a3b6-9763c0f68687>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00712.warc.gz"}
Solubility Experiment Have you ever wondered why some substances dissolve in water while others don’t? The answer: solubility. Solubility is the ability of a solid, liquid, or gaseous chemical substance (or solute) to dissolve in a solvent (usually a liquid) and form a homogenous solution. There are three factors that affect • Solvent: To determine whether a solute will dissolve in a solvent, remember this saying: “Like dissolves like.” • Temperature: This factor affects the solubility of both solids and gases, with solubility increasing with the temperature. • Pressure: This factor affects the solubility of gases, with solubility also increasing with pressure. The Science Behind Solubility Put simply, a substance is considered to be soluble if it can be dissolved, most commonly in water. When a solute, such as table salt, is added to a solvent, such as water, the salt’s molecular bonds are broken before combining with the water, causing the salt to dissolve. However, for the salt to remain soluble and dissolve completely, there must be a larger concentration of water than salt in the solution. A solution becomes saturated when the solvent can dissolve no more solute. But adding heat or pressure can help to increase the solubility of the solute, depending on its state. Check out Chemistry Rocks! 3 Simple Science Experiments To Teach Students Chemistry for more activity ideas! Testing the Solubility of Substances For this experiment, your students will explore basic chemistry concepts by testing the solubility of different substances in water. From the example above, we know that table salt is highly soluble in water. What other substances can dissolve in water? • Clear containers, such as cups, beakers, or bowls • Water • Materials to test, such as sugar, sand, chalk, baking soda and Epsom salts • Stirring rods • Measuring spoon • STEM journals (optional) 1. Begin by discussing the science of solubility, and have students write down their predictions about which materials are soluble or insoluble. Students can also document the scientific process in their STEM journals. 2. Fill each container with lukewarm tap water. 3. Add a specific amount—for example, 1 tablespoon—of a test material to a container using the measuring spoon. Repeat, adding an equal amount of a different material to each container of water. 4. Use the stirring rods to mix the contents in each container. 5. Observe which materials dissolved in the water and which did not. Did students make the right predictions? Discussion Questions Once the experiment is complete, use the following questions to deepen students’ understanding of solubility and how it works: 1. What are the qualities of the soluble materials versus those of the insoluble materials? For example, the soluble materials are likely powdery and dry, while insoluble materials may have a hard, grainy texture. 2. For the materials that dissolved in the water, what do you think will happen if you keep adding more to the water? 3. What are other examples of soluble substances? 4. What are other examples of insoluble substances?
{"url":"https://www.extendednotes.com/after-school-activities/solubility-experiment","timestamp":"2024-11-06T08:33:29Z","content_type":"text/html","content_length":"66944","record_id":"<urn:uuid:f463c2cb-c44e-4163-846b-2c33ffaa1c4a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00171.warc.gz"}
ASVAB Mechanical Comprehension Practice Test This is our free ASVAB Mechanical Comprehension practice test. On the Mechanical Comprehension ASVAB test you will be given 20 minutes in which to complete 16 questions. You will need to be familiar with mechanics and applied physics. Topics will include basic mechanics, simple machines and fluid power. If you spend some time reviewing this material it will make a big difference in your final score. Start your review now with our ASVAB Mechanical Comprehension practice questions. Congratulations - you have completed . You scored %%SCORE%% out of %%TOTAL%%. Your performance has been rated as %%RATING%% Your answers are highlighted below. Which of the following is the basis for Pascal’s Law? The amplification of force in a hydraulic system. The relationship between force and volume of a liquid. The definition of pressure as pounds per square inch. The manner in which liquids conform to their container. Question 1 Explanation: The correct answer is (A). The third principle of hydraulics is the basis for Pascal’s Law and states that when pressure is applied to a fluid, the pressure is transmitted evenly throughout—in other words, how force is amplified within a hydraulic system. The smaller gear has 8 teeth, and the larger gear has 16 teeth. When the larger gear makes 12 revolutions, how many revolutions will the smaller gear make? Question 2 Explanation: The correct answer is (D). The gear ratio here is 2:1, meaning that for every revolution of the larger gear, the smaller gear revolves twice. If the larger gear revolves 12 times, the smaller gear will revolve twice as many times. How much force is needed to lift the 10Kg weight? Question 3 Explanation: The correct answer is (B). In a double-pulley system, the force is equal to the weight divided by two. It will require a 5 Kg force to lift a 10Kg weight. An 80-lb child is sitting on one end of a seesaw. If another 80-lb child jumps from a high platform onto the other end of the seesaw, what will happen? The first child will stay stationary on the ground. The second child will bounce off the seesaw into the air. The first child will bounce off the seesaw into the air. The first child will be lifted into the air, but remain seated. Question 4 Explanation: The correct answer is (D). The force of the second child landing on the seesaw will propel the other end of the seesaw into the air. What is a vector quantity? Question 5 Explanation: The correct answer is (A). Gravity is a type of force, and since force has magnitude and direction it is known as a vector quantity. Which of the following describes the relationship between kinetic energy, mass, and velocity? KE = ½ m^2v (1/3)(KE) = m/v^2 2(KE) = mv^2 3(KE) = m^2v Question 6 Explanation: The correct answer is (C). An object’s kinetic energy is defined as KE = ½(mv^2). Reorienting, we find: 2KE = mv^2. One horsepower is equivalent to 724 watts 746 watts 768 watts 782 watts Question 7 Explanation: The correct answer is (B). Horsepower is a unit used to rate internal combustion engines. One horsepower is the equivalent of 746 watts. What type of simple machine is pictured below? First-class lever Second-class lever Third-class lever Question 8 Explanation: The correct answer is (C). This machine pictured is a second-class lever. An example of a second-class lever is a wheelbarrow. At minimum, how many pulleys are there in a block and tackle system? Question 9 Explanation: The correct answer is (B). The block and tackle pulley system requires at least two or more pulleys. The bare minimum for this system is two. When torque increases, speed remains the same increases, then decreases Question 10 Explanation: The correct answer is (D). The quantities of torque and speed are inversely proportional. When one increases, the other decreases. If a wedge is made longer relative to its height, how does the force increase? The amount of lift is decreased as the wedge is moved horizontally. The amount of lift is increased as the wedge is moved horizontally. The amount of lift is decreased as the wedge is moved vertically. The amount of lift is increased as the wedge is moved vertically. Question 11 Explanation: The correct answer is (A). When the distance of a lift is the same as the distance the initial force travels, the force in will equal the force out. Making the wedge longer relative to its height (less slope) will change this by decreasing the amount of lift that takes place as the wedge is moved horizontally. If a man uses 200 N of force over a distance of 50 meters, how many joules of work are performed? Question 12 Explanation: The correct answer is (C). Work is defined as the product of force in the direction of displacement. Here a force of 200 N is used over a displacement of 50 m. Consequently, the work performed is 200 N * 50 m = 10,000 J. What is a component of potential energy? Upward reactive force The energy of movement Gravity’s relationship to mass Newtons and distance Question 13 Explanation: The correct answer is (C). An object’s potential energy is equal to its mass times its acceleration due to gravity times its height, or PE = mgh. A fish tank at an aquarium contains several cubic feet of water (1 cubic foot of water = 62.5 pounds). If the fish tank is 6 feet deep, 12 feet wide, and 13 feet long, what’s the approximate pressure at the bottom of the tank in pounds per square inch? Question 14 Explanation: The correct answer is (B). To find the pressure (psi), we must divide the tank’s total volume by the surface area of the bottom of the tank. The volume = 6 × 12 × 13 = 936 feet. Multiply it by 62.5 to convert it into pounds: 936 × 62.5 = 58,500. The bottom of the tank’s surface area is 12 ft × 13 ft = 144 inches × 156 inches. Since pressure is in pounds per square inch (psi), we have to convert to inches. The surface area is 22,464 inches. The psi is 58,500 ÷ 22,464 which is approximately 2.6. The closest answer is (B). If Gear A turns counterclockwise, what other gears turn counterclockwise? C and D B and D C and B B, C, and D Question 15 Explanation: The correct answer is (A). If A turns counterclockwise, it will cause B to turn clockwise. B will cause C and D to turn counterclockwise. The well pictured below uses what type of simple machine to raise and lower a bucket? Block and tackle Third-class lever Wheel and axle Question 16 Explanation: The correct answer is (D). The well pictured here uses a wheel and axle system. The movement at the wheel becomes smaller than the movement on the shaft, increasing the force on the shaft. Once you are finished, click the button below. Any items you have not completed will be marked incorrect. There are 16 questions to complete.
{"url":"https://www.asvabpracticetests.com/asvab-mechanical-comprehension-practice-test/","timestamp":"2024-11-11T01:40:44Z","content_type":"text/html","content_length":"186113","record_id":"<urn:uuid:dffada05-7a64-44c1-b72a-a6ba1c79ab5d>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00394.warc.gz"}
ftostr Function Function: Converts real value into its string representation. Syntax: ftostr(byref num as real, mode as ftostr_mode, rnd as byte) as string See Also: strtof, cfloat, str, val Part Description num Real value to convert. Desired output format: 0 — FTOSTR_MODE_AUTO: Choose between plain and mantissa/exponent format automatically. If the mantissa/exponent format results in a shorter string, it will be used, otherwise the plain format mode will be used. 1 — FTOSTR_MODE_ME: Use mantissa/exponent format. 2 — FTOSTR_MODE_PLAIN: Use plain format, not mantissa/exponent representation. rnd Number of digits to round the result to (total number of non-zero digits in the integer and fractional part of mantissa). The ftostr function offers much more control over the format of the output string compared to similar functions found on other systems. For starters, you can select whether you want to see mantissa/ exponent, "regular" format, or let the function decide which format to use. Additionally, you control the rounding (i.e., you get to choose how many digits should be displayed) — and this influences the representation both of the fractional and integer part of the value. Examples below illustrate what you can do with ftostr. This function has a counterpart — fstr — which is invoked implicitly whenever you assign a real to a string (string=real). fstr is just like ftostr, but the mode and rnd parameters are fixed at 0 — FTOSTR_MODE_AUTO and "maximum number of digits possible," respectively. dim r1 as real dim s as string 'demonstrate output formats r1=10000000000.0 'notice '.0' -- it is necessary or compilier will generate an error s=ftostr(r1,FTOSTR_MODE_ME,11) 'result will be '1E+010' s=ftostr(r1,FTOSTR_MODE_PLAIN,11) 'result will be '10000000000' s=ftostr(r1,FTOSTR_MODE_AUTO,11) 'result will be '1E+010' because this representation is more 'demonstrate rounding s=ftostr(r1,FTOSTR_MODE_AUTO,15) 'result will be '1234567.125' s=ftostr(r1,FTOSTR_MODE_AUTO,9) 'result will be '1234567.13' s=ftostr(r1,FTOSTR_MODE_AUTO,2) 'result will be '1200000' s=r1 'fstr will be used, result will be '1234567.125'
{"url":"https://docs.tibbo.com/syscall_ftostr","timestamp":"2024-11-09T12:41:51Z","content_type":"text/html","content_length":"11769","record_id":"<urn:uuid:d05b458d-1052-404a-8e3a-aec05ec7c5c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00493.warc.gz"}
9.2. Convolution Matrix Here is a mathematician's domain. Most of filters are using convolution matrix. With the Convolution Matrix filter, if the fancy takes you, you can build a custom filter. What is a convolution matrix? It's possible to get a rough idea of it without using mathematical tools that only a few ones know. Convolution is the treatment of a matrix by another one which is called “kernel”. The Convolution Matrix filter uses a first matrix which is the Image to be treated. The image is a bi-dimensional collection of pixels in rectangular coordinates. The used kernel depends on the effect you want. GIMP uses 5×5 or 3×3 matrices. We will consider only 3×3 matrices, they are the most used and they are enough for all effects you want. If all border values of a kernel are set to zero, then system will consider it as a 3×3 matrix. The filter studies successively every pixel of the image. For each of them, which we will call the “initial pixel”, it multiplies the value of this pixel and values of the 8 surrounding pixels by the kernel corresponding value. Then it adds the results, and the initial pixel is set to this final result value. A simple example: On the left is the image matrix: each pixel is marked with its value. The initial pixel has a red border. The kernel action area has a green border. In the middle is the kernel and, on the right is the convolution result. Here is what happened: the filter read successively, from left to right and from top to bottom, all the pixels of the kernel action area. It multiplied the value of each of them by the kernel corresponding value and added results. The initial pixel has become 42: (40*0)+(42*1)+(46*0) + (46*0)+(50*0)+(55*0) + (52*0)+(56*0)+(58*0) = 42. (the filter doesn't work on the image but on a copy). As a graphical result, the initial pixel moved a pixel downwards.
{"url":"https://testing.docs.gimp.org/2.99/hr/gimp-filter-convolution-matrix.html","timestamp":"2024-11-07T13:11:45Z","content_type":"application/xhtml+xml","content_length":"18692","record_id":"<urn:uuid:e5be7b08-3e9e-40f2-b62f-bb90f6427b67>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00037.warc.gz"}
Continuity and Differentiability Class 12 Notes Maths Chapter 5 By going through these CBSE Class 12 Maths Notes Chapter 5 Continuity and Differentiability, students can recall all the concepts quickly. Continuity and Differentiability Notes Class 12 Maths Chapter 5 Continuity (Definition): if f be a real-valued function on a subset of real numbers and let c be a point in its domain, then f is a continuous function at e, if Obviously, if the left-hand limit and right-hand limit and value of the function at x = c exist and are equal to each other, i.e., if then f is continuous at x = c. Algebra of continuous functions: Let f and g be two real functions, continuous at x = c, then 1. Sum of two functions is continuous at x = c, i.e., (f + g) (x), defined as f(x) + g(x), is continuous at x = c. 2. Difference of two functions is continuous at x = c, i.e., (f – g) (x), defined as f(x) – g(x), is continuous at x = c. 3. Product of two functions is continuous at x = c, i.e., (f g) (x), defined as f(x) . g(x) is continuous at x = c. 4. Quotient of two functions is continuous at x = c, (provided it is defined at x = c), i.e., (\(\frac{f}{g}\))(x), defined as \(\frac{f(x)}{g(x)}\) [g(x) ≠ 0], is continuous at x = c. However, if f(x) = λ, then (a) λ.g, defined ty .g(x), is also continuous at x = c. (b) Similr1y, if \(\frac{λ}{g}\) is defined as \(\frac{λ}{g}\) (x) = \(\frac{λ}{g(x)}\) , then \(\frac{λ}{g}\) is also continuous at x = c. → Differentiability: The concept of differentiability has been introduced in the lower class. Let us recall some important results. → Differentiability (Definition): Let f be a real function and c is a point in its domain. The derivative of f at c is defined as Every differentiable function is continuous. → Algebra of Derivatives: Let u and v be two functions of x. 1. (u ± v)’ = u’ ± v’ 2. (uv)’ = u’v + uv’ 3. \(\left(\frac{u}{v}\right)^{\prime}=\frac{u^{\prime} v-u v^{\prime}}{v^{2}}\), where v ≠ 0. → Derivative of Composite Function: Let t be a real valued function which is a composite of two functions u and v, i.e., f = vou. Put u(x) = t and f= v(t). ∴ \(\frac{d f}{d x}=\frac{d v}{d t} \cdot \frac{d t}{d x}\) → Chain Rule: Let/be a real valued function which is a composite fimction of u, v and w, i.e., f(wov)ou. Put u(x) = t, v(t) = s and f = w(s). Then, \(\frac{d f}{d x}=\frac{d w}{d s} \cdot \frac{d s}{d t} \cdot \frac{d t}{d x}\). → Derivatives of Inverse Trigonometric Functions: ┃Functions│Domain │Derivatives ┃ ┃Sin^-1x │[- 1, 1] │\( \frac{1}{\sqrt{1-x^{2}}} \) ┃ ┃Cos^-1x │[- 1, 1] │\( -\frac{1}{\sqrt{1-x^{2}}} \) ┃ ┃tan^-1x │R │\( \frac{1}{1+x^{2}} \) ┃ ┃Cot^-1x │R │\( -\frac{1}{1+x^{2}} \) ┃ ┃Sec^-1x │(-∞, – 1] ∪ [1, ∞)│\( \frac{1}{x \sqrt{x^{2}-1}} \) ┃ ┃Cosec^-1x│(-∞, – 1] ∪ [1, ∞)│\( -\frac{1}{x \sqrt{x^{2}-1}} \) ┃ Implicit Functions: An equation in form f(x, y) = 0, in which y is not expressible in terms of x, is called an implicit function of x and y. Both sides of the equations are differentiated termwise. Then, from this equation, \(\frac{d y}{d x}\) is obtained. It may be noted that when a function of y occurs, then differentiate it w.r.t. y and multiply it by \(\frac{d y}{d x}\). e.g., To find \(\frac{d y}{d x}\) from cos^2 y + sin xy = 1, we differentiate it as Exponential Functions: The exponential function, with positive base b > 1, is the function y = b^x. 1. The graph of y = 10^x is shown in the figure. 2. Domain = R 3. Range = R^+ 4. The point (0,1) always lies on the graph. 5. It is an increasing function, i.e., as we move from left to right, the graph rises above. 6. As x → – ∞, y → 0. 7. \(\frac{d}{dx}\) (a^x) = a^x log, a, \(\frac{d}{dx}\) e^x = e^x. Logarithmic Functions: Let b> 1 be a real number. b^x = a may be written as log[b] a = x. 1. The graph of y = log[10] x is shown in the figure. 2. Domain = R^+, Range = R. 3. It is an increasing function. 4. As x → 0, y → ∞. 5. The function y = e^x and y = log[e] x are the mirror images of each other in the line y = x. 6. \(\frac{d}{dx}\) (loga x) = \(\frac{1}{x}\) l0ga e, \(\frac{d}{dx}\) loge x = \(\frac{1}{x}\) → Other properties of Logarithm are: 1. log[b] pq = log[b] p + log[b] q 2. log[b] \(\frac{p}{q}\) = log[a] p – log[a] q 3. log[b] px = x log[b] p – log[b] q 4. log[a] b = \(\frac{\log _{a} p}{\log _{b} p}\) → Logarithmic Differentiation: Whenever the functions are given in the form 1. y = [u(x)]v(x) and 2. y = \(\frac{u(x) \times v(x)}{w(x)}\) take log of both sides. Simplify and differentiate, e.g., Let y = (cos x)sin x, log y = sin x log cos x Differentiating, \(\frac{1}{y}\) \(\frac{dy}{dx}\) = cos x log Cos x + sin x . – \(\frac{sin x}{cos x}\) ∴ \(\frac{dy}{dx}\) = (cos x)^sin y [cos x log cosx – sin x tan x]. → Derivatives of Functions in Parametric Form: Let the given equations be x = f(t) and y = g(t), where t is the parameter. Then, → Second Order Derivative: Let y = f(x), then \(\frac{dy}{dx}\) =f ‘(x). If f ‘(x) is differentiable, then it is again differentiated. Rolle’s Theorem: Let f: [a, b] → R be continuous on closed interval [a, b] and differentiable on open interval (a, b) such that f(a) = f(b), where a and b are real numbers, then there exists some c ∈ (a, b) such that f ‘(c) = 0. From the figure, we observe that f(a) = f(b). There exists a point c[1] ∈ (a, b) such that f ‘ (c) = 0, i.e., tangent at c1 is parallel to x-axis. Similarly, f(b) = f(c) → f ‘ (c[2]) = 0. → Mean Value Theorem: Let f: [a, b] → R be a continuous function on the closed interval [a, b] and differentiable in the open interval (a, b). Then, there exists some c ∈ (a, b) such that f ‘ (c) = \(\frac{f(b)-f(a)}{b-a}\) Now, we know that \(\frac{f(b)-f(a)}{b-a}\) is the slope of secant drawn between A[a,f(a)] and B[b,f(b)]. We t k know that the slope of the line joining (x[1,] y[1]) and (x[2], y[2]) is \(\frac{y_{2} The theorem states that there is a point c ∈ (a, b), where f ‘(c) is equal to the slope of AB. In other words, there exists a point c ∈ (a, b) such that tangent at x = c is parallel to AB. 1. CONTINUITY (i) Left Continuity. A function ‘f ’ is left-continuous at x = c if \(\lim _{x \rightarrow c^{-}}\) f (x) = f(c). (ii) Right Continuity. A function ‘f ’ is right-continuous at x = c if \(\lim _{x \rightarrow c^{+}}\) f (x) = f(c). (iii) Continuity at a point. A function ‘ f ’ is continuous at x = c if \(\lim _{x \rightarrow c^{-}}\) (x) = \(\lim _{x \rightarrow c^{+}}\) f(x) = f(c). 2. (i) Polynominal functions (ii) Rational functions (iii) Exponential functions (iv) Trigonometric functions are all continuous at each point of their respective domain. (i) Left Derivative. A function ‘f ’ is said to possess left derivative at x = c if \(\lim _{h \rightarrow 0} \frac{f(c-h)-f(c)}{-h}\) exists finitely. (ii) Right Derivative. A function ‘f ’ is said to possess right derivative at x = c if \(\lim _{h \rightarrow 0} \frac{f(c+h)-f(c)}{h}\) exists finitely. (iii) Derivative. A function is said to possess derivative at x = c if \(\lim _{h \rightarrow 0} \frac{f(c+h)-f(c)}{h}\) exists finitely. A real valued function is finitely derivable at any point of its domain, it is necessarily continuous at that point. The converse is not true. 5. STANDARD RESULTS (i) \(\frac{d}{d x}\) (x^n) = nx^n-1 ∀ x ∈ R (ii) \(\frac{d}{d x}\) ((ax + b)n = n(ax + b)^n – 1 . a ∀ x ∈ R (iii) \(\frac{d}{d x}(|x|)=\frac{x}{|x|}\), x ≠ 0 6. GENERAL THEOREMS (i) The derivative of a constant is zero. (ii) An additive constant vanishes on differentiation i.e. if f(x) = g(x) + c, where ‘c’ is any constant, then f'(x) = g'(x). (iii) If f(x) = ag(x), then f'(x) = ag'(x), where ‘a’ is a scalar. (iv) If f(x) = g(x) + h(x), then f'(x) = g'(x) + h'(x). If f(x) = a[1]f[1] ± a[2]f[2] ……. ± a[n]f[n](x), then : f'(x) = a[1]f[1]‘(x) ± a[2]f[2]‘(x) ± ……. ± a[n]f[n]‘(x) (v) If f(x) = \(\frac{g(x)}{h(x)}\), then f'(x) = g(x)h'(x) + g'(x)h(x) (vi) If f(x) = \(\frac{g(x)}{h(x)}\), then f ‘(x) = \(\frac{h(x) g^{\prime}(x)-g(x) h^{\prime}(x)}{(h(x))^{2}}\), h(x) ≠ 0. (vii) If f(x) = \(\frac{1}{h(x)}\), then f'(x) = \(-\frac{h(x)}{[h(x)]^{2}}\), h'(x) ≠ 0 (i) (a) \(\frac{d}{d x}\) (sinx) = cos x and \(\frac{d}{d x}\) (cos x) = – sin x ∀ x ∈ R (b) \(\frac{d}{d x}\) (tan x) = sec^2 x and \(\frac{d}{d x}\) (sec x) = sec x tan x ∀ x ∈ R except odd multiples of \(\frac{\pi}{2}\) (c) \(\frac{d}{d x}\)(cot x) = – cosec^2 x and \(\frac{d}{d x}\) (cosec x) = -cosec x cot x ∀ x ∈ R except even multiple of \(\frac{\pi}{2}\) (a) \(\frac{d}{d x}\)(sin^-1x) = \(\frac{1}{\sqrt{1-x^{2}}}\), |x| < 1 (b) \(\frac{d}{d x}\)(cos^-1x) = \(-\frac{1}{\sqrt{1-x^{2}}}\), |x| < 1 (c) \(\frac{d}{d x}\)(tan^-1x) = \(\frac{1}{1+x^{2}}\) ∀ x ∈ R (d) \(\frac{d}{d x}\)(cot^-1x) = \(-\frac{1}{1+x^{2}}\) ∀ x ∈ R (e) \(\frac{d}{d x}\)(sec^-1x) = \(\frac{1}{|x| \sqrt{x^{2}-1}}\) x > 1 or x < -1 (f) \(\frac{d}{d x}\)cosec^-1x) = \(-\frac{1}{|x| \sqrt{x^{2}-1}}\), x > 1 or x < -1 (iii) (a) \(\frac{d}{d x}\) (a^x) = a^x log[e] a, a > 0 (b) \(\frac{d}{d x}\)(e^x) = e^x (c) \(\frac{d}{d x}\)(log[a] x) = \(\frac{1}{x}\) log[a] e, x > 0 (d) \(\frac{d}{d x}\) (log x) = \(\frac{1}{x}\), x>0. 8. CHAIN RULE \(\frac{d}{d x}\) (f(g(x)) = f'(g(x)).g'(x) \(\frac{d y}{d x}=\frac{d y / d t}{d x / d t}\), \(\frac{d x}{d t}\) ≠ 0 Or \(\frac{d y}{d x}=\frac{d y}{d t} \times \frac{d t}{d x}\) 10.MORE RESULTS (i) \(\frac{d y}{d x}=\frac{1}{\frac{d x}{d y}}\) (ii) \(\frac{d y}{d x} \times \frac{d x}{d y}=1\) 11. ROLLE’S THEOREM If a function f(x) is : (i) continuous in [a, b] (ii) derivable in (a, b) (iii) f (a) = f (b), then there exists at least one point ‘c’ in (a, b) such that f’ (c) = 0. 12. LAGRANGE’S MEAN VALUE THEROEM (LMV THEOREM OR MV THEOREM) If a function f(x) is : (i) continuous in [a, b] (ii) derivable in (a, b), then there exists at least one point ‘c’ in (a, b) such that \(\frac{f(b)-f(a)}{b-a}=f^{\prime}(c)\)
{"url":"https://www.learninsta.com/continuity-and-differentiability-class-12-notes/","timestamp":"2024-11-08T08:14:46Z","content_type":"text/html","content_length":"68076","record_id":"<urn:uuid:a9c43b5f-3419-49c5-8b5e-8e08d0bc4511>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00029.warc.gz"}
How do you How do you calculate a monthly payment? How do you calculate a monthly payment? To calculate the monthly payment, convert percentages to decimal format, then follow the formula: 1. a: 100,000, the amount of the loan. 2. r: 0.005 (6% annual rate—expressed as 0.06—divided by 12 monthly payments per year) 3. n: 360 (12 monthly payments per year times 30 years) What is time in simple interest? Simple interest is calculated by multiplying the daily interest rate by the principal, by the number of days that elapse between payments. Simple interest benefits consumers who pay their loans on time or early each month. Auto loans and short-term personal loans are usually simple interest loans. How many hours and minutes are in 270 minutes? Two hundred seventy minutes is equal to four hours and thirty minutes. What is a good monthly car payment? The average monthly car payment was $568 for a new vehicle and $397 for used vehicles in the U.S. during the second quarter of 2020, according to Experian data. The average lease payment was $467 a month in the same period. What is 4.5 in minutes and seconds? This conversion of 4.5 minutes to seconds has been calculated by multiplying 4.5 minutes by 60 and the result is 270 seconds. What is the formula for calculating a car payment? To calculate your monthly car loan payment by hand, divide the total loan and interest amount by the loan term (the number of months you have to repay the loan). For example, the total interest on a $30,000, 60-month loan at 4% would be $3,150. What is the monthly payment on a 10000 loan? Your monthly payment on a personal loan of $10,000 at a 5.5% interest rate over a 1-year term would be $858. How do you find the missing principal in simple interest? Use this simple interest calculator to find A, the Final Investment Value, using the simple interest formula: A = P(1 + rt) where P is the Principal amount of money to be invested at an Interest Rate R% per period for t Number of Time Periods. Where r is in decimal form; r=R/100; r and t are in the same units of time. What is the formula of principal? Principal Amount Formulas We can rearrange the interest formula, I = PRT to calculate the principal amount. The new, rearranged formula would be P = I / (RT), which is principal amount equals interest divided by interest rate times the amount of time. What is the formula of hour in Excel? Formula Description Result =HOUR(A2) Returns 75% of 24 hours 18 =HOUR(A3) Returns the hour portion of the date/time value. 7 =HOUR(A4) A date with no time portion specified is considered 12:00 AM, or 0 hours. 0 What does 4.5 minutes mean? We conclude that 4.5 minutes is equivalent to 0.075 hours: 4.5 minutes = 0.075 hours. What is the formula of time in simple interest? Notes: Base formula, written as I = Prt or I = P × r × t where rate r and time t should be in the same time units such as months or years. Time conversions that are based on day count of 365 days/ year have 30.4167 days/month and 91.2501 days/quarter. 360 days/year have 30 days/month and 90 days/quarter. What is principal amount with example? In the context of borrowing, principal is the initial size of a loan; it can also be the amount still owed on a loan. If you take out a $50,000 mortgage, for example, the principal is $50,000. If you pay off $30,000, the principal balance now consists of the remaining $20,000.
{"url":"https://www.environmentalistsforeurope.org/how-do-you-calculate-a-monthly-payment/","timestamp":"2024-11-11T18:21:23Z","content_type":"text/html","content_length":"48589","record_id":"<urn:uuid:0b0a982f-4cb2-48ba-a971-5bc61a4c8873>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00065.warc.gz"}
Geomagnetic parameters derived from an analytic description of the Earth's magnetic field. A recent geomagnetic model containing 720 trend and trigonometric coefficients represents each spherical harmonic coefficient as a continuous function of time which exactly duplicates the IGRF coefficients at the nine IGRF epochs (1945 to 1985 at five year intervals) and allows a reasonable prediction five years beyond the last epoch. Use of this model permits time derivatives of magnetic elements and other derived quantities to be taken so that a detailed look at several geomagnetic parameters vs time can be obtained. It is shown that from 1945 to 1985 the quadrupole field contributed 2.6 times more power to the secular variation field on the surface of the earth than the dipole field. It is also shown that the root-mean-squared surface field shows a strong periodicity which may be related to the solar cycle. This apparent solar-cycle effect diminishes with time as more satellite data are used indicating that an appreciable part of it may be caused by noisy data and a poor distribution of data for a couple of decades following World War II. Journal of Geomagnetism and Geoelectricity Pub Date: □ Geomagnetism; □ Harmonic Analysis; □ Spherical Harmonics; □ Magnetic Variations; □ Secular Variations; □ Solar Activity Effects; □ Solar Cycles; □ Geomagnetic Field:Models
{"url":"https://ui.adsabs.harvard.edu/abs/1988JGG....40..207A/abstract","timestamp":"2024-11-05T20:57:25Z","content_type":"text/html","content_length":"37878","record_id":"<urn:uuid:64a4f3f5-0056-4e38-8cf1-cdce166e1185>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00307.warc.gz"}
Incorporating Task-Agnostic Information in Task-Based Active Learning Using a Variational Autoencoder - SciPy Proceedings Incorporating Task-Agnostic Information in Task-Based Active Learning Using a Variational Autoencoder It is often much easier and less expensive to collect data than to label it. Active learning (AL) Settles, 2009 responds to this issue by selecting which unlabeled data are best to label next. Standard approaches utilize task-aware AL, which identifies informative samples based on a trained supervised model. Task-agnostic AL ignores the task model and instead makes selections based on learned properties of the dataset. We seek to combine these approaches and measure the contribution of incorporating task-agnostic information into standard AL, with the suspicion that the extra information in the task-agnostic features may improve the selection process. We test this on various AL methods using a ResNet classifier with and without added unsupervised information from a variational autoencoder (VAE). Although the results do not show a significant improvement, we investigate the effects on the acquisition function and suggest potential approaches for extending the Keywords:active learningvariational autoencoderdeep learningpytorchsemi-supervised learningunsupervised learning¶ In deep learning, the capacity for data gathering often significantly outpaces the labeling. This is easily observed in the field of bioimaging, where ground-truth labeling usually requires the expertise of a clinician. For example, producing a large quantity of CT scans is relatively simple, but having them labeled for COVID-19 by cardiologists takes much more time and money. These constraints ultimately limit the contribution of deep learning to many crucial research problems. This labeling issue has compelled advancements in the field of active learning (AL) Settles, 2009. In a typical AL setting, there is a set of labeled data and a (usually larger) set of unlabeled data. A model is trained on the labeled data, then the model is analyzed to evaluate which unlabeled points should be labeled to best improve the loss objective after further training. AL acknowledges labeling constraints by specifying a budget of points that can be labeled at a time and evaluating against this budget. In AL, the model for which we select new labels is referred to as the task model. If this model is a classifier neural network, the space in which it maps inputs before classifying them is known as the latent space or representation space. A recent branch of AL Sener & Savarese, 2018Smailagic et al., 2018Yoo & Kweon, 2019, prominent for its applications to deep models, focuses on mapping unlabeled points into the task model’s latent space before comparing them. These methods are limited in their analysis by the labeled data they must train on, failing to make use of potentially useful information embedded in the unlabeled data. We therefore suggest that this family of methods may be improved by extending their representation spaces to include unsupervised features learned over the entire dataset. For this purpose, we opt to use a variational autoencoder (VAE) Kingma & Welling, 2013, which is a prominent method for unsupervised representation learning. Our main contributions are (a) a new methodology for extending AL methods using VAE features and (b) an experiment comparing AL performance across two recent feature-based AL methods using the new method. Related Literature¶ Active learning¶ Much of the early active learning (AL) literature is based on shallower, less computationally demanding networks since deeper architectures were not well-developed at the time. Settles (2009) provides a review of these early methods. The modern approach uses an acquisition function, which involves ranking all available unlabeled points by some chosen heuristic $\mathcal{H}$ and choosing to label the points of highest ranking. The popularity of the acquisition approach has led to a widely-used evaluation procedure, which we describe in Algorithm 1. This procedure trains a task model $\mathcal{T}$ on the initial labeled data, records its test accuracy, then uses $\mathcal{H}$ to label a set of unlabeled points. We then once again train $\mathcal{T}$ on the labeled data and record its accuracy. This is repeated until a desired number of labels is reached, and then the accuracies can be graphed against the number of available labels to demonstrate performance over the course of labeling. We can use this evaluation algorithm to separately evaluate multiple acquisition functions on their resulting accuracy graphs. This is utilized in many AL papers to show the efficacy of their suggested heuristics in comparison to others Wang et al., 2016Sener & Savarese, 2018Smailagic et al., 2018Yoo & Kweon, 2019. The prevailing approach to point selection has been to choose unlabeled points for which the model is most uncertain, the assumption being that uncertain points will be the most informative Budd et al., 2021. A popular early method was to label the unlabeled points of highest Shannon entropy Shannon, 1948 under the task model, which is a measure of uncertainty between the classes of the data. This method is now more commonly used in combination with a representativeness measure Wang et al., 2016 to avoid selecting condensed clusters of very similar points. Recent heuristics using deep features¶ For convolutional neural networks (CNNs) in image classification settings, the task model $\mathcal{T}$ can be decomposed into a feature-generating module \begin{aligned} \mathcal{T}_f \colon \mathbb{R}^n \to \mathbb{R}^f, \end{aligned} which maps the input data vectors to the output of the final fully connected layer before classification, and a classification module \begin{aligned} \mathcal{T}_c \colon \mathbb{R}^f \to \{0,1,...,c\}, \end{aligned} where $c$ is the number of classes. Recent deep learning-based AL methods have approached the notion of model uncertainty in terms of the rich features generated by the learned model. Core-set Sener & Savarese, 2018 and MedAL Smailagic et al., 2018 select unlabeled points that are the furthest from the labeled set in terms of $\text{L}_2$ distance between the learned features. For core-set, each point constructing the set $S$ in step 6 of Algorithm 1 is chosen by \begin{aligned} \mathbf{u}^* = \underset{\mathbf{u} \in U}{\mathop{\mathrm{arg max}}} \min_{{\boldsymbol\ell} \in L} || (\mathcal{T}_f(\mathbf{u}) - \mathcal{T}_f({\boldsymbol\ell})) ||^2, \end where $U$ is the unlabeled set and $L$ is the labeled set. The analogous operation for MedAL is \begin{aligned} \mathbf{u}^* = \underset{\mathbf{u} \in U}{\mathop{\mathrm{arg max}}} {1 \over |L|} \sum_{i=1}^{|L|} || \mathcal{T}_f(\mathbf{u}) - \mathcal{T}_f(\mathbf{L_i}) ||^2 . \end{aligned} Note that after a point $\mathbf{u}^*$ is chosen, the selection of the next point assumes the previous $\mathbf{u}^*$ to be in the labeled set. This way we discourage choosing sets that are closely packed together, leading to sets that are more diverse in terms of their features. This effect is more pronounced in the core-set method since it takes the minimum distance whereas MedAL uses the average distance. Another recent method Yoo & Kweon, 2019 trains a regression network to predict the loss of the task model, then takes the heuristic $\mathcal{H}$ in Algorithm 1 to select the unlabeled points of highest predicted loss. To implement this, the loss prediction network $\mathcal{P}$ is attached to a ResNet task model $\mathcal{T}$ and is trained jointly with $\mathcal{T}$. The inputs to $\ mathcal{P}$ are the features output by the ResNet’s four residual blocks. These features are mapped into the same dimensionality via a fully connected layer and then concatenated to form a representation $\mathbf{c}$. An additional fully connected layer then maps $\mathbf{c}$ into a single value constituting the loss prediction. When attempting to train a network to directly predict $\mathcal{T}$’s loss during training, the ground truth losses naturally decrease as $\mathcal{T}$ is optimized, resulting in a moving objective. The authors of Yoo & Kweon (2019) find that a more stable ground truth is the inequality between the losses of given pairs of points. In this case, $\mathcal{P}$ is trained on pairs of labeled points, so that $\mathcal{P}$ is penalized for producing predicted loss pairs that exhibit a different inequality than the corresponding true loss pair. More specifically, for each batch of labeled data $L_{batch} \subset L$ that is propagated through $\mathcal{T}$ during training, the batch of true losses is computed and split randomly into a batch of pairs $P_{batch}$. The loss prediction network produces a corresponding batch of predicted loss pairs, denoted $\widetilde{P}_{batch}$. The following pair loss is then computed given each $p \in P_{batch}$ and its corresponding $\tilde{p} \in \widetilde{P}_{batch}$: \begin{aligned} \mathcal{L}_{pair}(p, \tilde{p}) = \max (0, -\mathcal{I}(p) \cdot (\tilde{p}^{(1)} - \tilde{p}^{(2)}) + \xi), \end{aligned} where $\mathcal{I}$ is the following indicator function for pair inequality: \begin{aligned} \mathcal{I}(p) = \begin{cases} \hspace{0.75em}1, \quad p^{(1)} > p^{(2)}\\ -1, \quad p^{(1)} \le p^{(2)} \end{cases}. \end{aligned} Variational Autoencoders¶ Variational autoencoders (VAEs) Kingma & Welling, 2013 are an unsupervised method for modeling data using Bayesian posterior inference. We begin with the Bayesian assumption that the data is well-modeled by some distribution, often a multivariate Gaussian. We also assume that this data distribution can be inferred reasonably well by a lower dimensional random variable, also often modeled by a multivariate Gaussian. The inference process then consists of an encoding into the lower dimensional latent variable, followed by a decoding back into the data dimension. We parametrize both the encoder and the decoder as neural networks, jointly optimizing their parameters with the following loss function Kingma & Welling, 2019: \begin{aligned} \mathcal{L}_{\theta, \phi}(\mathbf{x}) = \log p_{\theta}(\mathbf{x} | \mathbf{z}) + [\log p_{\theta}(\mathbf{z}) - \log q_{\phi}(\mathbf{z | x})], \end{aligned} where θ and ϕ are the parameters of the encoder and the decoder, respectively. The first term is the reconstruction error, penalizing the parameters for producing poor reconstructions of the input data. The second term is the regularization error, encouraging the encoding to resemble a pre-selected prior distribution, commonly a unit Gaussian prior. The encoder of a well-optimized VAE can be used to generate latent encodings with rich features which are sufficient to approximately reconstruct the data. The features also have some geometric consistency, in the sense that the encoder is encouraged to generate encodings in the pattern of a Gaussian distribution. We observe that the notions of uncertainty developed in the core-set and MedAL methods rely on distances between feature vectors modeled by the task model $\mathcal{T}$. Additionally, loss prediction relies on a fully connected layer mapping from a feature space to a single value, producing different predictions depending on the values of the relevant feature vector. Thus all of these methods utilize spatial reasoning in a vector space. Furthermore, in each of these methods, the heuristic $\mathcal{H}$ only has access to information learned by the task model, which is trained only on the labeled points at a given timestep in the labeling procedure. Since variational autoencoder (VAE) encodings are not limited by the contents of the labeled set, we suggest that the aforementioned methods may benefit by expanding the vector spaces they investigate to include VAE features learned across the entire dataset, including the unlabeled data. These additional features will constitute representative and previously inaccessible information regarding the data, which may improve the active learning process. We implement this by first training a VAE model $\mathcal{V}$ on the given dataset. $\mathcal{V}$ can then be used as a function returning the VAE features for any given datapoint. We append these additional features to the relevant vector spaces using vector concatenation, an operation we denote with the symbol $\frown$. The modified point selection operation in core-set then becomes \begin{aligned} \mathbf{u}^* = \underset{\mathbf{u} \in U}{\mathop{\mathrm{arg max}}} \min_{{\boldsymbol\ell} \in L} || ([\mathcal{T}_f(\mathbf{u}) \frown \alpha\mathcal{V}(\mathbf{u})] - [\mathcal {T}_f({\boldsymbol\ell}) \frown \alpha\mathcal{V}(\mathbf{\boldsymbol\ell})] ||^2, \end{aligned} where α is a hyperparameter that scales the influence of the VAE features in computing the vector distance. To similarly modify the loss prediction method, we concatenate the VAE features to the final ResNet feature concatenation $\mathbf{c}$ before the loss prediction, so that the extra information is factored into the training of the prediction network $\mathcal{P}$. In order to measure the efficacy of the newly proposed methods, we generate accuracy graphs using Algorithm 1, freezing all settings except the selection heuristic $\mathcal{H}$. We then compare the performance of the core-set and loss prediction heuristics with their VAE-augmented counterparts. We use ResNet-18 pretrained on ImageNet as the task model, using the SGD optimizer with learning rate 0.001 and momentum 0.9. We train on the MNIST Deng, 2012 and ChestMNIST Yang et al., 2021 datasets. ChestMNIST consists of 112,120 chest X-ray images resized to 28x28 and is one of several benchmark medical image datasets introduced in Yang et al. (2021). For both datasets we experiment on randomly selected subsets, using 25000 points for MNIST and 30000 points for ChestMNIST. In both cases we begin with 3000 initial labels and label 3000 points per active learning step. We opt to retrain the task model after each labeling step instead of fine-tuning. We use a similar training strategy as in Smailagic et al. (2018), training the task model until >99% train accuracy before selecting new points to label. This ensures that the ResNet is similarly well fit to the labeled data at each labeling iteration. This is implemented by training for 10 epochs on the initial training set and increasing the training epochs by 5 after each labeling The VAEs used for the experiments are trained for 20 epochs using an Adam optimizer with learning rate 0.001 and weight decay 0.005. The VAE encoder architecture consists of four convolutional downsampling filters and two linear layers to learn the low dimensional mean and log variance. The decoder consists of an upsampling convolution and four size-preserving convolutions to learn the Experiments were run five times, each with a separate set of randomly chosen initial labels, with the displayed results showing the average validation accuracies across all runs. Figure 1 and Figure 3 show the core-set results, while Figure 2 and Figure 4 show the loss prediction results. In all cases, shared random seeds were used to ensure that the task models being compared were supplied with the same initial set of labels. With four NVIDIA 2080 GPUs, the total runtime for the MNIST experiments was 5113s for core-set and 4955s for loss prediction; for ChestMNIST, the total runtime was 7085s for core-set and 7209s for loss prediction. Figure 1:The average MNIST results using the core-set heuristic versus the VAE-augmented core-set heuristic for Algorithm 1 over 5 runs. Figure 2:The average MNIST results using the loss prediction heuristic versus the VAE-augmented loss prediction heuristic for Algorithm 1 over 5 runs. Figure 3:The average ChestMNIST results using the core-set heuristic versus the VAE-augmented core-set heuristic for Algorithm 1 over 5 runs. Figure 4:The average ChestMNIST results using the loss prediction heuristic versus the VAE-augmented loss prediction heuristic for Algorithm 1 over 5 runs. To investigate the qualitative difference between the VAE and non-VAE approaches, we performed an additional experiment to visualize an example of core-set selection. We first train the ResNet-18 with the same hyperparameter settings on 1000 initial labels from the ChestMNIST dataset, then randomly choose 1556 (5%) of the unlabeled points from which to select 100 points to label. These smaller sizes were chosen to promote visual clarity in the output graphs. We use t-SNE Maaten & Hinton, 2008 dimensionality reduction to show the ResNet features of the labeled set, the unlabeled set, and the points chosen to be labeled by core-set. Figure 5:A t-SNE visualization of the ChestMNIST points chosen by core-set. Figure 6:A t-SNE visualization of the ChestMNIST points chosen by core-set when the ResNet features are augmented with VAE features. Overall, the VAE-augmented active learning heuristics did not exhibit a significant performance difference when compared with their counterparts. The only case of a significant p-value (<0.05) occurred during loss prediction on the MNIST dataset at 21000 labels. The t-SNE visualizations in Figure 5 and Figure 6 show some of the influence that the VAE features have on the core-set selection process. In Figure 5, the selected points tend to be more spread out, while in Figure 6 they cluster at one edge. This appears to mirror the transformation of the rest of the data, which is more spread out without the VAE features, but becomes condensed in the center when they are introduced, approaching the shape of a Gaussian distribution. It seems that with the added VAE features, the selected points are further out of distribution in the latent space. This makes sense because points tend to be more sparse at the tails of a Guassian distribution and core-set prioritizes points that are well-isolated from other points. One reason for the lack of performance improvement may be the homogeneous nature of the VAE, where the optimization goal is reconstruction rather than classification. This could be improved by using a multimodal prior in the VAE, which may do a better job of modeling relevant differences between points. Our original intuition was that additional unsupervised information may improve established active learning methods, especially when using a modern unsupervised representation method such as a VAE. The experimental results did not indicate this hypothesis, but additional investigation of the VAE features showed a notable change in the task model latent space. Though this did not result in superior point selections in our case, it is of interest whether different approaches to latent space augmentation in active learning may fare better. Future work may explore the use of class-conditional VAEs in a similar application, since a VAE that can utilize the available class labels may produce more effective representations, and it could be retrained along with the task model after each labeling iteration. 1. Settles, B. (2009). Active learning literature survey. 2. Sener, O., & Savarese, S. (2018). Active Learning for Convolutional Neural Networks: A Core-Set Approach. International Conference on Learning Representations. https://openreview.net/forum?id= 3. Smailagic, A., Costa, P., Noh, H. Y., Walawalkar, D., Khandelwal, K., Galdran, A., Mirshekari, M., Fagert, J., Xu, S., Zhang, P., & others. (2018). Medal: Accurate and robust deep active learning for medical image analysis. 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA), 481–488. 10.1109/icmla.2018.00078 4. Yoo, D., & Kweon, I. S. (2019). Learning loss for active learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 93–102. 10.1109/CVPR.2019.00018 5. Kingma, D. P., & Welling, M. (2013). Auto-encoding variational bayes. arXiv Preprint arXiv:1312.6114.
{"url":"https://proceedings.scipy.org/articles/majora-212e5952-011","timestamp":"2024-11-12T10:21:55Z","content_type":"text/html","content_length":"374718","record_id":"<urn:uuid:63b9aff3-3202-43a6-9e93-be42062ee69f>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00379.warc.gz"}
Non-linear image registration mage registration is to automatically establish geometrical correspondences between two images. It is an essential task in almost all areas involving imaging. This chapter reviews mathematical techniques for nonlinear image registration and presents a general, unified, and flexible approach. Taking into account that image registration is an ill-posed problem, the presented approach is based on a variational formulation and particular emphasis is given to regularization functionals motivated by mathematical elasticity. Starting out from one of the most commonly used linear elastic models, its limitations and extensions to nonlinear regularization functionals based on the theory of hyperelastic materials are considered. A detailed existence proof for hyperelastic image registration problems illustrates key concepts of polyconvex variational calculus. Numerical challenges in solving hyperelastic registration problems are discussed and a stable discretization that guarantees meaningful solutions is derived. Finally, two case studies highlight the potential of hyperelastic image registration for medical imaging applications. Original language English Title of host publication Handbook of Mathematical Methods in Imaging : Volume 1, Second Edition Number of pages 47 Publisher Springer New York LLC Publication date 01.01.2015 Pages 2005-2051 ISBN (Print) 9781493907892 ISBN (Electronic) 9781493907908 Publication status Published - 01.01.2015 Dive into the research topics of 'Non-linear image registration'. Together they form a unique fingerprint.
{"url":"https://research.uni-luebeck.de/en/publications/non-linear-image-registration","timestamp":"2024-11-10T09:38:08Z","content_type":"text/html","content_length":"50289","record_id":"<urn:uuid:51440011-5f8c-46ec-a11f-6c0ff37bbf99>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00278.warc.gz"}
ball mill critical speed working principle Typically R = 8. Rod Mill Charge: Typically 45% of internal volume; 35% 65% range. Bed porosity typically 40%. Height of bed measured in the same way as ball mills. Bulk density of rods = tons/ m3. In wet grinding, the solids concentration 1s typically 60% 75% by mass. A rod in situ and a cutaway of a rod mill interior. WhatsApp: +86 18838072829 Working Principle of a SAG Mill. The rock and grinding media are placed in the mill and rotated, causing the grinding media to tumble and crush the rock into smaller pieces. The mill operates in a closed circuit with screens that size the ore and send it back to the mill for further grinding. The product from the mill is then sent to a cyclone ... WhatsApp: +86 18838072829 the mill. The common range of mill speeds is 65% to 80% of critical, depending on mill type, size and the application. The critical speed of a ball mill is calculated as divided by the square root of the radius in feet. The rotational speed is defined as a percentage of the critical speed. Smaller diameter mills WhatsApp: +86 18838072829 Ball Mill Working Principle. To be chosen according to the grinding material, material is composed of a ball mill feed end of the hollow shaft is arranged in the tube body, when the ball mill cylinder rotation time, grinding body due to inertia and centrifugal force, the effects of friction, making it attached to the cylinder liner on the cylinder body away, when taken to the height of a ... WhatsApp: +86 18838072829 An autogenous mill of ID m and an effective grinding length of m was fed with ore of SG to 20% of its volume. The mill was operated continuously 24 hours per day at 1200 t per day and 75% of the critical speed. The solids in the mill charge were at 80% solids. Estimate: 1. WhatsApp: +86 18838072829 Working of ball mill. The drug to be ground is put into the cylinder of the mill and rotated. The speed of rotation is very important. The speed of rotation is very important. At a low speed, the mass of balls will slide or roll over each other and only a negligible amount of size reduction will occur. At a high speed, the ball will be thrown ... WhatsApp: +86 18838072829 Ball mill is primarily a grinder used to grind the material that is obtained after is one of the most widely used grinding equipment used to obta... WhatsApp: +86 18838072829 2. Experiment. To examine the dependence of critical rotation speed on ballcontaining fraction, we measured critical speeds at various ballcontaining fractions from to stepped by Since at lower fraction than we could not observe the centrifugal motion, we chose this fraction range. A jar of ballmill consists of a cylinder ... WhatsApp: +86 18838072829 Speed rate refers to the ratio of the speed of the mill to the critical speed, where the critical speed is n c = 30 / R. In practice, the speed rate of the SAG mill is generally 65% to 80%. Therefore, in the experiment, the speed was set to vary in 50% and 90% of the critical speed ( rad/s) for the crossover test as shown in Table 2. WhatsApp: +86 18838072829 The video contain definition, concept of Critical speed of ball mill and step wise derivation of mathematical expression for determining critical speed of b... WhatsApp: +86 18838072829 Such mills are usually called pebble mills, but the working principle is the same as for ball mills. As the power input is approximately directly proportional to the volume weight of the grinding media, the power input for pebble mills is correspondingly smaller than for a ball mill. ... Optimum Ball Mill Speed. ... Ball mill grate discharge ... WhatsApp: +86 18838072829 The formula for calculating critical mill of speed: N c = / √ (D d) Where: N c = Critical Speed of Mill. D = Mill Diameter. d = Diameter of Balls. Let's solve an example; Find the critical speed of mill when the mill diameter is 12 and the diameter of balls is 6. This implies that; WhatsApp: +86 18838072829 For a given mill to have a combination of feed size, ball load, mill speed and % solids will represent the total load. In fact the later can be modelled as a function of the others. Additionally, as has been shown by Powell et al. (2001), ball loads high enough (over 12% as in the cases studied in this work) contribute to a significant portion ... WhatsApp: +86 18838072829 Introduction to HighEnergy Ball Mill: Working Principle, Advantages, and Features. Posted by Alex Brown on Mar 30, 2022. ... For a ball mill to operate, critical speed needs to be achieved. The enclosed ball begins to rotate along the inner walls of the ball mill. If it fails to reach critical speed, it will remain stationary at the bottom ... WhatsApp: +86 18838072829 AG/SAG mills can accomplish the same size reduction work as two or three stages of crushing and screening, a rod mill, and some or all of the work of a ball mill. Because of the range of mill sizes available, AG/SAG milling can often be accomplished with fewer lines than in a conventional rod mill/ball mill circuit. A diagram of types of AG/SAG ... WhatsApp: +86 18838072829 #chemiworld #ballmill #chemicalengineering #mechanicaloperation #criticalspeedofballmill tags:chemiworld,construction of ball mill,ball mill,definition of ... WhatsApp: +86 18838072829 Contribute to liyingliang2022/fr development by creating an account on GitHub. WhatsApp: +86 18838072829 You can not select more than 25 topics Topics must start with a letter or number, can include dashes ('') and can be up to 35 characters long. WhatsApp: +86 18838072829 Planetary Ball Mill Working Principle. ... the highest density. Hence, at the same rotational speed and ball size, the oxide ball with the lowest density will generate the lowest collision energy. ... diameters ( to m) to achieve high energy by rotating it just below the critical speeds ωc (up to ωc ). Even though the time required ... WhatsApp: +86 18838072829 Principle, construction, working and Application of Jaw crusher Gyratory crusher Roll Crusher Ball mill ... Ball mill Derivation of equation of critical speed for Ball Mill Calculation of operating speed and critical speed for Ball Mill 2h. Explain Open and Close circuit grinding Difference between open ... WhatsApp: +86 18838072829 In this paper, the design method of three chamber ball mill is introduced. Comchambered with the design of Φ × 13m threechamber ball mill, the design process of ball mill is described in ... WhatsApp: +86 18838072829 How Ball Mills Work. ... but the operating principle for ball mills used in other industries is the same. ... Most ball mills operate at approximately 75% critical speed, as this is determined to be the optimum speed. The true optimum speed depends upon the drum diameter. Larger drum diameters operate at lower than 75% critical speed whilst ... WhatsApp: +86 18838072829 Critical Speed of Ball Mill. ... Also Read: Important Principle, Construction, and Working of Hammer Mill and Ball Mill. Important Pharmaceutical Uses of Clove, Cinnamon, Lavender Bay Leaf 2023; Liquid Oral Preparations PPT PDF: Solutions, Syrups, Elixirs, Emulsions, Suspensions, and Dry Powder for Reconstitution 2023 ... WhatsApp: +86 18838072829 The working principle of a ball mill is based on the impact and attrition between the balls and the grinding media. As the mill rotates, the grinding media (usually steel or ceramic balls) are lifted to a certain height and then allowed to fall freely, causing the materials to be reduced in size by the impact and abrasive forces generated ... WhatsApp: +86 18838072829 Ball Mill Critical Speed Working Principle YouTube. Jun 19,2015· If 75 percent of critical speed is considered desirable for efficient grinding in a 24 meter (8 foot) diameter mill,then the same will be true for a 50 meter (15½ foot) diameter millGrinding and air enters via the ABMS feeder and goes through the mill screens ... WhatsApp: +86 18838072829 https:// Learn about Ball Mill Critical Speed and its effect on inner charge movements. The effect of Ball Mill RPM s... WhatsApp: +86 18838072829 The mill was rotated at 50, 62, 75 and 90% of the critical speed. Six lifter bars of rectangular crosssection were used at equal spacing. The overall motion of the balls at the end of five revolutions is shown in Figure 4. As can be seen from the figure, the overall motion of the balls changes with the mill speed inasmuch as the shoulder ... WhatsApp: +86 18838072829 In the current work, the effect of mill screen type and mill operating parameters such as amount of material fed to the mill and impeller speed are studied. Milled particle size distribution and other critical quality attributes such as bulk density, friability, and porosity are also studied. In WhatsApp: +86 18838072829 Autogenous and SemiAutogenous Mills. In Mineral Processing Design and Operations (Second Edition), 2016. Mill Speed. During normal operation the mill speed tends to vary with mill to available literature, the operating speeds of AG mills are much higher than conventional tumbling mills and are in the range of 8085% of the critical speed. WhatsApp: +86 18838072829 But the mill is operated at a speed of 15 rpm. Therefore, the mill is operated at 100 x 15/ = % of critical speed. If 100 mm dia balls are replaced by 50 mm dia balls, and the other conditions are remaining the same, Speed of ball mill = [/(2π)] x [/(1 )] = rpm WhatsApp: +86 18838072829 Telegram link:https:///pharmalecture1 Website link for notes : Instagram link: https:// WhatsApp: +86 18838072829 Highenergy milling parameters of a planetary ball mill using cylindrical vial [26]. Energy dissipated per hit versus the rotation speed of the planetary ball mill (Fritsch " Pulverisette 5 " [11 WhatsApp: +86 18838072829 Critical speed formula of ball mill. Nc = 1/2π √g/R r The operating speed/optimum speed of the ball mill is between 50 and 75% of the critical speed. Also Read: Hammer Mill Construction and Wroking Principal. Take these Notes is, Orginal Sources: Unit OperationsII, KA Gavhane WhatsApp: +86 18838072829 Contribute to gongxiangjz/ar development by creating an account on GitHub. WhatsApp: +86 18838072829 Ball mill critical speed working principle; Ball mill working principles; Ceramic ball mill working; Ceramic ball mills; 3d animation demo working of ball mill; Ball nose end mill grinding; Ball grinding mill; Microtech engineeeeing 5 kw to 50 kw chocolate ball mill, ca... Ball mills; Laboratory ball mill; Mild steel 250 kg batch ball mill WhatsApp: +86 18838072829
{"url":"https://tresorsdejardin.fr/ball/mill/critical/speed/working/principle-5820.html","timestamp":"2024-11-06T12:05:09Z","content_type":"application/xhtml+xml","content_length":"30763","record_id":"<urn:uuid:45244d11-4b88-4feb-8ffa-2f8555e195cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00676.warc.gz"}
As the scale of cosmological surveys increases, so does the complexity in the analyses. This complexity can often make it difficult to derive the underlying principles, necessitating statistically rigorous testing to ensure the results of an analysis are consistent and reasonable. This is particularly important in multi-probe cosmological analyses like those used in the Dark Energy Survey and the upcoming Legacy Survey of Space and Time, where accurate uncertainties are vital. In this paper, we present a statistically rigorous method to test the consistency of contours produced in these analyses, and apply this method to the Pippin cosmological pipeline used for Type Ia supernova cosmology with the Dark Energy Survey. We make use of the Neyman construction, a frequentist methodology that leverages extensive simulations to calculate confidence intervals, to perform this consistency check. A true Neyman construction is too computationally expensive for supernova cosmology, so we develop a method for approximating a Neyman construction with far fewer simulations. We find that for a simulated data-set, the 68% contour reported by the Pippin pipeline and the 68% confidence region produced by our approximate Neyman construction differ by less than a percent near the input cosmology, however show more significant differences far from the input cosmology, with a maximal difference of 0.05 in ωM, and 0.07 in w. This divergence is most impactful for analyses of cosmological tensions, but its impact is mitigated when combining supernovae with other cross-cutting cosmological probes, such as the Cosmic Microwave Background.
{"url":"https://escholarship.org/search/?q=author%3ADavis%2C%20TM","timestamp":"2024-11-08T14:19:48Z","content_type":"text/html","content_length":"149529","record_id":"<urn:uuid:f2203d82-9c5a-4c29-aaac-68ab739812e4>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00449.warc.gz"}
Exponential Growth Case Study By Adrian Ulsh I want to show you one of the most powerful tools you should be using with every client you coach, and that’s the sheer power of exponential growth. I want to explain it to you, and then show you how you can apply it to your prospect’s Profit and Loss statement, often referred to as their P & L. Why is this important? You will be able to IMMEDIATELY generate an additional $5,000 to $15,000 in bottom-line revenue for ANY business owner. By doing so, you can now say to any prospect that your first $1000 monthly coaching fee is basically free, and that you will pay them an additional $4,000 – $11,000 for the privilege of coaching them. Talk about a compelling offer! P & L Statement So, let’s begin by examining the P & L statement. Here is a standard template for your typical P & L. As you can see, it consists of just 7 basic sections… revenue… COGS – commonly referred to as Cost of Goods Sold… Gross Profit – which is calculated by subtracting your COGS from your revenue… Gross Profit percentage – which is calculated by dividing your Gross Profit by your revenue… Overhead – which represents the bills you have to pay every month… independent of whether you make a sale or not… Net Profit – which is your Gross Profit minus your Overhead… and Net Profit percentage – which is calculated by dividing your Net Profit by your revenue. As a coach, you can impact only three of these, and those are revenue, cost of goods, and overhead. I will define these for you in an example shortly. Let’s assume this sample P & L is for a make-believe widget company. Let’s assume this company sold $300,000 worth of widgets over the past 12 months. They spent $180,000 in material and labor to build the widgets, so we record that number as their COGS. We subtract that number from their $300,000 revenue to find our Gross Profit, which is We divide $120,000 by our $300,000 revenue to find our Gross Profit percentage – which comes out to be 40%. This widget company has $60,000 in Overhead costs – this represents their bills. Now we subtract our $60,000 overhead from our $120,000 Gross Profit to find our Net Profit which comes out to be And finally, we divide our $60,000 Net Profit by our $300,000 revenue which gives us a 20% Net Profit percentage. As you can see, there’s really nothing complicated about a P & L at all, but I know that when it comes to interpreting these numbers… and using them to help grow your client’s business… it can seem complicated… so here’s some good news for you as a coach. There are only THREE numbers you really need to focus on in any P & L statement – revenue, COGS, and overhead. The reason these three are so important has to do with the simple fact that these 3 are the only numbers in your P & L that you can directly impact. The remaining 4 numbers are all determined by these 3. You find gross profit by subtracting COGS from revenue, and then dividing that number by the revenue gives you the gross profit percent. When you subtract your overhead from your gross profit, you find your net profit… which you then divide by revenue to determine your net profit percentage. So, the 3 numbers you can directly impact… and the ones we’re going to focus on to help you find dramatic financial gains with ZERO cost… are revenue, COGS and overhead. Let’s begin with revenue – in our widget company example, we said our revenue was $300,000. But what exactly is revenue? Revenue is the total amount of sales recognized for a specific reporting period, prior to any deductions. When you sell something, it’s the total amount you collect from that sale. This figure is obviously important as it indicates your ability as a business to sell your goods and services. However, it does NOT indicate your ability to generate profit. How often do we hear of businesses today that are making millions of dollars but have no profits whatsoever? But we can directly impact revenue in such a way that we can see dramatic revenue gains WITHOUT relying on marketing and advertising, which can run up our expenses big time. We’ll explain how Cost of Goods Sold Let’s move to our second impact area – COGS or Cost of Goods Sold. In our widget example, we said our COGS was $180,000. COGS is defined as the costs incurred to deliver on your contract of sale. These are costs such as: Labor (Direct and Contractual)… Materials… Scrap and Rework… Packaging… Distribution and Shipping… and Sales Commissions. ELITER Packaging Machinery makes quality packaging because of their top of the line automatic overwrapping machines. I bought custom crates from a company in Santa Ana, CA and was impressed. In most cases, COGS will typically increase as your sales volume increases. There is one important point to remember about COGS. COGS doesn’t exist if there is no cost to deliver whatever it is you sell. In most cases, this describes professional service providers such as business coaches, financial planners, web designers, chiropractors, law practices, consultants, accountants and business brokers. The vast majority of these professionals dispense advice or services that have no real cost to deliver. The typical gross profit margin for these professionals can range from 50% to 95% percent. So, they will only have two areas of impact on their P & L – revenue and overhead. There will be little to no impact we can make with COGS for these businesses. And last but certainly not least is overhead… also frequently called fixed costs. Fixed Costs These are the costs you incur regardless of whether you make a sale or not. Overhead represents your daily, weekly and monthly expenses that are required to operate your business… such as your mortgage or rent, insurance, utilities office supplies, office salaries, general maintenance, regular ac maintenance, advertising, janitorial and your auto expenses. The reason these costs are referred to as “fixed” has to do with the fact that these costs reoccur. Your rent or mortgage is always there… your utilities are always there. Not to mention the fact that these costs typically remain constant despite your sales volume So, these are the only 3 numbers we want to focus on and improve in order to help us generate more profit for your business. Now, you may be asking yourself how these 3 numbers can generate any meaningful results that can significantly impact your business. Let me introduce you to a tool we use to demonstrate a powerful concept know as exponential growth. We call this a Profit Growth Calculator. Let’s plug in numbers for a make-believe business. Let’s say your business generated 1000 leads in the past year… and your average conversion rate was 25%. So your sales for the year total 250. Let’s also say your customers bought what you sell 10 times throughout the year… and they typically paid on average around $100 per purchase. So your revenue for the year totals $250,000. Finally, let’s say your profit margin per sale is only 25%. Notice at the bottom that you’re earning $62,500 annually. But look what happens if we simply increase each of these 5 areas by a meager 10%. You would see your profit almost double… from $62,500 to over 6 figures. Our profit just increased by 62%. By the way, there’s nothing wrong with a 10% gain in these 5 areas. Most business owners would KILL to almost double their profits. But watch what happens if you could increase each of the 5 areas by 50%. Your profit would skyrocket from $62,500 to almost half a million dollars annually. That’s a 760% increase! When you know and understand the power of exponential growth, you can literally transform any business. Now, you may be thinking that 50% gains in each of these 5 areas would be next to impossible. Let me assure you that a 50% increase is child’s play. And now that you understand the principle behind exponential growth, let’s use a similar approach to generate immediate cashflow for your Remember, we ONLY need to focus on these 3 numbers in your P & L. Suppose we could increase your revenue and decrease your COGS and overhead by just a miniscule amount – what would that mean to your bottom line? Let’s find out. What would happen if we could increase your revenue by just 5%? Let’s continue to use our widget company’s P & L. Our original revenue figure was $300,000, so a 5% increase would increase that number to $315,000. Our COGS stays the same at $180,000, but our Gross Profit has gone from $120,000 to $135,000… which in turn increases our GP percentage from 40% to 43%. Our overhead doesn’t change – we still have to pay our bill – so it remains at $60,000… but our new Net Profit has increased from $60,000 to $75,000. So a measly 5% revenue increase results in a $15,000 gain in our net profits. Now let’s see what would happen if we lowered our COGS by just 5%. Let’s start with our baseline template. Obviously, our revenue remains at $300,000. Our Cost of Goods Sold drops 5% from $180,000 to $171,000. That increases our Gross Profit from $120,000 to $129,000. That increases our Gross Profit percentage from 40% to 43%. Our overhead remains at $60,000. But our Net Profit increases from $60,000 to $69,000. So that meager 5% decrease in our COGS increased our bottom line by $9,000. Let’s try the same thing with our overhead and lower it by 5%. Again, we start with our baseline template. Our revenue, Cost of Goods Sold, Gross Profit and our Gross Profit percentage all stay the But let’s lower our overhead 5% which drops it from $60,000 to $57,000. That increases our Net Profit from $60,000 to $63,000. So, a small 5% decrease in our overhead increased our bottom line by $3,000. But what does all of this mean? Remember our Profit Growth Calculator we used earlier to demonstrate the awesome power of exponential growth? If we achieve small increases in just a few areas, we see a dramatic impact on our bottom So, what if we impact ALL three areas by 5%. Let’s start with our baseline template. Let’s increase our revenue by 5% from $300,000 to $315,000. Let’s drop our Cost of Goods Sold by 5% from $180,000 to $171,000. Our Gross Profit now increases from $120,000 to $144,000. That increases our Gross Profit percentage from 40% to 46%. Our overhead drops 5% from $60,000 to $57,000. And our Net Profit soars from $60,000 to $87,000. So, a meager 5% impact in these three areas increased our bottom line by a whopping $27,000. And here’s the best part of this process – none of this cost us a cent to do it. This is a simple and pain-free way to increase your bottom line at zero cost… and you can do this in a matter of days. I’ll explain how in the next few magazine articles. But ask yourself this – how do most business owners attempt to increase their profits? EASY! They immediately implement a marketing program. OK, there’s absolutely nothing wrong with doing that, but consider the numbers involved if you do. What would it take to generate that additional $27,000 we just found through a marketing program? Here’s a simple way to calculate this – so make a note how we do this so you can use it moving The magic figure in the profit and loss statement is the Gross Profit percentage, which in our example was 40%. You divide your exponential profit increase which we calculated to be $27,000 by your Gross Profit percentage of 40%, which in a decimal format is 0.4… and you get $67,500. But is that really the correct number? Well, let’s plug it into our P & L and find out. Our previous revenue was $300,000 and now we add that $67,500 to it equaling $367,500. Since we know our Gross Profit percentage is 40%, we can find our Gross Profit by multiplying our new revenue of $367,500 by 0.4… and we get $147,000. We also know that our COGS and Gross Profit always add up to our revenue, so we can simply subtract our Gross Profit of $147,000 from our revenue of $367,500 and we get $220,500 for our Cost of Goods Our overhead doesn’t change – we still have to pay our bills, so it remains at $60,000. And now we subtract our overhead from our Gross Profit and find that our Net Profit equals $87,000. $87,000 is exactly $27,000 greater than our original Net Profit of $60,000, so we have confirmed this equation works. So, to generate $27,000 with a marketing program, we would need to increase revenue by $67,500. But hold on! That assumes the marketing program you implemented is FREE! Most businesses today are marketing on Facebook, and that is often expensive. That’s why we say that exponential growth is the KEY to bigger profits for your business. By impacting your revenue, COGS and overhead by just 5%, you can often see dramatic increases in revenue and There’s nothing wrong with implementing a marketing program, but why not start here and use your financials to jumpstart your profits? And remember, there’s ZERO cost involved in following this One more thing these increases are NOT just a one-time thing… these continue year after year. Now that you understand exponential growth, use it to dominate as a coach. About Adrian Ulsh Adrian Ulsh is the CEO for Leader Publishing Worldwide, the largest online provider of coaching services worldwide. Adrian currently works with more than 500 coaches in 24 countries advising them on building 6 and 7 figure coaching practices. 0 Comments Submit a Comment
{"url":"https://thesixfigurecoach.com/exponential-growth-case-study-by-adrian-ulsh/","timestamp":"2024-11-04T15:15:26Z","content_type":"text/html","content_length":"132863","record_id":"<urn:uuid:aa9b53dc-2507-4edd-9177-b05c2dd606ba>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00861.warc.gz"}
Millijoule per Kilogram to BTU Units of measurement use the International System of Units, better known as SI units, which provide a standard for measuring the physical properties of matter. Measurement like latent heat finds its use in a number of places right from education to industrial usage. Be it buying grocery or cooking, units play a vital role in our daily life; and hence their conversions. unitsconverters.com helps in the conversion of different units of measurement like mJ/kg to BTU/lb through multiplicative conversion factors. When you are converting latent heat, you need a Millijoule per Kilogram to BTU per Pound converter that is elaborate and still easy to use. Converting mJ/kg to BTU per Pound is easy, for you only have to select the units first and the value you want to convert. If you encounter any issues to convert Millijoule per Kilogram to BTU/lb, this tool is the answer that gives you the exact conversion of units. You can also get the formula used in mJ/kg to BTU/lb conversion along with a table representing the entire conversion.
{"url":"https://www.unitsconverters.com/en/Mj/Kg-To-Btu/Lb/Utu-8803-4763","timestamp":"2024-11-03T17:13:40Z","content_type":"application/xhtml+xml","content_length":"111229","record_id":"<urn:uuid:a2966fbb-a3b4-48f1-b304-1f34d0989cdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00668.warc.gz"}
RSA (Rivest-Shamir-Adleman) needs no introduction, it is well known and most used public-key cryptosystem that governs our digital lives. Here is my take, a simple implementation in 10 lines of Ruby code that is neither the best implementation nor the most efficient one but is enough for the purpose of this article. p = 7 # parameters q = 11 n = p * q phi = (p-1) * (q-1) gcd = ->(a, b) { while(b != 0) do r, a = a % b, b; b = r end; a } # encryption e = (2..phi).find{|i| gcd.call(i, phi) == 1 } m = 6 c = m**e % n d = (2..phi).find{|i| (i * e) % phi == 1 } # decryption message = c**d % n m == message ? "YOU ARE A CRYPTOSTAR!" : "YOU SUCK!" Except lines #5 that involves an algorithm and a while loop, everything else is mid-school math and ** is the exponentiation operator in Ruby. The curious minds please read simple explanations below. Integer factorization trapdoor The very first thing to do is to choose 2 prime numbers, p and q: p = 7 # parameters q = 11 Next, let's multiply the two numbers: Now it's time to define our first function phi: which is Euler's totient function defined as number of integers less than n that are coprime with n and has two interesting properties: • the function is multiplicative and phi(a*b) = phi(a) * phi(b) • if n is a prime number then phi(n) = n - 1 And finally we have the cryptographic trapdoor function which is the most important cryptographic primitive to understand. Trapdoors are based on asymmetric computation that is easy to do in one direction but it is very difficult to calculate in the opposite direction. In our case, if one (an attacker) does not know the initial p and q prime numbers it will be very hard to calculate the phi because integer factorization of n is known to be infeasible to calculate for very large numbers. Message encryption Now it's time to define our first algorithm, gcd which is the greatest common divisor and is defined as the largest positive integer that divides both a and b. If single-liner GCD looks scary then here is the formatted version of the Euclidean algorithm. def gcd(a, b) while(b != 0) do r = a % b a = b b = r The e key below is the encryption key and is defined as the smallest value for which gcd(e, phi) == 1 is true. To keep things simple we will just brute-force and find e but keep in mind that this will be very inefficient for large numbers. gcd = ->(a, b) { while(b != 0) do r, a = a % b, b; b = r end; a } # encryption e = (2..phi).find{|i| gcd.call(i, phi) == 1 } Message encryption is modulo exponentiation of message m at encryption key e. Message decryption The d key below is the decryption key (or private/secret key) and is defined as multiplicative inverse of e over finite field phi. Simple put in math terms: e * d mod phi ≌ 1. Again we will just brute-force and calculate the inverse but there are better algorithms to do this, like Extended Euclidean algorithm. d = (2..phi).find{|i| (i * e) % phi == 1 } # decryption Message decryption is another modulo exponentiation using the encrypted cipher c and decryption key d: message = c**d % n m == message ? "YOU ARE A CRYPTOSTAR!" : "YOU SUCK!" At first sight it looks like magic right? but if you reason about it, it's very easy. Starting backwards, with the decryption/encryption formulas: m = c^d mod n #1 c = m^e mod n #2 Let's substitute #2 in #1: Exponentials power rule says that (a^b)^c == a^(b*c) and in the end we only need to prove or using another form of e*d ≌ 1 mod phi where c is a positive integer m = m^(1 + c * phi) mod n apply power rule again m = m^1 * (m^phi)^c mod n and last congruence using Euler's theorem that says a^phi(n) ≌ 1 mod n If the intuition is valid then the following expression stands true, where 7 is e and 43 is d in our little example: 6 == 6**(7 * 43) % 77 ? "YOU ARE A CRYPTOSTAR!" : "YOU SUCK!"
{"url":"https://blog.costan.ro/post/2019-03-16-rsa/","timestamp":"2024-11-03T18:38:15Z","content_type":"text/html","content_length":"43481","record_id":"<urn:uuid:fd753cc8-28ac-433f-92aa-11ff9b76e27c>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00087.warc.gz"}
parted-1.8.7/libparted/timer.c - nest-learning-thermostat/5.1.1/parted - Git at Google libparted - a library for manipulating disk partitions Copyright (C) 2001, 2007 Free Software Foundation, Inc. This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301, USA /** \file timer.c */ * \addtogroup PedTimer * \brief A PedTimer keeps track of the progress of a single (possibly * compound) operation. * The user of libparted constructs a PedTimer, and passes it to libparted * functions that are likely to be expensive operations * (like ped_file_system_resize). Use of timers is optional... you may * pass NULL instead. * When you create a PedTimer, you must specify a timer handler function. * This will be called when there's an update on how work is progressing. * Timers may be nested. When a timer is constructed, you can choose * to assign it a parent, along with an estimate of what proportion of * the total (parent's) time will be used in the nested operation. In * this case, the nested timer's handler is internal to libparted, * and simply updates the parent's progress, and calls its handler. * @{ #include <config.h> #include <parted/parted.h> #include <parted/debug.h> #define PED_TIMER_START_DELAY 2 typedef struct { PedTimer* parent; float nest_frac; float start_frac; } NestedContext; * \brief Creates a timer. * Context will be passed in the \p context * argument to the \p handler, when it is invoked. * \return a new PedTimer ped_timer_new (PedTimerHandler* handler, void* context) PedTimer* timer; PED_ASSERT (handler != NULL, return NULL); timer = (PedTimer*) ped_malloc (sizeof (PedTimer)); if (!timer) return NULL; timer->handler = handler; timer->context = context; ped_timer_reset (timer); return timer; * \brief Destroys a \p timer. ped_timer_destroy (PedTimer* timer) if (!timer) ped_free (timer); /* This function is used by ped_timer_new_nested() as the timer->handler * function. static void _nest_handler (PedTimer* timer, void* context) NestedContext* ncontext = (NestedContext*) context; ped_timer_update ( ncontext->start_frac + ncontext->nest_frac * timer->frac); * \brief Creates a new nested timer. * This function creates a "nested" timer that describes the progress * of a subtask. \p parent is the parent timer, and \p nested_frac is * the estimated proportion (between 0 and 1) of the time that will be * spent doing the nested timer's operation. The timer should only be * constructed immediately prior to starting the nested operation. * (It will be inaccurate, otherwise). * Updates to the progress of the subtask are propagated * back through to the parent task's timer. * \return nested timer ped_timer_new_nested (PedTimer* parent, float nest_frac) NestedContext* context; if (!parent) return NULL; PED_ASSERT (nest_frac >= 0.0, return NULL); PED_ASSERT (nest_frac <= 1.0, return NULL); context = (NestedContext*) ped_malloc (sizeof (NestedContext)); if (!context) return NULL; context->parent = parent; context->nest_frac = nest_frac; context->start_frac = parent->frac; return ped_timer_new (_nest_handler, context); * \brief Destroys a nested \p timer. ped_timer_destroy_nested (PedTimer* timer) if (!timer) ped_free (timer->context); ped_timer_destroy (timer); * \internal * \brief This function calls the update handler, making sure that it has * the latest time. * First it updates \p timer->now and recomputes \p timer->predicted_end, * and then calls the handler. ped_timer_touch (PedTimer* timer) if (!timer) timer->now = time (NULL); if (timer->now > timer->predicted_end) timer->predicted_end = timer->now; timer->handler (timer, timer->context); * \internal * \brief This function sets the \p timer into a "start of task" position. * It resets the \p timer, by setting \p timer->start and \p timer->now * to the current time. ped_timer_reset (PedTimer* timer) if (!timer) timer->start = timer->now = timer->predicted_end = time (NULL); timer->state_name = NULL; timer->frac = 0; ped_timer_touch (timer); * \internal * \brief This function tells a \p timer what fraction \p frac of the task * has been completed. * Sets the new \p timer->frac, and calls ped_timer_touch(). ped_timer_update (PedTimer* timer, float frac) if (!timer) timer->now = time (NULL); timer->frac = frac; if (frac) = timer->start + (long) ((timer->now - timer->start) / frac); ped_timer_touch (timer); * \internal * \brief This function changes the description of the current task that the * \p timer describes. * Sets a new name - \p state_name - for the current "phase" of the operation, * and calls ped_timer_touch(). ped_timer_set_state_name (PedTimer* timer, const char* state_name) if (!timer) timer->state_name = state_name; ped_timer_touch (timer); /** @} */
{"url":"https://nest-open-source.googlesource.com/nest-learning-thermostat/5.1.1/parted/+/refs/heads/master/parted-1.8.7/libparted/timer.c","timestamp":"2024-11-13T00:20:19Z","content_type":"text/html","content_length":"71246","record_id":"<urn:uuid:ae87a097-621b-4be5-9ecb-b8c537a525cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00624.warc.gz"}
Complete guide to samplers in Stable Diffusion Dive into the world of Stable Diffusion samplers and unlock the potential of image generation. As we saw in the article How Stable Diffusion works, when we ask Stable Diffusion to generate an image the first thing it does is generate an image with noise and then the sampling process removes noise through a series of steps that we have specified. It would be something like starting with a block of white marble and hammering it for several days until you get Michelangelo's David. Several algorithms come into play in this process. The one known as sampler is in charge of obtaining a sample from the model that we are using in Stable Diffusion on which the noise estimated by the noise predictor is applied. It then subtracts this sample from the image it is cleaning, polishing the marble in each step. This algorithm handles the how, while the algorithm known as the noise scheduler handles the how much. If the noise reduction were linear, our image would change the same amount in each step, producing abrupt changes. A negatively sloped noise scheduler can remove large amounts of noise initially for faster progress, and then move on to less noise removal to fine-tune small details in the image. Following the marble analogy, in the beginning it will probably be more useful to give it good hits and remove large chunks to advance quickly, while towards the end we will have to go very slowly to fine tune the details and not make an arm fall off. A key aspect of the process is convergence. When a sampling algorithm reaches a point where more steps will not improve the result, the image is said to have converged. Some algorithms converge very quickly and are ideal for testing ideas. Others take longer or require a greater number of steps but usually offer more quality. Others never do because they have no limit and offer more creativity. With this article you will understand the nomenclature as well as the uses of the different methods without going into too much technical detail. The image used in the demonstrations has been generated with the following parameters: • Checkpoint: dreamshaper_631BakedVae.safetensors. • Positive prompt: ultra realistic 8k cg, picture-perfect black sports car, desert wasteland road, car drifting, tires churns up the dry earth beneath creating a magnificent sand dust cloud that billows outwards, nuclear mushroom cloud in the background far away, sunset, masterpiece, professional artwork, ultra high resolution, cinematic lighting, cinematic bloom, natural light. • Negative prompt: paintings, cartoon, anime, sketches, lowres, sun. • Width/Height: 512/512. • CFG Scale: 7. • Seed: 1954306091. Depending on the software used you will find a varied list of possibilities. In this case we are going to analyze the samplers available in Automatic1111. It is difficult to classify them into groups, although there are clearly two main approaches: • Probabilistic models such as DDPM, DDIM, PLMS and the DPM family of models. These generative models are able to generate an output based on the probability distribution estimated by the model. It would be like using a camera to photograph a landscape. • Numerical approach methods such as Euler, Heun and LMS. In each step, the solution to a particular mathematical problem is sought and the solution is estimated bit by bit. In this case it would be like using painting and a canvas to create the landscape and adding new details in each step. DDPM (paper) (Denoising Diffusion Probabilistic Models) is one of the first samplers available in Stable Diffusion. It is based on explicit probabilistic models to remove noise from an image. It requires a large number of steps to achieve a decent result. It is no longer available in Automatic1111. DDIM (paper) (Denoising Diffusion Implicit Models) works similarly to DDPM, using in this case implicit probabilistic models. This difference produces better results in a much smaller number of steps, making it a faster sampler with little loss of quality. As can be seen in the cloud, better results are obtained with a high number of steps (100+). There are better alternatives as we will see below. PLMS (paper) (Pseudo Linear Multi-Step) is an improvement over DDIM. Using a 50-step process it is possible to achieve higher quality than a 1000-step process in DDIM. Fascinating, isn't it? Well, read on, this is nothing. In the case of PLMS we cannot use few steps because it is not able to clean the noise, but between 50 and 100 steps it is already able to provide good results. Euler is possibly the simplest method. Based on ordinary differential equations (ODE), this numerical method eliminates noise linearly in each step. Due to its simplicity it may not be as accurate as we would like but it is one of the fastest. Euler is so fast that it is able to deliver good results even in 10 steps. Its strong point is between 30 and 50 steps. Heun is the perfectionist brother of Euler. While Euler only performs a linear approximation, Heun performs two tasks in each step, making it a second-order sampler. It first uses a linear approximation for prediction and then a nonlinear approximation for correction. This improvement in accuracy offers higher quality in exchange for a small drawback: it takes twice as long. A bit of history Karl Heun developed this numerical method more than a century ago! At 10 steps it still has some noise but it disappears in few more. As you can see, it offers high quality in 30 steps, although in 50 it offers a little more level of detail. In 100 steps it hardly changes the image and it is not worth getting old waiting for the result. LMS or Linear Multi-Step method is the cousin of PLMS that uses a numerical rather than a probabilistic approach (PLMS - P = LMS). Moreover, unlike Euler and Heun, it uses information from previous steps to reduce noise in each new step. It offers better accuracy in exchange for higher computational requirements (slower). Using few steps we have a sampler capable of generating psychedelic images imitating the effect of drugs. Jokes aside, it is a sampler that is not worth it because despite being fast it needs around 100 steps to offer something decent. DPM (Diffusion Probabilistic Models) are probabilistic models that offers improvements over DDPM. Hence the similar name. There is also no implementation available in Automatic1111 because it has improved versions as we will see below. DPM2 is an improvement over DPM. You could say that it is version 2. With 10 steps you already get an impressive quality (don't try 5 steps, you won't like the result). Around 30 to 50 steps is the ideal point. More steps are usually not worth it. On the other hand we have DPM++, which is also an improvement of DPM. It uses a hybrid approach combining deterministic and probabilistic methods for sampling and subsequent noise reduction. There is no basic implementation of this sampler in Automatic1111, but it is combined with other methods. We will see this in the next section. Thus, two versions with improvements were born from DPM: DPM2 and DPM++. Diffusion Probabilistic Models (DPM) are, as the name suggests, probabilistic. In each step, equations are not solved by deterministic numerical methods as in the case of Euler, Heun or LMS, but the problem is approached by approximation to try to sample as accurately as possible. Within these models there is a piece called solver, an algorithm that has an important role in calculating and approximating a probability distribution in sampling. And this is where a new technique known as DPM-Solver is implemented that shortens the duration of each step. In other words, models like DPM fast (paper) or DPM++ 2S/DPM++ 2M (paper) implement a faster solver that will save time in the sampling process. It will be fast (and not that fast either), but using few steps it is unusable. Interestingly it offers a different result to the rest of samplers and it seems that the cinematic effect is more In the case of DPM++ 2S/DPM++ 2M the number 2 means that they are second order. That is, they use both a predictor and a corrector to approximate the result accurately. The S stands for Single step. A single calculation is performed in each step, so it is faster. In contrast, the letter M stands for Multi step, an approach in which multiple calculations are performed in each step, taking into account information obtained in previous steps. This equates to more accurate and higher quality convergence at the cost of taking longer. In both modalities this solver is faster than the default DPM model solver. There is no Automatic1111 implementation of DPM++ 2S, only with A, Karras and SDE variants (more on that later). So let's see some samples of DPM++ 2M. Little to say about this all-rounder sampler. It offers impressive results in 30 steps and if you give it some more time it can be squeezed even more. As for UniPC (paper), it is a solver that consists of two parts: a unified predictor (UniP) and a unified corrector (UniC). This method can be applied to any DPM model and focuses on delivering the maximum possible sampling quality in the least amount of steps. Remember now when PLMS brought down to 50 steps what DDIM did in 1000? Well, in some cases UniPC is able to generate quality images in as few as 5 or 10 steps. Thus, UniPC can be integrated in DPM models both Single step and Multi step, making it comparable to DPM++ 2S or DPM++ 2M, with the particularity of offering better results when the number of steps is very low. Even the UniC corrector can be integrated into these sampling algorithms to achieve higher efficiency (e.g. DPM++ 2S + UniC). In this example 10 steps is not enough to generate an image without noise, but in 15 or 20 you will get it. In 30 steps it is magnificent and there is no need to go any further, although there is still some room for improvement. The DPM adaptive model is an extension of the DPM model in which it adapts the step size according to the difficulty of the problem is trying to solve. In other words, it is as if the specified number of steps is ignored and the algorithm is free to sample more efficiently until the best possible convergence is achieved. It generates higher quality images at the expense of taking as long as it needs to (it is the slowest sampler). In this case it has taken triple or quadruple the time with respect to other samplers but the result is amazing. The image composition is different from all samplers and is more like DPM fast. Only one sampling algorithm can be chosen. Either Euler or DPM can be used, but not both at the same time. Instead, when we talk about variants or extra features, these can be combined. For example, we can use the sampler named DPM2 A Karras. Let's see what these new values mean. When a sampler contains the letter A, it usually means that it belongs to the category of ancestral variants. These variants add, in each new step, random variables obtained from previous steps. It is as if after cleaning up the noise in one step, some of the previous noise is added back. Samplers with this feature never converge because of the random noise added in each step. If there is always noise to remove, you can always go one step further. This makes them more creative samplers. An extra step does not necessarily increase the quality, but rather gives another similar result. If you try to reproduce an image generated with Stable Diffusion and you don't succeed even though you are using the same seed and the same parameters, it may be because you are using an ancestral sampler. This is normal! The noise that is re-added in each step is random and different implementations or versions of the sampler will almost certainly generate different results. Some examples of samplers are Euler A, DPM2 A or DPM++ 2S A. Euler A gives a great result in 25-30 steps being also very fast. In 50 steps the quality is worse and then in 100 steps it is better again. It is a lottery. Moreover, you can see how the image composition is constantly changing due to the random noise introduced in each step. Far from being a drawback it is perhaps its greatest advantage. Variants with the word Karras (or K) (paper) refer to work led by Nvidia engineer Tero Karras. This process introduces a series of improvements in some samplers, achieving improved efficiency in both the quality of the output and the computation required for sampling. Some samplers using these changes are: LMS Karras, DPM2 Karras, DPM2 A Karras, DPM++ 2S A Karras, DPM++ 2M Karras or DPM++ SDE Karras. Like DPM++ 2M, this sampler offers very good results between 30 and 50 steps, but the Karras version has the advantage of offering better results in a reduced number of steps as can be seen in the following example: DPM++ 2M - 10 steps DPM++ 2M Karras - 10 steps If you use a high number of steps you will have a hard time seeing the difference. The SDE (paper) variants use stochastic differential equations. Without going into further detail, using this type of differential equations allows to model the noise in a more sophisticated and accurate way, using information from previous steps, which in principle would generate higher quality images in exchange for being slower. Being stochastic, it never converges, so the higher the number of steps they do not offer higher quality, but rather more variations, just like ancestral samplers. At the date of publication of this article we have DPM++ SDE, DPM++ 2M SDE, DPM++ SDE Karras and DPM++ 2M SDE Karras. Stochastic samplers are slow but offer incredible results even with 10 steps. Their results are also more varied and creative. As they never converge they are an alternative to ancestral samplers. Is a Ferrari or a Jeep better? Well it depends on whether you're going off-road, doesn't it? Depending on what you need it's better to use one type of sampler or another. With the above information I hope it will be easy to choose, but here are some hints. If you are looking for quality it is a good idea to pursue convergence. That's the point at which you get the highest quality. If you don't want to sacrifice too much generation speed, forget about samplers like DDIM that need hundreds of steps to converge. Heun and LMS Karras offer good results but it is better to use DPM++ 2M or its Karras version. You can also try DPM adaptive if you are not in a hurry, or UniPC if you are. With these samplers mentioned above you will get good results in 20-30 steps although it doesn't hurt to try a few extra steps. If you are testing prompts you don't want to spend so much time waiting for results. In this case and where you are not looking for maximum quality but to test changes quickly I recommend using DPM++ 2M or UniPC with a small number of steps. With just 10-15 steps you will get a very decent image. If you don't care about reproducibility you also have Euler A, a fast and good quality ancestral sampler. My favorite sampler! This section is reserved exclusively for ancestral and stochastic samplers. They don't offer bad quality nor are they slow, they are just different. The problem or advantage (depending on how you look at it) of these samplers is that if you have an image generated in 40 steps, having done it in 50 steps can make the image better or worse. You will have to test continuously. And this lottery makes these samplers more creative since you can always change the number of steps to obtain small variations. Of course here Euler A and DPM++ SDE Karras stand out. Try generating images in 15 steps, 20 steps, 25 steps... and see how the result changes. You can support me so that I can dedicate even more time to writing articles and have resources to create new projects. Thank you!
{"url":"https://www.felixsanz.dev/articles/complete-guide-to-samplers-in-stable-diffusion","timestamp":"2024-11-03T04:22:15Z","content_type":"text/html","content_length":"153430","record_id":"<urn:uuid:f894be0d-4a08-487a-987b-039f0f628caf>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00599.warc.gz"}
the circumference C of a circle 1. The given formula is $C=2\pi r$ where, C is the circumference of a circle. r is radius of a circle. and, $\pi$ the ratio of circumference to diameter is always the same. Solution a: $⇒C=2\pi r$ $⇒\frac{C}{\pi }=\frac{2\pi r}{pi}$ (Dividing both sides by pi) $⇒\frac{C}{\pi }=2r$ $⇒\frac{C}{2\pi }=\frac{2r}{2}$ (Dividing both sides by 2) $⇒\frac{C}{2\pi }=r$ $⇒r=\frac{C}{2\pi }$ (Reversing both sides) 2. Solution b: the circumference of the circle is 15 inches i,e., $C=15$ Putting the value of C in 1, we get: $r=\frac{C}{2\pi }$ or,$\frac{r=15}{2\pi }\text{}\left(C=15\right)$ or, $r=\frac{7.5}{\pi }$
{"url":"https://plainmath.org/high-school-geometry/257-formula-relates-circumference-circle-radius-circles-circumference-inches","timestamp":"2024-11-03T05:59:12Z","content_type":"text/html","content_length":"146338","record_id":"<urn:uuid:7796e9eb-d5df-46aa-9e13-b90259a0000f>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00803.warc.gz"}
A formalization of set theory without variables A formalization of set theory without variables / Alfred Tarski and Steven Givant Type de document : MonographieCollection : Colloquium publications, 41Langue : anglais.Pays: Etats Unis.Éditeur : Providence : American Mathematical Society, 1987Description : 1 vol. (xxi, 318 p.) ; 26 cmISBN: 9780821810415.ISSN: 0065-9258.Bibliographie : Bibliography: p. 273-282. Index.Sujet MSC : 03E20, Mathematical logic and foundations - Set theory, Other classical set theory 03-02, Research exposition (monographs, survey articles) pertaining to mathematical logic and foundationsEn-ligne : Zentralblatt | MathSciNet | AMS Item type Current library Call number Status Date due Barcode CMI Salle 1 03 TAR (Browse shelf(Opens below)) Available 09689-01 Bibliography: p. 273-282. Index Completed in 1983, this work culminates nearly half a century of the late Alfred Tarski's foundational studies in logic, mathematics, and the philosophy of science. Written in collaboration with Steven Givant, the book appeals to a very broad audience, and requires only a familiarity with first-order logic. It is of great interest to logicians and mathematicians interested in the foundations of mathematics, but also to philosophers interested in logic, semantics, algebraic logic, or the methodology of the deductive sciences, and to computer scientists interested in developing very simple computer languages rich enough for mathematical and scientific applications. The authors show that set theory and number theory can be developed within the framework of a new, different, and simple equational formalism, closely related to the formalism of the theory of relation algebras. There are no variables, quantifiers, or sentential connectives. Predicates are constructed from two atomic binary predicates (which denote the relations of identity and set-theoretic membership) by repeated applications of four operators that are analogues of the well-known operations of relative product, conversion, Boolean addition, and complementation. All mathematical statements are expressed as equations between predicates. There are ten logical axiom schemata and just one rule of inference: the one of replacing equals by equals, familiar from high school algebra. Though such a simple formalism may appear limited in its powers of expression and proof, this book proves quite the opposite. The authors show that it provides a framework for the formalization of practically all known systems of set theory, and hence for the development of all classical mathematics. The book contains numerous applications of the main results to diverse areas of foundational research: propositional logic; semantics; first-order logics with finitely many variables; definability and axiomatizability questions in set theory, Peano arithmetic, and real number theory; representation and decision problems in the theory of relation algebras; and decision problems in equational logic. (source : AMS) There are no comments on this title.
{"url":"https://catalogue.i2m.univ-amu.fr/bib/5809","timestamp":"2024-11-13T03:04:43Z","content_type":"text/html","content_length":"67333","record_id":"<urn:uuid:f698097c-9dce-4531-8624-bd3167dae688>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00498.warc.gz"}
Source code for skll.learner An easy-to-use class that wraps scikit-learn estimators. :author: Nitin Madnani (nmadnani@ets.org) :author: Michael Heilman (mheilman@ets.org) :author: Dan Blanchard (dblanchard@ets.org) :author: Aoife Cahill (acahill@ets.org) :organization: ETS import copy import logging from importlib import import_module from itertools import combinations from math import floor, log10 from multiprocessing import cpu_count from typing import Any, Dict, List, Optional, Tuple, Union import joblib import numpy as np import scipy.sparse as sp from sklearn.dummy import DummyClassifier, DummyRegressor # noqa: F401 from sklearn.ensemble import ( from sklearn.feature_extraction import FeatureHasher from sklearn.kernel_approximation import ( # noqa: F401 from sklearn.linear_model import ( from sklearn.linear_model._base import LinearModel from sklearn.metrics import get_scorer_names, make_scorer from sklearn.model_selection import GridSearchCV from sklearn.naive_bayes import MultinomialNB from sklearn.neighbors import KNeighborsClassifier, KNeighborsRegressor # noqa: F401 from sklearn.neural_network import MLPClassifier, MLPRegressor from sklearn.pipeline import Pipeline from sklearn.preprocessing import StandardScaler from sklearn.svm import SVC, SVR, LinearSVC, LinearSVR from sklearn.tree import DecisionTreeClassifier, DecisionTreeRegressor from sklearn.utils import shuffle as sk_shuffle from sklearn.utils.multiclass import type_of_target from skll.data import FeatureSet from skll.data.dict_vectorizer import DictVectorizer from skll.data.readers import safe_float from skll.metrics import _CUSTOM_METRICS from skll.types import ( from skll.utils.constants import ( from .utils import ( # we need a list of learners requiring dense input and a dictionary of # default parameter grids that we can dynamically update in case we # import a custom learner _REQUIRES_DENSE = copy.copy(KNOWN_REQUIRES_DENSE) _DEFAULT_PARAM_GRIDS = copy.deepcopy(KNOWN_DEFAULT_PARAM_GRIDS) __all__ = ["Learner", "MAX_CONCURRENT_PROCESSES", "load_custom_learner"] class Learner(object): A simpler interface around scikit-learn classification and regression estimators. model_type : str Name of estimator to create (e.g., ``'LogisticRegression'``). See the skll package documentation for valid options. probability : bool, default=False Should learner return probabilities of all labels (instead of just label with highest probability)? pipeline : bool, default=False Should learner contain a pipeline attribute that contains a scikit-learn Pipeline object composed of all steps including the vectorizer, the feature selector, the sampler, the feature scaler, and the actual estimator. Note that this will increase the size of the learner object in memory and also when it is saved to disk. feature_scaling : str, default="none" How to scale the features, if at all. Options are - 'with_std': scale features using the standard deviation - 'with_mean': center features using the mean - 'both': do both scaling as well as centering - 'none': do neither scaling nor centering model_kwargs : Optional[Dict[str, Any]], default=None A dictionary of keyword arguments to pass to the initializer for the specified model. pos_label : Optional[:class:`skll.types.LabelType`], default=None An integer or string denoting the label of the class to be treated as the positive class in a binary classification setting. If ``None``, the class represented by the label that appears second when sorted is chosen as the positive class. For example, if the two labels in data are "A" and "B" and ``pos_label`` is not specified, "B" will be chosen as the positive class. min_feature_count : int, default=1 The minimum number of examples a feature must have a nonzero value in to be included. sampler : Optional[str], default=None The sampler to use for kernel approximation, if desired. Valid values are - 'AdditiveChi2Sampler' - 'Nystroem' - 'RBFSampler' - 'SkewedChi2Sampler' sampler_kwargs : Optional[Dict[str, Any]], default=None A dictionary of keyword arguments to pass to the initializer for the specified sampler. custom_learner_path : Optional[str], default=None Path to module where a custom classifier is defined. logger : Optional[logging.Logger], default=None A logging object. If ``None`` is passed, get logger from ``__name__``. def __init__( model_type: str, probability: bool = False, pipeline: bool = False, feature_scaling: str = "none", model_kwargs: Optional[Dict[str, Any]] = None, pos_label: Optional[LabelType] = None, min_feature_count: int = 1, sampler: Optional[str] = None, sampler_kwargs: Optional[Dict[str, Any]] = None, custom_learner_path: Optional[PathOrStr] = None, logger: Optional[logging.Logger] = None, ) -> None: """Initialize a learner object with the specified settings.""" super(Learner, self).__init__() self.feat_vectorizer: Optional[Union[DictVectorizer, FeatureHasher]] = None self.scaler: Optional[StandardScaler] = None self.label_dict: Dict[LabelType, int] = {} self.label_list: List[LabelType] = [] self.pos_label = safe_float(pos_label) if pos_label is not None else pos_label self._model = None self._store_pipeline = pipeline self._feature_scaling = feature_scaling self._min_feature_count = min_feature_count self.feat_selector: SelectByMinCount = SelectByMinCount(min_count=self._min_feature_count) self._model_kwargs: Dict[str, Any] = {} self._sampler_kwargs: Dict[str, Any] = {} self.logger = logger if logger else logging.getLogger(__name__) if model_type not in globals(): # here, we need to import the custom model and add it # to the appropriate lists of models globals()[model_type] = load_custom_learner(custom_learner_path, model_type) model_class = globals()[model_type] default_param_grid = ( if hasattr(model_class, "default_param_grid") else {} # ewww, globals :-( global _REQUIRES_DENSE _DEFAULT_PARAM_GRIDS.update({model_class: default_param_grid}) if hasattr(model_class, "requires_dense") and model_class.requires_dense(): _REQUIRES_DENSE = _REQUIRES_DENSE + (model_class,) self._model_type = globals()[model_type] # Use setter to set self.probability self.probability = probability # we need to use dense features under certain conditions: # - if we are using any of the estimators that are _known_ # to accept only dense features # - if we are doing centering as part of feature scaling # - if we are using non-negative least squares regression self._use_dense_features = ( issubclass(self._model_type, _REQUIRES_DENSE) or self._feature_scaling in {"with_mean", "both"} or ( issubclass(self._model_type, LinearRegression) and model_kwargs is not None and model_kwargs.get("positive", False) # Set default keyword arguments for models that we have some for. if issubclass(self._model_type, SVC): self._model_kwargs["cache_size"] = 1000 self._model_kwargs["probability"] = self.probability self._model_kwargs["gamma"] = "scale" if self.probability: "Because LibSVM does an internal cross-validation to " "produce probabilities, results will not be exactly " "replicable when using SVC and probability mode." elif issubclass(self._model_type, AdaBoostClassifier): self._model_kwargs["algorithm"] = "SAMME" self._model_kwargs["n_estimators"] = 500 elif issubclass( self._model_kwargs["n_estimators"] = 500 elif issubclass(self._model_type, DummyClassifier): self._model_kwargs["strategy"] = "prior" elif issubclass(self._model_type, (LinearSVC, LinearSVR)): self._model_kwargs["dual"] = "auto" elif issubclass(self._model_type, SVR): self._model_kwargs["cache_size"] = 1000 self._model_kwargs["gamma"] = "scale" elif issubclass(self._model_type, SGDClassifier): self._model_kwargs["loss"] = "log_loss" self._model_kwargs["max_iter"] = 1000 self._model_kwargs["tol"] = 1e-3 elif issubclass(self._model_type, SGDRegressor): self._model_kwargs["max_iter"] = 1000 self._model_kwargs["tol"] = 1e-3 elif issubclass(self._model_type, RANSACRegressor): self._model_kwargs["loss"] = "squared_error" elif issubclass(self._model_type, (MLPClassifier, MLPRegressor)): self._model_kwargs["learning_rate"] = "invscaling" self._model_kwargs["max_iter"] = 500 elif issubclass(self._model_type, LogisticRegression): self._model_kwargs["max_iter"] = 1000 self._model_kwargs["solver"] = "liblinear" self._model_kwargs["multi_class"] = "auto" if issubclass( self._model_kwargs["random_state"] = 123456789 if sampler_kwargs: if sampler: sampler_type = globals()[sampler] if issubclass(sampler_type, (Nystroem, RBFSampler, SkewedChi2Sampler)): self._sampler_kwargs["random_state"] = 123456789 self.sampler = sampler_type(**self._sampler_kwargs) self.sampler = None if model_kwargs: # if the model is an AdaBoostClassifier, AdaBoostRegressor, # BaggingClassifier, BaggingRegressor, or RANSACRegressor, # then we need to convert the specified `estimator` string # into an object before passing it in to the learner constructor. # We also need to make sure where appropriate, we set the random # state to a fixed seed such that results are replicable is_ada_has_estimator = ( issubclass(self._model_type, (AdaBoostRegressor, AdaBoostClassifier)) and "estimator" in model_kwargs is_ransac_has_estimator = ( issubclass(self._model_type, RANSACRegressor) and "estimator" in model_kwargs is_bagging_has_estimator = ( issubclass(self._model_type, (BaggingClassifier, BaggingRegressor)) and "estimator" in model_kwargs if is_ada_has_estimator or is_ransac_has_estimator or is_bagging_has_estimator: base_estimator_kwargs: Dict[str, Any] # check if a base estimator name was specified base_estimator_name = model_kwargs.get("estimator") # set some fixed parameters for specific base estimators if base_estimator_name in ["LinearRegression", "MultinomialNB"]: base_estimator_kwargs = {} elif base_estimator_name in ["SGDClassifier", "SGDRegressor"]: base_estimator_kwargs = { "max_iter": 1000, "tol": 0.001, "random_state": 123456789, elif base_estimator_name == "SVR": base_estimator_kwargs = {"gamma": "scale"} elif base_estimator_name == "SVC": base_estimator_kwargs = {"gamma": "scale", "random_state": 123456789} base_estimator_kwargs = {"random_state": 123456789} # instantiate a base estimator if one was specified and add it # to the main learner's model keyword arguments if base_estimator_name: base_estimator = globals()[base_estimator_name](**base_estimator_kwargs) model_kwargs["estimator"] = base_estimator def from_file( cls, learner_path: PathOrStr, logger: Optional[logging.Logger] = None ) -> "Learner": Load a saved ``Learner`` instance from a file path. learner_path : :class:`skll.types.PathOrStr` The path to a saved ``Learner`` instance file. logger : Optional[logging.Logger], default=None A logging object. If ``None`` is passed, get logger from ``__name__``. The ``Learner`` instance loaded from the file. # use the logger that's passed in or if nothing was passed in, # then create a new logger logger = logger if logger else logging.getLogger(__name__) # call the learner loding utility function obj = _load_learner_from_disk(cls, learner_path, logger) assert isinstance(obj, cls) return obj def model_type(self): """Return the model type (i.e., the class).""" return self._model_type def model_kwargs(self) -> Dict[str, Any]: """Return a dictionary of the underlying scikit-learn model's keyword arguments.""" return self._model_kwargs def model(self): """Return the underlying scikit-learn model.""" return self._model def load(self, learner_path: PathOrStr) -> None: Replace the current learner instance with a saved learner. learner_path : :class:`skll.types.PathOrStr` The path to a saved learner object file to load. del self.__dict__ self.__dict__ = Learner.from_file(learner_path).__dict__ def _convert_coef_array_to_feature_names(self, coef: np.ndarray, feature_name_prefix: str = ""): Convert model coefficients array to dictionary. Method used by `model_params` to convert the model coefficients array into a dictionary with feature names as keys and the coefficients as values. coef : numpy.ndarray A numpy array with the model coefficients feature_name_prefix : str An optional string that should be prefixed to the feature name, e.g. the name of the class for LogisticRegression or the class pair for SVCs with linear kernels. Dict[str, Any] A dictionary of labeled weights res = {} vocabulary = {} # if we are doing feature hashing, then we need to make up # the feature names if isinstance(self.feat_vectorizer, FeatureHasher): num_features = len(coef) index_width_in_feature_name = int(floor(log10(num_features))) + 1 feature_names = [] for idx in range(num_features): index_str = str(idx + 1).zfill(index_width_in_feature_name) feature_indices = range(num_features) vocabulary = dict(zip(feature_names, feature_indices)) # otherwise we can just use the DictVectorizer vocabulary # to get the feature names elif isinstance(self.feat_vectorizer, DictVectorizer): vocabulary = self.feat_vectorizer.vocabulary_ # create the final result dictionary with the prefixed # feature names and the corresponding coefficient for feat, idx in vocabulary.items(): if coef[idx]: res[f"{feature_name_prefix}{feat}"] = coef[idx] return res def model_params(self) -> Tuple[Dict[str, Any], Dict[str, Any]]: Return model parameters (i.e., weights). Return the weights for a ``LinearModel`` (e.g., ``Ridge``), regression, and liblinear models. If the model was trained using feature hashing, then names of the form `hashed_feature_XX` are used instead. res : Dict[str, Any] A dictionary of labeled weights. intercept : Dict[str, Any] A dictionary of intercept(s). If the instance does not support model parameters. res = {} intercept = {} if ( isinstance(self._model, LinearModel) or (isinstance(self._model, SVR) and self._model.kernel == "linear") or isinstance(self._model, SGDRegressor) # also includes RescaledRidge, RescaledSVR, RescaledSGDRegressor coef = self.model.coef_ intercept = {"_intercept_": self.model.intercept_} # convert SVR coefficient from a matrix to a 1D array # and convert from sparse to dense also if necessary. # However, this last bit may not be necessary # if we did feature scaling and coef is already dense. if isinstance(self._model, SVR): if sp.issparse(coef): coef = coef.toarray() coef = coef[0] # inverse transform to get indices for before feature selection coef = coef.reshape(1, -1) coef = self.feat_selector.inverse_transform(coef)[0] res = self._convert_coef_array_to_feature_names(coef) elif isinstance(self._model, LinearSVC) or isinstance(self._model, LogisticRegression): label_list = self.label_list # if there are only two labels, scikit-learn will only have one # set of parameters and they will be associated with label 1 (not # 0) if len(self.label_list) == 2: label_list = self.label_list[-1:] if isinstance(self.feat_vectorizer, FeatureHasher): "No feature names are available since " "this model was trained on hashed " for i, label in enumerate(label_list): coef = self.model.coef_[i] coef = coef.reshape(1, -1) coef = self.feat_selector.inverse_transform(coef)[0] label_res = self._convert_coef_array_to_feature_names( coef, feature_name_prefix=f"{label}\t" if isinstance(self.model.intercept_, float): intercept = {"_intercept_": self.model.intercept_} elif self.model.intercept_.any(): intercept = dict(zip(label_list, self.model.intercept_)) # type: ignore # for SVCs with linear kernels, we want to print out the primal # weights - that is, the weights for each feature for each one-vs-one # binary classifier. These are the weights contained in the `coef_` # attribute of the underlying scikit-learn model. This is a matrix that # has the shape [(n_classes)*(n_classes -1)/2, n_features] since there # are C(n_classes, 2) = n_classes*(n_classes-1)/2 one-vs-one classifiers # and each one has weights for each of the features. According to the # scikit-learn user guide and the code for the function `_one_vs_one_coef()` # in `svm/base.py`, the order of the rows is as follows is "0 vs 1", # "0 vs 2", ... "0 vs n", "1 vs 2", "1 vs 3", "1 vs n", ... "n-1 vs n". elif isinstance(self._model, SVC) and self._model.kernel == "linear": intercept = {} if isinstance(self.feat_vectorizer, FeatureHasher): "No feature names are available since " "this model was trained on hashed " for i, class_pair in enumerate(combinations(range(len(self.label_list)), 2)): coef = self.model.coef_[i] coef = coef.toarray() coef = self.feat_selector.inverse_transform(coef)[0] class1 = self.label_list[class_pair[0]] class2 = self.label_list[class_pair[1]] class_pair_res = self._convert_coef_array_to_feature_names( coef, feature_name_prefix=f"{class1}-vs-{class2}\t" intercept[f"{class1}-vs-{class2}"] = self.model.intercept_[i] # not supported raise ValueError( f"{self._model_type.__name__} is not supported " "by model_params with its current settings." return res, intercept def probability(self) -> bool: Return the value of the probability flag. The flag indicages whether the learner return probabilities of all labels (instead of just label with highest probability)? return self._probability def probability(self, value: bool) -> None: Set the probability flag. value : bool Whether learner should return probabilities of all labels. # LinearSVC doesn't support predict_proba self._probability = value if not hasattr(self.model_type, "predict_proba") and value: "Probability was set to True, but " f"{self.model_type.__name__} does not have a " "predict_proba() method." self._probability = False def __getstate__(self) -> Dict[str, Any]: Return attributes that should be pickled. We need this because we cannot pickle loggers. attribute_dict = dict(self.__dict__) if "logger" in attribute_dict: del attribute_dict["logger"] return attribute_dict def save(self, learner_path: PathOrStr) -> None: Save the ``Learner`` instance to a file. learner_path : :class:`skll.types.PathOrStr` The path to save the ``Learner`` instance to. _save_learner_to_disk(self, learner_path) def _create_estimator(self): Create an estimator. The estimator that was created. default_param_grid : Dict[str, Any] The parameter grid for the estimator. If there is no default parameter grid for estimator. estimator = None default_param_grid = None for key_class, grid in _DEFAULT_PARAM_GRIDS.items(): if issubclass(self._model_type, key_class): default_param_grid = grid if default_param_grid is None: raise ValueError(f"{self._model_type.__name__} is not a valid " "learner type.") estimator = self._model_type(**self._model_kwargs) return estimator, default_param_grid def get_feature_names_out(self) -> np.ndarray: Return the names of the actual features used by the estimator. It is possible for some features to get filtered out by the feature selector which means that the vectorizer is no longer the correct source for the feature names. This method takes into account the feature selector and returns the names of the features that were actually selected to be used by the estimator. names : numpy.ndarray of shape (num_features,) Names of features actually used by the estimator. If ``self.feat_vectorizer`` is either ``None`` or a if isinstance(self.feat_vectorizer, DictVectorizer): return self.feat_vectorizer.get_feature_names_out()[self.feat_selector.get_support()] raise ValueError( "Cannot get feature names: `feat_vectorizer` is not " "defined or a `FeatureHasher`." def _check_input_formatting(self, examples: FeatureSet) -> None: Check that the examples are properly formatted. examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to use for training. If labels are strings. If any features are strings. # Make sure the labels for a regression task are not strings. if self.model_type._estimator_type == "regressor" and examples.labels is not None: for label in examples.labels: if isinstance(label, str): raise TypeError( "You are doing regression with string " "labels. Convert them to integers or " # make sure that feature values are not strings; to check this # we need to get a flattened version of the feature array, # whether it is sparse (more likely) or dense if examples.features is not None: if sp.issparse(examples.features): flattened_features = examples.features.data flattened_features = examples.features.flat for val in flattened_features: if isinstance(val, str): raise TypeError( "You have feature values that are strings. Convert them to floats." def _check_max_feature_value(self, feat_array: np.ndarray): Check if the the maximum absolute value of any feature is too large. feat_array : numpy.ndarray A numpy array with features. max_feat_abs = np.max(np.abs(feat_array.data)) if max_feat_abs > 1000.0: "You have a feature with a very large " f"absolute value ({max_feat_abs}). That may " "cause the learning algorithm to crash or " "perform poorly." def _create_label_dict(self, examples: FeatureSet) -> None: Create a dictionary of labels for classification problems. examples : :class:`skll.data.featureset.FeatureSet` The examples to use for training. # we don't need to do this if we have already done it # or for regression models, so simply return. if len(self.label_dict) > 0 or self.model_type._estimator_type == "regressor": # extract list of unique labels if we are doing classification; # note that the output of np.unique() is sorted if examples.labels is not None: self.label_list = np.unique(examples.labels).tolist() # for binary classification, if one label is specified as # the positive class, re-sort the label list to make sure # that it is last in the list; for multi-class classification # raise a warning and set it back to None, since it does not # make any sense anyway if self.pos_label is not None: if len(self.label_list) != 2: "Ignoring value of `pos_label` for " "multi-class classification." self.pos_label = None self.label_list = sorted(self.label_list, key=lambda x: (x == self.pos_label, x)) # Given a list of all labels in the dataset and a list of the # unique labels in the set, convert the first list to an array of # numbers. self.label_dict = {label: i for i, label in enumerate(self.label_list)} def _train_setup(self, examples: FeatureSet) -> None: Set up the feature vectorizer and the scaler. examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to use for training. # Check feature values and labels # Create feature name -> value mapping self.feat_vectorizer = examples.vectorizer # Create a scaler if we weren't passed one and we are asked # to do feature scaling; note that we do not support feature # scaling for `MultinomialNB` learners if not issubclass(self._model_type, MultinomialNB) and self._feature_scaling != "none": scale_with_mean = self._feature_scaling in {"with_mean", "both"} scale_with_std = self._feature_scaling in {"with_std", "both"} self.scaler = StandardScaler( copy=True, with_mean=scale_with_mean, with_std=scale_with_std # Doing this is to prevent any modification of feature values # using a dummy transformation self.scaler = StandardScaler(copy=False, with_mean=False, with_std=False) def train( examples: FeatureSet, param_grid: Optional[Dict[str, Any]] = None, grid_search_folds: Union[int, FoldMapping] = 5, grid_search: bool = True, grid_objective: Optional[str] = None, grid_jobs: Optional[int] = None, shuffle: bool = False, ) -> Tuple[float, Dict[str, Any]]: Train model underlying the learner. Return the grid search score and a dictionary of grid search results. examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to use for training. param_grid : Optional[Dict[str, Any]], default=None The parameter grid to search through for grid search. If ``None``, a default parameter grid will be used. grid_search_folds : Union[int, :class:`skll.types.FoldMapping`], default=5 The number of folds to use when doing the grid search, or a mapping from example IDs to folds. grid_search : bool, default=True Should we do grid search? grid_objective : Optional[str], default=None The name of the objective function to use when doing the grid search. Must be specified if ``grid_search`` is ``True``. grid_jobs : Optional[int], default=None The number of jobs to run in parallel when doing the grid search. If ``None`` or 0, the number of grid search folds will be used. shuffle : bool, default=False Shuffle examples (e.g., for grid search CV.) The best grid search objective function score, or 0 if we're not doing grid search Dict[str, Any] Dictionary of grid search CV results with keys such as "params", "mean_test_score", etc, that are mapped to lists of values associated with each hyperparameter set combination, or None if not doing grid search. If grid_objective is not a valid grid objective or if one is not specified when necessary. If process runs out of memory converting training data to dense. If FeatureHasher is used with MultinomialNB. # get the estimator type since we need it in multiple places below estimator_type = self.model_type._estimator_type # if we are asked to do grid search, check that the grid objective # is specified and that the specified function is valid for the # selected learner if grid_search: if not grid_objective: raise ValueError( "Grid search is on by default. You must " "either specify a grid objective or turn off" " grid search." # get the list of objectives that are acceptable in the current # prediction scenario and raise an exception if the current # objective is not in this allowed list if examples.labels is not None: label_type = examples.labels.dtype.type if estimator_type == "classifier": sorted_unique_labels = np.unique(examples.labels) allowed_objectives = get_acceptable_classification_metrics(sorted_unique_labels) allowed_objectives = get_acceptable_regression_metrics() if grid_objective not in allowed_objectives: raise ValueError( f"'{grid_objective}' is not a valid objective" f" function for {self._model_type.__name__} " "with labels of type " # If we're using a correlation metric for doing binary # classification and probability is set to true, we assume # that the user actually wants the `_with_probabilities` # version of the metric if ( grid_objective in CORRELATION_METRICS and estimator_type == "classifier" and self.probability f'You specified "{grid_objective}" as the ' 'objective with "probability" set to "true".' " If this is a binary classification task " "with integer labels, the probabilities for " "the positive class will be used to compute " "the correlation." old_grid_objective = grid_objective new_grid_objective = f"{grid_objective}_probs" metrics_module = import_module("skll.metrics") metric_func = getattr(metrics_module, "correlation") _CUSTOM_METRICS[new_grid_objective] = make_scorer( metric_func, corr_type=grid_objective, response_method="predict_proba" grid_objective = new_grid_objective # Shuffle so that the folds are random for the inner grid search CV. # If grid search is True but shuffle isn't, shuffle anyway. # You can't shuffle a scipy sparse matrix in place, so unfortunately # we make a copy of everything (and then get rid of the old version) if grid_search or shuffle: if grid_search and not shuffle: "Training data will be shuffled to randomize " "grid search folds. Shuffling may yield " "different results compared to scikit-learn." ids, labels, features = sk_shuffle( examples.ids, examples.labels, examples.features, random_state=123456789 examples = FeatureSet( examples.name, ids, labels=labels, features=features, vectorizer=examples.vectorizer # call train setup to set up the vectorizer, the labeldict, and the # scaler # select features xtrain = self.feat_selector.fit_transform(examples.features) # Convert to dense if necessary if self._use_dense_features: xtrain = xtrain.toarray() except MemoryError: if issubclass(self._model_type, _REQUIRES_DENSE): reason = f"{self._model_type.__name__} does not support " "sparse matrices." reason = f"{self._feature_scaling} feature scaling " "requires a dense matrix." raise MemoryError( "Ran out of memory when converting training" " data to dense. This was required because " if isinstance(self.feat_vectorizer, FeatureHasher) and issubclass( self._model_type, MultinomialNB raise ValueError( "Cannot use FeatureHasher with MultinomialNB " "because MultinomialNB cannot handle negative " "feature values." # Scale features if necessary if self.scaler: xtrain = self.scaler.fit_transform(xtrain) # check whether any feature values are too large # Sampler if self.sampler is not None and issubclass(self._model_type, MultinomialNB): raise ValueError( "Cannot use a sampler with MultinomialNB " "because MultinomialNB cannot handle negative " "feature values." if self.sampler: self.logger.warning("Sampler converts sparse matrix to dense") if isinstance(self.sampler, SkewedChi2Sampler): self.logger.warning("SkewedChi2Sampler uses a dense matrix") if sp.issparse(xtrain): xtrain = xtrain.toarray() xtrain = self.sampler.fit_transform(xtrain) # use label dict transformed version of examples.labels if doing # classification if examples.labels is not None: if estimator_type == "classifier": labels = np.array([self.label_dict[label] for label in examples.labels]) labels = examples.labels # Instantiate an estimator and get the default parameter grid to search estimator, default_param_grid = self._create_estimator() # Use default parameter grid if we weren't passed one # In case the default parameter grid is also empty # then there's no point doing the grid search at all if grid_search and not param_grid: if default_param_grid == {}: "SKLL has no default parameter grid " "available for the " f"{self._model_type.__name__} learner and" " no parameter grids were supplied. Using" " default values instead of grid search." grid_search = False param_grid = default_param_grid # set up a grid searcher if we are asked to if grid_search: # explicitly declare the variable types folds: Union[int, IndexIterator] final_grid_jobs: int # set up grid search folds if isinstance(grid_search_folds, int): grid_search_folds = compute_num_folds_from_example_counts( grid_search_folds, labels, self.model_type._estimator_type, logger=self.logger if not grid_jobs: final_grid_jobs = grid_search_folds final_grid_jobs = min(grid_search_folds, grid_jobs) folds = grid_search_folds elif examples.labels is not None: # use the number of unique fold IDs as the number of grid jobs num_specified_folds = len(set(grid_search_folds.values())) if not grid_jobs: final_grid_jobs = num_specified_folds final_grid_jobs = min(num_specified_folds, grid_jobs) # Only retain IDs within folds if they're in grid_search_folds dummy_label = next(iter(grid_search_folds.values())) fold_groups = [ grid_search_folds.get(curr_id, dummy_label) for curr_id in examples.ids kfold = FilteredLeaveOneGroupOut( grid_search_folds, examples.ids, logger=self.logger folds = kfold.split(examples.features, examples.labels, fold_groups) # limit the number of grid_jobs to be no higher than five or the # number of cores for the machine, whichever is lower final_grid_jobs = min(final_grid_jobs, cpu_count(), MAX_CONCURRENT_PROCESSES) # look up the scorer function in SKLL's custom metrics if the metric # is not provided by scikit-learn itself assert grid_objective is not None final_grid_objective = ( if grid_objective in get_scorer_names() else _CUSTOM_METRICS[grid_objective] # we set `error_score` to "raise" since we want scikit-learn to explicitly # raise an exception if the estimator fails to fit for any reason grid_searcher = GridSearchCV( # run the grid search for hyperparameters grid_searcher.fit(xtrain, labels) self._model = grid_searcher.best_estimator_ grid_score = grid_searcher.best_score_ grid_cv_results = grid_searcher.cv_results_ self._model = estimator.fit(xtrain, labels) grid_score = 0.0 grid_cv_results = None # restore the original of the grid objective if we # had futzed with it to handle correlation # objectives and probability outputs if "old_grid_objective" in locals(): grid_objective = old_grid_objective del _CUSTOM_METRICS[new_grid_objective] # store a scikit-learn Pipeline in the `pipeline` attribute # composed of a copy of the vectorizer, the selector, # the sampler, the scaler, and the estimator. This pipeline # attribute can then be used by someone who wants to take a SKLL # model and then do further analysis using scikit-learn # We are using copies since the user might want to play # around with the pipeline and we want to let her do that # but keep the SKLL model the same if self._store_pipeline: # initialize the list that will hold the pipeline steps pipeline_steps: List[Tuple[str, Any]] = [] # start with the vectorizer # note that sometimes we may have to end up using dense # features or if we were using a SkewedChi2Sampler which # requires dense inputs. If this turns out to be the case # then let's turn off `sparse` for the vectorizer copy # to be stored in the pipeline as well so that it works # on the scikit-learn in the same way. However, note that # this solution will only work for DictVectorizers. For # feature hashers, we manually convert things to dense # when we need in SKLL. Therefore, to handle this case, # we basically need to create a custom intermediate # pipeline stage that will convert the features to dense # once the hashing is done since this is what happens # in SKLL. vectorizer_copy = copy.deepcopy(self.feat_vectorizer) if self._use_dense_features or isinstance(self.sampler, SkewedChi2Sampler): if isinstance(vectorizer_copy, DictVectorizer): "The `sparse` attribute of the DictVectorizer stage " "will be set to `False` in the pipeline since dense " "features are required when centering." vectorizer_copy.sparse = False "A custom pipeline stage (`Densifier`) will be " "inserted in the pipeline since the current SKLL " "configuration requires dense features." densifier = Densifier() pipeline_steps.append(("densifier", densifier)) pipeline_steps.insert(0, ("vectorizer", vectorizer_copy)) # next add the selector pipeline_steps.append(("selector", copy.deepcopy(self.feat_selector))) # next, include the scaler pipeline_steps.append(("scaler", copy.deepcopy(self.scaler))) # next, include the sampler, if there is one if self.sampler: pipeline_steps.append(("sampler", copy.deepcopy(self.sampler))) # finish with the estimator pipeline_steps.append(("estimator", copy.deepcopy(self.model))) self.pipeline = Pipeline(steps=pipeline_steps) return grid_score, grid_cv_results def evaluate( examples: FeatureSet, prediction_prefix: Optional[str] = None, append: bool = False, grid_objective: Optional[str] = None, output_metrics: List[str] = [], ) -> EvaluateTaskResults: Evaluate the learner on a given dev or test ``FeatureSet``. examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to evaluate the performance of the model on. prediction_prefix : Optional[str], default=None If not ``None``, predictions will also be written out to a file with the name ``<prediction_prefix>_predictions.tsv``. Note that the prefix can also contain a path. append : bool, default=False Should we append the current predictions to the file if it exists? grid_objective : Optional[str], default=None The objective function that was used when doing the grid search. output_metrics : List[str], default=[] List of additional metric names to compute in addition to grid A 6-tuple containing the confusion matrix, the overall accuracy, the per-label PRFs, the model parameters, the grid search objective function score, and the additional evaluation metrics, if any. For regressors, the first two elements in the tuple are ``None``. # are we in a regressor or a classifier estimator_type = self.model_type._estimator_type # make the prediction on the test data; note that these # are either class indices or class probabilities yhat = self.predict( examples, prediction_prefix=prediction_prefix, append=append, class_labels=False # for classifiers, convert class labels indices for consistency # but account for any unseen labels in the test set that may not # have occurred in the training data at all; then get acceptable # metrics based on the type of labels we have if examples.labels is not None: if estimator_type == "classifier": sorted_unique_labels = np.unique(examples.labels) test_label_list = sorted_unique_labels.tolist() train_and_test_label_dict = add_unseen_labels(self.label_dict, test_label_list) ytest = np.array([train_and_test_label_dict[label] for label in examples.labels]) acceptable_metrics = get_acceptable_classification_metrics(sorted_unique_labels) # for regressors we do not need to do anything special to the labels train_and_test_label_dict = None ytest = examples.labels acceptable_metrics = get_acceptable_regression_metrics() # check that all of the output metrics are acceptable unacceptable_metrics = set(output_metrics).difference(acceptable_metrics) if unacceptable_metrics and examples.labels is not None: label_type = examples.labels.dtype.type raise ValueError( "The following metrics are not valid " f"for this learner ({self._model_type.__name__})" " with these labels of type " f"{label_type.__name__}: " # get the values of the evaluation metrics ) = compute_evaluation_metrics( # add in the model parameters and return model_params: Dict[str, Any] = self.model.get_params() res = (conf_matrix, accuracy, result_dict, model_params, objective_score, metric_scores) return res def predict( examples: FeatureSet, prediction_prefix: Optional[str] = None, append: bool = False, class_labels: bool = True, ) -> np.ndarray: Generate predictions for the given examples using the learner model. Return, and optionally, write out predictions on a given ``FeatureSet`` to a file. For regressors, the returned and written-out predictions are identical. However, for classifiers: - if ``class_labels`` is ``True``, class labels are returned as well as written out. - if ``class_labels`` is ``False`` and the classifier is probabilistic (i.e., ``self..probability`` is ``True``), class probabilities are returned as well as written out. - if ``class_labels`` is ``False`` and the classifier is non-probabilistic (i.e., ``self..probability`` is ``False``), class indices are returned and class labels are written out. TL;DR: for regressors, just ignore ``class_labels``. For classfiers, set it to ``True`` to get class labels and ``False`` to get class examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to predict labels for. prediction_prefix : Optional[str], default=None If not ``None``, predictions will also be written out to a file with the name ``<prediction_prefix>_predictions.tsv``. For classifiers, the predictions written out are class labels unless the learner is probabilistic AND ``class_labels`` is set to ``False``. Note that this prefix can also contain a path. append : bool, default=False Should we append the current predictions to the file if it exists? class_labels : bool, default=True If ``False``, return either the class probabilities (probabilistic classifiers) or the class indices (non-probabilistic ones). If ``True``, return the class labels no matter what. Ignored for The predictions returned by the ``Learner`` instance. If invalid predictions are being returned or written out. If process runs out of memory when converting to dense. If there is a mismatch between the learner vectorizer and the test set vectorizer. example_ids = examples.ids # Need to do some transformations so the features are in the right # columns for the test set. Obviously a bit hacky, but storing things # in sparse matrices saves memory over our old list of dicts approach. # We also need to think about the various combinations of the model # vectorizer and the vectorizer for the set for which we want to make # predictions: # 1. Both vectorizers are DictVectorizers. If they use different sets # of features, we raise a warning and transform the features of the # prediction set from its space to the trained model space. # 2. Both vectorizers are FeatureHashers. If they use different number # of feature bins, we should just raise an error since there's no # inverse_transform() available for a FeatureHasher - the hash function # is not reversible. # 3. The model vectorizer is a FeatureHasher but the prediction feature # set vectorizer is a DictVectorizer. We should be able to handle this # case, since we can just call inverse_transform() on the DictVectorizer # and then transform() on the FeatureHasher? # 4. The model vectorizer is a DictVectorizer but the prediction feature # set vectorizer is a FeatureHasher. Again, we should raise an error here # since there's no inverse available for the hasher. # 1. both are DictVectorizers if isinstance(self.feat_vectorizer, DictVectorizer) and isinstance( examples.vectorizer, DictVectorizer if set(self.feat_vectorizer.feature_names_) != set(examples.vectorizer.feature_names_): "There is mismatch between the training model features " "and the data passed to predict. The prediction features " "will be transformed to the trained model space." if self.feat_vectorizer == examples.vectorizer: xtest = examples.features xtest = self.feat_vectorizer.transform( # 2. both are FeatureHashers elif isinstance(self.feat_vectorizer, FeatureHasher) and isinstance( examples.vectorizer, FeatureHasher self_feat_vec_tuple = ( example_feat_vec_tuple = ( if self_feat_vec_tuple == example_feat_vec_tuple: xtest = examples.features "There is mismatch between the FeatureHasher " "configuration for the training data and the " "configuration for the data passed to predict" raise RuntimeError("Mismatched hasher configurations") # 3. model is a FeatureHasher and test set is a DictVectorizer elif isinstance(self.feat_vectorizer, FeatureHasher) and isinstance( examples.vectorizer, DictVectorizer xtest = self.feat_vectorizer.transform( # 4. model is a DictVectorizer and test set is a FeatureHasher elif isinstance(self.feat_vectorizer, DictVectorizer) and isinstance( examples.vectorizer, FeatureHasher "Cannot predict with a model using a " "DictVectorizer on data that uses a " raise RuntimeError("Cannot use FeatureHasher for data") # filter features based on those selected from training set xtest = self.feat_selector.transform(xtest) # Convert to dense if necessary if self._use_dense_features and not isinstance(xtest, np.ndarray): xtest = xtest.toarray() except MemoryError: if issubclass(self._model_type, _REQUIRES_DENSE): reason = f"{self._model_type.__name__} does not support " "sparse matrices." reason = f"{self._feature_scaling} feature scaling " "requires a dense matrix." raise MemoryError( "Ran out of memory when converting test " "data to dense. This was required because " # Scale xtest if necessary if not issubclass(self._model_type, MultinomialNB) and self.scaler: xtest = self.scaler.transform(xtest) # Sampler if self.sampler: self.logger.warning("Sampler converts sparse matrix to dense") if isinstance(self.sampler, SkewedChi2Sampler): self.logger.warning("SkewedChi2Sampler uses a dense matrix") if sp.issparse(xtest): xtest = xtest.toarray() xtest = self.sampler.transform(xtest) # get the various prediction from this learner on these features prediction_dict = get_predictions(self, xtest) # for classifiers ... if self.model_type._estimator_type == "classifier": # return and write class labels if they were explicitly asked for if class_labels: to_return = to_write = prediction_dict["labels"] # return and write probabilities if self.probability: to_return = to_write = prediction_dict["probabilities"] # return class indices and write labels to_return = prediction_dict["raw"] to_write = prediction_dict["labels"] # for regressors, it's really simple to_write = to_return = prediction_dict["raw"] # check that our predictions to write and return # are not invalid; this should NEVER happen assert to_return is not None assert to_write is not None except AssertionError: raise AssertionError("invalid predictions generated") # write out the predictions if we are asked to if prediction_prefix is not None: return to_return def cross_validate( examples: FeatureSet, stratified: bool = True, cv_folds: Union[int, FoldMapping] = 10, cv_seed: int = 123456789, grid_search: bool = True, grid_search_folds: Union[int, FoldMapping] = 5, grid_jobs: Optional[int] = None, grid_objective: Optional[str] = None, output_metrics: List[str] = [], prediction_prefix: Optional[str] = None, param_grid: Optional[Dict[str, Any]] = None, shuffle: bool = False, save_cv_folds: bool = True, save_cv_models: bool = False, use_custom_folds_for_grid_search: bool = True, ) -> CrossValidateTaskResults: Cross-validate the learner on the given training examples. examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to cross-validate learner performance on. stratified : bool, default=True Should we stratify the folds to ensure an even distribution of labels for each fold? cv_folds : Union[int, :class:`skll.types.FoldMapping`], default=10 The number of folds to use for cross-validation, or a mapping from example IDs to folds. cv_seed: int, default=123456789 The value for seeding the random number generator used to create the random folds. Note that this seed is *only* used if either ``grid_search`` or ``shuffle`` are set to ``True``. grid_search : bool, default=True Should we do grid search when training each fold? Note: This will make this take *much* longer. grid_search_folds : Union[int, :class:`skll.types.FoldMapping`], default=5 The number of folds to use when doing the grid search, or a mapping from example IDs to folds. grid_jobs : Optional[int], default=None The number of jobs to run in parallel when doing the grid search. If ``None`` or 0, the number of grid search folds will be used. grid_objective : Optional[str], default=None The name of the objective function to use when doing the grid search. Must be specified if ``grid_search`` is ``True``. output_metrics : List[str], default = [] List of additional metric names to compute in addition to the metric used for grid search. prediction_prefix : Optional[str], default=None If saving the predictions, this is the prefix that will be used for the filename. It will be followed by ``"_predictions.tsv"`` param_grid : Optional[Dict[str, Any]], default=None The parameter grid to search. shuffle : bool, default=False Shuffle examples before splitting into folds for CV. save_cv_folds : bool, default=True Whether to save the cv fold ids or not? save_cv_models : bool, default=False Whether to save the cv models or not? use_custom_folds_for_grid_search : bool, default=True If ``cv_folds`` is a custom dictionary, but ``grid_search_folds`` is not, perhaps due to user oversight, should the same custom dictionary automatically be used for the inner grid-search A 5-tuple containing the following: List[:class:`skll.types.EvaluateTaskResults`]: the confusion matrix, overall accuracy, per-label PRFs, model parameters, objective function score, and evaluation metrics (if any) for each fold. List[float]: the grid search scores for each fold. List[Dict[str, Any]]: list of dictionaries of grid search CV results, one per fold, with keys such as "params", "mean_test_score", etc, that are mapped to lists of values associated with each hyperparameter set combination. Optional[:class:`skll.types.FoldMapping`]: dictionary containing the test-fold number for each id if ``save_cv_folds`` is ``True``, otherwise ``None``. Optional[List[:class:`skll.learner.Learner`]]: list of learners, one for each fold if ``save_cv_models`` is ``True``, otherwise ``None``. If classification labels are not properly encoded as strings. If ``grid_search`` is ``True`` but ``grid_objective`` is ``None``. # Seed the random number generator so that randomized # algorithms are replicable random_state = np.random.RandomState(cv_seed) # We need to check whether the labels in the featureset are labels # or continuous values. If it's the latter, we need to raise an # an exception since the stratified splitting in sklearn does not # work with continuous labels. Note that although using random folds # _will_ work, we want to raise an error in general since it's better # to encode the labels as strings anyway for classification problems. if self.model_type._estimator_type == "classifier" and type_of_target( ) not in ["binary", "multiclass"]: raise ValueError( "Floating point labels must be encoded as " "strings for cross-validation." # check that we have an objective since grid search is on by default # Note that `train()` would raise this error anyway later but it's # better to raise this early on so rather than after a whole bunch of # stuff has happened if grid_search: if not grid_objective: raise ValueError( "Grid search is on by default. You must " "either specify a grid objective or turn off" " grid search." # Shuffle so that the folds are random for the inner grid search CV. # If grid search is True but shuffle isn't, shuffle anyway. # You can't shuffle a scipy sparse matrix in place, so unfortunately # we make a copy of everything (and then get rid of the old version) if grid_search or shuffle: if grid_search and not shuffle: "Training data will be shuffled to " "randomize grid search folds. Shuffling " "may yield different results compared to " ids, labels, features = sk_shuffle( examples.ids, examples.labels, examples.features, random_state=random_state examples = FeatureSet( examples.name, ids, labels=labels, features=features, vectorizer=examples.vectorizer # call train setup # Set up the cross-validation iterator. kfold, cv_groups = setup_cv_fold_iterator( # When using custom CV folds (a dictionary), if we are planning to do # grid search, set the grid search folds to be the same as the custom # cv folds unless a flag is set that explicitly tells us not to. # Note that this should only happen when we are using the API; otherwise # the configparser should take care of this even before this method is called if isinstance(cv_folds, dict): if grid_search and use_custom_folds_for_grid_search and grid_search_folds != cv_folds: "The specified custom folds will be used " "for the inner grid search." grid_search_folds = cv_folds # handle each fold separately & accumulate the predictions and results results = [] grid_search_scores = [] grid_search_cv_results_dicts = [] append_predictions = False models: Optional[List["Learner"]] = [] if save_cv_models else None skll_fold_ids: Optional[FoldMapping] = {} if save_cv_folds else None assert examples.labels is not None for fold_num, (train_indices, test_indices) in enumerate( kfold.split(examples.features, examples.labels, cv_groups) # Train model assert examples.labels is not None and examples.features is not None self._model = None # prevent feature vectorizer from being reset. train_set = FeatureSet( (grid_search_score, grid_search_cv_results) = self.train( if save_cv_models and models is not None: # note: there is no need to shuffle again within each fold, # regardless of what the shuffle keyword argument is set to. # Evaluate model test_tuple = FeatureSet( append_predictions = True # save the fold number for each test ID if we were asked to if save_cv_folds and skll_fold_ids is not None: for index in test_indices: skll_fold_ids[examples.ids[index]] = str(fold_num) # return list of results/outputs for all folds return (results, grid_search_scores, grid_search_cv_results_dicts, skll_fold_ids, models) def learning_curve( examples: FeatureSet, metric: str, cv_folds: Union[int, FoldMapping] = 10, train_sizes: LearningCurveSizes = np.linspace(0.1, 1.0, 5), override_minimum: bool = False, ) -> Tuple[List[float], List[float], List[float], List[int]]: Generate learning curves for the learner using the examples. The learning curves are generated on the training examples via cross-validation. Adapted from the scikit-learn code for learning curve generation (cf.``sklearn.model_selection.learning_curve``). examples : :class:`skll.data.featureset.FeatureSet` The ``FeatureSet`` instance to generate the learning curve on. cv_folds : Union[int, :class:`skll.types.FoldMapping`], default=10 The number of folds to use for cross-validation, or a mapping from example IDs to folds. metric : str The name of the metric function to use when computing the train and test scores for the learning curve. train_sizes : :class:`skll.types.LearningCurveSizes`, default= :func:`numpy.linspace` with start=0.1, stop=1.0, num=5 Relative or absolute numbers of training examples that will be used to generate the learning curve. If the type is float, it is regarded as a fraction of the maximum size of the training set (that is determined by the selected validation method), i.e. it has to be within (0, 1]. Otherwise it is interpreted as absolute sizes of the training sets. Note that for classification the number of samples usually have to be big enough to contain at least one sample from each class. override_minimum : bool, default=False Learning curves can be unreliable for very small sizes esp. for > 2 labels. If this option is set to ``True``, the learning curve would be generated even if the number of example is less 500 along with a warning. If ``False``, the curve is not generated and an exception is raised instead. train_scores : List[float] The scores for the training set. test_scores : List[float] The scores on the test set. fit_times : List[float] The average times taken to fit each model. num_examples : List[int] The numbers of training examples used to generate the curve. If the number of examples is less than 500. # check that the number of training examples is more than the minimum # needed for generating a reliable learning curve if len(examples) < 500: if not override_minimum: raise ValueError( f"Number of training examples provided ({len(examples)}) " "is less than the minimum needed (500) for the " "learning curve to be reliable." "Learning curves can be unreliable for examples fewer than " f"500. You provided {len(examples)}." # raise a warning if we are using a probabilistic classifier # since that means we cannot use the predictions directly if self.probability: "Since ``probability`` is set, the most likely " "class will be computed via an argmax before " "computing the curve." # Call train setup before since we need to train # the learner eventually # set up the CV split iterator over the train/test featuresets # which also returns the maximum number of training examples (featureset_iter, n_max_training_samples) = setup_cv_split_iterator(cv_folds, examples) # Get the `_translate_train_sizes()` function from scikit-learn # since we need it to get the right list of sizes after cross-validation _module = import_module("sklearn.model_selection._validation") _translate_train_sizes = getattr(_module, "_translate_train_sizes") train_sizes_abs = _translate_train_sizes(train_sizes, n_max_training_samples) n_unique_ticks = train_sizes_abs.shape[0] # Limit the number of parallel jobs for this # to be no higher than five or the number of cores # for the machine, whichever is lower n_jobs = min(cpu_count(), MAX_CONCURRENT_PROCESSES) # Run jobs in parallel that train the model on each subset # of the training data and compute train and test scores parallel = joblib.Parallel(n_jobs=n_jobs, pre_dispatch=n_jobs) out = parallel( joblib.delayed(train_and_score)(self, train_fs[:n_train_samples], test_fs, metric) for train_fs, test_fs in featureset_iter for n_train_samples in train_sizes_abs # Reshape the outputs out = np.array(out) n_cv_folds = out.shape[0] // n_unique_ticks out = out.reshape(n_cv_folds, n_unique_ticks, 3) out = np.asarray(out).transpose((2, 1, 0)) return list(out[0]), list(out[1]), list(out[2]), list(train_sizes_abs) # Rescaled regressors class RescaledBayesianRidge(BayesianRidge): # noqa: D101 class RescaledAdaBoostRegressor(AdaBoostRegressor): # noqa: D101 class RescaledDecisionTreeRegressor(DecisionTreeRegressor): # noqa: D101 class RescaledElasticNet(ElasticNet): # noqa: D101 class RescaledGradientBoostingRegressor(GradientBoostingRegressor): # noqa: D101 class RescaledHuberRegressor(HuberRegressor): # noqa: D101 class RescaledKNeighborsRegressor(KNeighborsRegressor): # noqa: D101 class RescaledLars(Lars): # noqa: D101 class RescaledLasso(Lasso): # noqa: D101 class RescaledLinearRegression(LinearRegression): # noqa: D101 class RescaledLinearSVR(LinearSVR): # noqa: D101 class RescaledMLPRegressor(MLPRegressor): # noqa: D101 class RescaledRandomForestRegressor(RandomForestRegressor): # noqa: D101 class RescaledRANSACRegressor(RANSACRegressor): # noqa: D101 class RescaledRidge(Ridge): # noqa: D101 class RescaledSGDRegressor(SGDRegressor): # noqa: D101 class RescaledSVR(SVR): # noqa: D101 class RescaledTheilSenRegressor(TheilSenRegressor): # noqa: D101
{"url":"https://scikit-learn-laboratory.readthedocs.io/en/latest/_modules/skll/learner.html","timestamp":"2024-11-15T02:27:34Z","content_type":"text/html","content_length":"240785","record_id":"<urn:uuid:780c881c-7e6d-41cf-ba0a-2a9b72112009>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00529.warc.gz"}
5 Useful Statistics Data Scientists Need to KnowData Science News 5 Useful Statistics Data Scientists Need to Know Data Science can be practically defined as the process by which we get extra information from data. When doing Data Science, what we’re really trying to do is explain what all of the data actually means in the real-world, beyond the numbers. To extract the information embedded in complex datasets, Data Scientists employ a number of tools and techniques including data exploration, visualisation, and modelling. One very important class of mathematical technique often used in data exploration is statistics. In a practical sense, statistics allows us to define concrete mathematical summaries of our data. Rather than trying to describe every single data point, we can use statistics to describe some of its properties. And that’s often enough for us to extract some kind of information about the structure and make-up of the data. Sometimes, when people hear the word “statistics” they think of something overly complicated. Yes, it can get a bit abstract, but we don’t always need to resort to the complex theories to get some kind of value out of statistical techniques. The most basic parts of statistics can often be of the most practical use in Data Science. Today, we’re going to look at 5 useful Statistics for Data Science. These won’t be crazy abstract concepts but rather simple, applicable techniques that go a long way. Let’s get started! The central tendency of a dataset or feature variable is the center or typical value of the set. The idea is that there may be one single value that can best describe (to an extent) our dataset. For example, imagine if you had a normal distribution centered at the x-y position of (100, 100). Then the point (100, 100) is the central tendency since, out of all the points to choose from, it is the one that provides the best summary of the data. For Data Science, we can use central tendency measures to get a quick and simple idea of how our dataset looks as a whole. The “center” of our data can be a very valuable piece of information, telling us how exactly the dataset is biased, since whichever value the data revolves around is essentially a bias. There are 2 common ways of mathematically selecting a central tendency. MeanThe Mean value of a dataset is the average value i. a number around which a whole data is spread out. All values used in calculating the average are weighted equally when defining the Mean. For example, let’s calculate the Mean of the following 5 numbers:(3 + 64 + 187 + 12 + 52) / 5 = 63. 6The mean is great for computing the actual mathematical average. It’s also very fast to compute with Python libraries like NumpyMedianMedian is the middle value of the dataset i. e if we sort the data from smallest to biggest (or biggest to smallest) and then take the value in the middle of the set: that’s the Median. Let’s again compute the Median for that same set of 5 numbers:[3, 12, 52, 64, 187] → 52The Median value is quite different from the Mean value of 63. Neither of them are right or wrong, but we can pick one based on our situation and goals. Computing the Median requires sorting the data—this won’t be practical if your dataset is large. On the other hand the Median will be more robust to outliers than the Mean, since the Mean will be pulled one way or the other if there are some very high magnitude outlier values. The mean and median can be calculated with simple numpy one-liners: Under the umbrella of Statistics, the spread of the data is the extent to which it is squeezed towards a single value or more spread out across a wider range. Take a look at the plots of the Gaussian probability distributions below—imagine that these are probability distributions describing an real-world dataset. The blue curve has the smallest spread value since most of its data points all fall within a fairly narrow range. The red curve has the largest spread value since most of the data points take up a much wider range. The legend shows the standard deviation values of these curves, explained in the next section. Standard DeviationThe standard deviation is the most common way of quantifying the spread of a data. Calculating it involves 5 steps: A larger value means that our data is more “spread out” from the mean. A smaller value means that our data is more concentrated around the mean. Easily calculate the standard deviation in Numpy like so: We can further describe the position of each data point throughout the range using percentiles. The percentile describes the exact position of the data point in terms of how high or low it is positioned in the range of values. More formally, the pth percentile is the value in the dataset at which it can be split into two parts. The lower part contains p percent of the data i. e the pth percentile. For example, consider the set of 11 numbers below: The number 15 is the 70th percentile since when we split the dataset into 2 parts at the number 15, 70% of the remaining data is less than 15. Percentiles combined with the mean and standard deviation can give us a good idea of where a specific point lies within the spread / range of our data. If it’s an outlier, then its percentile will be close to the ends—less than 5% or great than 95%. On the other hand, if the percentile is calculated as close to 50 then we know that it’s close our our central tendency. The 50th percentile for an array can be calculated in Numpy like so: The Skewness of data measures its asymmetry. A positive value for the skewness means that values are concentrated on the left of the center of the data points; negative skewness means values are concentrated on the right of the center of the data points. The graph below provides a nice illustration. We can calculate the skewness with the following equation: Skewness will give us an idea of how close our data’s distribution is to being Gaussian. The greater the magnitude of the skewness, the further from a Gaussian distribution our dataset is. This is important because if we have a rough idea of our data’s distribution, we can tailor whatever ML model we are going to train for that particular distribution. Moreover, not all ML modelling techniques will be effective on data which is not Gaussian. Once again, stats gives us insightful information before we jump right into modelling!Here’s how we can compute the Skewness in Scipy code: CovarianceThe covariance of two features variables measures how “related” they are. If the two variables have a positive covariance, then when one variable increases so does the other; with a negative covariance the values of the feature variables will change in opposite directions. CorrelationCorrelation is simply the normalised (scaled) covariance where we divide by the product of the standard deviation of the two variables being analysed. This effectively forces the range of correlation to always be between -1. 0 and 1. If the correlation of two feature variables is 1. 0, then the variables have a perfect positive correlation. This means that if one variable changes by a given amount, the second moves proportionally in the same direction. A positive correlation coefficient less than one indicates a less than perfect positive correlation, with the strength of the correlation growing as the number approaches one. The same idea works for negative values of correlation, just with the values of the feature variables changing in opposite directions rather than changing in the same direction. Knowing about correlation is incredibly useful for techniques like Principal Component Analysis (PCA) used for Dimensionality Reduction. We start by computing a correlation matrix—if there are 2 or more variables which are highly correlated, then they are effectively redundant in explaining our data and some of them can be dropped to reduce the complexity. Bio: George Seif is a Certified Nerd and AI / Machine Learning Engineer. Related: var disqus_shortname = kdnuggets; (function() { var dsq = document. createElement(script); dsq. type = text/javascript; dsq. async = true; dsq. src = https://kdnuggets. js; (document. getElementsByTagName(head)[0] || document. appendChild(dsq); })();. You must be logged in to post a comment.
{"url":"https://datascience.sharerecipe.net/2019/06/15/5-useful-statistics-data-scientists-need-to-know/","timestamp":"2024-11-12T18:23:24Z","content_type":"text/html","content_length":"38994","record_id":"<urn:uuid:7a649a26-eae1-456b-8962-13b42f1bf775>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00691.warc.gz"}
Identities for minors of the Laplacian, resistance and distance matrices Bapat, R. B. ; Sivasubramanian, Sivaramakrishnan (2011) Identities for minors of the Laplacian, resistance and distance matrices Linear Algebra and its Applications, 435 (6). pp. 1479-1489. ISSN Full text not available from this repository. Official URL: http://www.sciencedirect.com/science/article/pii/S... Related URL: http://dx.doi.org/10.1016/j.laa.2011.03.028 It is shown that if L and D are the Laplacian and the distance matrix of a tree respectively, then any minor of the Laplacian equals the sum of the cofactors of the complementary submatrix of D, up to sign and a power of 2. An analogous, more general result is proved for the Laplacian and the resistance matrix of any graph. A similar identity is proved for graphs in which each block is a complete graph on r vertices, and for q-analogues of such matrices of a tree. Our main tool is an identity for the minors of a matrix and its inverse. Item Type: Article Source: Copyright of this article belongs to Elsevier Science. Keywords: Laplacian; Distance Matrix; Resistance Matrix; Determinant; Partitioned Matrix; q-analogue ID Code: 81599 Deposited On: 07 Feb 2012 05:14 Last Modified: 07 Feb 2012 05:14 Repository Staff Only: item control page
{"url":"https://repository.ias.ac.in/81599/","timestamp":"2024-11-11T09:32:03Z","content_type":"application/xhtml+xml","content_length":"16724","record_id":"<urn:uuid:e772f4bf-6c3a-4d39-a493-b5cc0c33b785>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00668.warc.gz"}
Triangular Shawl Calculator Here’s a little way to calculate the maximum number of rows you can work on a shawl (top down shawls only). You need to have knitted at least 20% of your yarn to do get an accurate answer, though it will return a result with more than 10% yarn used. Triangular shawl This calculation will work for any shawl pattern that starts at the top and has a consistent number of increases in each row. (ie. 4 increases on every right side row and 2 sts on every wrong side row, or 4increases every other row.) (Examples of this type of shawl are: ‘Swallowtail’, ‘Ishbel’, ‘Aeolian’, ‘Kiri’, ‘Traveling Woman’, My ‘Dew Drops’ & ‘Danish Ripple’ Shawls) (I know these shawls have slightly different shapes, but trust me the maths works for all of them.) You need to know: Number of Constant stitches in each row: (eg. Swallowtail: 5, Ishbel: 7, Aeolian: 5 or 7.) (usually edge stitches on each side + centre stitch) (If you don’t know this don’t worry too much as it doesn’t make a huge difference to the result.) Total yarn weight: (This is how much yarn you have available for the project, in grams is best.) Used yarn weight so far: (This is the total weight of yarn minus what you have left un-knitted.) Rows worked so far: (with many shawl you can count the number of holes running up the middle next to the centre st and x2). Maximum Number of Rows: This result is the number of rows you can work with the yarn you have available, it allows you 3rows worth of yarn to cast off which is sufficient for a very stretchy bind off. If you pattern has a lot of increases in the final few rows, ie lots of yarn over’s for a pointier edge you will need to subtract a few more rows to allow for that. If your pattern tells you to cast off with the yarn held double you will need to subtract a few more rows to allow for this. This calculator requires javascript to be enabled. I hope you find this page useful, I provide it free for everyone, please link to it here. Contact me through Raverly, or email me if you have any questions. P.s. Please don’t blame me if the answer doesn’t work out for you, I provide this script working to the best of my knowledge, free to everyone. (c) Bex Hopkins 2010, please do not attempt to steal this script. If you would like to know how this is calculated please contact me. 4 thoughts on “Triangular Shawl Calculator” 1. Where on-line can I purchase a copy of the pattern for the cardigan that is shown on Ravelry with the Equality Stripe Hat? I am not able to locate it on Ravelry. I have tried googling… no luck. 2. Thank you so much!!!!! 3. I’m not getting the calculator to input a max # of rows when I put in all the variables. I’ve tried on several devices. Can you help me? 4. All sorted now. Sometimes the wordpress updates break it. Leave a Comment
{"url":"https://knit.colourfuldesigns.co.uk/?p=157","timestamp":"2024-11-04T16:41:54Z","content_type":"text/html","content_length":"44207","record_id":"<urn:uuid:ef5f4948-74a1-455a-b888-26bcda36aa87>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00672.warc.gz"}
amp; POJ 1511 Invitation Cards (SPFA template + reverse graph creation) A Free Trial That Lets You Build Big! Start building with 50+ products and up to 12 months usage for Elastic Compute Service • Sales Support 1 on 1 presale consultation • After-Sales Support 24/7 Technical Support 6 Free Tickets per Quarter Faster Response • Alibaba Cloud offers highly flexible support services tailored to meet your exact needs.
{"url":"https://topic.alibabacloud.com/a/hdu-1535-amp-amp-poj-1511-invitation-cards-spfa-template--reverse-graph-creation_1_31_32688512.html","timestamp":"2024-11-14T05:40:30Z","content_type":"text/html","content_length":"84721","record_id":"<urn:uuid:e3078710-fcc0-4f75-9a5c-08870bacd9c1>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00428.warc.gz"}
45+ Must-Know Shapes with Pictures Shapes are all around us, forming the building blocks of our world. We see them in everyday objects, nature, and art. Yet, many struggle to name or identify these basic geometric forms. We understand this challenge and want to help. This blog promises to guide you through 45+ essential shapes with clear pictures and simple explanations. We’ll explore 2D and 3D shapes, from the familiar circles and squares to more complex forms like dodecahedrons. By the end of this post, you’ll have a solid grasp of shape names and be able to spot them in your daily life. Let’s begin our shape-filled journey together! List of 2D Shapes 1. Circle A curved shape where every point on its edge is the same distance from its center. Real-life examples: The sun, a pizza, a wheel, or a clock face. 2. Square A flat shape with four equal sides and four right angles (90 degrees each). Real-life examples: A chessboard, a slice of bread, or a sticky note. 3. Rectangle A four-sided shape with two pairs of parallel sides and four right angles. Real-life examples: A door, a television screen, or a book cover. 4. Triangle A shape with three straight sides and three angles. Real-life examples: A slice of pizza, a yield sign, or a musical triangle instrument. 5. Oval An egg-shaped curve that’s longer than it is wide. Real-life examples: An egg, certain mirrors, or some stadium tracks. 6. Hexagon A six-sided shape with equal sides and six angles. Real-life examples: A honeycomb cell, some nuts and bolts, or a benzene molecule. 7. Pentagon A five-sided shape with equal sides and five angles. Real-Life Examples: The U.S. Pentagon building, some home plate designs in baseball, or certain flower petals. 8. Octagon An eight-sided shape with equal sides and eight angles. Real-life examples: A stop sign, some umbrellas when open, or certain gemstone cuts. 9. Rhombus A four-sided shape with all sides equal in length, opposite sides parallel, and opposite angles equal. Real-life examples: Some kites, certain floor tiles, or a baseball diamond. A four-sided shape with at least one pair of parallel sides. Real-life examples: Lampshades, certain table tops, and some roof designs. 10. Parallelogram A four-sided shape with opposite sides parallel and equal in length. Real-life examples: Chocolate bar shapes, certain road signs, or some kitchen countertops. 11. Kite A four-sided shape with two pairs of adjacent sides equal in length. Real-life examples: A traditional kite, some gemstone cuts, or certain leaf shapes. 12. Star A shape with multiple points radiating from a center. Real-life examples: A starfish, decorative ornaments, or certain flower shapes. 13. Arrow A shape consisting of a line with a triangular point at one end. Real-life examples: Road signs, computer cursors, or archery arrows. 14. Crescent A shape resembling a phase of the moon with a curved edge. Real-life examples: A crescent moon, some pastries, or certain architectural designs. 15. Cross A shape formed by two intersecting lines or bars. Real-life examples: Some religious symbols, a plus sign, or certain road intersections. 16. Heart A shape with a rounded top, curved sides, and a point at the bottom. Real-life examples: Valentine’s Day decorations, certain playing card suits, or some leaf shapes. 17. Heptagon A seven-sided shape with equal sides and seven angles. Real-life examples: Some coins, certain architectural designs, or some company logos. 18. Nonagon A nine-sided shape with equal sides and nine angles. Real-life examples: Some floor tiles, certain architectural elements, or some traffic signs. 19. Decagon A ten-sided shape with equal sides and ten angles. Real-life examples: Some coins, certain clock faces, or some architectural designs. 20. Scalene Triangle A triangle with no equal sides and no equal angles. Real-life examples: Some roof designs, certain architectural elements, or some mountain silhouettes. 21. Isosceles Triangle A triangle with two equal sides and two equal angles. Real-life examples: Some warning signs, certain musical instruments, or some bridge supports. 22. Equilateral Triangle A triangle with all sides equal and all angles equal (60 degrees each). Real-life examples: Some traffic signs, certain puzzle pieces, or some corporate logos. 23. Right Triangle A triangle with one 90-degree angle. Real-life examples: A carpenter’s square, some roof designs, or certain staircase profiles. 24. Obtuse Triangle A triangle with one angle greater than 90 degrees. Real-life examples: Some modern furniture designs, certain architectural features, or some abstract art pieces. 25. Acute Triangle A triangle with all angles less than 90 degrees. Real-life examples: Some arrowheads, certain roof designs, or some jewelry pieces. 26. Pentagram A five-pointed star polygon. Real-life examples: Some national flags, certain award designs, or some decorative patterns. 27. Hexagram A six-pointed star polygon formed by two overlapping equilateral triangles. Real-life examples: Some religious symbols, certain snowflake shapes, or some decorative designs. 28. Octagram An eight-pointed star polygon. Real-life examples: Some compass roses, certain tile patterns, or some cultural symbols. 29. Ellipse An oval-like shape that can be thought of as a stretched circle. Real-life examples: Planetary orbits, some stadiums viewed from above, or certain machine parts. 30. Parabola A U-shaped curve is where any point is equidistant from a fixed point and a fixed-line. Real-life examples: The path of a ball thrown in the air, some satellite dishes, or certain bridge arches. 31. Hyperbola A curve with two separate parts, each of which is shaped like an infinite U. Real-life examples: Some cooling tower shapes, certain telescope designs, or the path of comets around the sun. List of 3D Shapes 1. Cube A solid shape with six square faces, all equal in size. Real-life examples: A die, a sugar cube, or a Rubik’s cube. 2. Sphere A perfectly round three-dimensional object. Real-life examples: A ball, a globe, or a bubble. 3. Cylinder A solid shape with two circular bases connected by a curved surface. Real-life examples: A can of soup, a roll of paper towels, or a drinking glass. 4. Cone A three-dimensional shape with a circular base connected to a single point by a curved surface. Real-life examples: An ice cream cone, a traffic cone, or a party hat. 5. Pyramid A solid shape with a polygon base and triangular faces that meet at a single point. Real-life examples: The ancient Egyptian pyramids, some tent designs, or certain roof structures. 6. Rectangular Prism A solid shape with six rectangular faces. Real-life examples: A box, a brick, or a book. 7. Tetrahedron A solid shape with four triangular faces. Real-life examples: A triangular pyramid, some dice, or certain molecular structures. 8. Octahedron A solid shape with eight triangular faces. Real-life examples: Some crystals, certain dice, or some architectural designs. 9. Dodecahedron A solid shape with twelve pentagonal faces. Real-life examples: Some dice, certain soccer ball designs, or some molecular structures. 10. Icosahedron A solid shape with twenty triangular faces. Real-life examples: Some dice, certain virus structures, or some geodesic domes. 11. Hexagonal Pyramid A solid shape with a hexagonal base and six triangular faces meeting at a point. Real-life examples: Some crystal formations, certain architectural designs, or some specialty dice. 12. Square Pyramid A solid shape with a square base and four triangular faces meeting at a point. Real-life examples: The Great Pyramid of Giza, some roof designs, or certain paperweights. 13. Triangular Prism A solid shape with two triangular bases and three rectangular faces. Real-life examples: A Toblerone chocolate bar, some roof trusses, or certain tent designs. 14. Pentagonal Prism A solid shape with two pentagonal bases and five rectangular faces. Real-life examples: Some pencils, certain architectural columns, or some packaging designs. 15. Rhombicuboctahedron A solid shape with 26 faces, including 18 squares and 8 triangles. Real-life examples: Some gemstone cuts, certain architectural elements, or some modern art sculptures. 16. Ellipsoid A three-dimensional shape resembling a stretched or compressed sphere. Real-life examples: Some sports balls (like rugby balls), certain pills, or some planetary shapes. 17. Torus A three-dimensional shape resembling a donut. Real-life examples: A ring, a life preserver, or some architectural designs. 18. Frustum The portion of a solid (often a cone or pyramid) that lies between two parallel planes cutting the solid. Real-life examples: A lampshade, some buckets, or certain architectural elements. 19. Parallelepiped A three-dimensional figure formed by six parallelograms. Real-life examples: Some tissue boxes, certain crystal structures, or some modern furniture designs. 20. Triangular Pyramid A solid shape with a triangular base and three triangular faces meeting at a point. Real-life examples: Some roof designs, certain food packaging, or some decorative objects. 21. Hemisphere Half of a sphere is created by cutting it along a plane that passes through its center. Real-life examples: A dome, half of a sports ball, or some serving bowls. A portion of a sphere is cut off by a plane. Real-life examples: Some lens designs, certain architectural domes, or some bowl shapes. 23. Oblate Spheroid A sphere that has been flattened at the poles, resembling a slightly squashed ball. Real-life examples: The shape of Earth, some planetary bodies, or certain sports balls when compressed. 24. Prolate Spheroid A sphere that has been elongated at the poles resembles a slightly stretched ball. Real-life examples: An American football, some watermelons, or certain antenna designs. Practical Applications of Shapes In Education: Shapes serve as fundamental tools for teaching geometry. Educators use tangible objects to help students understand basic geometric concepts. For instance, students might use shape blocks to learn about angles, sides, and geometric properties. In Design and Architecture: Designers and architects use shapes to create functional and visually appealing structures. Circles might stabilize load-bearing structures, while triangles are often used for strength in bridge designs. In Daily Life, Shapes are everywhere. Traffic signs use specific shapes for quick recognition, and household items like plates (circles) and books (rectangles) demonstrate practical shape As we wrap up our journey through the world of shapes, we hope you’ve gained a new appreciation for these geometric wonders. From the simple circle to the complex rhombicuboctahedron, shapes are the building blocks of our visual world. They’re mathematical concepts and practical tools used in education, design, and everyday life. Understanding shapes can sharpen your problem-solving skills and enhance your appreciation of art and The next time you look around, try to spot these shapes in your environment. You might be surprised at how often you encounter them! Whether you’re a student, a professional, or simply curious about the world around you, this knowledge of shapes will serve you well. Keep exploring, and let the world of geometry continue to amaze
{"url":"https://theresasreviews.com/must-know-shapes-with-pictures/","timestamp":"2024-11-08T08:16:05Z","content_type":"text/html","content_length":"214603","record_id":"<urn:uuid:b743a510-f765-4a25-a876-73103ec07aeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00362.warc.gz"}
Tommi Sottinen - Research [Home Page| Teaching| Personal] Tommi Sottinen - Research Research interests Fractional, Gaussian, self-similar, and quadratic variation processes; stochastic analysis; statistics for stochastic processes; stochastic simulation; mathematical finance; financial engineering. Refereed publications 1. Sottinen, T. (2001) Fractional Brownian motion, random walks and binary market models. Finance and Stochastics 5, no. 3, 343-355. 2. Kozachenko, Yu., Sottinen, T. and Vasylyk, O. (2002) Weakly Self-similar processes with stationary increments in the spaces SSub_\phi(\Omega). Theory of Probability and Mathematical Statistics 65 , 77-88 . 3. Kozachenko, Yu., Vasylyk, O. and Sottinen, T. (2002) Path Space Large Deviations of a Large Buffer with Gaussian Input Traffic. Queueing Systems 42, no. 2, 113-129. 4. Sottinen, T. and Valkeila, E. (2003) On arbitrage and replication in the Fractional Black-Scholes pricing model. Statistics & Risk Modeling 21, 93-107. 5. Gilsing, H. and Sottinen, T. (2003) Power series expansions for fractional Brownian motions. Theory of Stochastic Processes Vol. 9 (25), no. 3-4 (Proceedings of Seventh International School on Mathematical and Statistical Methods in Economics, Finance and Insurance), 38-49. 6. Sottinen, T. (2003) Fractional Brownian motion in finance and queueing. Ph.D. Thesis, University of Helsinki. 7. Sottinen, T. (2004) On Gaussian processes equivalent in law to fractional Brownian motion. Journal of Theoretical Probability 17, no. 2, 309-325. 8. Kozachenko, Yu., Sottinen, T. and Vasylyk, O. (2005) Simulation of weakly self-similar stationary increment Sub_\phi(\Omega)-processes: a series expansion approach. Methodology and Computing in Applied Probability 7, 379-400. 9. Sottinen, T. and Tudor, C.A. (2006) On the equivalence of multiparameter Gaussian processes. Journal of Theoretical Probability 19, no. 2., 461-485. 10. Gasbarra, D., Sottinen, T. and Valkeila, E. (2007) Gaussian bridges. Stochastic Analysis and Applications. Volume 2 of the series Abel Symposia, pp. 361-382. 11. Bender, C., Sottinen, T. and Valkeila, E. (2007) Arbitrage with fractional Brownian motion? Theory of Stochastic Processes 13 (29), 23-34. 12. Sottinen, T. and Tudor, C.A. (2008) Parameter estimation for stochastic equations with additive fractional Brownian sheet. Statistical Inference for Stochastic Processes 11, 221-236. 13. Särkkä, S. and Sottinen, T. (2008) Application of Girsanov Theorem to Particle Filtering of Discretely Observed Continuous-Time Non-Linear Systems. Bayesian Analysis 3, no. 3., 555-584. 14. Bender, C., Sottinen, T. and Valkeila, E. (2008) Pricing by hedging and no-arbitrage beyond semimartingales. Finance and Stochastics 12, 441-468. 15. Morlanes, J. I., Rasila, A. and Sottinen, T. (2009) Empirical evidence on arbitrage by changing the stock exchange. Advances and Applications in Statistics, no. 2, Vol. 12, 223-233. 16. Kozachenko, Yu., Sottinen, T. and Vasylyk, O. (2011) Lipschitz conditions for Sub_\phi(\Omega)-processes with application to weakly self-similar stationary increment processes. Theory Probab. Math. Stat. 82, 57-73. 17. Gapeev, P., Sottinen, T. and Valkeila, E. (2011) Robust replication in H-self-similar Gaussian market models under uncertainty. Statistics & Risk Modeling 28, 37-50. 18. Gasbarra, D., Sottinen, T., and van Zanten, H. (2011) Conditional full support of Gaussian processes with stationary increments. Journal of Applied Probability 48, No. 2., 561-568. 19. Bender, C., Sottinen, T. and Valkeila, E. (2011) Fractional procesess as models in stochastic finance. Advanced Mathematical Methods for Finance. Series in Mathematical Finance, Springer, 20. Sottinen, T. and Yazigi, A. (2014) Generalized Gaussian Bridges. Stochastic Processes and their Applications 124, Issue 9, 3084-3105. 21. Azmoodeh, E., Sottinen, T., Viitasaari, L. and Yazigi, A. (2014) Necessary and Sufficient Conditions for Hölder Continuity of Gaussian Processes. Statistics & Probability Letters 94, 230-235. 22. Azmoodeh, E., Sottinen, T. and Viitasaari, L. (2015) Asymptotic normality of randomized periodogram for estimating quadratic variation in mixed Brownian-fractional Brownian model. Modern Stochastics: Theory and Applications, 2, No. 1, 29-49. 23. Sottinen, T. and Viitasaari, L. (2015) Fredholm representation of multi-parameter Gaussian processes with applications to equivalence in law and series expansions. Modern Stochastics: Theory and Applications, 2, No 3 (Proceedings of PRESTO-2015 conference), 287-295. 24. Sottinen, T. and Viitasaari, L. (2016) Pathwise integrals and Ito-Tanaka Formula for Gaussian processes. Journal of Theoretical Probability 29, Issue 2, 590-616. 25. Sottinen, T. and Viitasaari, L. (2016) Stochastic Analysis of Gaussian Processes via Fredholm Representation. International Journal of Stochastic Analysis, doi:10.1155/2016/8694365. 26. Pakkanen, M.S., Sottinen, T. and Yazigi, A. (2017) On the conditional small ball property of multivariate Levy-driven moving average processes, Stochastic Processes and their Applications, 127, Issue 3, 749-782. 27. Yang, X., Rasila, A. and Sottinen, T. (2017) Walk on Spheres Algorithm for Helmholtz and Yukawa Equations via Duffin Correspondence. Methodology and Computing in Applied Probability, 19, 589-602. 28. Sottinen, T. and Viitasaari, L. (2017) Prediction Law of Fractional Brownian Motion. Statistics & Probability Letters 129, 155-166. 29. Shokrollahi, F. and Sottinen, T. (2017) Hedging in fractional Black-Scholes model with transaction costs. Statistics & Probability Letters 130, 85-91. 30. Sottinen, T. and Viitasaari, L. (2018) Conditional-Mean Hedging Under Transaction Costs in Gaussian Models. International Journal of Theoretical and Applied Finance 21, no. 2. 31. Rasila, A. and Sottinen, T. (2018) Yukawa Potential, Panharmonic Measure and Brownian Motion. Axioms 2018, 7(2), 28. 32. Sottinen, T. and Viitasaari, L. (2018) Parameter Estimation for the Langevin Equation with Stationary-Increment Gaussian Noise. Statistical Inference for Stochastic Processes 21 (3), 569-601. 33. Sottinen, T. and Viitasaari, L. (2019) Transfer Principle for nth Order Fractional Brownian Motion with Applications to Prediction and Equivalence in Law. Theory of Probability and Mathematical Statistics 98, 199-216. 34. Yang, X., Rasila, A. and Sottinen, T. (2019) Efficient simulation of Schrödinger equation with piecewise constant positive potential. Mathematics and Computers in Simulation 166, 315-323. 35. Lehto, S., Ernvall-Hytönen, A.-M. and Sottinen, T. (2019) Divisible Skylines: Exploring Least Common Multiples and Divisibility through Visual Art. Bridges 2019 short paper. 36. Sottinen, T. and Viitasaari, L. (2020) Prediction Law of Mixed Gaussian Volterra Processes. Statistics & Probability Letters 156, January 2020, 108594, https://doi.org/10.1016/j.spl.2019.108594 37. Azmoodeh, E., Sottinen, T., Tudor, C.A. and Viitasaari, L. (2021) Integration-by-Parts Characterizations of Gaussian Processes. Collectanea Mathematica 72, 25-41. 38. Sottinen, T., Alos, E., Azmoodeh, E. and Di Nunno, G. (2021) Editorial: Long-Memory Models in Mathematical Finance. Frontiers in Applied Mathematics and Statistics 7, 28. 39. Merino, R., Pospisil, J., Sobotka, T., Sottinen, T. and Vives, J. (2021) Decomposition formula for rough Volterra stochastic volatility models. International Journal of Theoretical and Applied Finance 24, No. 02, 2150008 https://doi.org/10.1142/S0219024921500084 40. Sottinen, T. (2021) The Characterization of Brownian Motion as an Isotopic i.i.d.-component Lévy Process. In Contributions to Mathematics and Statistics: Essays in Honor of Seppo Hassi (eds. De Snoo, H.S.V. and Wietsma, H.L.), Acta Wasaensia 462, 179-186. 41. Dufitinema, J., Pynnönen, S. and Sottinen, T. (2022) Maximum likelihood estimators from discrete data modeled by mixed fractional Brownian motion with application to the Nordic stock markets. Communications in Statistics - Simulation and Computation. https://doi.org/10.1080/03610918.2020.1764581 42. Sottinen, T. (2022) Brownian Bridges on Polygons. Proceedings of Bridges 2022: Mathematics, Art, Music, Architecture, Culture 43. Maleki Almani, H. and Sottinen, T. (2023) Multi-mixed fractional Brownian motions and Ornstein-Uhlenbeck processes. Modern Stochastics: Theory and Applications, Vol 10, No. 4, 343-366. https:// 44. Azmoodeh, E., Ilmonen, P, Shafik, N., Sottinen, T. and Viitasaari, L. (2023) On Sharp Rate of Convergence for Discretization of Integrals Driven by Fractional Brownian Motions and Related Processes with Discontinuous Integrands. Journal of Theoretical Probability. https://doi.org/10.1007/s10959-023-01272-7 45. Dufitinema, J., Shokrollahi, F., Sottinen, T. and Viitasaari, L. (2024) Long-range dependent completely correlated mixed fractional Brownian motion. Stochastic Processes and Their Applications, Volume 170, April 2024, 104289. https://doi.org/10.1016/j.spa.2023.104289 46. Maleki Almani, H., Shokrollahi, F., and Sottinen, T. (2024) Prediction of Gaussian Volterra Processes with Compound Poisson Jumps. Statistics and Probability Letters, Volume 208, May 2024, 110054. https://doi.org/10.1016/j.spl.2024.110054 47. Sottinen, T., Sönmez, E., and Viitasaari, L. (2024) On the existence and regularity of local times. Electron. J. Probab. 29 1-27, 2024. https://doi.org/10.1214/24-EJP1172 48. Han, Q., Rasila, A. and Sottinen, T. (2025) Efficient simulation of mixed boundary value problems and conformal mappings. Applied Mathematics and Computation 488, 1 March 2025, 129119 https:// 49. Sottinen, T. and Valkeila, E. (2001) Fractional Brownian motion as a model in finance. University of Helsinki, Department of Mathematics, Preprint 302. 50. Van Bever, G., Ilmonen, P., Viitasaari, L., Shafik, N. and Sottinen, T. (2022) On optimal prediction of missing functional data with memory. arXiv preprint arXiv:2208.09925 51. Sottinen, T. and Viitasaari, L. (2023) Transfer principle for fractional Ornstein-Uhlenbeck processes. arXiv preprint arXiv:2311.00823 52. Maleki Almani, H. and Sottinen, T. (2024) Parameter estimation for multi-mixed fractional Ornstein-Uhlenbeck processes by generalized method of moments. arXiv preprint arXiv:2401.05114 53. Ralchenko, K., Shokrollahi, F. and Sottinen, T. (2024) Discretization of integrals driven by multifractional Brownian motions with discontinuous integrands arXiv preprint arXiv:2408.02449 54. Maleki Almani, H., Shokrolahi, F. and Sottinen, T. (2024) Hedging in Jump Diffusion Model with Transaction Costs arXiv preprint arXiv:2408.10785 Other publications (most in Finnish) 55. Sottinen, T. (2004) Nobelin muistopalkinto taloustieteestä 2003: R. Englen ARCH-malli. Arkhimedes 2004:2, 10-12 (in Finnish). 56. Sottinen, T. (2004) Sattuman matematiikkaa III: Kolmogorovin aksioomat ja frekvenssitulkinta. Solmu 2004:2, 17-21 (in Finnish). 57. Lehto, S. and Sottinen, T. (2005) Sisarusongelma - paradoksi ehdollisesta todennäköisyydestä. Solmu 2005:1, 14-15 (in Finnish). 58. Rasila, A., and Sottinen, T. (2005) Algebra, PlayStation ja älykkyys. Solmu Erikoisnumero 1/2005-2006 (in Finnish). 59. Norros, I. and Sottinen, T. (2013) Esko Valkeila 1951-2012. Arkhimedes 2013:1, 30-33 (in Finnish). 60. Sottinen, T. (2015) BS-kaava ja lama. Arkhimedes 2015:1, 26-30 (in Finnish). 61. Sottinen, T. (2016) Moni sekoaa muotiin. Professoriblogi 2.5.2016. 62. Sottinen, T. (2016) Häränpaskahommia. Professoriblogi 12.9.2016. 63. Sottinen, T. (2016) Muotia maailmalla. Professoriblogi 28.11.2016. 64. Sottinen, T. (2017) Yliopistojen autonomia ja universumin maksimaalisen ironian periaate. Professoriblogi 27.2.2017. 65. Sottinen, T. (2017) Avoin tiede. Professoriblogi 29.5.2017. Lecture notes (some in Finnish) 66. Malliavin-laskenta: Eli gaussisten prosessien derivointi., 87 pages, 2006. 67. Rahoitusteoria: Eli optioiden hinnoittelun ja toistamisen taito tai oppi optioiden oikeasta hinnasta., 149 pages, 2006. 68. Todennäköisyysteoria: Teoria mitasta, mitallisuudesta, mitattomuudesta ja riippumattomuudesta., 130 pages, 2006. 69. Operations Research with GNU Linear Programming Kit, 201 pages, 2009. 70. Päätöksenteko epävarmuuden vallitessa, 102 pages, 2010. 71. Operations Research with GNU Octave, 187 pages, 2011. 72. Päätöksiä ja Paatoksia, 146 pages, 2011. 73. Lineaarialgebraa lähinnä tasossa hipauksella GNU Octavea, 81 pages, 2022. 74. Octave with Spice: Or a Gentle Introduction to GNU Octave Towards Linear Programming, 38 pages, 2022. 75. Päätöksentekoa Oliopolion aikaan, 86 pages, 2023. List of publications from AMS Mathematical Reviews, Zentralblatt MATH, and Google Scholar. 1. Elisa Alòs, Pompeu Fabra University, Spain 2. Ehsan Azmoodeh, Univerity of Liverpool, UK 3. Christian Bender, Saarland University, Germany 4. Germain Van Bever, University of Namur, Belgium 5. Josephine Dufitinema, IQVIA, Finland 6. Anne-Maria Ernvall-Hytönen, University of Helsinki, Finland 7. Pavel Gapeev, London School of Economics, UK 8. Dario Gasbarra, University of Helsinki, Finland 9. Hagen Gilsing 10. Qiansheng Han, Guangdong Technion - Israel Institute of Technology, PRC 11. Pauliina Ilmonen, Aalto University, Finland 12. Yuriy Kozachenko 13. Hamidreza Maleki Almani, University of Vaasa, Finland 14. Raúl Merino, University of Barcelona, Spain 15. Igor Morlanes, Stockholm University, Sweden 16. Giulia Di Nunno, University of Oslo, Norway 17. Mikko S. Pakkanen, Imperial College London, UK 18. Jan Pospíšil, University of West Bohemia, Czechia 19. Seppo Pynnönen, University of Vaasa, Finland 20. Antti Rasila, Guangdong Technion - Israel Institute of Technology, PRC 21. Simo Särkkä, Aalto University, Finland 22. Nourhan Shafik, Aalto University, Finland 23. Foad Shokrollahi, University of Vaasa, Finland 24. Tomáš Sobotka, University of West Bohemia, Czechia 25. Ercan Sönmez, University of Klagenfurt, Austria 26. Ciprian A. Tudor, University of Lille 1, France 27. Esko Valkeila 28. Olga Vasylyk, Taras Shevchenko Kyiv National University, Ukraine 29. Lauri Viitasaari, Aalto University, Finland 30. Xuxin Yang, Hunan First Normal University, PRC 31. Josep Vives, University of Barcelona, Spain 32. Adil Yazigi, University of Eastern Finland, Finland 33. Harry van Zanten, University of Amsterdam, The Netherlands 1. Fractional Brownian motion, random walks, and binary market models, The 2nd Nordic-Russian Symposium on Stochastic Analysis, Beitostølen, Norway, 1-6 August 1999. 2. Fractional Brownian Motion as a Model in Finance, Analysis of High Frequency Data: Annual Meeting of the Finnish Statistical Society, Vaasa, Finland, 17-18 May 2001. 3. Sample path large deviations of a Gaussian process with stationary increments and regularly varying variance, 12th European Young Statisticians Meeting, Janska Dolina, Slovakia, 4-8 September 4. Busy periods of a fractional Brownian type Gaussian storage, A conference dedicated to the 90th anniversary of B. V. Gnedenko, Kyiv, Ukraine, 3-7 June 2002. 5. On Gaussian stochastic differential equations with fractional Brownian noise, Laugarvatn Workshop: Stochastic Analysis and its Applications, Laugarvatn, Iceland, 2-7 August 2002. 6. Arbitrage in the fractional Black-Scholes model, Seminar in Mathematical Statistics, University of Stockholm, Stockholm, Sweden, 16 January 2003. 7. Power series series expansions of fractional Brownian motion, The Seventh International School on Mathematical and Statistical Methods in Economics, Finance and Insurance, Laspi, Ukraine, September 2003. (Invited) 8. Gaussian bridges, Groupe de travail: Processus Stochastiques, Matrices Aléatoires, University of Paris VI, Paris, France, 15 January 2004. 9. Representations of Gaussian bridges, DYNSTOCH Workshop, Copenhagen, Denmark, 3-5 June 2004. 10. Fractional Brownian Motions and Sheets, Porkkala Fractional Symposium, Porkkala, Finland, 25 May 2005. 11. Gaussian Bridges, 35th International Probability Summer School, Saint-Flour, France, 6-23 July 2005. 12. Replication and Absence of Arbitrage in Non-Semimartingale Models, The Finnish Mathematical Days and the Second Finnish-Estonian Mathematical Colloquium, Tampere, Finland, 4-5 January 2006. 13. Replication and Absence of Arbitrage in Non-Semimartingale Models, Probability Seminar, University of Barcelona, Barcelona, Spain, 15 March 2006. 14. Are stylized facts irrelevant in option-pricing? International Conference Modern Stochastics: Theory and Applications, Kyiv, Ukraine, 19-23 June 2006. 15. On Skorohod-type stochastic differential equations with respect to fractional Brownian motion, 31st Conference on Stochastic Processes and their Applications, Paris, France, 17-21 July 2006. 16. Black-Scholes Prices with Stylized Facts, Russian-Scandinavian Symposium: Probability Theory and Applied Probability, Petrozavodsk, Russia, 26-31 August 2006. 17. Black-Scholes-hinnoittelumallin robustisuus ja tyylitellyt tosiseikat, Monthly meeting of the Actuarial Society of Finland, 10 October 2006. 18. Conditional Small Balls and No-Arbitrage, Advances in Mathematical Finance, Second General AMAMEF Conference and Banach Center Conference, Bedlewo, Poland, 30 April-5 May 2007. (Invited) 19. What is the Price of the Future?, The Icelandic Centre of Excellence in Theoretical Computer Science ICE-TCS Third Symposium on Theoretical Computer Science, Reykjavik, Iceland, 10 August 2007. 20. Local Continuity Of Stopping Times And Arbitrage, Workshop and Mid-Term Conference on Advanced Mathematical Methods for Finance, Vienna, Austria, 17-22 September 2007. 21. Local Continuity, Mathematics Seminar, University of Iceland, Iceland, 25 October 2007. 22. Probability is irrelevant in stochastic finance: Black-Scholes model is correct despite of stylized facts, Annual meeting of the Icelandic Mathematical Society, Borgarnes, Iceland, 29-30 November 2007. (Invited) 23. Local Continuity (for Stopping Times), the Finnish Mathematical Days, Espoo, Finland, 3-4 January 2008. (Invited) 24. Local Continuity, Workshop on Limit theorems and Applications, Paris, France, 14-16 January 2008. (Invited) 25. What is Volatility?, The 6th NoonToNoon Meeting: Insurance and Financial Mathematics - Theory and Practice, Jyväskylä, Finland, 2-3 October 2008. 26. Non-semimartingales in finance, 1st Northern Triangular Seminar, Espoo, Finland, 9-11 March 2009. (Tutorial) 27. On Conditional Full Support with Applications to Mathematical Finance, 25th Nordic and 1st British-Nordic congress of Mathematicians, Oslo, Norway, 8-11 June 2009. (Invited) 28. Conditional full support for Gaussian processes with stationary increments, Modern Stochastics: Theory and Applications II, Kyiv, Ukraine, 7-11 September 2010. (Invited) 29. "Todellinen" veroprosentti: Kaksi ajankohtaista esimerkkiä talousmatematiikasta ja "todellisuudesta", MAOL ry:n syyspäivät, Vantaa, Finland, 8-10 October 2010. (Invited) 30. Pricing by hedging and no-arbitrage beyond semimartingales, International symposium: Visions in stochastics (Leaders and their Pupils), Moscow, Russia, 1-3 November 2010. 31. No-Arbitrage with Non-Semimartingales: Continuous Simple Arbitrage Case, Seventh Seminar on Stochastic Analysis, Random Fields and Applications, Ascona, Switzerland, 23-27 May 2011. 32. Black ja Scholes ilman Gaussia, Annual meeting of the Finnish Mathematical Society, Helsinki, Finland, 19 March 2012. (Invited) 33. Generalized Gaussian Bridges of Prediction-Invertible Processes, Presentation at Hunan Normal University, Changsha, China, 22 May 2012. 34. Black-Scholes Prices and Hedges for Financial Derivatives in Non-Gaussian Non-Martingale Models, International Conference on Applied Mathematics 2012, Modeling, Analysis & Computation, City University of Hong Kong, Hong Kong, China, 30 May 2012. 35. Yukawa Potential, Harmonic Measure and Killing Brownian Motion, First Chinese-Finnish Seminar and Workshop on Modern Trends in Classical Analysis and Applications, Turku, Finland, 18 August 2012. 36. Generalized Gaussian Bridges of Prediction-Invertible Processes, Modern Stochastics: Theory and Applications III, Kyiv, Ukraine, 10 September 2012. (Invited) 37. Generalized Gaussian Bridges, Marrakesh International Conference on Probability and Statistics, Marrakesh, Morocco, 19 December 2013. (Plenary) 38. Necessary and Sufficient Conditions for Hölder Continuity of Gaussian Processes, Thiele Seminar, Aarhus University, Århus, Denmark, 26 March 2014. 39. Necessary and Sufficient Conditions for Hölder Continuity of Gaussian Processes, ICM 2014, Seoul, Republic of Korea, 19 August 2014. 40. Representing Gaussian Processes with Martingales with Application to MLE of Langevin Equation, Stochastic Visit Workshop of the FDPSS, Tartu, Estonia, 12 September 2014. 41. Gaussian Fredholm Processes, International Conference: Probability, Reliability and Stochastic Optimization, Kyiv, Ukraine, 8 April 2015. (Plenary) 42. Matematiikan täsmäopetuksella parempia insinöörejä, Interaktiivinen tekniikka koulutuksessa, ITK2015, Hämeenlinna, 17 April 2015. 43. Gaussian Fredholm Processes with Applications, Yu.V.Linnik Centennial Conference, Euler International Mathematical Institute, St. Petersburg, Russia, 16 September 2015. 44. Representing Gaussian Processes via Brownian Motion with Applications to Stochastic Analysis, talk at South Central University, Changsha, PRC, 30 October 2015. 45. A Celebration of Dynkin's Formula, tutorial at Hunan First Normal University, Changsha, PRC, 26 November 2015. 46. Walk On Spheres Algorithms for Helmholtz and Linearized Poisson-Boltzmann Equations, The Fifth Chinese-Finnish Seminar and Workshop on Modern Trends in Classical Analysis and Applications, Aalto University, Espoo, Finland, 9 February 2016. 47. Parameter Estimation for the Langevin Equation with Stationary-Increment Gaussian Noise, Lorentz Center Workshop Fractatility and Fractionality, University of Leiden, Leiden, The Netherlands, 19 May 2016. (Invited) 48. Gaussian (Fredholm) Processes, 37th Finnish Summer School on Stochastics and Statistics, Lammi, Finland, 30 May 2016. 49. Hedging under transaction costs in Gaussian models, Barcelona Workshop on Mathematical Finance, Barcelona, Spain, 29 March 2017. 50. Transfer Principle for nth-Order Fractional Brownian Motion with Applications to Prediction and Equivalence in Law, Modern Stochastics: Theory and Applications IV, Kyiv, Ukraine, 25 May 2018. 51. Prediction law of fractional Brownian motion, ICM 2018, Rio de Janeiro, Brazil, 6 August 2018. 52. Pretty Predictable Models, CMStatistics, London, UK, 15 December 2019. (Invited) 53. Integration-by-Parts Characterizations of Gaussian Processes, Finnish Mathematical Days, Oulu, Finland, 2 January 2020. 54. Option-Pricing without Probability: Good News and Bad News, IFAM Virtual Seminar, Liverpool, UK, 9 September 2020. 55. Integration-by-Parts Characterizations of Gaussian Processes, Modern Trends in Probability Theory and Mathematical Statistics III, Kyiv, Ukraine, 1 December 2020. (Invited) 56. Long-Range Dependent Completely Correlated Mixed Fractional Brownian motion, Modern Stochastics: Theory and Applications V, Kyiv, Ukraine, 4 June 2021. (Plenary) 57. Integration-by-Parts Characterizations of Gaussian Processes, 8ECM, Portoro&zcaron;, Slovenia, 22 June 2021. (Invited) 58. Conditional-mean hedging in Gaussian long-memory models with transaction costs, 10th General AMaMeF Conference, Padua, Italy, 25 June 2021. (Invited) 59. Finanssijohdannaisten gaussiset hinnat ja keskeisen raja-arvolauseen väärinymmärrys, MAL 60 vuotta juhlaseminaari, Helsinki, Finland, 12 November 2021. (Invited) 60. A New characterization of Brownian motion as isotropic i.i.d.-component Lévy process, Finnish Mathematical Days, Tampere, Finland, 4 January 2022. 61. Completely correlated mixed fractional Brownian motion, Stochastic processes with statistical applications and fractional stochastic calculus: International workshop dedicated to the anniversary of Yuliya Mishura, Kyiv, Ukraine (online), 17 May 2023. (Invited) 62. Completely correlated mixed fractional Brownian motion, Mathematical Finance and Stochastics: A Conference in Honor of David Nualart, Donostia, Spain, 31 May 2023. (Invited) 63. Integration-by-parts characterizations of Gaussian processes, Workshop on Stochastics, Memory and Roughness 2024, Oslo, Norway, 18 January 2024. (Invited) Ph.D. students 1. Dr. Mikko S. Pakkanen (University of Helsinki, 2010) 2. Dr. Lauri Viitasaari (Aalto University, 2014) 3. Dr. Adil Yazigi (University of Vaasa, 2015) 4. Dr. Foad Shokrollahi (University of Vaasa, 2019)
{"url":"http://lipas.uwasa.fi/~tsottine/research.html","timestamp":"2024-11-03T23:17:54Z","content_type":"text/html","content_length":"42026","record_id":"<urn:uuid:1a8a0894-0d47-4dee-badc-6fe7bd9acde8>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00776.warc.gz"}
CSE 417, Wi '05: Assignment #5, Due: Monday, February 28, 2005 Chapter 4. Chapter 6. This assignment is part written, part programming. It all focuses on the articulation points algorithm presented in class, and the related problem of decomposing a graph into its biconnected components. I strongly recommend that you do parts 1, 2 and 3 before you start the programming portion, and do them soon so you have time to ask questions before you get immersed in coding. A vertex in a connected undirected graph is an articulation point if removing it (and all edges incident to it, i.e. touching it) results in a non-connected graph. As important special cases, a graph having a single vertex has no articulation points, nor does a connected graph having just two vertices (which of course must be joined by an edge, otherwise it isn't connected). A connected graph is biconnected if it has no articulation points. A biconnected component of an undirected graph G = (V,E) is a maximal subset B of the edges with the property that the graph G[B] = (V[B],B) is biconnected, where V[B] is the subset of vertices incident to (touched by) edges in B. (Maximal means you can't enlarge B without destroying its biconnected property.) Above, I noted that a graph consisting of single edge is biconnected. Consequently, every edge in G is part of some biconnected component. In fact, every edge is part of exactly one biconnected components (basically because if it were part of two, their union would also be biconnected, contradicting the "maximality" condition). So, the biconnected components partition the edges. For example, the biconnected components of this graph • Component 1: A--D, A--E, D--E, D--I, E--J, I--J • Component 2: B--E • Component 3: C--H, C--I, H--I • Component 4: F--G, F--J, G--K, J--K, and • Component 5: K--L There is a very close relationship between biconnected components and articulation points. Namely, the articulation points are exactly the vertices at which two biconnected components are connected. Given this fact, it shouldn't be a big surprise that there is a linear time algorithm that identifies the biconnected components of a graph, and in fact this algorithm is quite similar to the articulation point algorithm. 1. Find the articulation points in this graph at vertex C and whenever the algorithm has a choice of which edge to follow next, pick the edge to the alphabetically first vertex among the choices. 2. Find the biconnected components of the graph in part 1. 3. Give a modification of the articulation-points algorithm that finds biconnected components. Describe the algorithm (in English; should only take a few sentences). Simulate it on the example in part 1, showing enough of a trace of the algorithm's execution to demonstrate that it finds exactly the components you found in part 2. [Hints: look carefully at the order in which the articulation points algorithm in part 1 explores edges and discovers articulation points, and relate this to which edges are part of which biconnected components. Initially, focus on the first biconnected component to be completely explored by the depth-first search.] 4. Implement, test, and time the algorithm you found in part 3. Input format: To keep things simple, assume the input will consist of a positive integer "n" followed by some number of pairs of integers in the range 0 to n-1. "N" represents the number of vertices in the graph, and each pair u v represents an edge between vertices u and v. Although a good program really should check for the following malformed inputs, you may simply assume that you never get: negative numbers, numbers outside the range 0 to n-1, an even number of integers in total (i.e. the last pair is unfinished), or a pair u v where u=v, or where either (u,v) or (v,u) duplicates a previous pair in the list. Output Format: Print out the number of nodes, number of edges, number of biconnected components, number of articulation points, list the articulation points, list the edges in each biconnected component, and the algorithm's run time (excluding input/output; see below). a. Testing: run it on this specific sample graph and turn in a printout of your result. (You should of course do more extensive testing as well, but you only need to turn in this one case.) b. Timing: Also run it on a variety of graphs of different sizes and different densities (average number of edges per vertex) and measure its run time on each. Write a brief report (2-3 pages, say) summarizing your measurements. Include a graph (old fashioned scatter plot) of run time versus problem size. Compare to the theoretical big-O bounds for your sample graphs. Are your observations in line with the dogma I've been spouting all quarter about the utility of big-O analysis? Are there discrepancies? Can you explain them? Does number of edges versus number of vertices have any bearing on performance? Also, if possible, give us a quick summary of the kind of processor on which you ran your timing tests. E.g., "233 Mhz Intel Pentium II with 64k cache and 96Mb RAM." To simplify your job doing the timing measurements, we have provided a program that generates random graphs of various sizes in that format. Two hints on timing measurements: it may be so fast on small graphs that you can't get accurate times. If so, rerun it several times on the same input and measure total time. Also, record separately or exclude from your timing measurements the time spent reading inputs, since this may dominate the interesting part. Similarly, except for the summary parameters giving numbers of vertices, edges and components, you may want to disable output formatting and printing during you timing runs. Electronic Turn In: Turn in your code online via one of the following web forms. (Please do not email them to me this time.) Paper turn in: parts 1-3, the the test case from 4a, the written report from 4b, and program listings. Extra Credit: 1. Also implement any other algorithm you like for solving the problem, e.g. the simple but slow one outlined below. It turns out that two (different) edges are in the same biconnected component if and only there is a simple cycle in G that includes both. So, you could do something like this: for every pair of edges, try to find a path from one to the other (maybe by DFS or BFS), and then try to find a path back, excluding the edges in the path already taken. 2. Give a big-O analysis of the running time for the algorithm you implemented in step 1. 3. Run this algorithm on the same set of graphs as you did the linear time algorithm (or at least the smaller graphs; it may be too slow on large ones). 4. Extend your report to include the run times for this algorithm in comparison to both the linear time algorithm and to the general big-O dogma. Computer Science & Engineering University of Washington Box 352350 Seattle, WA 98195-2350 (206) 543-1695 voice, (206) 543-2969 FAX [comments to cse417-webmaster@cs.washington.edu]
{"url":"https://courses.cs.washington.edu/courses/cse417/05wi/hw/hw5.html","timestamp":"2024-11-11T11:17:22Z","content_type":"text/html","content_length":"11764","record_id":"<urn:uuid:09af0bf2-a4a4-4ad8-a566-cc43355418d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00739.warc.gz"}
Behrend's trace formula 481 VIEWS Everipedia is now - Join the IQ Brainlist and our for early access to editing on the new platform and to participate in the beta testing. Behrend's trace formula In algebraic geometry, Behrend's trace formula is a generalization of the Grothendieck–Lefschetz trace formula to a smooth algebraic stack over a finite field, conjectured in 1993 ^[1] and proven in 2003 ^[2] by Kai Behrend. Unlike the classical one, the formula counts points in the "stacky way"; it takes into account the presence of nontrivial automorphisms. The desire for the formula comes from the fact that it applies to the moduli stack of principal bundles on a curve over a finite field (in some instances indirectly, via the Harder–Narasimhan stratification, as the moduli stack is not of finite type.^[3]^[4]) See the moduli stack of principal bundles and references therein for the precise formulation in this case. Deligne found an example^[5] that shows the formula may be interpreted as a sort of the Selberg trace formula. A proof of the formula in the context of the six operations formalism developed by Laszlo and Olsson^[6] is given by Shenghao Sun.^[7] By definition, if C is a category in which each object has finitely many automorphisms, the number of points inis denoted by with the sum running over representatives p of all isomorphism classes in C. (The series may diverge in general.) The formula states: for a smooth algebraic stack X of finite type over a finite field and the"arithmetic" Frobenius, i.e., the inverse of the usual geometric Frobeniusin Grothendieck's formula,^[8]^[9] Here, it is crucial that the cohomology of a stack is with respect to the smooth topology (not etale). When X is a variety, the smooth cohomology is the same as etale one and, via the Poincaré duality, this is equivalent to Grothendieck's trace formula. (But the proof of Behrend's trace formula relies on Grothendieck's formula, so this does not subsume Grothendieck's.) Consider, theclassifying stackof the multiplicative group scheme (that is,). By definition,is the category ofprincipal-bundles over, which has only one isomorphism class (since all such bundles are trivial byLang's theorem). Its group of automorphisms is, which means that the number of-isomorphisms is. On the other hand, we may compute the l-adic cohomology ofdirectly. We remark that in the topological setting, we have(wherenow denotes the usual classifying space of a topological group), whose rational cohomology ring is a polynomial ring in one generator (Borel's theorem), but we shall not use this directly. If we wish to stay in the world of algebraic geometry, we may instead "approximate"by projective spaces of larger and larger dimension. Thus we consider the mapinduced by the-bundle corresponding toThis map induces an isomorphism in cohomology in degrees up to 2N. Thus the even (resp. odd) Betti numbers ofare 1 (resp. 0), and the l-adic Galois representation on the *(2n)*th cohomology group is the nth power of the cyclotomic character. The second part is a consequence of the fact that the cohomology ofis generated by algebraic cycle classes. This shows that Multiplying by, one obtains the predicted equality. Citation Linkwww.math.ubc.caBehrend, K. The Lefschetz Trace Formula for the Moduli Stack of Principal Bundles. PhD dissertation. Sep 26, 2019, 10:52 PM Citation Linkwww.math.ubc.caBehrend, K. Derived l-adic categories for algebraic stacks. Memoirs of the American Mathematical Society Vol. 163, 2003 Sep 26, 2019, 10:52 PM Citation Linkarxiv.orgK. Behrend, A. Dhillon, Connected components of moduli stacks of torsors via Tamagawa numbers Sep 26, 2019, 10:52 PM Citation Linkwww.math.harvard.eduhttp://www.math.harvard.edu/~lurie/282ynotes/LectureIII-Cohomology.pdf Sep 26, 2019, 10:52 PM Citation Linkopenlibrary.org, Proposition 6.4.11 Sep 26, 2019, 10:52 PM Citation Link//arxiv.org/abs/math/0512097v2*Laszlo, Yves; Olsson, Martin (2006). "The six operations for sheaves on Artin stacks I: Finite Coefficients". arXiv:math/0512097v2. Sep 26, 2019, 10:52 PM Citation Linkopenlibrary.orgTo define Frobenius on a stack X, let . Then we have , which is the Frobenius on X, also denoted by . Sep 26, 2019, 10:52 PM Citation Linkopenlibrary.org, Corollary 6.4.10 Sep 26, 2019, 10:52 PM Citation Link//arxiv.org/abs/1008.36891008.3689 Sep 26, 2019, 10:52 PM Citation Link//doi.org/10.2140%2Fant.2012.6.4710.2140/ant.2012.6.47 Sep 26, 2019, 10:52 PM Citation Linkwww.math.ubc.caThe Lefschetz Trace Formula for the Moduli Stack of Principal Bundles. Sep 26, 2019, 10:52 PM Citation Linkwww.math.ubc.caDerived l-adic categories for algebraic stacks. Sep 26, 2019, 10:52 PM Citation Linkarxiv.orgConnected components of moduli stacks of torsors via Tamagawa numbers Sep 26, 2019, 10:52 PM Citation Linkwww.math.harvard.eduhttp://www.math.harvard.edu/~lurie/282ynotes/LectureIII-Cohomology.pdf Sep 26, 2019, 10:52 PM Citation Linken.wikipedia.orgThe original version of this page is from Wikipedia, you can edit the page right here on Everipedia.Text is available under the Creative Commons Attribution-ShareAlike License.Additional terms may apply.See everipedia.org/everipedia-termsfor further details.Images/media credited individually (click the icon for details). Sep 26, 2019, 10:52 PM
{"url":"https://everipedia.org/wiki/lang_en/Behrend%2527s_trace_formula","timestamp":"2024-11-14T12:14:09Z","content_type":"text/html","content_length":"101732","record_id":"<urn:uuid:d08f5f84-4ecc-4330-a9c3-cb7cc5983e5e>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00774.warc.gz"}