How to Prepare for the Next-Generation ACCUPLACER Math Test? The Next-Generation ACCUPLACER is an assessment system that measures students' readiness for college courses in reading, writing, and mathematics. The test uses a multiple-choice format and is designed to place you precisely at the correct level of introductory classes. It uses computer-adaptive technology, so the questions you see depend on your skill level: your response to each question drives the difficulty level of the next one. There are five sub-sections on the ACCUPLACER test: • Arithmetic (20 questions) • Quantitative Reasoning, Algebra, and Statistics (QAS) (20 questions) • Advanced Algebra and Functions (20 questions) • Reading (20 questions) • Writing (25 questions) The ACCUPLACER test does NOT permit the use of personal calculators on the math portion of the placement test. ACCUPLACER expects students to be able to answer certain questions without the assistance of a calculator, so an on-screen calculator is provided for the questions where one is allowed. How to Study for the Next-Generation ACCUPLACER Math Test? If you are a new college student, know that for many new students the first step toward enrolling in courses suited to their skill level is the ACCUPLACER test. This placement test determines whether you are ready for credit-earning courses or whether you first need to take developmental classes, which is why the test matters at many colleges and can play a decisive role for you. If you do well on the ACCUPLACER test, you can start credit-earning courses right away. For many new students the test can therefore be a little stressful, especially the math part, but you can succeed on it by following a few simple tips. Here we guide you through simple but important steps to prepare for this test. 1. Choose your study program Many useful ACCUPLACER Math books and study guides can help you prepare for the test. All major test-preparation companies have offerings for ACCUPLACER Math, and short-listing the best book can be puzzling. There are also many online ACCUPLACER Math courses. If you have just started preparing for the ACCUPLACER test and need a comprehensive prep book, "ACCUPLACER Math for Beginners: The Ultimate Step by Step Guide to Preparing for the ACCUPLACER Math Test" covers all Next-Generation ACCUPLACER Math concepts being tested, right from scratch. It will help you brush up on your math skills, boost your confidence, and do your best to succeed on the ACCUPLACER Math Test. If you just need a workbook to review the math topics on the test and measure your exam readiness, try "ACCUPLACER Math Exercise Book 2021-2022: Student Workbook and Two Full-Length Accuplacer Math Practice Tests". If you think you are good at math and just need practice tests, "5 ACCUPLACER Math Practice Tests: Extra Practice to Help Achieve an Excellent Score" is a good fit. Want to take the ACCUPLACER Math test in a few weeks or a few days? Then try "Prepare for the ACCUPLACER Next Generation Math Test in 7 Days: A Quick Study Guide with Two Full-Length ACCUPLACER Math Practice Tests".
This quick study guide contains only the most vital math concepts an ACCUPLACER Math test taker will need to succeed. You can also use our FREE ACCUPLACER Math Worksheets to assess your knowledge of mathematics, find your weak areas, and learn from your mistakes. 2. Have a positive attitude toward mathematics You may think this is not a big deal, but a positive outlook on math can make the difference between success and failure on your test. A negative outlook keeps you in a vicious cycle: negative view of math → laziness in learning → lack of proper learning of math → poor performance on math tests → more negative feelings about math. So break the cycle here. Start over, and this time look at math with a positive attitude and treat it as a fun challenge. You will find that you can gradually get better at math and even enjoy it. 3. Understand the concepts on the ACCUPLACER Math test To study purposefully, you must first know what concepts this test covers. On the ACCUPLACER Math test, you are dealing with Arithmetic; Quantitative Reasoning, Algebra, and Statistics; and Advanced Algebra and Functions. Next, sort these concepts into two groups, basic and more complex, to make studying easier. Learn the basic concepts of the test first, then study the advanced ones. This keeps you from getting confused and speeds up your studying. 4. Set a daily schedule and stick to it Improve your long-term study habits. Making and keeping good study habits is essential, and you need to make studying an integral part of your day. To do this, set a schedule and stick to it. You can start with a short study session and gradually lengthen it, or divide your study time into smaller parts spread over the day. As time passes, the schedule becomes easier to keep, and after a while it becomes an inseparable part of your day; you will be very unlikely to fall back into your old study habits. 5. Choose the best way to prepare yourself First of all, it is up to you which approach you choose: a method that suits one test taker may not work for you, and vice versa. There are different ways to learn depending on your educational background. If your math foundation is strong and you are familiar with the basics, you can prepare for the ACCUPLACER test through self-study, using ACCUPLACER prep books and online resources. With this method you may lose motivation after a while; if so, you can hire a private tutor. Although tutoring is expensive, with a good tutor you will be more motivated and will become more familiar with the important tips and details of the ACCUPLACER test. Another option is participating in prep courses and prep classes for the ACCUPLACER test. This may be right for you if you want to prepare alongside other test takers and a competitive environment increases your motivation to study. 6. Know when and how to use formulas Although the ACCUPLACER Math test provides math formulas for some questions, it does not include all the formulas you need.
This means that you should memorize many of the necessary formulas. Here you can find all the formulas for the ACCUPLACER Math test, along with explanations of how to use them and what they mean. Memorize as many of the formulas as you can; this will speed you up on the ACCUPLACER test. 7. Review what you have learned through simulated tests The next step is to take practice tests. These tests can be online or on paper, and you should take them once you have mastered the concepts on the test. You can use the sample questions provided on the ACCUPLACER website, which are free and downloadable in PDF format. It is best to start with the sample questions on the College Board website, because the College Board administers the ACCUPLACER test and provides standard sample questions. These questions are similar to those on the actual ACCUPLACER test, which helps a lot: knowing the format and contents of the test is the first step to getting an excellent score. You can also download the ACCUPLACER study app, a free study app for the ACCUPLACER test that can be installed and used on computers, tablets, and smartphones. It provides practice tests that are just like the actual ACCUPLACER test, reports your score immediately after each test, and gives you explanations for right and wrong answers. 8. How to register The ACCUPLACER test is offered by several high schools, colleges, and universities, and each institution has its own registration procedure. At some colleges and universities, you can show up, register, and take the ACCUPLACER test the same day. Some institutions post the test schedule on their website, and you take the test according to that schedule; some also offer online registration, so you can sign up on their site for the time you want. If there is no test schedule or registration form on the site, you can contact the institution by phone or email to get the information you need. 9. Follow the tips on the day of the test The ACCUPLACER test is offered at universities, colleges, and high schools, and each test center has its own guidelines on what to bring. One thing you should bring is a photo ID, although this may not be necessary for high school students. Contact the school where you will take your ACCUPLACER test to make sure you know what to bring and what items you are not allowed to take with you. If you have a specific disability or medical condition, contact your school's test center to find out what arrangements are available. Before the test, make sure you know the test site well; you can even go there the day before to see how long the trip takes. On the day of the test, try to be at the test site 30 minutes early. The difficulty of the questions on this test varies according to your ability to answer them: for each question you answer correctly, the next question will be a little more difficult, and you earn points for each correct answer. If you do not know the answer to a question, make your best guess. That way you answer every question and can move on with confidence to the next one, which you may know the answer to. On the ACCUPLACER, only WritePlacer is timed; all other ACCUPLACER tests give you unlimited time.
Remember that the ACCUPLACER test does not allow personal calculators in the math section; ACCUPLACER expects students to answer certain questions without a calculator and provides an on-screen calculator for the questions where one is permitted. 10. Check your score reports One advantage of this test is that your score is ready as soon as the exam is over. After the test ends, you can see your score by clicking "Degree Check". 11. Interpret your score ACCUPLACER exams are administered online, and the score is provided to test takers immediately after the test, so you will know right away whether you qualify for credit-earning courses. In addition to test scores, the score report contains other information, including messages from the university or college and additional information about course placement. The good news about the ACCUPLACER test is that you cannot fail it, because it is a placement test: its purpose is to show whether or not you need to take developmental classes. Doing well on the ACCUPLACER has the advantage that you probably will not have to take developmental courses and can enter introductory courses directly, so a good result saves you time and money. Each university determines what a "good" ACCUPLACER score is; as a general guideline, aim for a score of 237 or higher. The best way to interpret your ACCUPLACER scores is to talk to an academic advisor about your college or university's placement score requirements. Here are some common questions about the ACCUPLACER test: How do I take the ACCUPLACER test online? This test is not available online; it must be taken at an approved testing location. What is the ACCUPLACER test? ACCUPLACER is a computerized placement test used by many colleges and technical schools to assess incoming students' skills. Is the ACCUPLACER test hard? It is an adaptive test, which means that the more questions you answer correctly, the more difficult the questions become. What happens if you do poorly on the ACCUPLACER test? A low score means that the community college will require you to take at least one remedial, no-credit course before allowing you to enroll in a credit course. What is a good score on the ACCUPLACER? Each university determines what a "good" ACCUPLACER score is; as a guideline, aim for 237 or higher. Can you fail the ACCUPLACER test? No one passes or fails the ACCUPLACER exams, but you should finish the exam with your best effort. What is the average score on the ACCUPLACER test? Scores of 221 to 250 are average, while scores between 250 and 270 are normally considered above average. Can I use a calculator on the ACCUPLACER test? You are not allowed to use personal calculators in the math section of the test. How do I check my ACCUPLACER score? Click "Degree Check!" at the bottom left-hand corner of the page; a page containing your scores and other information will appear. How many times can I take the ACCUPLACER? Retake policies vary by school. Some schools allow students to take the test a maximum of three or four times in a one-year period, provided they wait at least two weeks between attempts. How long does the ACCUPLACER take? The ACCUPLACER reading and math tests are untimed, and the writing test is a 50-minute timed test.
How much does the ACCUPLACER test cost? The price is set by the administering institution. Some colleges charge a registration fee, and there may be an extra fee, usually around $15 to $50. Who needs to take the ACCUPLACER test? If you do not have ACT/SAT scores, or if your math scores are more than two years old, you should take the ACCUPLACER test; math scores expire after two years.
STDEV.S function: Description, Usage, Syntax, Examples and Explanation What is the STDEV.S function in Excel? STDEV.S is one of the Statistical functions in Microsoft Excel. It estimates standard deviation based on a sample (ignoring logical values and text in the sample). The standard deviation is a measure of how widely values are dispersed from the average value (the mean). Syntax of the STDEV.S function The STDEV.S function syntax has the following arguments: • Number1: The first number argument corresponding to a sample of a population. You can also use a single array or a reference to an array instead of arguments separated by commas. • Number2, … (Optional): Number arguments 2 to 254 corresponding to a sample of a population. You can also use a single array or a reference to an array instead of arguments separated by commas. Explanation of the STDEV.S function • STDEV.S assumes that its arguments are a sample of the population. If your data represents the entire population, compute the standard deviation using STDEV.P instead. • The standard deviation is calculated using the "n-1" method. • Arguments can be numbers, or names, arrays, or references that contain numbers. • Logical values and text representations of numbers that you type directly into the list of arguments are counted. • If an argument is an array or reference, only numbers in that array or reference are counted. Empty cells, logical values, text, or error values in the array or reference are ignored. • Arguments that are error values, or text that cannot be translated into numbers, cause errors. • If you want to include logical values and text representations of numbers in a reference as part of the calculation, use the STDEVA function. • STDEV.S uses the "n-1" sample formula: s = √( Σ(x − x̄)² / (n − 1) ), where x̄ is the sample mean and n is the sample size. Example of the STDEV.S function Steps to follow: 1. Open a new Excel worksheet. 2. Copy the sample data (ten breaking-strength values, placed in cells A2:A11 with a header in A1). Note: For formulas to show results, select them, press the F2 key on your keyboard, and then press Enter. You can adjust the column widths to see all the data, if need be. Formula: =STDEV.S(A2:A11) — Description: Standard deviation of breaking strength — Result: 27.46391572
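For readers outside Excel, the same "n-1" calculation can be reproduced in a few lines of Python. Note that this page's data table did not survive, so the ten strength values below are assumed from Microsoft's standard STDEV.S documentation example; they reproduce the stated result, 27.46391572, exactly.

import math
import statistics

# Ten breaking-strength values (assumed, from Microsoft's STDEV.S docs example;
# the original table on this page was lost in extraction).
strength = [1345, 1301, 1368, 1322, 1310, 1370, 1318, 1350, 1303, 1299]

def stdev_s(values):
    """Sample standard deviation with the n-1 method, like Excel's STDEV.S."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((x - mean) ** 2 for x in values) / (n - 1))

print(stdev_s(strength))            # 27.463915719843938
print(statistics.stdev(strength))   # same result; statistics.stdev also uses n-1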
Texas Go Math Grade 4 Lesson 10.4 Answer Key Place the First Digit Refer to our Texas Go Math Grade 4 Answer Key Pdf to score good marks in the exams. Test yourself by practicing the problems from Texas Go Math Grade 4 Lesson 10.4 Answer Key Place the First Digit. Texas Go Math Grade 4 Lesson 10.4 Answer Key Place the First Digit Essential Question How can you use place value to know where to place the first digit in the quotient? Answer: Place value is the value of a digit based on its position in the number. Unlock the Problem Jaime took 144 photos on a digital camera. The photos are to be placed equally in 6 photo albums. How many photos will be in each album? • Underline what you are asked to find. • Circle what you need to use. We need to find the number of photos that will be in each album. Example Divide. 144 ÷ 6 STEP 1 Use place value to place the first digit. Look at the hundreds in 144. 1 hundred cannot be shared among 6 groups without regrouping. Regroup 1 hundred as 10 tens. Now there are 14 tens to share among 6 groups. The first digit of the quotient will be in the tens place. STEP 2 Divide the tens. 14 tens ÷ 6: since 6 × 2 = 12, each group gets 2 tens, and 14 − 12 = 2 tens are left over. Math Idea: After you divide each place, the remainder should be less than the divisor. STEP 3 Divide the ones. Regroup the 2 leftover tens as 20 ones. With the 4 ones in 144, there are now 24 ones to share among 6 groups. So, there will be 24 photos in each album. Math Talk Mathematical Processes Explain how the answer would change if Jaime had 146 photos. If Jaime had 146 photos, then Quotient: 24 Remainder: 2. So, the quotient stays the same but there is now a remainder of 2. Share and Show Go Math Grade 4 Lesson 10.4 Answer Key Question 1. There are 452 pictures of dogs in 4 equal groups. How many pictures are in each group? Explain how you can use place value to place the first digit in the quotient. The 4 hundreds in 452 can be shared among 4 groups, so the first digit of the quotient is in the hundreds place. Quotient = 113 Remainder = 0 Question 2. Quotient = 041 Remainder = 2 Question 3. Quotient = 155 Remainder = 0 Math Talk Mathematical Processes Explain how you placed the first digit of the quotient in Exercise 2. There are not enough hundreds to divide, so the first digit of the quotient is in the tens place. Problem Solving Practice: Copy and Solve Divide. Question 4. 516 ÷ 2 Quotient = 258 Remainder = 0 Go Math Lesson 10.4 4th Grade Place The First Digit Question 5. 516 ÷ 3 Quotient = 172 Remainder = 0 Question 6. 516 ÷ 4 Quotient = 129 Remainder = 0 Question 7. 516 ÷ 5 Quotient = 103 Remainder = 1 Question 8. H.O.T. Look back at your answers to Exercises 4-7. What happens to the quotient when the divisor increases? Explain. The quotient decreases as the divisor increases: 258, 172, 129, 103. Dividing the same dividend among more groups leaves fewer in each group. Question 9. Multi-Step Reggie has 192 pictures of animals. He wants to keep half and then divide the rest equally among three friends. How many pictures will each friend get? (A) 96 (B) 32 (C) 48 (D) 14 Answer: B Total number of animal pictures Reggie has = 192. He keeps half of them, which leaves 192 ÷ 2 = 96. Those 96 animal pictures are shared equally among 3 friends: 96 ÷ 3 = 32. Therefore, each friend gets 32 pictures. Go Math Lesson 10.4 Homework Answers Grade 4 Question 10. H.O.T. Multi-Step There are 146 students, 5 teachers, and 8 chaperones going to the theater. To reserve their seats, they need to reserve entire rows. Each row has 8 seats.
How many rows must they reserve? (A) 20 (B) 18 (C) 19 (D) 58 Answer: A Number of students = 146 Number of teachers = 5 Number of chaperones = 8 Total number of people = 146 + 5 + 8 = 159 Also given, the number of seats in each row = 8. Now, 159 ÷ 8 = 19 remainder 7, and 19 full rows seat only 152 people, so a 20th row is needed for the remaining 7. Therefore, they must reserve 20 rows. Unlock the Problem Question 11. H.O.T. Multi-Step Nan wants to put 234 pictures in an album with a blue cover. How many full pages will she have in her album? Number of pictures Nan has = 234 Pictures per blue page = 4 So, 234 ÷ 4 = 58 remainder 2. Therefore, there are 58 full pages in her album. a. What do you need to find? We need to find the number of full pages in Nan's album. b. Communicate How will you use division to find the number of full pages? We can find the number of full pages by dividing the number of pictures Nan has by the number of pictures per blue page. c. Show the steps you will use to solve the problem. 234 ÷ 4 = 58 remainder 2. d. Complete the following sentences. Nan has ________ pictures. She wants to put the pictures in an album with pages that each hold ________ pictures. She will have an album with ________ full pages and ________ pictures on another page. Nan has 234 pictures. She wants to put the pictures in an album with pages that each hold 4 pictures. She will have an album with 58 full pages and 2 pictures on another page. Question 12. Apply Juan wants to put his 672 pictures in an album with a green cover. How many full pages will he have in his album? For the green album, pictures per page = 6. Number of pictures Juan puts in the album = 672. 672 ÷ 6 = 112. Therefore, he has 112 full pages in his album. Go Math Answer Key Grade 4 Lesson 10.4 Place The First Digit Question 13. H.O.T. Multi-Step Talla has 162 pictures to put in a photo album. If she wants only full pages, which color albums could she use? Number of pictures Talla has = 162. For the blue album (4 pictures per page), 162 ÷ 4 = 40 remainder 2, so blue would leave a partial page. For the green album (6 per page), 162 ÷ 6 = 27 remainder 0, and for the red album (9 per page), 162 ÷ 9 = 18 remainder 0. Therefore, she could use the green or the red album. Daily Assessment Task Fill in the bubble completely to show your answer. Question 14. Danny makes a website to showcase images of unusual manhole covers. He takes 264 pictures for the site. Danny places 8 pictures on each page of the site. How many pages of pictures does Danny's site have? (A) 33 pages (B) 330 pages (C) 3 pages (D) 30 pages Answer: A Number of pictures Danny takes = 264. Number of pictures he puts on each page of the site = 8. 264 ÷ 8 = 33. Therefore, Danny's site has 33 pages of pictures. Question 15. Debi celebrates her birthday with friends at an arcade. Debi's parents want to share 300 tokens equally among the 9 friends. Debi uses division to find how many tokens each person gets. In which place is the first digit of the quotient? (A) thousands (B) hundreds (C) tens (D) ones Answer: C Total number of tokens = 300. Number of friends = 9. 300 ÷ 9 = 33 remainder 3. There are not enough hundreds to share among 9 groups, so the first digit of the quotient is in the tens place. Question 16. Multi-Step Debi's parents want to share 300 tokens equally among the 9 friends. After Debi's parents give each person the same number of tokens, how many tokens are left over? (A) 0 tokens (B) 2 tokens (C) 1 token (D) 3 tokens Answer: D Total number of tokens = 300. Number of friends = 9. Also given, Debi's parents give each person the same number of tokens. 300 ÷ 9 = 33 remainder 3. Therefore, the number of tokens left over = 3. TEXAS Test Prep Question 17. Kat wants to put 485 pictures in an album with a red cover. There are 9 pictures per page. She uses division to find out how many full pages she will have.
In which place is the first digit of the quotient? (A) thousands (B) hundreds (C) tens (D) ones Answer: C The number of pictures Kat wants to put in the album = 485. Number of pictures per page = 9. There are not enough hundreds to divide by 9, so the first digit of the quotient is in the tens place. Texas Go Math Grade 4 Lesson 10.4 Homework and Practice Answer Key Question 1. Quotient = 090 Remainder = 0 Question 2. Quotient = 043 Remainder = 1 Question 3. Quotient = 073 Remainder = 0 Question 4. Quotient = 046 Remainder = 1 Question 5. Quotient = 064 Remainder = 4 Question 6. Quotient = 046 Remainder = 0 Practice and Homework Lesson 10.4 Answer Key 4th Grade Question 7. Quotient = 038 Remainder = 0 Question 8. Quotient = 019 Remainder = 7 Question 9. 428 ÷ 2 ____________ Quotient = 214 Remainder = 0 Question 10. 428 ÷ 3 ____________ Quotient = 142 Remainder = 2 Question 11. 428 ÷ 4 ____________ Quotient = 107 Remainder = 0 Question 12. 428 ÷ 5 ____________ Quotient = 085 Remainder = 3 Problem Solving Question 13. Camp Mesquite will provide 4 buses for 212 campers. If each bus carries the same number of campers, how many campers will ride in each bus? Total number of campers = 212. Number of buses = 4. If each bus carries the same number of campers, 212 ÷ 4 = 53. Therefore, 53 campers will ride in each bus. Question 14. The garden center received a shipment of 132 daisies. If each table displays the same number of daisies, how many daisies will be placed on each of 3 tables? Number of daisies shipped = 132. Number of tables = 3. If each table displays the same number of daisies, 132 ÷ 3 = 44. Therefore, 44 daisies will be placed on each of the 3 tables. Lesson Check Fill in the bubble completely to show your answer. Question 15. Lauren collected 532 pennies in a donation box. She wants to divide the pennies equally into 4 containers. Lauren uses division to find how many pennies to put into each container. In which place is the first digit of the quotient? (A) thousands (B) hundreds (C) tens (D) ones Answer: B The number of pennies Lauren collected = 532. Number of containers = 4. 532 ÷ 4 = 133, so the first digit of the quotient, 1, is in the hundreds place. Question 16. Mark opens a box of 500 cups for the football concession stand. Mark uses division to divide the cups equally among windows. In which place is the first digit of the quotient? (A) ones (B) tens (C) hundreds (D) thousands Question 17. Mrs. Samson bought 294 craft sticks for a class project. She divides her class into 7 groups and gives each group an equal number of sticks. If she uses all of the sticks, how many will each group receive? (A) 40 (B) 43 (C) 41 (D) 42 Answer: D Number of sticks Mrs. Samson bought for the class = 294. Number of groups she divides her class into = 7. Also given, she uses all the sticks, shared equally. 294 ÷ 7 = 42. Therefore, each group receives 42 sticks. Question 18. Cassie read 483 pages of her book in one week. If she read the same number of pages each day, how many pages did she read in 1 day? (A) 78 (B) 79 (C) 69 (D) 68 Answer: C Total number of pages Cassie read = 483. Number of days in a week = 7. If she read the same number of pages each day, 483 ÷ 7 = 69. Therefore, Cassie read 69 pages in 1 day. Question 19. Multi-Step Mr. Parsons bought 293 apples to make pies for his shop. Six apples are needed for each pie. If Mr. Parsons makes the most apple pies possible, how many apples will be left over? (A) 5 (B) 1 (C) 3 (D) 6 Answer: A Number of apples Mr. Parsons bought = 293. Number of apples needed to make 1 apple pie = 6. So, 293 ÷ 6 = 48 remainder 5, since 6 × 48 = 288 and 293 − 288 = 5. Therefore, the number of apples left over = 5. Question 20.
Multi-Step At an art school, 112 students are in 4 equal-sized drawing classes. There are 120 students in 5 equal-sized painting classes. How many more students are in one drawing class than are in one painting class? (A) 28 (B) 5 (C) 25 (D) 4 Answer: D Total number of students in the 4 equal-sized drawing classes = 112, so 112 ÷ 4 = 28 students per drawing class. Total number of students in the 5 equal-sized painting classes = 120, so 120 ÷ 5 = 24 students per painting class. The difference is 28 − 24 = 4. Therefore, there are 4 more students in one drawing class than in one painting class.
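As an aside for anyone checking this answer key by machine, each quotient-and-remainder fact above can be verified with Python's built-in divmod. This is just a convenience sketch, not part of the lesson:

# divmod(a, b) returns (quotient, remainder), so one loop checks several answers.
for dividend, divisor in [(144, 6), (452, 4), (516, 5), (428, 5), (159, 8), (483, 7)]:
    q, r = divmod(dividend, divisor)
    print(f"{dividend} / {divisor} -> quotient {q}, remainder {r}")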
Transient matching requirement for control system tuning Use the TuningGoal.Transient object to constrain the transient response from specified inputs to specified outputs. This tuning goal specifies that the transient response closely match the response of a reference model. Specify the closeness of the required match using the RelGap property of the tuning goal (see Properties). You can constrain the response to an impulse, step, or ramp input signal. You can also constrain the response to an input signal given by the impulse response of an input filter you specify. Req = TuningGoal.Transient(inputname,outputname,refsys) requires that the impulse response from inputname to outputname closely matches the impulse response of the reference model refsys. Specify the closeness of the required match using the RelGap property of the tuning goal (see Properties). inputname and outputname can describe a SISO or MIMO response of your control system. For MIMO responses, the number of inputs must equal the number of outputs. Req = TuningGoal.Transient(inputname,outputname,refsys,inputtype) specifies whether the input signal that generates the constrained transient response is an impulse, step, or ramp signal. Req = TuningGoal.Transient(inputname,outputname,refsys,inputfilter) specifies the input signal for generating the transient response that the tuning goal constrains. Specify the input signal as a SISO transfer function, inputfilter, that is the Laplace transform of the desired time-domain input signal. The impulse response of inputfilter is the desired input signal. Input Arguments refsys — Reference system for target transient response tf model object | zpk model object | ss model object Reference system for target transient response, specified as a dynamic system model, such as a tf, zpk, or ss model. The desired transient response is the response of this model to the input signal specified by inputtype or inputfilter. The reference model must be stable, and the series connection of the reference model with the input shaping filter must have no feedthrough term. inputtype — Type of input signal that generates constrained transient response 'impulse' (default) | 'step' | 'ramp' Type of input signal that generates the constrained transient response, specified as one of the following values: • 'impulse' — Constrain the response at outputname to a unit impulse applied at inputname. • 'step' — Constrain the response to a unit step. Using 'step' is equivalent to using the TuningGoal.StepTracking design goal. • 'ramp' — Constrain the response to a unit ramp, u = t. inputfilter — Custom input signal for generating transient response tf model object | zpk model object Custom input signal for generating the transient response, specified as a SISO transfer function (tf or zpk) model that represents the Laplace transform of the desired input signal. inputfilter must be continuous, and can have no poles in the open right-half plane. The frequency response of inputfilter gives the signal spectrum of the desired input signal, and the impulse response of inputfilter is the time-domain input signal. For example, to constrain the transient response to a unit-amplitude sine wave of frequency w, set inputfilter to tf(w,[1,0,w^2]). This transfer function is the Laplace transform of sin(wt). The series connection of refsys with inputfilter must have no feedthrough term.
Properties ReferenceModel — Reference system for target transient response SISO or MIMO ss model object Reference system for target transient response, specified as a SISO or MIMO state-space (ss) model. When you use the tuning goal to tune a control system, the transient response from inputname to outputname is tuned to match this target response to within the tolerance specified by the RelGap property. The refsys argument to TuningGoal.Transient sets the value of ReferenceModel to ss(refsys). InputShaping — Input signal for generating the transient response SISO zpk model object Input signal for generating the transient response, specified as a SISO zpk model that represents the Laplace transform of the time-domain input signal. InputShaping must be continuous, and can have no poles in the open right-half plane. The value of this property is populated using the inputtype or inputfilter arguments used when creating the tuning goal. For tuning goals created using the inputtype argument, InputShaping takes the following values: 'impulse' → 1; 'step' → 1/s; 'ramp' → 1/s^2. For tuning goals created using an inputfilter transfer function, InputShaping takes the value zpk(inputfilter). The series connection of ReferenceModel with InputShaping must have no feedthrough term. RelGap — Maximum relative matching error 0.1 (default) | positive scalar Maximum relative matching error, specified as a positive scalar value. This property specifies the matching tolerance as the maximum relative gap between the target and actual transient responses. The relative gap is defined as: gap = ‖y(t) − y_ref(t)‖₂ / ‖y_ref^tr(t)‖₂, where y(t) − y_ref(t) is the response mismatch, y_ref^tr(t) is the transient portion of y_ref (its deviation from the steady-state value or trajectory), and ‖·‖₂ denotes the signal energy (2-norm). The gap can be understood as the ratio of the root-mean-square (RMS) of the mismatch to the RMS of the reference transient. Increase the value of RelGap to loosen the matching tolerance. Input — Input signal names cell array of character vectors Input signal names, specified as a cell array of character vectors that indicate the inputs for the transient responses that the tuning goal constrains. The initial value of the Input property is populated by the inputname argument when you create the tuning goal. Output — Output signal names cell array of character vectors Output signal names, specified as a cell array of character vectors that indicate the outputs where the transient responses that the tuning goal constrains are measured. The initial value of the Output property is populated by the outputname argument when you create the tuning goal. Examples Transient Response Requirement with Specified Input Type and Tolerance Create a requirement for the transient response from a signal named 'r' to a signal named 'u'. Constrain the impulse response to match the response of the transfer function refsys = 1/(s+1), but allow 20% relative variation between the target and tuned responses. refsys = tf(1,[1 1]); Req1 = TuningGoal.Transient('r','u',refsys); When you do not specify a response type, the requirement constrains the impulse response. By default, the requirement allows a relative gap of 0.1 between the target and tuned responses. To change the relative gap to 20%, set the RelGap property of the requirement: Req1.RelGap = 0.2; Examine the requirement. The dashed line shows the target impulse response specified by this requirement.
You can use this requirement to tune a control system model, T, that contains valid input and output locations named 'r' and 'u'. If you do so, the command viewGoal(Req1,T) plots the achieved impulse response from 'r' to 'u' for comparison to the target response. Create a requirement that constrains the response to a step input, instead of the impulse response. Req2 = TuningGoal.Transient('r','u',refsys,'step'); Examine this requirement. Req2 is equivalent to the following step tracking requirement: Req3 = TuningGoal.StepTracking('r','u',refsys); Constrain Transient Response to Custom Input Signal Create a requirement for the transient response from 'r' to 'u'. Constrain the response to a sinusoidal input signal, rather than to an impulse, step, or ramp. To specify a custom input signal, set the input filter to the Laplace transform of the desired signal. For example, suppose you want to constrain the response to a signal sin(ωt). The Laplace transform of this signal is given by: inputfilter = ω/(s² + ω²). Create a requirement that constrains the response at 'u' to a sinusoidal input of natural frequency 2 rad/s at 'r'. The response should match that of the reference system refsys = 1/(s+1). refsys = tf(1,[1 1]); w = 2; inputfilter = tf(w,[1 0 w^2]); Req = TuningGoal.Transient('r','u',refsys,inputfilter); Examine the requirement to see the shape of the target response. Transient Response Goal with Limited Model Application and Additional Loop Openings Create a tuning goal that constrains the impulse response. Set the Models and Openings properties to further configure the tuning goal's applicability. refsys = tf(1,[1 1]); Req = TuningGoal.Transient('r','u',refsys); Req.Models = [2 3]; Req.Openings = 'OuterLoop' When tuning a control system that has an input (or analysis point) 'r', an output (or analysis point) 'u', and another analysis point at location 'OuterLoop', you can use Req as an input to looptune or systune. Setting the Openings property specifies that the impulse response from 'r' to 'u' is computed with the loop opened at 'OuterLoop'. When tuning an array of control system models, setting the Models property restricts how the tuning goal is applied. In this example, the tuning goal applies only to the second and third models in an array. Tips • When you use this tuning goal to tune a continuous-time control system, systune attempts to enforce zero feedthrough (D = 0) on the transfer function that the tuning goal constrains. Zero feedthrough is imposed because the H2 norm, and therefore the value of the tuning goal (see Algorithms), is infinite for continuous-time systems with nonzero feedthrough. systune enforces zero feedthrough by fixing to zero all tunable parameters that contribute to the feedthrough term. systune returns an error when fixing these tunable parameters is insufficient to enforce zero feedthrough. In such cases, you must modify the tuning goal or the control structure, or manually fix some tunable parameters of your system to values that eliminate the feedthrough term. When the constrained transfer function has several tunable blocks in series, the software's approach of zeroing all parameters that contribute to the overall feedthrough might be conservative. In that case, it is sufficient to zero the feedthrough term of one of the blocks. If you want to control which block has feedthrough fixed to zero, you can manually fix the feedthrough of the tuned block of your choice.
To fix parameters of tunable blocks to specified values, use the Value and Free properties of the block parametrization. For example, consider a tuned state-space block: C = tunableSS('C',1,2,3); To enforce zero feedthrough on this block, set its D matrix value to zero, and fix the parameter: C.D.Value = 0; C.D.Free = false; For more information on fixing parameter values, see the Control Design Block reference pages, such as tunableSS. • This tuning goal imposes an implicit stability constraint on the closed-loop transfer function from Input to Output, evaluated with loops opened at the points identified in Openings. The dynamics affected by this implicit constraint are the stabilized dynamics for this tuning goal. The MinDecay and MaxRadius options of systuneOptions control the bounds on these implicitly constrained dynamics. If the optimization fails to meet the default bounds, or if the default bounds conflict with other requirements, use systuneOptions to change these defaults. Algorithms When you tune a control system using a TuningGoal, the software converts the tuning goal into a normalized scalar value f(x), where x is the vector of free (tunable) parameters in the control system. The software then adjusts the parameter values to minimize f(x) or to drive f(x) below 1 if the tuning goal is a hard constraint. For TuningGoal.Transient, f(x) is based upon the relative gap between the tuned response and the target response: gap = ‖y(t) − y_ref(t)‖₂ / ‖y_ref^tr(t)‖₂, where y(t) − y_ref(t) is the response mismatch, y_ref^tr(t) is the transient portion of y_ref (its deviation from the steady-state value or trajectory), and ‖·‖₂ denotes the signal energy (2-norm). The gap can be understood as the ratio of the root-mean-square (RMS) of the mismatch to the RMS of the reference transient. Version History Introduced in R2016a R2016a: Functionality moved from Robust Control Toolbox Prior to R2016a, this functionality required a Robust Control Toolbox™ license.
Question 1 (10 pts) Consider the following market. Demand is given by Q_D = 5 − P, where Q_D... Answer #1 Similar Homework Help Questions • The market for meat is represented by the following demand and supply equations: Demand: Q_D = 400 − 10P Supply: Q_S... • Suppose demand for automobiles in the United States is given by: P = 100 − 0.09Q_D, where P is the price for new vehicles in dollars and Q_D is the quantity demanded per month. Assume the supply of automobiles is given by P = 4 + 0.03Q_S, where again P is the price in thousands of dollars and Q_S is the quantity sold per month in hundreds of thousands. a.) Solve for the market equilibrium price and quantity. b.) Depict this market graphically, and...
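A sketch of part (a) of the automobile question, using only the equations as quoted (note the prompt mixes "dollars" and "thousands of dollars" for P, but the algebra is unaffected): setting demand equal to supply gives 100 − 0.09Q = 4 + 0.03Q, so 96 = 0.12Q and Q* = 800; substituting back, P* = 100 − 0.09(800) = 28. That is, the market clears at a quantity of 800 (in hundreds of thousands of vehicles per month, per the supply equation's stated units) and a price of 28.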
Rational Roots - Definition, Solved Example Problems | Theory of Equations Rational Roots If all the coefficients of a quadratic equation are integers, then Δ is an integer, and when Δ is positive, √Δ is rational if, and only if, Δ is a perfect square. In other words, the equation ax² + bx + c = 0 with integer coefficients has rational roots if, and only if, Δ is a perfect square. What we discussed so far on polynomial equations with rational coefficients holds for polynomial equations with integer coefficients as well. In fact, multiplying a polynomial equation with rational coefficients by a common multiple of the denominators of the coefficients, we get a polynomial equation with integer coefficients having the same roots. Of course, we have to handle this situation carefully. For instance, there is a monic polynomial equation of degree 1 with rational coefficients having 1/2 as a root, whereas there is no monic polynomial equation of any degree with integer coefficients having 1/2 as a root. Example 3.11 Show that the equation 2x² − 6x + 7 = 0 cannot be satisfied by any real values of x. Δ = b² − 4ac = 36 − 56 = −20 < 0. The roots are imaginary numbers. Example 3.12 If x² + 2(k + 2)x + 9k = 0 has equal roots, find k. Here Δ = b² − 4ac = 0 for equal roots. This implies 4(k + 2)² = 4(9)k, that is, k² + 4k + 4 = 9k, so k² − 5k + 4 = (k − 1)(k − 4) = 0. This implies k = 4 or 1. Example 3.13 Show that, if p, q, r are rational, the roots of the equation x² − 2px + p² − q² + 2qr − r² = 0 are rational. The roots are rational if Δ = b² − 4ac = (−2p)² − 4(p² − q² + 2qr − r²) is a perfect square. But this expression reduces to 4(q² − 2qr + r²) = 4(q − r)², which is a perfect square. Hence the roots are rational.
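The perfect-square test above is easy to automate. Here is a small Python sketch (the function name is ours, not from the text) that decides whether a quadratic with integer coefficients has rational roots by checking whether its discriminant is a non-negative perfect square:

import math

def has_rational_roots(a, b, c):
    """True iff a*x^2 + b*x + c = 0 (integers a, b, c, with a != 0) has rational roots."""
    d = b * b - 4 * a * c          # the discriminant, Delta
    if d < 0:
        return False               # no real roots at all, as in Example 3.11
    r = math.isqrt(d)
    return r * r == d              # rational roots iff Delta is a perfect square

print(has_rational_roots(2, -6, 7))    # False: Delta = -20 (Example 3.11)
print(has_rational_roots(1, 12, 36))   # True: Example 3.12 with k = 4, Delta = 0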
5 Best Ways to Check if Any Large Number is Divisible by 19 in Python

Problem Formulation: Checking divisibility by a particular number, such as 19, can be a common necessity in various computer science and mathematics problems. In Python, large numbers can be handled in several ways to ascertain whether they are divisible by 19. For instance, if the input is 361, the desired output would be True because 361 is a multiple of 19.

Method 1: Using Modulo Operator
This method involves the modulo operator, which computes the remainder of a division. If a number is divisible by another, the remainder of their division is zero. In this context, to check if a number is divisible by 19, we simply check if number % 19 == 0. This method is straightforward and efficient for any size of number.

Here's an example:

large_number = 950
result = large_number % 19 == 0
print(result)

Output: True

The code snippet assigns the value 950 (which is 19 × 50) to the variable large_number and tests if it is divisible by 19. The expression large_number % 19 evaluates to zero if it's divisible, so the variable result is True, which gets printed.

Method 2: Using divmod Function
The divmod() function in Python returns a tuple containing the quotient and the remainder of the division. To check if the number is divisible by 19, we can test if the remainder is zero. This method has the extra benefit of providing the quotient directly should it be needed.

Here's an example:

large_number = 3610
quotient, remainder = divmod(large_number, 19)
print(remainder == 0)

Output: True

By using divmod on large_number and 19, the code calculates both the quotient and remainder in one go. It checks if the remainder equals zero to determine divisibility by 19, with the result showing that 3610 (19 × 190) is indeed divisible by 19.

Method 3: Using Recursion
Recursion can be used to chunk down the number by subtracting multiples of 19 and checking divisibility on a smaller number. This is more theoretical in nature, but it showcases an algorithmic approach to the problem, utilizing the function call stack.

Here's an example:

def is_divisible_by_19(n):
    if n < 19:
        return False
    if n == 19:
        return True
    return is_divisible_by_19(n - 19)

large_number = 798
print(is_divisible_by_19(large_number))

Output: True

The function is_divisible_by_19 subtracts 19 repeatedly from the input number until it either becomes less than 19 (not divisible) or equals 19 (divisible). The function calls itself recursively to perform this check, here confirming that 798 is divisible by 19.

Method 4: Using Iterative Subtraction
Similar to the recursive method, an iterative approach involves continuously subtracting 19 from the number until what remains is less than 19. This approach replaces the recursive calls with a loop, which some may find clearer and which avoids potential stack overflow issues with recursion on very large numbers.

Here's an example:

large_number = 19456
while large_number >= 19:
    large_number -= 19
print(large_number == 0)

Output: True

This snippet iteratively subtracts 19 from large_number until it is less than 19. If the final value of large_number is zero after the loop ends, it indicates divisibility. In this case, 19456 (19 × 1024) is divisible by 19.

Bonus One-Liner Method 5: Using Lambda and Modulo
For those who prefer a concise, functional approach, a lambda function can be used. This one-liner defines a lambda that takes a number and returns True if it's divisible by 19 and False otherwise, using the modulo operator.
Here's an example:

is_divisible_by_19 = lambda x: x % 19 == 0
print(is_divisible_by_19(1900))

Output: True

By assigning a lambda to is_divisible_by_19, we create a function in a single line that checks divisibility. The example confirms that 1900 is divisible by 19 using this compact notation.

Summary/Discussion
• Method 1: Using Modulo Operator. Simple and direct. Best for most cases. Efficiency can vary with the size of numbers.
• Method 2: Using divmod Function. Provides the quotient as well. Slightly less straightforward than Method 1, but useful when the quotient is also required.
• Method 3: Using Recursion. Illustrative of an algorithmic approach. Not very practical for large numbers due to recursion depth limits.
• Method 4: Using Iterative Subtraction. More practical and safer for large numbers than recursion. More verbose and far less efficient than the modulo approach.
• Bonus One-Liner Method 5: Using Lambda and Modulo. Concise for one-off checks. Not as readable for those unfamiliar with lambda syntax. Less efficient if called repeatedly due to lambda call overhead.
Water Stick springs in Fantastic Contraption (the game) Here is what is cool about [Fantastic Contraption](http://fantasticcontraption.com/) - it's like a whole new world, a world ready for exploring. I am Newton, and I can see if this world follows the models that I propose. In this post, I am going to explore the elastic nature of the "water-sticks". If you have played Fantastic Contraption, I am sure you noticed that the water-sticks are springy. How do these springy sticks work? Are they just like the springs we have in the real world? An excellent model for springs in the real world is Hooke's law. It says the force exerted by a spring is proportional to its stretch: |F_spring| = k s (often written F = −k s). Obviously, this is the magnitude (not the actual force, because that would be a vector). k is the "spring constant" or the stiffness of the spring (in N/m). s is the amount the spring is either compressed or stretched from its natural length. The minus sign is sort of silly. It is there to show that the force exerted by the spring is in the opposite direction to the stretch. Another important aspect of springs (in the real world) is the energy stored in a spring: U_spring = (1/2) k s². So, now what about the FC-world (Fantastic Contraption)? To explore this question, I created a machine that has a ball falling while attached to a series of water-sticks. I will analyze this in terms of energy. As the ball drops, the system consisting of the ball, the water-sticks, and the Earth (or whatever planet it is on) will have constant energy. There is no external work on the system, so: K₁ + U₁ = K₂ + U₂, where the gravitational potential energy is: U_grav = m g y. It doesn't matter where y is measured from, since the only thing that shows up is the CHANGE in potential. So, what two positions will I consider? I will consider position 1 to be right when the ball is released. Position 2 will be when the ball reaches its lowest point. These are nice points to choose since the kinetic energy for both cases is zero. This gives an energy equation of: m g y₁ + (1/2) k s₁² = m g y₂ + (1/2) k s₂². s₁ is zero (it starts off with no stretch). I also will place the origin at the lowest point such that y₂ is also zero. This gives: m g y₁ = (1/2) k s₂². Now solving for k: k = 2 m g y₁ / s₂². I can get values for everything except the mass of the ball (well, I can get the mass in terms of the mass of the ball - like I did before). I will use [video tracker](http://www.cabrillo.edu/~dbrown/tracker/) to get positions (I took a screen shot of the game). (The measured positions and the resulting numeric spring constant, expressed in terms of the ball's mass, appeared in figures that are not reproduced here.) Ok, but I really didn't test if the water-sticks obey Hooke's law (since I only have one data point). I could repeat the experiment, but drop it from a different height and see if I get the same spring constant. (I will leave that as an exercise for a student.) There is one other way I can test this spring with the setup I have. After the mass stops bouncing, it is in equilibrium. The final stretched length of the water sticks is 4.61 U. If Hooke's law is working here, then the upward force from the spring should be the same as the downward force of gravity: F_spring = m g. And, adding the model for a spring: k s = m g, so k = m g / s. (The resulting value, again in terms of the ball's mass, differed from the energy-based estimate above.) Ok - not the same thing. Something weird is going on. Truthfully, I already knew this. Suppose I replace the many small water sticks with two larger ones (of about the same total length). It essentially does not bounce at all. I have an idea that the water sticks ARE NOT springy. Perhaps it is the joints between sticks that are springy. This would mean that this last setup has very few springs whereas the previous had a lot.
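To make the bookkeeping concrete, here is a minimal Python sketch of the two estimates described above. All the numbers are placeholders - the post's actual tracker measurements were in figures that did not survive - and the ball's mass m is unknown, so both estimates are expressed per unit mass (k/m):

# Energy estimate: m*g*y1 = (1/2)*k*s2^2  =>  k/m = 2*g*y1 / s2^2
g = 9.8       # assumed gravitational field strength; the FC-world value is unknown
y1 = 2.0      # hypothetical drop height of the ball (FC distance units, U)
s2 = 1.5      # hypothetical stretch of the water-stick chain at the lowest point (U)
print("k/m from energy conservation:", 2 * g * y1 / s2**2)

# Equilibrium estimate: k*s_eq = m*g  =>  k/m = g / s_eq
s_eq = 0.6    # hypothetical stretch once the ball hangs at rest (U)
print("k/m from equilibrium:", g / s_eq)

If Hooke's law held, the two printed values would agree; the post's point is that in Fantastic Contraption they do not.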
Comments:

Just to repeat my comment, I was suggesting you could write some of your Fantastic Contraption series up for the American Journal of Physics. As Chad Orzel over on Uncertain Principles remarked, it's an excellent example of the scientific method. It's interesting because we can't count on the laws of physics in "simulation world" being exactly the same as ours.

I find FC a little limited, have you tried Dax Phyz? It's not really a game, but you can do much more than attach rods to wheels. If FC makes me feel like Newton, Phyz makes me feel like God ;->.

I think the reason for the perceived springiness of water sticks in FC is due to the iterative nature of the constraint solver, where, in each time-step, the sum of all forces results in movements of connected objects, whose positions then are incrementally adjusted to better fit all constraints (such as stick lengths) in a number of relaxation steps. The number of relaxation steps is most likely fixed to some value like 10 in FC, which means that objects several constraints apart require more than one time-step to adjust, resulting in soft, springy movement.
ECE 232E Project 4: Graph Algorithms In this project we will explore graph theory theorems and algorithms by applying them on real data. In the first part of the project, we consider a particular graph modeling correlations between stock price time series. In the second part, we analyse traffic data on a dataset provided by Uber. The third part of the project asks you to define your own task. The fourth part of the project is related to shortest paths, and is UNGRADED and OPTIONAL; however, it is suggested that you complete it first, especially if you struggle with the runtimes for Questions 19-24. 1. Stock Market In this part of the project, we study data from the stock market. The data is available on this Dropbox Link. The goal of this part is to study correlation structures among fluctuation patterns of stock prices using tools from graph theory. The intuition is that investors will have similar strategies of investment for stocks that are affected by the same economic factors. For example, the stocks belonging to the transportation sector may have different absolute prices, but if, for example, fuel prices change or are expected to change significantly in the near future, then you would expect the investors to buy or sell all such stocks similarly to maximize their returns. Toward that goal, we construct different graphs based on similarities among the time series of returns on different stocks at different time scales (a day vs. a week). Then, we study properties of such graphs. The data is obtained from the Yahoo Finance website for 3 years. You're provided with a number of csv tables, each containing several fields: Date, Open, High, Low, Close, Volume, and Adj Close price. The files are named according to the ticker symbol of each stock. You may find the market sector for each company in Name sector.csv. We recommend doing this part of the project (Q1 - Q8) in R. 1. Return correlation In this part of the project, we will compute the correlation among log-normalized stock-return time series data. Before giving the expression for correlation, we introduce the following notation: p_i(t) denotes the closing price of stock i on day t; q_i(t) = (p_i(t) − p_i(t−1)) / p_i(t−1) denotes the return of stock i over day t; and r_i(t) = log(1 + q_i(t)) denotes the log-normalized return. Then with the above notation, we define the correlation between the log-normalized stock-return time series data of stocks i and j as ρ_ij = (⟨r_i r_j⟩ − ⟨r_i⟩⟨r_j⟩) / √( (⟨r_i²⟩ − ⟨r_i⟩²)(⟨r_j²⟩ − ⟨r_j⟩²) ), where ⟨·⟩ is a temporal average on the investigated time regime (for our data set it is over 3 years). QUESTION 1: What are upper and lower bounds on ρ_ij? Provide a justification for using log-normalized return (r_i(t)) instead of regular return (q_i(t)). 2. Constructing correlation graphs In this part, we construct a correlation graph using the correlation coefficient computed in the previous section. The correlation graph has the stocks as the nodes, and the edge weights are given by the following expression: w_ij = √( 2(1 − ρ_ij) ). Compute the edge weights using the above expression and construct the correlation graph. QUESTION 2: Plot a histogram showing the un-normalized distribution of edge weights. 3. Minimum spanning tree (MST) In this part of the project, we will extract the MST of the correlation graph and interpret it. QUESTION 3: Extract the MST of the correlation graph. Each stock can be categorized into a sector, which can be found in the Name sector.csv file. Plot the MST and color-code the nodes based on sectors. Do you see any pattern in the MST? The structures that you find in the MST are called vine clusters. Provide a detailed explanation of the pattern you observe.
QUESTION 4: Run a community detection algorithm (for example walktrap) on the MST obtained above. Plot the communities formed. Compute the homogeneity and completeness of the clustering. (You can use the 'clevr' library in R to compute homogeneity and completeness.)

4. Sector clustering in MSTs

In this part, we want to predict the market sector of an unknown stock. We will explore two methods for performing the task. In order to evaluate the performance of the methods we define the following metric:

\alpha = \frac{1}{|V|} \sum_{i \in V} P(i),

where S_i is the sector of node i. Define

P(i) = \frac{|Q_i|}{|N_i|},

where Q_i is the set of neighbors of node i that belong to the same sector as node i and N_i is the set of neighbors of node i. Compare α with the case where

P(i) = \frac{|S_i|}{|V|},

that is, the fraction of all nodes that belong to the same sector as node i. A sketch of this comparison follows after Question 8.

QUESTION 5: Report the value of α for the above two cases and provide an interpretation for the difference.

5. Correlation graphs for weekly data

In the previous parts, we constructed the correlation graph based on daily data. In this part of the project, we will construct a correlation graph based on WEEKLY data. To create the graph, sample the stock data weekly on Mondays and then calculate ρij using the sampled data. If there is a holiday on a Monday, we ignore that week. Create the correlation graph based on weekly data.

QUESTION 6: Repeat questions 2, 3, 4, 5 on the WEEKLY data.

6. Correlation graphs for MONTHLY data

In this part of the project, we will construct a correlation graph based on MONTHLY data. To create the graph, sample the stock data monthly on the 15th and then calculate ρij using the sampled data. If there is a holiday on the 15th, we ignore that month. Create the correlation graph based on MONTHLY data.

QUESTION 7: Repeat questions 2, 3, 4, 5 on the MONTHLY data.

QUESTION 8: Compare and analyze all the results of daily data vs weekly data vs monthly data. What trends do you find? What changes? What remains similar? Give reasons for your observations. Which granularity gives the best results when predicting the sector of an unknown stock and why?
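A compact Python sketch of the α metric for Question 5, under the definition reconstructed above (networkx stands in for the R/igraph stack the course suggests):

```python
from collections import Counter
import networkx as nx

def alpha(tree: nx.Graph, sector: dict) -> float:
    """Mean fraction of same-sector neighbours over all nodes: P(i) = |Q_i|/|N_i|."""
    total = 0.0
    for i in tree.nodes:
        nbrs = list(tree.neighbors(i))
        same = sum(1 for j in nbrs if sector[j] == sector[i])
        total += same / len(nbrs)
    return total / tree.number_of_nodes()

def alpha_baseline(tree: nx.Graph, sector: dict) -> float:
    """Comparison case: P(i) replaced by each sector's overall frequency |S_i|/|V|."""
    counts = Counter(sector[i] for i in tree.nodes)
    n = tree.number_of_nodes()
    return sum(counts[sector[i]] / n for i in tree.nodes) / n
```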
{"url":"https://www.itcsdaixie.com/%E5%9B%BE%E5%83%8F%E7%AE%97%E6%B3%95%E4%BB%A3%E5%86%99%EF%BD%9Cece-232e-project-4-graph-algorithms/","timestamp":"2024-11-03T22:18:42Z","content_type":"text/html","content_length":"70874","record_id":"<urn:uuid:ec89d082-046d-4c6a-8e3b-2a9c89c624be>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00462.warc.gz"}
Browsing by Subject "Convergence of numerical methods" Browsing by Subject "Convergence of numerical methods" Now showing items 1-20 of 33 • Article A new numerical method for solving linear elliptic boundary value problems with constant coefficients in a polygonal domain is introduced. This method produces a generalized Dirichlet-Neumann map: given the derivative of ... • Conference Object Atherosclerosis is the major cause of heart attack and stroke in the western world. In this paper we present a computerized method for segmenting the athrerosclerotic carotid plaque from ultrasound images. The method uses ... • Conference Object We investigate the performance of a classical constrained optimization (CCO) algorithm and a constrained optimization Genetic Algorithm (GA) for solving the Bandwidth Allocation for Virtual Paths (BAVP) problem. We compare ... • Article The authors consider the problem of computing tunneling matrix elements for bridge-mediated electron transfer reactions using the Löwdin [J. Math. Phys. 3, 969 (1962) • Article We compare two numerical methods for the solution of elliptic problems with boundary singularities. The first is the integrated singular basis function method (ISBFM), a finite-element method in which the solution is ... • Article Both the axisymmetric and the planar Newtonian extrudate-swell problems are solved using the standard and the singular finite element methods. In the latter method, special elements that incorporate the radial form of the ... • Article The determination of the boundary of a cavity, defined here as a perfectly insulated inclusion, within a conducting medium from a single voltage and current flux measurements on the accessible boundary of the medium, can ... • Conference Object (ACM, 1997) Lower and upper bounds on convergence complexity, under varying degrees of locality, for optimistic, rate-based flow control algorithms are established. It is shown that randomness can be exploited to yield an even simpler ... • Article Two important performance parameters of distributed, rate-based flow control algorithms are their locality and convergence complexity. The former is characterized by the amount of global knowledge that is available to their ... • Conference Object The purpose of this paper is to investigate in an infinite dimensional space, the first passage problem with a risk-sensitive performance criterion, and to illustrate the asymptotic behavior of the associated value function, ... • Article A Legendre spectral Galerkin method is presented for the solution of the biharmonic Dirichlet problem on a square. The solution and its Laplacian are approximated using the set of basis functions suggested by Shen, which ... • Article We consider the solution of various boundary value problems for the Helmholtz equation in the unit square using a nodal cubic spline collocation method and modifications of it which produce optimal (fourth-) order ... • Article In this article, we propose a simple method for detecting an inclusion Ω2 embedded in a host electrostatic medium Ω1 from a single Cauchy pair of voltage and current flux measurements on the exterior boundary of Ω1. A ... • Conference Object (IEEE, 1998) In this paper, we show that for all unknown Multi-Input (MI) nonlinear system that affected by external disturbances, it is possible to construct a semi-global state-feedback stabilizer when the only information about the ... 
• Conference Object A new class of linear controllers for linear time varying systems is proposed. These controllers are based on nonlinear tools such as integrator backstepping and nonlinear damping, and on a new filter structure that ...
• Article We present guidelines on how to choose the mesh and polynomial degree distribution for the approximation of reaction-diffusion problems in polygonal domains, in the context of the hp finite element method. These guidelines ...
• Article We consider the numerical approximation of singularly perturbed problems by the h version of the finite element method on a piecewise uniform, Shishkin mesh. It is well known that this method yields uniform approximations, ...
• Conference Object In this paper the solutions to the optimal multiobjective H2/H∞ problem are characterized using Banach space duality theory, and shown to satisfy a flatness or allpass condition. Dual and predual spaces are identified, and ...
• Conference Object In this paper we consider the optimal disturbance attenuation problem and robustness for linear time-varying (LTV) systems. This problem corresponds to the standard optimal H∞ problem for LTI systems. The problem is ...
{"url":"https://gnosis.library.ucy.ac.cy/browse?type=subject&value=Convergence%20of%20numerical%20methods","timestamp":"2024-11-11T03:57:21Z","content_type":"text/html","content_length":"75973","record_id":"<urn:uuid:1e563a2c-e8dc-4551-8714-be181d03b3b5>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00677.warc.gz"}
The Direction of the Acceleration Vector

Since acceleration is a vector quantity, it has a direction associated with it. The direction of the acceleration vector depends on two things:

· whether the object is speeding up or slowing down
· whether the object is moving in the + or - direction

The general principle for determining the acceleration is: If an object is slowing down, then its acceleration is in the opposite direction of its motion. This general principle can be applied to determine whether the sign of the acceleration of an object is positive or negative, right or left, up or down, etc.

Consider the two data tables below. In each case, the acceleration of the object is in the positive direction. In Example A, the object is moving in the positive direction (i.e., has a positive velocity) and is speeding up. When an object is speeding up, the acceleration is in the same direction as the velocity. Thus, this object has a positive acceleration. In Example B, the object is moving in the negative direction (i.e., has a negative velocity) and is slowing down. According to our general principle, when an object is slowing down, the acceleration is in the opposite direction as the velocity. Thus, this object also has a positive acceleration.

This same general principle can be applied to the motion of the objects represented in the two data tables below. In each case, the acceleration of the object is in the negative direction. In Example C, the object is moving in the positive direction (i.e., has a positive velocity) and is slowing down. According to our principle, when an object is slowing down, the acceleration is in the opposite direction as the velocity. Thus, this object has a negative acceleration. In Example D, the object is moving in the negative direction (i.e., has a negative velocity) and is speeding up. When an object is speeding up, the acceleration is in the same direction as the velocity. Thus, this object also has a negative acceleration.

Observe the use of positive and negative in the discussion above (Examples A - D). In physics, the use of positive and negative always has a physical meaning. It is more than a mere mathematical symbol. As used here to describe the velocity and the acceleration of a moving object, positive and negative describe a direction. Both velocity and acceleration are vector quantities, and a full description of the quantity demands the use of a directional adjective. North, south, east, west, right, left, up and down are all directional adjectives. Physics often borrows from mathematics and uses the + and - symbols as directional adjectives. Consistent with the mathematical convention used on number lines and graphs, positive often means to the right or up, and negative often means to the left or down. So to say that an object has a negative acceleration, as in Examples C and D, is to simply say that its acceleration is to the left or down (or in whatever direction has been defined as negative). Negative accelerations do not refer to acceleration values that are less than 0. An acceleration of -2 m/s/s is an acceleration with a magnitude of 2 m/s/s that is directed in the negative direction.
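The sign logic above can be checked with a short Python sketch; the sample velocities are illustrative stand-ins for the data tables, which are not reproduced here:

```python
def acceleration_direction(v0: float, v1: float, dt: float) -> str:
    """Classify the acceleration sign from two velocity samples.

    Velocities use the usual convention: + means rightward/upward,
    - means leftward/downward.
    """
    a = (v1 - v0) / dt
    if a > 0:
        return "positive (rightward/upward) acceleration"
    if a < 0:
        return "negative (leftward/downward) acceleration"
    return "zero acceleration"

# Example B style case: moving in the - direction while slowing down
# (e.g. velocity going from -8 m/s to -6 m/s) gives a positive acceleration.
print(acceleration_direction(-8.0, -6.0, 1.0))  # positive ...
# Example C style case: moving in the + direction while slowing down.
print(acceleration_direction(8.0, 6.0, 1.0))    # negative ...
```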
{"url":"https://mechanicalengineering.softecksblog.in/620/","timestamp":"2024-11-14T14:30:43Z","content_type":"text/html","content_length":"122004","record_id":"<urn:uuid:b63102aa-d9c5-45c9-85af-f46976455258>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00247.warc.gz"}
On the CFD Modelling of Slamming of the Metal Melt in High-Pressure Die Casting Involving Lost Cores | CASTMAN

by Sebastian Kohlstädt 1,2, Michael Vynnycky 1,3,* and Stephan Goeke 4
1 Division of Processes, Department of Materials Science and Engineering, KTH Royal Institute of Technology, Brinellvägen 23, 100 44 Stockholm, Sweden
2 Volkswagen AG, Division of Components Manufacturing, Dr. Rudolf-Leiding-Platz 1, 34225 Baunatal, Germany
3 Department of Mathematics and Statistics, University of Limerick, Limerick V94 T9PX, Ireland
4 Institute of Mechanics, Kassel University, Mönchebergstr. 7, 34125 Kassel, Germany
* Author to whom correspondence should be addressed.

Abstract: This paper uses computational fluid dynamics (CFD), in the form of the OpenFOAM software package, to investigate the forces on the salt core in high-pressure die casting (HPDC) when being exposed to the impact of the inflowing melt in the die filling stage, with particular respect to the moment of first impact, commonly known as slamming. The melt-air system is modelled via an Eulerian volume-of-fluid approach, treating the air as a compressible perfect gas. The turbulence is treated via a Reynolds-averaged Navier-Stokes (RANS) approach. The RNG k-ε and the Menter SST k-ω models are both evaluated, with the use of the latter ultimately being adopted for batch computations. A study of the effect of the Courant number, with a view to establishing mesh independence, indicates that meshes which are finer, and time steps that are smaller, than those previously employed for HPDC simulations are required to capture the effect of slamming on the core properly, with respect to existing analytical models and empirical measurements. As a second step, it is then discussed what response should be expected when this force, with its spike-like morphology and small force-time integral, impacts the core. It is found that the displacement of the core due to the spike in the force is so small that, even though the force is high in value, the bending stress inside the core remains below the critical limit for fracture. It can therefore be concluded that, when assuming homogeneous crack-free material conditions, the spike in the force is not failure-critical.

Keywords: compressible two-phase flow; slamming; OpenFOAM; high-pressure die casting; lost salt cores; solid continuum mechanics
1. Introduction

High-pressure die casting (HPDC) is an important process for the manufacture of high-volume and low-cost automotive components, such as automatic transmission housings, crank cases and gear box components [1,2,3]. Liquid metal, generally aluminium or magnesium, is injected through a complex system of gates and runners and into the die at speeds of between 50 and 100 ms−1 at the ingate, and under pressures as high as 100 MPa [4]. From an economic point of view, HPDC is typically constrained by a huge base investment in machinery and tooling, although this is alleviated by low incremental costs for each additional unit produced; in short, the process scales very well with increasing output. On the other hand, this complicates the task for the design engineer, who has to be sure about the viability of a process and manufactured parts before any budget is invested in tooling and machinery. One technological constraint to date is that there is no serial production, via HPDC, of parts with inlying hollow shapes or undercuts formed by lost cores, even though other casting techniques have employed lost cores for decades [1,4]. Nevertheless, several ideas for such products do already exist [5,6,7], and one material that has been put forward as a candidate for HPDC with lost cores is salt [6,8,9]. The basic idea of using salt cores is to block parts of the die volume by inserting cores as placeholders; in so doing, the melt will not penetrate into this space. The cores may then be removed after solidification and one creates undercuts or hollow sections with them, which may then later act as cooling or oil-flow channels [6,8,9].

One way to determine whether lost cores are indeed a viable option for a given geometry is to employ numerical simulation, in particular, computational fluid dynamics (CFD) [10,11,12,13]. However, whereas the papers just mentioned focused on the flow of a melt past a core in more general terms, the particular focus of this paper shall be on determining the load on the solid core during the melt's initial impact. This concept is often referred to as slamming, and is a phenomenon that is investigated more commonly in the context of naval architecture [14,15,16]; it occurs when a two-phase interface with one phase of high density (water) and another of lower density (air) hits an obstacle, for example, a ship. This can also be the case when investigating lost cores in high-pressure die casting, when the air surrounding the core is replaced by the much heavier melt. The particular scientific and technological issue of interest is the initial peak in the load that all previous simulations of high-pressure die casting and lost cores have predicted [10,11,12,13]; more precisely, we wish to investigate the core's response to this load peak in more detail, in order to determine whether the core will fail as a result of it.

2. Model Equations and Simulation Methodology

2.1. Model Equations

As in our earlier CFD modelling of high-pressure die casting [12,17], we model the two-phase flow of molten metal and air by using the volume-of-fluid (VOF) method [18], wherein a transport equation for the VOF function, γ, of each phase is solved simultaneously with a single set of continuity and Navier-Stokes equations for the whole flow field; note also that γ, which is advected by the fluids, can thus be interpreted as the liquid fraction.
Considering the molten melt and the air as Newtonian [19], compressible and immiscible fluids, the governing equations can be written as [20,21]

\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{U}) = 0, \quad (1)

\frac{\partial}{\partial t}(\rho \mathbf{U}) + \nabla \cdot (\rho \mathbf{U}\mathbf{U}) = -\nabla p + \nabla \cdot \left\{ (\mu + \mu_{tur}) \left( \nabla \mathbf{U} + (\nabla \mathbf{U})^{T} \right) \right\} + \rho \mathbf{g} + \mathbf{F}_{s}, \quad (2)

\frac{\partial \gamma}{\partial t} + \nabla \cdot (\gamma \mathbf{U}) + \nabla \cdot (\gamma (1-\gamma) \mathbf{U}_{r}) = -\frac{\gamma}{\rho_{g}} \left( \frac{\partial \rho_{g}}{\partial t} + \mathbf{U} \cdot \nabla \rho_{g} \right), \quad (3)

where t is the time, \mathbf{U} the mean fluid velocity, p the pressure, \mathbf{g} the gravity vector, \mathbf{F}_{s} the volumetric representation of the surface tension force and T denotes the transpose. In particular, \mathbf{F}_{s} is modelled as a volumetric force by the Continuum Surface Force (CSF) method [22]. It is only active in the interfacial region and formulated as \mathbf{F}_{s} = \sigma \kappa \nabla \gamma, where \sigma is the interfacial tension and \kappa = \nabla \cdot (\nabla \gamma / |\nabla \gamma|) is the curvature of the interface; the significance of the term \mathbf{U}_{r} in Equation (3) is explained shortly in Section 2.2. The material properties \rho and \mu are the density and the dynamic viscosity, respectively, and are given by

\rho = \gamma \rho_{l} + (1-\gamma) \rho_{g}, \quad (4)

\mu = \gamma \mu_{l} + (1-\gamma) \mu_{g}, \quad (5)

where the subscripts g and l denote the gas and liquid phases, respectively. We take \rho_{l}, \mu_{g} and \mu_{l} to be constant, but assume the air to be an ideal gas, that is, its density changes with pressure and temperature; hence, the equation of state for our model reads

\rho_{g} = \frac{M p}{R T}, \quad (6)

where M is the molecular weight for air, R is the universal gas constant and T is the temperature. This is a new unknown in the system and needs to be solved for via the heat equation [20,21],

\frac{\partial}{\partial t}(\rho T) + \nabla \cdot (\rho T \mathbf{U}) = \nabla \cdot (\alpha_{eff} \nabla T) - \left( \frac{\gamma}{c_{vl}} + \frac{1-\gamma}{c_{vg}} \right) \left( \nabla \cdot (p \mathbf{U}) + \frac{\partial (\rho K)}{\partial t} + \nabla \cdot (\rho K \mathbf{U}) \right), \quad (7)

where K = \frac{1}{2} \mathbf{U} \cdot \mathbf{U} is the kinetic energy, c_{vg} and c_{vl} denote the specific heat capacities at constant volume for the gas and liquid phases, respectively, and \alpha_{eff} is given by

\alpha_{eff} = \frac{\gamma k_{l}}{c_{vl}} + \frac{(1-\gamma) k_{g}}{c_{vg}} + \frac{\mu_{tur}}{\sigma_{tur}}, \quad (8)

where k_{g} and k_{l} denote the thermal conductivities for the gas and liquid phases, respectively, and \sigma_{tur} is the turbulent Prandtl number, whose value is set to 0.9 [23]. Note that \alpha_{eff} resembles a phase-averaged thermal diffusivity that includes the contribution of turbulence, although it lacks a density term in the denominator. Furthermore, \mu_{tur} in Equation (2) denotes the turbulent eddy viscosity, which will be calculated via the Menter SST k-ω model [24]; the equations for this model are summarized in Appendix A.

The major purpose of this paper is to calculate the forces on the core during slamming. Those are governed by the following formulae. For i = x, y, the total force acting on the core in direction i, F_{i,tot}, is given by

F_{i,tot} = F_{i,p} + F_{i,s}, \quad (9)

where F_{i,p} and F_{i,s} are pressure and viscous shear forces, respectively, and are given by

F_{i,p} = -\iint_{A_{c}} p \, n_{i} \, dA, \quad (10)

F_{i,s} = \iint_{A_{c}} (\mu + \mu_{tur}) \left( \frac{\partial U_{i}}{\partial x_{j}} + \frac{\partial U_{j}}{\partial x_{i}} \right) n_{j} \, dA, \quad (11)

where \mathbf{U} = (U_{x}, U_{y}) is the velocity with its components and A_{c} is the surface area of the core.
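As an illustration of Equations (9)-(11), a minimal Python sketch of the discrete surface integration is given below. The input arrays (face pressures, normals, areas, velocity gradients) are hypothetical stand-ins for what a CFD post-processing step would provide; in OpenFOAM itself, the pre-implemented forces function object mentioned in Section 2.2 performs this internally.

```python
import numpy as np

def core_force(p, n, dA, grad_u, mu_eff):
    """Sum pressure and viscous contributions over the faces of the core surface.

    p: (m,) face pressures; n: (m,2) outward unit normals; dA: (m,) face areas;
    grad_u: (m,2,2) velocity gradient per face; mu_eff: (m,) mu + mu_tur.
    """
    f_pressure = -(p[:, None] * n * dA[:, None]).sum(axis=0)        # Eq. (10)
    strain = grad_u + np.swapaxes(grad_u, 1, 2)                     # dUi/dxj + dUj/dxi
    f_shear = (mu_eff[:, None] * np.einsum("mij,mj->mi", strain, n)
               * dA[:, None]).sum(axis=0)                           # Eq. (11)
    return f_pressure + f_shear                                     # Eq. (9)
```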
The model two-dimensional (2D) geometry is shown in Figure 1. This consists of two channel walls, an inlet, an outlet and a salt core that is located about the horizontal axis of symmetry of the channel. A summary of the mathematical expressions of the applied boundary conditions is presented in Table 1. In summary: the channel walls and the surface of the core are no-slip, zero heat-flux boundaries; the inlet velocity and temperature are prescribed; the outlet is at ambient pressure and heat conduction is negligible. In addition, there is always melt at the inlet, so that γ = 1 there; at all other boundaries, γ satisfies a zero flux condition.

Figure 1. The geometry for investigating the slamming on a salt core in a channel; all dimensions are in mm.

Table 1. Boundary conditions for the presented model; n denotes the outward unit normal.

As for the initial conditions of the die, the cavity is filled with air (γ = 0) that is at rest (U = 0), at ambient temperature (T = Tamb) and at atmospheric pressure (p = pamb). The initial conditions for the turbulence model are set via the length scale of the largest eddies and the turbulence intensity; those quantities were set to 2 mm and 5%, respectively.

What is now still missing are the material properties of the phases. Those are presented in Table 2. Note that we have taken ρl and μl to be constant, even though they would in general depend on the temperature. The reason for taking them to be constant is that the temperature of the melt will not change appreciably from Tmelt during the filling, in view of the zero heat-flux boundary conditions that are being applied; on the other hand, ρg may change appreciably, as the gas will heat up from its initial ambient temperature and experience a significant increase in pressure, both of which are accounted for via Equation (6). One additional comment concerns the specific heat capacities cpg and cpl: the index p indicates that their values have been measured and documented at constant pressure, and such values are more commonly tabulated than cvg or cvl. Equation (7), however, requires cv. This conversion can be conducted via the isentropic expansion factor, or heat capacity ratio, which for air can be assumed to be constant and equal to cpg/cvg = 1.4 in the given temperature range [12,25,26]. For the aluminum melt, this additional step is not necessary, as it was treated as being incompressible; thus, cvl = cpl.

Table 2. Model parameters. The parameters for gas are those for air; those for metal are for the alloy AlSi9Cu3 [12,19,27].

2.2. Simulation Methodology

To solve the above equations, the OpenFOAM software package [28,29,30,31] was used, due to the niche nature of the field of application, as well as to make use of its extendability. OpenFOAM has most of its applications designed for solving fluid mechanics problems [29,32] and applies the finite volume method [23,33,34] to solve them. With the later aim of scaling the model to complex industrial 3D geometries in mind, OpenFOAM is also a very powerful tool, since the distribution of a case to a huge number of CPU cores is not limited by license restrictions, as is the case with commercial CFD codes. In particular, we used the compressibleInterFoam solver, which handles two compressible, non-isothermal immiscible fluids using a VOF interface-capturing approach. The solver also uses the multi-dimensional universal limiter with explicit solution (MULES) scheme for interface compression, which introduces a supplementary velocity field, \mathbf{U}_{r}, in the vicinity of the interface, as seen in Equation (3); in doing so, the local flow steepens the gradient and the interface becomes sharper and more pristine. A typical form for \mathbf{U}_{r} is \mathbf{U}_{r} = \min(\mathbf{U}, \max(\mathbf{U})), as given by Reference [20]. Further benefits that accrue from using OpenFOAM for this application are that its implementation of the Menter SST k-ω model has previously been shown to be robust, and to give results that are in excellent agreement with experimental data [35,36]. Moreover, by use of a function object called forces that is already pre-implemented in OpenFOAM, the forces required in Equations (10) and (11) can be readily calculated.

The corresponding mesh for the 2D geometry shown in Figure 1 is illustrated in Figure 2. This is a structured mesh consisting of one layer of hexahedral cells.
It is also parameterized and can therefore easily be refined for later mesh independence studies. For this purpose, the OpenFOAM utility refineMesh was used, which, by default, halves the mesh spacing upon execution, that is, quadrupling the cell count for a 2D mesh as in Figure 2. This simplified model was used for efficiency reasons in order to establish that the newly installed tool yields sensible results.

Figure 2. An example of a computational grid created with the utilities blockMesh and mirrorMesh for a mesh spacing of 2 mm.

Lastly, as this is a rather complex flow problem in the context of CFD research, it is worthwhile to look at the solution algorithm a little more closely. In order to do that, Figure 3 is presented. Here, the standard PIMPLE algorithm, a combination of PISO (Pressure Implicit with Splitting of Operator) [37] and SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) [38], was modified: an additional step was added that solves the transport equation of the phase variable γ according to Equation (3) at the beginning of the solution algorithm, that is, before the momentum predictor. Figure 3 shows the original PISO algorithm on the left, and on the right the reader sees the implemented extension of it, including the additional step for the phase field for handling the two-phase flow. The corrector loops can be broken either by a residual criterion or by a maximum-number criterion. One additional comment for the sake of completeness is that the outermost loop for the increasing time step is omitted, as the chart only shows the processes that occur within each time step.

Figure 3. Pressure Implicit with Splitting of Operator (PISO) algorithm before and after the adjustments.

3. Results and Discussion

In this section, we present the results of the numerical simulations. The principal aim of these is to determine the magnitudes of the forces which act on the salt core, with a view to being able to infer whether an actual salt core would be able to withstand such forces in practice. As one of the diagnostics of the computations, we consider first the slamming factor, so as to link our results to analytical expressions for this quantity that are available in the field of marine technology.

3.1. The Concept of the Dimensionless Slamming Factor

Following the results of recently published articles on salt cores in high-pressure die casting [10,12,13,17], in almost every simulation where the core was treated as rigid, the signature peak mentioned in Section 1 appeared in a plot of the force exerted on the core as a function of time. The ambition here was therefore to investigate its nature in more detail. This will be done by introducing the dimensionless slamming factor, Cs, which is defined as

C_{s} = \frac{F}{\rho R U^{2} l}, \quad (12)

where F is the force, given by

F = \sqrt{F_{x,tot}^{2} + F_{y,tot}^{2}}, \quad (13)

with F_{x,tot} and F_{y,tot} being computed according to Equations (9)-(11); Cs also depends on the density, ρ, the undisturbed melt velocity, U, the radius of the core, R, and its length, l. Analytical progress was made by von Karman [39] and Wagner [40] to determine this slamming factor in the context of marine technology, with the result that, according to von Karman, the slamming factor for a cylinder was π, while Wagner put it at 2π. More recent studies [14,41], however, indicate that the value according to von Karman represents a lower limit of the slamming factor, while the Wagner model acts as an upper cap. Both models also fail to provide an evolution of the value over time.
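The normalization of Equations (12) and (13) is straightforward to apply to the force signal that a solver writes out; a minimal Python sketch follows, in which all numeric values are placeholders rather than values taken from the paper:

```python
import numpy as np

RHO_MELT = 2460.0   # melt density, kg/m^3 -- placeholder; Table 2 holds the real value
R_CORE   = 0.005    # core radius, m -- placeholder
L_CORE   = 0.001    # core length, m (one-cell-deep 2D slab) -- placeholder
U_IN     = 20.0     # ingate velocity, m/s

def slamming_factor(fx, fy, u=U_IN, rho=RHO_MELT, r=R_CORE, l=L_CORE):
    """Equations (12)-(13): normalize the total force on the core."""
    f = np.hypot(fx, fy)              # F = sqrt(Fx^2 + Fy^2)
    return f / (rho * r * u**2 * l)   # dimensionless Cs
```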
Campbell and Weynberg [15,42] present a model that offers results based on empirical data to determine the plot over time, according to the formula

C_{s} = \frac{F(t)}{\rho R U^{2} l} = \frac{5.15}{1 + 9.5 \frac{Ut}{R}} + 0.275 \frac{Ut}{R}. \quad (14)

The letters in Equation (14) represent the same quantities as in Equation (12). The introduced dimensionless slamming factor makes the calculated results independent of the ingate velocity or geometric dimensions. This is further illustrated in the following plot. Figure 4 shows that the calculated slamming factor is the same no matter whether it is being evaluated for an ingate velocity of 10 or 20 ms−1. Those values for Uin were used for U when calculating the slamming factor according to Equations (12)-(14).

Figure 4. Normalized forces on the core for a mesh resolution of 0.3 mm.

3.2. Determining the Slamming Factor with Respect to Mesh Resolution

The model setup was calibrated with existing data on similar cases, such as the water entry of an axisymmetric projectile [43] with similar Reynolds numbers, and also for the feature of the pile-up effect [41,44,45], wherein a jet is formed near the intersection with the initial water surface. Applying the model to the previously introduced case, Figure 5, Figure 6 and Figure 7 show the temporal development of the phase distribution, the pressure and the velocity at impact on the core. Please note that, although the model is non-isothermal, it does not take the temperature difference between the core, the walls and the melt into account, in the sense that zero heat-flux boundary conditions are applied at all solid boundaries (see Table 1). The actual fluid flow and heat transfer in the near-wall area may therefore differ in reality due to heat transfer-induced effects.

Figure 5. Phase distribution of the melt at impact and immediately afterwards: (a) t·U/R = 0.2; (b) t·U/R = 0.205; (c) t·U/R = 0.21. Here, U = Uin with Uin = 20 ms−1.

Figure 6. The pressure field at the times of impact and immediately afterwards: (a) t·U/R = 0.2; (b) t·U/R = 0.205; (c) t·U/R = 0.21. Here, U = Uin with Uin = 20 ms−1.

Figure 7. The velocity magnitude field at the times of impact and immediately afterwards: (a) t·U/R = 0.2; (b) t·U/R = 0.205; (c) t·U/R = 0.21. Here, U = Uin with Uin = 20 ms−1.

After calibrating the setup, it was time to proceed to evaluating the forces on the core and to benchmark them with previously published data. The first thing to determine was the necessary mesh resolution in order to compute plausible results for the slamming factor. The results of this mesh resolution study are presented in Figure 8. A meticulous investigation of this phenomenon of slamming showed that, with generally used meshing standards in industry-oriented computations and mesh spacings in the range of 1 mm [10], the CFD simulation underpredicts the value of the slamming factor (see Figure 8). It becomes evident that the simulation requires a mesh as fine as a spacing of 0.3 mm in order for the computed result for Cs to be above the lower limit of the slamming factor according to the von Karman model. This is significantly lower than the typical mesh spacing that was used in Reference [10] and requires a mesh for this rather small geometry to consist in total of 60,000 cells, albeit being a 2D mesh. Transferring these results onto a real-world casting geometry that includes salt cores and designing a 3D mesh according to these findings would, in turn, produce a mesh that would be completely impractical to use with currently available computational capacity.
Figure 8. Mesh study of the slamming factor in comparison with the models by von Karman [39] and Wagner [40]. Here, Uin = 20 ms−1.

While Figure 8 showed that coarser meshes underpredict the peak in the force, the stationary value of the force tends to be overpredicted the coarser the mesh is. From a design engineer's point of view, this is a beneficial outcome, as it builds a safety factor into the design of a devised product. Note also that the stationary or steady-state solution in Figure 8 refers to the constant value for Cs that is obtained once the melt front is well past the salt core; see, for example, Figure 4.

Figure 9 shows this study's result in comparison with the findings of previously published articles [15,39,40,46,47]. The results are plotted for a mesh spacing of 0.025 mm. Technically, only a refinement of the cells in the near-core area would have been necessary. However, given that the structured nature of the mesh, and the numerical benefits that come along with it, would then have had to be sacrificed, this opportunity was forgone here, as the domain was sufficiently small and calculation efficiency was not a benchmark in this academic study. It should be noted that the analytical models start with t = 0 at the point of impact, whereas for our numerical simulation t = 0 denotes the time at which melt starts to enter the cavity; the simulation time values were therefore adjusted so that t = 0 in Figure 9 corresponds to the time when melt first hits the core, with the consequence that negative values of t appear in this figure.

Figure 9. Comparison of the computed result with reference studies in previously published articles; mesh cell spacing 0.025 mm.

Figure 9 also shows the values for several other models, among them the static von Karman [39] and Wagner [40] models, as well as more recent models that also represent the development of the force over time [46,47]. It can be concluded that this study's assessment of the slamming factor agrees very well with earlier authors' findings. One interesting observation of the result from the present volume-of-fluid model is that its increase is, compared to the mostly analytical models, more gradual. The other models feature a sharp step-like increase in value, resembling the morphology of a Heaviside function [48]. The reason for this gradual increase is most likely to be found inside the volume-of-fluid approach itself. As Equations (4) and (5) illustrated, the material properties are averaged by the volume fraction, γ. Another reason for this feature is to be found in the numerical approach of this study, where the numerics are known to smear out the interface in volume-of-fluid simulations [49]; therefore, artificial interface compression terms are introduced into the partial differential equation for the volume fraction [12,49], Equation (3). This effect of a non-discrete interface causes additional stability problems in the interFoam solver family, in the sense of spurious parasitic velocities at the interface, and several strategies to mitigate these effects towards a sharper interface have since been proposed [50,51,52,53]. We can conclude that, taking into account Figure 8 and Figure 9, one has, for the given boundary conditions, to maintain at least a mesh fineness corresponding to a spacing of 0.2 mm in order to reach a value for Cs above the limit defined by von Karman in Reference [39], if one wants to achieve a proper result for the slamming factor.
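For reference, the empirical Campbell-Weynberg curve of Equation (14), against which the computed peaks above are benchmarked, is simple to evaluate; a short sketch (the numeric arguments in the example call are illustrative only):

```python
def campbell_weynberg_cs(t, u, r):
    """Empirical slamming factor of Equation (14); t is measured from impact.

    t: time in s (scalar or array-like with NumPy), u: undisturbed melt
    velocity in m/s, r: core radius in m.
    """
    s = u * t / r                           # dimensionless penetration U t / R
    return 5.15 / (1.0 + 9.5 * s) + 0.275 * s

# At first impact (t = 0) the curve starts at 5.15, which lies between the
# von Karman (pi ~ 3.14) and Wagner (2 pi ~ 6.28) limits quoted above.
print(campbell_weynberg_cs(0.0, 20.0, 0.005))   # -> 5.15
```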
Transferring this finding to more applicable designs in industry, it underscores the need for mesh refinement near a salt core boundary if the load is to be evaluated correctly.

3.3. Effect of Turbulence

The effect of the turbulence treatment on the slamming factor in the simulations was also evaluated during the investigations. Figure 10 features this result. Two different turbulence models were benchmarked against each other for the given case and the different mesh spacings, as illustrated in Figure 10. The benchmarked turbulence models are Menter's SST k-ω model [24,54] and the RNG k-ε model [55]. The reason why those two were picked is that the Menter model has been shown to be of excellent stability and accuracy in studies published by other authors [35] and in our own results published in previous papers [11,12]. The RNG model was selected as it is the standard of the commercial CFD casting software Flow-3D Cast. Both turbulence models belong to the RANS (Reynolds-Averaged Navier-Stokes) model family.

Figure 10. Influence of the selected turbulence model on the computed result for the slamming factor.

As is evident from Figure 10, the selection of the turbulence model is of minor importance. Even with different mesh spacings, the result for the slamming factor stayed more or less the same. This is plausible, as the main contribution to the slamming results from the pressure forces on the core, an effect documented in more detail in Reference [12]. For those forces, the momentum of the melt is more important compared to other factors in the fluid. This is also easy to comprehend with the introduced formulae in Equations (10) and (11). One sees here that the only quantity that is influenced by the turbulence model is μtur, while all the others do not directly depend on it. This turbulent eddy viscosity, however, only appears in the formula for the viscous forces, which are of negligible importance, as seen in Figure 4.

3.4. Effect of Courant Number

One might now argue that the presented strong mesh dependency of the results is a consequence of the simulation time-step size being Courant number-controlled [56], and hence that a reduction of the mesh spacing automatically leads to a shorter time step, Δt. In order to eliminate this possibility, studies with different Courant numbers were also conducted; in particular, each cell's local Courant number is calculated, and the solver then sets a value for Δt so that these local Courant numbers are not greater than some maximum allowable value that is specified by the user. The result of this investigation is shown in Figure 11. In Figure 11a, the values for Fstat are shown, whereas Figure 11b shows the values for Fmax. Fstat calculates the slamming factor value via the average force computed while the denser phase surrounds the core, while Fmax uses the peak force value. Although the latter in particular oscillates somewhat, there is no obviously visible trend that would support the hypothesis that only reducing Δt would also lead to a more physical value for the slamming factor on a coarser mesh. The values for Fstat and Fmax are the same for a Courant number range from 0.1 to 2, while at the same time the difference caused by the different mesh resolutions is clearly visible. In conclusion, a mesh spacing of 0.6 mm with a Courant number of 2 leads to values much closer to reality for the peak in the force than a mesh resolution of 1 mm with a maximum Courant number of 0.1.
But, even for a mesh resolution of 0.6 mm, the computed values for the peak in the force are significantly below the lower bound given by the von Karman model (π).

Figure 11. Results of the time step size Δt study: (a) results for Fstat; (b) results for Fmax.

3.5. Response of the Core to the Spike-Like Force Impact

Although it is academically interesting to determine the value of the slamming factor as precisely as possible, the underlying question is whether such a small force-time interval is failure-critical for salt cores in high-pressure die casting in general. We will therefore in the following present a couple of considerations to assess the necessity for the engineer to keep the slamming factor below a certain level.

If one, for this purpose, leaves the field of 2D considerations and imagines how a beam of length 70 mm, situated within the die and facing the inflowing melt stream orthogonally, would react to the inflowing melt and the slamming, one would make the following observations. The consideration is of course based on the assumption that the inflowing melt stream stays as thin as in the test case, a fact that has emerged in more application-oriented studies [12,17]. In general, any force impacting on the core at one point cannot travel through the solid body faster than the speed of sound,

v_{s} = \sqrt{\frac{K}{\rho_{s}}}, \quad (15)

with v_{s} being the speed of sound in salt, K being the bulk modulus for salt cores and \rho_{s} being the density of salt cores. K is related to the documented quantities for salt cores [17] via References [57,58]:

K = \frac{E}{3(1 - 2\nu)}, \quad (16)

with E as the Young's modulus and ν as the Poisson ratio. Assuming ρs as 2056 kg m−3, ν as 0.21 and E as 1.5×10^10 Pa [17], Equations (15) and (16) yield the speed of sound in salt as 2048 ms−1. Based on Figure 4, one can now assume 0.02 as a scale for the dimensionless filling time during which the peak acts, yielding 3.5×10^−6 s as an absolute time scale during which the peak force acts on the core. Together with the calculated speed of sound, this means that a stress signal would travel a distance below 10 mm through the core; that is, referring back to the beam of 70 mm in length, it would not even be able to reach the bearing.

If we further assume a slamming factor of \frac{3}{2}\pi, as reported in Figure 9, the absolute force according to Equation (12) is slightly below 1400 N. Now, applying Newton's second law,

F = ma, \quad (17)

where F is the force, m is the mass and a is the acceleration, integrating (17) twice with respect to t, and assuming the initial velocity to be zero and a to be constant, one obtains

s = \frac{1}{2} a t^{2}, \quad (18)

where s denotes distance; with this, one may get an idea of how far the centre of the core travels. Based on the calculations above for the distance travelled in the two directions of the beam and the reported dimensions of the core in Figure 1, neglecting the round edges, one obtains a displaced mass of 0.02 kg, leading, with the previously calculated value for the force, to an acceleration of 6.6×10^4 ms−2 and in turn to a displacement of the core section of 4×10^−7 m in the direction of the mean flow, according to Equation (18).
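The arithmetic of Equations (15)-(18) is easy to verify; the short script below reproduces it with the quoted salt properties (note that a force of roughly 1320 N, "slightly below 1400 N", gives the stated 6.6×10^4 ms−2):

```python
import math

# Salt properties and the estimated peak load, as quoted in the text.
E, nu, rho_s = 1.5e10, 0.21, 2056.0      # Young's modulus, Poisson ratio, density
K = E / (3.0 * (1.0 - 2.0 * nu))         # bulk modulus, Eq. (16)
v_s = math.sqrt(K / rho_s)               # speed of sound in salt, Eq. (15) -> ~2048 m/s

t_peak = 3.5e-6                          # s, duration of the force spike
print(v_s, v_s * t_peak)                 # the stress signal travels only ~7 mm

F, m = 1320.0, 0.02                      # peak force (N, slightly below 1400 N), mass (kg)
a = F / m                                # Eq. (17) -> 6.6e4 m/s^2
s = 0.5 * a * t_peak**2                  # Eq. (18) -> ~4e-7 m displacement
print(a, s)
```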
Three-point bending tests with salt cores, conducted by the authors [17], show that the tested specimens manufactured via pressing and sintering [59] are, even at room temperature, capable of undergoing displacements of this scale without cracking, provided that they are free of internal cracks, meaning that the continuum mechanics approach can be applied. In turn, this leads to the conclusion that, due to the short time scales of the slamming effects, they are not to be considered failure-critical for the salt core. Based on the presented knowledge, only a small segment of the core will be displaced by the spike in the force, causing the core in turn to vibrate. The time scale of the impact will be too small for the signal to reach the core bearings and produce a counterforce. Due to the displacement, the core will start to vibrate before being permanently deformed by the steady-state force. Since this steady-state force will be present for a longer period of time, it may in the end be failure-critical.

To finally conclude the discussion, the reader should bear two additional things in mind. First, the results presented, although relying heavily on dimensionless numbers, are hugely geometry-, parameter- and, most notably, ingate/impact velocity-dependent. One must therefore be careful when transferring these findings to other geometries and setups; on the other hand, it is highly recommended to apply the presented CFD approach to the geometry at hand. Secondly, the assumption of the core being crack-free may also be flawed with respect to reality. Since the material salt is of a ceramic nature, salt cores will most often contain small cracks. It thus provides scope for future research work to develop a model for assessing how long potential cracks are allowed to be in order to withstand the slamming impact. Another interesting direction will be to measure not only the displacement of a die casting, as was done in Reference [17], but also the forces on it. As a practical conclusion, this also suggests that pre-heated cores that are warmer than room temperature may yield a higher fraction of castings with intact cores and thus yield channel geometries within the tolerance limit. Higher temperatures naturally make it more likely that the material will fail in a ductile way, rather than in a brittle way.

4. Conclusions

In the course of this paper, the presented research work showed that the slamming phenomenon is in general underestimated by state-of-the-art CFD simulations in industry that have used typical mesh spacings of 1 mm. We have found that the mesh resolution has to be increased by a factor of 40 in order for the computed slamming factor to be in the range of existing analytical results and empirical measurements. Interestingly, the reduction of the time step size, Δt, did not show any impact on the proper estimation of the slamming factor's value. The contribution of the turbulence in this study was found to be negligible, and the mathematical justification for this was shown with the presented formulae; this echoes the findings and conclusions of more application-related studies [12]. It remains a topic for scientific debate as regards what response the slamming peak in the force actually causes inside the material.
This paper provides a reasoning, based on simple continuum mechanics formulae, that the core will, for the given parameters, be only slightly displaced and thus in turn only vibrate, rendering the slamming impact a non-failure-critical phenomenon. The reader is reminded that the provided results are only valid for the presented boundary conditions and that transferring them to other setups has to be handled with care. It is never possible, without the detailed geometric constraints and boundary conditions, to decide whether a salt core solution for high-pressure die casting will be viable. Future research will be undertaken to measure the force inside a die during filling and also to combine the concept of slamming with the laws of fracture mechanics.

Author Contributions: Conceptualization, S.K.; methodology, S.K.; software, S.K. and S.G.; validation, S.K.; formal analysis, S.K., M.V.; investigation, all; writing, original draft preparation, S.K. and M.V.; writing, review and editing, S.K. and M.V.; supervision, M.V.; project administration, S.K. and M.V.; funding acquisition, S.K. All authors have read and agreed to the published version of the manuscript.

Funding: This research received no external funding.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are available on request from the first author.

Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

Appendix A. The Menter SST k-ω Model (Menter 1994)

The equations for the Menter Shear Stress Transport (SST) k-ω turbulence model, where k is the turbulence kinetic energy and ω is the specific rate of dissipation of the turbulence kinetic energy, are

\frac{D(\rho k)}{Dt} = \tau : \nabla \mathbf{U} - \beta^{*} \rho \omega k + \nabla \cdot [(\mu + \sigma_{k} \mu_{tur}) \nabla k], \quad (A1)

\frac{D(\rho \omega)}{Dt} = \frac{\rho \Gamma}{\mu_{tur}} \tau : \nabla \mathbf{U} - \beta \rho \omega^{2} + \nabla \cdot [(\mu + \sigma_{\omega} \mu_{tur}) \nabla \omega] + 2(1 - F_{1}) \frac{\rho \sigma_{\omega 2}}{\omega} \nabla k \cdot \nabla \omega, \quad (A2)

where

\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{U} \cdot \nabla,

and with

\tau = \mu_{tur} \left( \nabla \mathbf{U} + (\nabla \mathbf{U})^{T} - \frac{2}{3} (\nabla \cdot \mathbf{U}) \delta \right) - \frac{2}{3} \rho k \delta, \quad (A3)

where δ is the Kronecker delta and \beta^{*} = 0.09. The model constants \sigma_{k}, \sigma_{\omega}, \beta and \Gamma are given by

\varphi = F_{1} \varphi_{1} + (1 - F_{1}) \varphi_{2} \quad \text{for } \varphi = (\sigma_{k}, \sigma_{\omega}, \beta, \Gamma),

with

\sigma_{k1} = 0.85, \quad \sigma_{\omega 1} = 0.5, \quad \beta_{1} = 0.0750,
\sigma_{k2} = 1.0, \quad \sigma_{\omega 2} = 0.856, \quad \beta_{2} = 0.0828,
\Gamma_{i} = \beta_{i}/\beta^{*} - \sigma_{\omega i} \kappa^{2}/\sqrt{\beta^{*}}, \quad i = 1, 2,

where \kappa = 0.41. In addition,

F_{1} = \tanh(\mathrm{arg}_{1}^{4}), \quad (A4)

with

\mathrm{arg}_{1} = \min\left[ \max\left( \frac{\sqrt{k}}{0.09 \omega y}, \frac{500 \mu}{y^{2} \rho \omega} \right), \frac{4 \rho \sigma_{\omega 2} k}{CD_{k\omega} y^{2}} \right], \quad (A5)

where y is the distance to the nearest surface and CD_{k\omega} is the positive portion of the cross-diffusion term in Equation (A2), that is,

CD_{k\omega} = \max\left( \frac{2 \rho \sigma_{\omega 2}}{\omega} \nabla k \cdot \nabla \omega, 10^{-20} \right), \quad (A6)

and the turbulent viscosity, \mu_{tur}, is given by

\mu_{tur} = \frac{a_{1} \rho k}{\max(a_{1} \omega, \Omega F_{2})}, \quad (A7)

where Ω is the absolute value of the vorticity, that is, \Omega = |\nabla \times \mathbf{U}|, and F_{2} is given by

F_{2} = \tanh(\mathrm{arg}_{2}^{2}), \quad (A8)

where

\mathrm{arg}_{2} = \max\left( \frac{2\sqrt{k}}{0.09 \omega y}, \frac{500 \mu}{y^{2} \rho \omega} \right), \quad (A9)

and a_{1} = 0.31.

Publisher's Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. © 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http:
{"url":"https://castman.co.kr/on-the-cfd-modelling-of-slamming-of-the-metal-melt-in-high-pressure-die-casting-involving-lost-cores-2/","timestamp":"2024-11-08T22:17:23Z","content_type":"text/html","content_length":"133616","record_id":"<urn:uuid:088ce336-b6bf-4f25-926a-a77a54ff9636>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00449.warc.gz"}
Representativeness Heuristic - Psynso

The representativeness heuristic is a psychological term wherein people judge the probability or frequency of a hypothesis by considering how much the hypothesis resembles available data, as opposed to using a Bayesian calculation. While often very useful in everyday life, it can also result in neglect of relevant base rates and other cognitive biases. The representativeness heuristic was first proposed by Amos Tversky and Daniel Kahneman. In causal reasoning, the representativeness heuristic leads to a bias toward the belief that causes and effects will resemble one another (examples include both the belief that "emotionally relevant events ought to have emotionally relevant causes", and magical associative thinking).

Tom W.

In a study done in 1973, Kahneman and Tversky gave their subjects the following information: "Tom W. is of high intelligence, although lacking in true creativity. He has a need for order and clarity, and for neat and tidy systems in which every detail finds its appropriate place. His writing is rather dull and mechanical, occasionally enlivened by somewhat corny puns and by flashes of imagination of the sci-fi type. He has a strong drive for competence. He seems to feel little sympathy for other people and does not enjoy interacting with others. Self-centered, he nonetheless has a deep moral sense."

The subjects were then divided into three groups who were given different decision tasks:

One group of subjects was asked how similar Tom W. was to a student in one of nine types of college graduate majors (business administration, computer science, engineering, humanities/education, law, library science, medicine, physical/life sciences, or social science/social work). Most subjects associated Tom W. with an engineering student, and thought he was least like a student of social science/social work.

A second group of subjects was asked instead to estimate the probability that Tom W. was a grad student in each of the nine majors. The probabilities were in line with the similarity judgments from the first group.

A third group of subjects was asked to estimate the proportion of first-year grad students there were in each of the nine majors.

The second group's probabilities were approximated by how much they thought Tom W. was representative of each of the majors, and less by the base rate probability of being that kind of student in the first place (the third group). Had the subjects approximated their answers by the base rates, their estimated probability that Tom W. was an engineer would have been much lower, as there were few engineering grad students at the time.

The Taxicab Problem

In another study done by Tversky and Kahneman, subjects were given the following problem: "A cab was involved in a hit and run accident at night. Two cab companies, the Green and the Blue, operate in the city. 85% of the cabs in the city are Green and 15% are Blue. A witness identified the cab as Blue. The court tested the reliability of the witness under the same circumstances that existed on the night of the accident and concluded that the witness correctly identified each one of the two colors 80% of the time and failed 20% of the time. What is the probability that the cab involved in the accident was Blue rather than Green knowing that this witness identified it as Blue?"

Most subjects gave probabilities over 50%, and some gave answers over 80%.
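A quick Bayes computation, sketched in Python below with the problem's own numbers, shows why these answers overshoot:

```python
# The taxicab problem worked through Bayes' theorem.
p_blue = 0.15                  # base rate P(H): the cab is Blue
p_green = 0.85
p_id_blue_given_blue = 0.80    # witness accuracy P(D|H)
p_id_blue_given_green = 0.20   # witness error rate

p_id_blue = (p_blue * p_id_blue_given_blue
             + p_green * p_id_blue_given_green)               # P(D) = 0.29
p_blue_given_id = p_blue * p_id_blue_given_blue / p_id_blue   # P(H|D) ~ 0.41

# Equating P(H|D) with P(D|H) -- the representativeness shortcut -- would
# give 0.80 instead, which is roughly what many subjects reported.
print(round(p_blue_given_id, 2), p_id_blue_given_blue)        # 0.41 0.8
```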
The correct answer, found using Bayes' theorem, is lower than these estimates:

• There is a 12% chance (15% times 80%) of the witness correctly identifying a blue cab.
• There is a 17% chance (85% times 20%) of the witness incorrectly identifying a green cab as blue.
• There is therefore a 29% chance (12% plus 17%) the witness will identify the cab as blue.
• This results in a 41% chance (12% divided by 29%) that the cab identified as blue is actually blue.

Representativeness is cited in the similar effects of the gambler's fallacy, the regression fallacy and the conjunction fallacy.

Representativeness, Extensionality, and Bayes' Theorem

The representativeness heuristic violates one of the fundamental properties of probability: extensionality. For example, participants were provided with a description of Linda who resembles a feminist. Then participants were asked to evaluate the probability of her being a feminist, the probability of her being a bank teller, or the probability of being both a bank teller and feminist. Probability theory dictates that the probability of being both a bank teller and feminist (the conjunction of two sets) must be less than or equal to the probability of being either a feminist or a bank teller. However, participants judged the conjunction (bank teller and feminist) as being more probable than being a bank teller alone.

The use of the representativeness heuristic will likely lead to violations of Bayes' theorem, which states:

P(H|D) = \frac{P(D|H)\, P(H)}{P(D)}.

However, judgments by representativeness only look at the resemblance between the hypothesis and the data; thus, inverse probabilities are equated:

P(H|D) = P(D|H).

As can be seen, the base rate P(H) is ignored in this equation, leading to the base rate fallacy. This was explicitly tested by Dawes, Mirels, Gold and Donahue (1993), who had people judge both the base rate of people who had a particular personality trait and the probability that a person who had a given personality trait had another one. For example, participants were asked how many people out of 100 answered true to the question "I am a conscientious person" and also, given that a person answered true to this question, how many would answer true to a different personality question. They found that participants equated inverse probabilities (e.g., P(conscientious | neurotic) = P(neurotic | conscientious)) even when it was obvious that they were not the same (the two questions were answered immediately after each other).

Disjunction Fallacy

In addition to extensionality violation, base-rate neglect, and the conjunction fallacy, the use of the representativeness heuristic may lead to a disjunction fallacy. From probability theory, the disjunction of two events is at least as likely as either of the events individually. For example, the probability of being either a physics or biology major is at least as likely as being a physics major, if not more likely. However, when a personality description (data) seems to be very representative of a physics major (e.g., pocket protector) over a biology major, people judge that it is more likely for this person to be a physics major than a natural sciences major (which is a superset of physics). Further evidence that the representativeness heuristic may be causal to the disjunction fallacy comes from Bar-Hillel and Neter (1986).
They found that people judge a person who is highly representative of being a statistics major (e.g., highly intelligent, does math competitions) as being more likely to be a statistics major than a social sciences major (a superset of statistics), but they do not think that he is more likely to be a Hebrew language major than a humanities major (a superset of Hebrew language). Thus, only when the person seems highly representative of a category is that category judged as more probable than its superordinate category. These incorrect appraisals remained even in the face of losing real money in bets on probabilities.

Alternative Explanations

Jon Krosnick, a professor in Communication at Stanford, has proposed in his work that the effects that Kahneman and Tversky saw may be partially attributed to information order effects. When the order of information was reversed, with the probability figures coming later, many of the effects were mitigated.
{"url":"https://psynso.com/representativeness-heuristic/","timestamp":"2024-11-09T13:14:04Z","content_type":"text/html","content_length":"120520","record_id":"<urn:uuid:334a3351-a88f-445b-8954-7158da42f7df>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00482.warc.gz"}
Sine of angle calculator

Our sine of angle calculator makes it easy for you to find the sine of any angle. Simply enter the angle value into the calculator, choose between degrees and radians, and it will automatically calculate the sine of the angle for you. This tool is perfect for students, teachers, and anyone else who needs to calculate the sine of angles quickly and accurately.

What is the Sine of an Angle?

The sine of an angle is a trigonometric function that relates the length of the side opposite the angle to the length of the hypotenuse in a right triangle. It is represented by the symbol 'sin'.

Sine of Angle Formula

The formula for finding the sine of an angle is:
sin α = \dfrac{\text{opposite}}{\text{hypotenuse}}
Where α is the angle, opposite is the length of the side opposite the angle, and hypotenuse is the length of the hypotenuse of the right triangle.

Example 1: Let's say we have a right triangle with an angle of 30 degrees, and the opposite side is 5 units while the hypotenuse is 10 units. We can find the sine of the angle using the formula as follows:
sin 30 = 5/10
Therefore, the sine of 30 degrees is 0.5.

Example 2: Suppose we have another right triangle with an angle of 45 degrees. We can use our sine of angle calculator to find the sine of the angle as follows: enter the angle value into the calculator and choose degrees from the select box. The calculator will show that the sine of 45 degrees is approximately 0.71.

In conclusion, the sine of an angle is an important trigonometric function that is used in various fields, including mathematics, physics, and engineering. Our sine of angle calculator makes it easy for you to calculate the sine of any angle quickly and accurately. Whether you are a student or a professional, our calculator will help you get the job done. So, use our calculator today and make your calculations easier!
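For readers who want to reproduce the calculator's behavior, here is a minimal Python sketch; the function name and the degree/radian switch are illustrative, not the calculator's actual code.

import math

def sine_of_angle(value, unit="degrees"):
    """Return the sine of an angle given in degrees or radians."""
    if unit == "degrees":
        value = math.radians(value)
    return math.sin(value)

print(round(sine_of_angle(30), 4))   # 0.5
print(round(sine_of_angle(45), 4))   # 0.7071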
{"url":"https://owlcalculator.com/geometry/sine-of-angle","timestamp":"2024-11-06T05:33:49Z","content_type":"text/html","content_length":"527410","record_id":"<urn:uuid:c4972103-7fba-4256-a397-ab59ff654010>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00494.warc.gz"}
BMS Key Functions: Cell Balancing (Essay)

Overcharging and over-discharging of Li-ion cells cause permanent damage and are also a significant safety issue. Reliable control of each cell voltage is therefore mandatory. Battery cells are connected in parallel and in series to obtain greater energy capacity and voltage. The parallel connections are simple to operate, since the paralleled cells behave as one larger single cell and cell management is simple. The current divides according to the internal resistance of the individual cells, and the terminal voltage of each cell is matched (Einhorn et al., 2011). In serially connected cells there are characteristic differences between cells during charging and discharging. Under load, the cell with the smallest capacity is the first to reach the discharge voltage limit (DVL). The capacity of the whole battery pack is thus restricted by the weakest cell in the module. Charging of the battery module cannot continue once the cell with the smallest capacity is fully charged, even though other cells are not yet full. Discharging of the battery module likewise stops when the cell with the smallest capacity is empty, even though the other cells still hold charge (Einhorn et al., 2011).

One further important issue when individual cells are assembled into a battery module is that each cell has its own thermal behavior, even with a well-designed cooling strategy; cells therefore age differently from each other, especially after frequent charging and discharging. The performance of a battery module with varying single-cell capacities can be further improved when the charge in the cells is balanced with an electronic circuit. Two main concepts are used to balance a battery module. The first approach is passive cell balancing, which uses a resistor to discharge the cell with the largest cell voltage so that charging can continue until all cells are fully charged. This technique is applicable only during the charging process and is not efficient, due to power dissipation and energy loss. The second is active cell balancing, which transfers charge between the cells in a battery module using a small temporary storage component, which can be a capacitor or an inductor (Einhorn et al., 2011). A more in-depth treatment, with experimental results and simulated structural analysis of battery balancing system designs, can be found in the literature by Einhorn and by Roessler and Fleig. Battery balancing design is very significant, as it maintains all battery cells at the same state of charge.

Battery Modeling

The BMS has a significant function in HEVs, as the battery pack is the heart of the vehicle and must operate in an energy-aware way. Accordingly, it requires proper monitoring and supervision of the battery pack. In order to achieve the required efficiency, an accurate calculation of the battery state of charge (SOC), state of health (SOH), and remaining useful life (RUL) is required. Since battery parameters cannot be precisely measured with a sensor, a numerical model along with a robust estimation algorithm is necessary. To build a model that reflects the battery's chemical characteristics, there are many parameters to consider when applying it in real-time models. There is a trade-off between model accuracy and complexity: models must capture the system energetics while being easy to implement on available embedded microprocessors (Ahmed, 2014).
According to Farag (2013), the models can be arranged in ascending order into four categories according to their complexity and the number of represented parameters. These models are Ideal, Behavioral, Equivalent-Circuit, and Electrochemical.

Ideal Model

The ideal model neglects the internal parameters of the battery and is thus extremely simple to model. This model treats the battery as an ideal, unlimited voltage source (Farag, 2013).

Behavioral Model

The behavioral model, or black-box model, does not go into battery-specific electrochemical parameters; instead, it reproduces the terminal voltage behavior. The model is based on phenomenological functions, which require specific measured data. The fitted factors can be obtained from neural networks, empirical functions, or data tables (Farag, 2013). Peukert's law is frequently applied in behavioral models for batteries. An empirical function is adopted to represent the dependency of the battery's remaining capacity on the discharge rate:

I^{P_C} \, t = \text{constant} \qquad (1.1)

where I is the discharge current, t is the maximum discharge time, and P_C is the Peukert coefficient. The battery capacity is then calculated as:

C_{n_1} = C_n \left( \frac{I_n}{I_{n_1}} \right)^{P_C - 1} \qquad (1.2)

where C_{n_1} is the battery's remaining capacity at discharge current I_{n_1} (Farag, 2013). Other behavioral models were presented by Shepherd and Plett to forecast the terminal voltage under charging/discharging conditions; these can be found in the literature by Farag. Such models can account for cell hysteresis, polarization time constants, and ohmic loss effects.

Equivalent-Circuit Model

Resistors and capacitors are the major components of the lumped-element equivalent circuit used to simulate battery cell behavior. RC circuits are frequently applied in BMSs for their clarity, the small number of parameters to optimize, and their simple implementation. RC circuits are implemented as first-order, second-order, or third-order models according to the required model complexity and accuracy, with or without the hysteresis effect. The third-order model is shown below in Fig. 2. However, this model still cannot provide data about the internal electrochemical reactions in the unit cell, so it is incapable of predicting electrochemical phenomena such as cell degradation, capacity fading, and power fading (Farag, 2013).

Fig. (2) Third-Order RC Battery Model. R0 represents the battery internal resistance, and the equivalent circuit uses RC branches to model the battery dynamics. The number of parallel RC components gives the dynamic order of the circuit (Ahmed, 2014).

Electro-Chemical Model

The electro-chemical model (ECM), or physics-based model, treats the battery electrochemistry parameters in detail. What makes it very complex is that it describes the electrochemical reactions using partial differential equations (PDEs) with many unknown parameters to solve. PDEs allow a detailed analysis and high accuracy in the parameter data. To implement an ECM in an actual BMS, model-reduction calculations are required. Several techniques for ECM reduction have been introduced in the literature. The author notes that much of the computational complexity associated with ECMs comes from solving PDEs for the lithium concentration in the solid particles of the electrodes (spherical diffusion). A typical approach is to apply approximations and simplifications to this calculation. The ECM is also needed to capture battery aging through state-of-health estimation (Farag, 2013).
A full-order electrochemical model and a reduced-order electrochemical model have been established as improvements to the ECM approach. During the battery design stage, the full-order model can predict the physical interactions inside the cell, such as the potential distribution and the diffusion of electrochemical species. To implement the electrochemical model in real-time applications on a BMS for state-of-charge and state-of-health estimation, a reduced-order form has to be used. The electrode-average model and the state-value model are two simple forms of reduced-order electrochemical models (Ahmed, 2014).

Artificial Neural Network Model (ANN)

Artificial Neural Networks (ANNs), shown in Fig. 3, are a mathematical model that resembles the human brain in learning and recognizing patterns. Through a set of neurons connected by weights, ANNs can map input datasets to output datasets. ANNs can be applied to battery modeling and to state-of-charge estimation. Nevertheless, they require training on battery-specific behavioral data to produce proper estimates (Ahmed, 2014).

Fig. (3) Artificial Neural Network Diagram, where (x1, x2, x3, ..., xn) represent the neuron inputs, their weights are (w1j, w2j, w3j, ..., wnj), b is the bias, and φ denotes a nonlinear activation function (Farag, 2013). The neuron computes:

net_j = x_1 w_{1j} + x_2 w_{2j} + x_3 w_{3j} + \cdots + x_n w_{nj} + b \qquad (1.3)

O_j = \varphi(net_j) \qquad (1.4)

Thermal Management Systems

Thermal management of lithium-ion batteries is a safety-critical consideration, as battery overheating can lead to unsafe operating conditions. To operate the battery within the safe range, the cells must be monitored. Since cell temperature is difficult to measure directly, it must be estimated. Thermal models are combined with battery models to obtain a detailed characterization of battery performance (Farag, 2013).

Battery Aging Mechanisms

Battery aging is complex and influenced by the battery's operating conditions. Aging is mainly described as capacity or power fade to a pre-determined limit, where capacity fade is a loss of capacity and power fade is a rise in the battery's internal resistance. Aging can result from operating the battery under excessive conditions such as high temperature, high charging rates, or high state-of-charge levels. It can likewise take place due to battery storage (calendar aging) or usage (cycling aging). It is reported to result from several mechanisms in which performance degradation happens due to irreversible chemical reactions (Farag, 2013).

Conclusion

The lithium-ion battery has a narrow efficient working range in charging, discharging, and thermal operation. Battery design is multipart: battery cells are connected in series and parallel to gain the most capacity and efficiency while keeping the battery size as small as possible. To resolve these conflicts, a battery management system was introduced, through which lithium-ion batteries can be controlled and maintained effectively so that every individual cell operates under proper conditions; consequently, every cell is kept within the lithium-ion battery's safe operating window. The main points in the design and management of a battery for an electric vehicle have been presented, following the explanation of the BMS key functions. An overview of the main techniques for charge balancing and state-of-charge estimation has been given.
Battery models and their types were introduced. A trade-off between the complexity of the model and its accuracy must be taken into consideration, since increasing the computation and memory requirements of the model could compromise its suitability for real-time BMS applications. The push for efficient, long-driving-range hybrid electric vehicles points to a future built on high-energy lithium-ion batteries.
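To make the Peukert relation in equations (1.1)-(1.2) concrete, here is a minimal Python sketch. The coefficient and rated-capacity figures below are illustrative assumptions, not data from the cited studies.

# Illustrative Peukert-capacity calculation (equation 1.2 above).
# All numbers here are made-up example values, not measured data.

def peukert_capacity(c_rated, i_rated, i_new, pc):
    """Capacity available at discharge current i_new, given the rated
    capacity c_rated at current i_rated and Peukert coefficient pc."""
    return c_rated * (i_rated / i_new) ** (pc - 1)

c_rated = 100.0   # Ah, rated capacity
i_rated = 5.0     # A, rated discharge current
pc = 1.1          # assumed Peukert coefficient

for i_new in (5.0, 10.0, 20.0):
    print(f"{i_new:5.1f} A -> {peukert_capacity(c_rated, i_rated, i_new, pc):6.1f} Ah")
# Higher discharge currents yield less usable capacity, as equation (1.2) predicts.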
{"url":"https://grandpaperwriters.com/bms-key-functions-cell-balancingovercharging-and-over-discharging-of-liion-cells-causes-essay/","timestamp":"2024-11-14T20:11:30Z","content_type":"text/html","content_length":"51356","record_id":"<urn:uuid:32c13c3e-4853-4d25-9f46-c12b72ab6a60>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00369.warc.gz"}
Using a horizontal force of 200 N, we intend to move a wooden cabinet across a floor at a constant velocity. What is the friction force that will be exerted on the cabinet?

To determine the friction force exerted on the wooden cabinet, we need to understand the concept of constant velocity in the context of Newton's laws of motion.

Key Concepts:
1. Newton's First Law of Motion:
□ An object at rest will stay at rest, and an object in motion will stay in motion at a constant velocity unless acted upon by a net external force.
2. Constant Velocity:
□ If an object moves at a constant velocity, the net force acting on it is zero. This implies that the applied force must be balanced by the friction force.

Solution By Steps:
1. Understand Constant Velocity:
□ When the cabinet is moved at a constant velocity, there is no acceleration (a = 0). According to Newton's second law, F_{\text{net}} = m \cdot a, where m is the mass of the cabinet and a is the acceleration.
□ Since a = 0, F_{\text{net}} = 0. This means that the horizontal applied force (F_{\text{applied}}) and the friction force (F_{\text{friction}}) must be equal in magnitude but opposite in direction.
2. Determine the Friction Force:
□ Given that the applied force is 200 N, and since the object is moving at a constant velocity (thus no acceleration), the friction force must exactly oppose the applied force to maintain this constant velocity.
F_{\text{applied}} = F_{\text{friction}}
3. Apply the Given Data:
□ The applied force is given as 200 N.
F_{\text{friction}} = 200 \, \text{N}

Final Answer: The friction force exerted on the cabinet is \boxed{200 \, \text{N}}.
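As a sanity check of the balance-of-forces argument, this tiny Python sketch confirms that zero acceleration forces the friction term to match the applied force; the mass value is an arbitrary placeholder, since it cancels out.

# Constant velocity => net force is zero => friction balances the applied force.
f_applied = 200.0   # N
a = 0.0             # m/s^2, constant velocity
m = 40.0            # kg, arbitrary placeholder; it does not affect the result

f_net = m * a                    # Newton's second law: F_net = m * a
f_friction = f_applied - f_net   # friction supplies the remaining balance

print(f"Friction force = {f_friction} N")  # 200.0 N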
{"url":"https://en.sorumatik.co/t/using-a-horizontal-force-of-200-n-we-intend-to-move-a-wooden-cabinet-across-a-floor-at-a-constant-velocity-what-is-the-friction-force-that-will-be-exerted-on-the-cabinet/22316","timestamp":"2024-11-08T08:22:56Z","content_type":"text/html","content_length":"25014","record_id":"<urn:uuid:0c3a777e-0ed8-4822-bec2-b09c9bb543d1>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00877.warc.gz"}
Which of the following equations is not an identity? Explain.
a) \(\frac{x^{2}-1}{2} \cdot \frac{2}{x-1}=x+1\)
b) \(\frac{x-1}{x^{2}-1}=x+1\)
c) \(x^{2}-1=(x-1)(x+1)\)
d) \(\frac{1}{x^{2}-1} \div \frac{1}{x+1}=\frac{1}{x-1}\)

Short Answer

Expert verified

Equation b is not an identity because it does not hold true for all permissible values of x.

Step by step solution

Understand What an Identity Is
An identity is an equation that is true for all values of the variable for which both sides of the equation are defined. We need to check each equation for equality over the range of permissible values for x.

Analyze Equation a
\(\frac{x^{2}-1}{2} \cdot \frac{2}{x-1} = x+1\)
First, simplify the left-hand side: \( \frac{(x^{2}-1)\cdot 2}{2 \cdot (x-1)} = x+1\)
Simplifying further we get: \( \frac{(x+1)(x-1)}{(x-1)} = x+1 \)
Provided \(x \neq 1\): \(x+1 = x+1\), which is always true.

Analyze Equation b
\(\frac{x-1}{x^{2}-1} = x+1\)
First, factor the denominator: \(x^{2}-1 = (x-1)(x+1)\)
Then simplify the left-hand side: \(\frac{x-1}{(x-1)(x+1)} = x+1 \)
Provided \(x \neq \pm 1\): \(\frac{1}{x+1} = x+1 \). This is not true for all permissible values of \(x\), so this is not an identity.

Analyze Equation c
\(x^{2}-1 = (x-1)(x+1)\)
Expand the right-hand side: \((x-1)(x+1) = x^{2} - 1\)
This equation is always true for all \(x\), so it is an identity.

Analyze Equation d
\(\frac{1}{x^{2}-1} \div \frac{1}{x+1} = \frac{1}{x-1}\)
First, rewrite the division as multiplication by the reciprocal: \(\frac{1}{x^{2}-1} \cdot \frac{x+1}{1} = \frac{1}{x-1}\)
Simplify the left-hand side by factoring \(x^{2}-1\): \(\frac{1}{(x-1)(x+1)} \cdot (x+1) = \frac{1}{x-1}\)
Provided \(x \neq \pm 1\): \(\frac{1}{x-1} = \frac{1}{x-1}\), which is always true.

Key Concepts
These are the key concepts you need to understand to accurately answer the question.

Equation analysis
When tackling algebraic problems, it's essential to analyze each equation thoroughly. Equation analysis involves dissecting an equation to understand its structure and to identify the values for which it holds true. We check each equation's validity by simplifying both sides and testing various values of the variable, usually denoted as x. For instance, consider the equation \(\frac{x^2 - 1}{2} \times \frac{2}{x - 1} = x + 1\). To analyze this, we simplify the left-hand side: \(\frac{(x^2 - 1) \times 2}{2 \times (x - 1)} = x + 1\). Simplifying further, we get \(\frac{(x + 1)(x - 1)}{x - 1} = x + 1\). This equation holds true for all \(x \neq 1\), confirming that it is an identity in terms of equation analysis.

Simplifying expressions
Simplifying expressions is a key skill in algebra that helps verify the truth of given equations. To simplify an algebraic expression, we factorize or combine like terms to make the expression easier to work with. Consider the equation \(\frac{x-1}{x^2 - 1} = x + 1\). First, we factor the denominator: \((x^2 - 1) = (x - 1)(x + 1)\), and then the left-hand side becomes \(\frac{x - 1}{(x - 1)(x + 1)}\). Simplifying this, we get \(\frac{1}{x + 1} \neq x + 1\) for all permissible values of x. As such, simplifying this expression reveals that it is not an identity.

Identities in algebra
In algebra, an identity is an equation that holds true for all permissible values of the variable. Unlike conditional equations, identities are universally valid and help simplify complex problem solving. For example, the equation \(x^2 - 1 = (x - 1)(x + 1)\) is an identity. To verify, we expand the right-hand side: \((x - 1)(x + 1) = x^2 - 1\).
This simplification shows the equation holds for all values of x. Identifying algebraic identities is crucial for simplifying and solving algebraic problems efficiently.

Variable constraints
Variable constraints refer to the limitations on the values that a variable can take in order to ensure the equation is defined and potentially true. These constraints are critical when validating an identity. For instance, in the equation \(\frac{1}{x^2 - 1} \times \frac{x + 1}{1} = \frac{1}{x - 1}\), we first rewrite the division as multiplication and simplify: \(\frac{1}{(x^2 - 1)} \times (x + 1) = \frac{1}{x - 1}\). Since \(x^2 - 1 = (x - 1)(x + 1)\), we have \(\frac{1}{(x - 1)(x + 1)} \times (x + 1) = \frac{1}{x - 1}\). The constraint here is that x cannot be ±1, as this would make a denominator zero, invalidating the equation. Such constraints are essential for determining whether an equation serves as a valid identity.
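A quick symbolic check of the four candidates can be done with SymPy. This sketch simply asks whether each left-hand side simplifies to the right-hand side; note it does not track the excluded points x = ±1.

import sympy as sp

x = sp.symbols('x')

candidates = {
    'a': ((x**2 - 1) / 2 * (2 / (x - 1)), x + 1),
    'b': ((x - 1) / (x**2 - 1), x + 1),
    'c': (x**2 - 1, (x - 1) * (x + 1)),
    'd': ((1 / (x**2 - 1)) / (1 / (x + 1)), 1 / (x - 1)),
}

for label, (lhs, rhs) in candidates.items():
    is_identity = sp.simplify(lhs - rhs) == 0
    print(label, 'identity' if is_identity else 'NOT an identity')
# Only b fails: 1/(x+1) is not x + 1 in general.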
{"url":"https://www.vaia.com/en-us/textbooks/math/algebra-for-college-students-5-edition/chapter-6/problem-96-which-of-the-following-equations-is-not-an-identi/","timestamp":"2024-11-08T01:51:39Z","content_type":"text/html","content_length":"251580","record_id":"<urn:uuid:20f0a127-ff19-4438-b508-4422f5dcf175>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00539.warc.gz"}
seminars - Introduction to spectral networks

※ Date and venue
- June 3 (Mon), 17:00-18:15, Building 129, Room 301
- June 5 (Wed), 17:00-18:15, Building 129, Room 309
- June 11 (Tue), 17:00-18:15, Building 129, Room 301
- June 12 (Wed), 17:00-18:15, Building 129, Room 301

Abstract: Spectral networks were introduced in a seminal article by Davide Gaiotto, Gregory W. Moore and Andrew Neitzke published in 2013. These are networks of trajectories on surfaces that naturally arise in the study of various four-dimensional quantum field theories. From a purely geometric point of view, they yield a new map between flat connections over a Riemann surface and flat abelian connections on a spectral covering of the surface. At the same time, these networks of trajectories provide local coordinate systems on the moduli space of flat connections that are valuable in the study of higher Teichmüller spaces. In the first part of this mini-course, I will review key concepts from geometric group theory, including hyperbolic groups and boundaries at infinity of hyperbolic groups and spaces. Following this, I will discuss the theory of vector bundles and the Riemann-Hilbert correspondence. In the second part, I will define spectral networks explicitly for surfaces with punctures. I will also present and discuss their most prominent applications in geometry: non-abelianization and abelianization, which connect higher Teichmüller spaces of a base surface to abelian character varieties of its ramified cover, the spectral curve.
{"url":"http://www.math.snu.ac.kr/board/index.php?mid=seminars&l=en&page=87&sort_index=Time&order_type=asc&document_srl=1237018","timestamp":"2024-11-03T18:29:46Z","content_type":"text/html","content_length":"45754","record_id":"<urn:uuid:ec6c0789-7b3b-4bfb-a77a-d161719cd0b3>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00400.warc.gz"}
Scalable data storage and structures

When dealing with big data, minimizing the amount of memory used is critical to avoid having to use disk-based access, which can be 100,000 times slower than random access. This notebook deals with ways to minimize data storage for several common use cases:
• Large arrays of homogeneous data (often numbers)
• Large string collections
• Counting distinct values
• Yes/No responses to queries
Methods covered range from the mundane (use numpy arrays rather than lists), to classic but less well-known data structures (e.g. prefix trees or tries), to algorithmically ingenious probabilistic data structures (e.g. bloom filter and hyperloglog).

import sys
import numpy as np

Selective retrieval from disk-based storage

We have already seen that there are many ways to retrieve only the parts of the data we need now into memory at this particular moment. Options include:
• generators (e.g. to read a file a line at a time)
• numpy.memmap
• HDF5 via h5py
• Key-value stores (e.g. redis)
• SQL and NoSQL databases (e.g. sqlite3)

Storing numbers

Less memory is used when storing numbers in numpy arrays rather than lists. Using only the precision needed can also save memory.

Storing strings

import itertools as it  # needed for it.chain below

def flatmap(func, items):
    return it.chain.from_iterable(map(func, items))

def flatten(xss):
    return (x for xs in xss for x in xs)

Using a list

with open('data/Ulysses.txt') as f:
    word_list = list(flatten(line.split() for line in f))

target = 'Dublin'  # example search term, defined here so the timing cells below run

%timeit -r1 -n1 word_list.index(target)
6.33 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Using a sorted list

import bisect
# Note: bisect assumes the list is sorted, e.g. word_list = sorted(word_list)
%timeit -r1 -n1 bisect.bisect(word_list, target)
8.48 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Using a set

word_set = set(word_list)
%timeit -r1 -n1 target in word_set
1.2 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Using a trie (prefix tree)

%load_ext memory_profiler
from hat_trie import Trie

%memit word_trie = Trie(word_list)
peak memory: 70.50 MiB, increment: 0.10 MiB

%timeit -r1 -n1 target in word_trie
3.73 µs ± 0 ns per loop (mean ± std. dev. of 1 run, 1 loop each)

Data Sketches

A sketch is a probabilistic algorithm or data structure that approximates some statistic of interest, typically using very little memory and processing time. Often they are applied to streaming data, and so must be able to incrementally process data. Many data sketches make use of hash functions to distribute data into buckets uniformly. Typically, data sketches have the following desirable properties:
• sub-linear in space
• single scan
• can be parallelized
• can be combined (merge)
Some statistics that sketches have been used to estimate include:
• indicator variables (event detection)
• counts
• quantiles
• moments
• entropy
Packages for data sketches in Python are relatively immature, and if you are interested, you could make a large contribution by creating a comprehensive open source library of data sketches in Python.

Morris counter

The Morris counter is used as a simple illustration of a probabilistic data structure, with the standard trade-off of using less memory in return for less accuracy. The algorithm is extremely simple - keep a counter \(c\) that represents the exponent - that is, when the Morris counter is \(c\), the estimated count is \(2^c\). The probabilistic part comes from the way that the counter is incremented by comparing a uniform random variate to \(1/2^c\).
from random import random

class MorrisCounter:
    def __init__(self, c=0):
        self.c = c

    def __len__(self):
        return 2 ** self.c

    def add(self, item):
        self.c += random() < 1/(2**self.c)

mc = MorrisCounter()  # instantiate a counter before feeding it words

print('True\t\tMorris\t\tRel Error')
for i, word in enumerate(word_list):
    mc.add(word)  # each word probabilistically increments the counter
    if i%int(.2e5)==0:
        print('%8d\t%8d\t%.2f' % (i, len(mc), 0 if i==0 else abs(i - len(mc))/i))

True Morris Rel Error
0 2 0.00
20000 32768 0.64
40000 32768 0.18
60000 32768 0.45
80000 65536 0.18
100000 65536 0.34
120000 65536 0.45
140000 65536 0.53
160000 131072 0.18
180000 131072 0.27
200000 131072 0.34
220000 131072 0.40
240000 131072 0.45
260000 131072 0.50

Increasing accuracy

A simple way to increase the accuracy is to have multiple Morris counters and take the average. These two ideas of using a probabilistic calculation and multiple samples to improve precision are the basis for the more useful probabilistic data structures described below.

mcs = [MorrisCounter() for i in range(10)]

print('True\t\tMorris\t\tRel Error')
for i, word in enumerate(word_list):
    for j in range(10):
        mcs[j].add(word)  # update each of the ten counters
    estimate = np.mean([len(m) for m in mcs])
    if i%int(.2e5)==0:
        print('%8d\t%8d\t%.2f' % (i, estimate, 0 if i==0 else abs(i - estimate)/i))

True Morris Rel Error
0 2 0.00
20000 20480 0.02
40000 38502 0.04
60000 45875 0.24
80000 72089 0.10
100000 134348 0.34
120000 163840 0.37
140000 176947 0.26
160000 176947 0.11
180000 203161 0.13
200000 203161 0.02
220000 229376 0.04
240000 255590 0.06
260000 255590 0.02

Distinct value Sketches

The Morris counter is less useful because the degree of memory saved compared to counting the number of elements exactly is not much unless the numbers are staggeringly huge. In contrast, counting the number of distinct elements exactly requires storage of all distinct elements (e.g. in a set) and hence grows with the cardinality \(n\). Probabilistic data structures known as Distinct Value Sketches can do this with a tiny and fixed memory size.

Examples where counting distinct values is useful:
• number of unique users in a Twitter stream
• number of distinct records to be fetched by a database query
• number of unique IP addresses accessing a website
• number of distinct queries submitted to a search engine
• number of distinct DNA motifs in genomics data sets (e.g. microbiome)

A hash function takes data of arbitrary size and converts it into a number in a fixed range. Ideally, given an arbitrary set of data items, the hash function generates numbers that follow a uniform distribution within the fixed range. Hash functions are immensely useful throughout computer science (for example - they power Python sets and dictionaries), and especially for the generation of probabilistic data structures.

A simple hash function mapping

Note the collisions. If not handled, there is a loss of information. Commonly, practical hash functions return a 32 or 64 bit integer. Also note that there are an arbitrary number of hash functions that can return numbers within a given range. Note also that because the hash function is deterministic, the same item will always map to the same bin.

def string_hash(word, n):
    return sum(ord(char) for char in word) % n

sentence = "The quick brown fox jumps over the lazy dog."
for word in sentence.split():
    print(word, string_hash(word, 10))

The 9
quick 1
brown 2
fox 3
jumps 9
over 4
the 1
lazy 8
dog. 0

Built-in Python hash function

Help on built-in function hash in module builtins:
hash(obj, /)
Return the hash value for the given object.
Two objects that compare equal must also have the same hash value, but the reverse is not necessarily true.

for word in sentence.split():
    print('{:<10s} {:24}'.format(word, hash(word)))

The -4859935776507312418
quick 9157615745031482514
brown 4123312298496538273
fox -2015214628178477320
jumps -71379956079029581
over -6974446915587241323
the -5638214675285202096
lazy 1423964815621844201
dog. -1983643758301440122

Using a hash function from the MurmurHash3 library

Note that the hash function accepts a seed, allowing the creation of multiple hash functions. We also display the hash result as a 32-bit binary string.

import mmh3

for word in sentence.split():
    print('{:<10} {:+032b} {:+032b}'.format(word.ljust(10), mmh3.hash(word, seed=1234), mmh3.hash(word, seed=4321)))

The        +0001000011111110001001110101100 +1110110100100101010111100011010
quick      -0101111111011110110101100101000 +1000100001101010110000101101100
brown      +1000101010000110110010001110101 -1101101110000000010001100010100
fox        -1000000010010010000111001111011 +0111011111000011001001001110111
jumps      +0000010111000011010000100101010 +0010010001111110100010010110011
over       -0110101101111001001101011111011 -1101110111110010000101101000100
the        -1000000101110000000110011111001 +0001000111100111011000011100101
lazy       -1101011000111111110011111001100 +0010101110101100001000101110000
dog.       +0100110101101111101011110111111 -0101111000110000001011110001011

LogLog family

The binary digits in a (say) 32-bit hash are effectively random, and equivalent to a sequence of fair coin tosses. Hence the probability that we see a run of 5 zeros in the smallest hash so far suggests that we have added \(2^5\) unique items so far. This is the intuition behind the loglog family of Distinct Value Sketches. Note that the biggest count we can track with 32 bits is \(2^{32} = 4{,}294{,}967{,}296\).

The accuracy of the sketch can be improved by averaging results with multiple coin flippers. In practice, this is done by using the first \(k\) bit registers to identify \(2^k\) different coin flippers. Hence, the max count is now \(2^{32 - k}\). The hyperloglog algorithm uses the harmonic mean of the \(2^k\) flippers, which reduces the effect of outliers and hence the variance of the estimate.

for i in range(1, 15):
    k = 2**i
    hashes = [''.join(map(str, np.random.randint(0,2,32))) for i in range(k)]
    print('%6d\t%s' % (k, min(hashes)))

from hyperloglog import HyperLogLog

hll = HyperLogLog(0.01) # accept 1% counting error

print('True\t\tHLL\t\tRel Error')
s = set([])
for i, word in enumerate(word_list):
    s.add(word)    # exact distinct count for comparison
    hll.add(word)  # probabilistic distinct count
    if i%int(.2e5)==0:
        print('%8d\t%8d\t\t%.2f' % (len(s), len(hll), 0 if i==0 else abs(len(s) - len(hll))/i))

True HLL Rel Error
1 1 0.00
6585 6560 0.00
11862 11777 0.00
15390 15318 0.00
18358 18236 0.00
24705 24712 0.00
28693 28750 0.00
30791 30946 0.00
34530 34677 0.00
36002 36077 0.00
41720 42091 0.00
45842 46384 0.00
46389 46979 0.00
49524 50226 0.00
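As a minimal illustration of the "run of zeros" intuition behind the LogLog family, here is a crude Flajolet-Martin style estimator; this is a sketch of the basic idea, not the hyperloglog library's internals.

import mmh3

def rho(h):
    """Position of the lowest-order 1-bit in a 32-bit hash (1-based)."""
    h &= 0xFFFFFFFF
    if h == 0:
        return 32
    r = 1
    while h & 1 == 0:
        h >>= 1
        r += 1
    return r

def crude_distinct_estimate(items, seed=0):
    # Estimate ~ 2^R, where R is the longest run of trailing zeros seen (+1).
    R = max(rho(mmh3.hash(str(item), seed)) for item in items)
    return 2 ** R

print(crude_distinct_estimate(range(10000)))  # right order of magnitude, high variance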
In the query phase, given an item, \(n\) hash values are calculated as before - if any of these \(n\) positions is a zero, then the item is definitely not in the collection. However, because of the possibility of hash collisions, even if all the positions are one, this could be a false positive. Clearly, the rate of false positives depends on the ratio of zero and one bits, and there are Bloom filter implementations that will dynamically bound the ratio and hence the false positive rate.

Possible uses of a Bloom filter include:
• Does a particular sequence motif appear in a DNA string?
• Has this book been recommended to this customer before?
• Check if an element exists on disk before performing I/O
• Check if a URL is a potential malware site, using an in-browser Bloom filter to minimize network communication
• As an alternative way to generate distinct value counts cheaply (only increment the count if the Bloom filter says NO)

pip install git+https://github.com/jaybaird/python-bloomfilter.git

from pybloom import ScalableBloomFilter

# The Scalable Bloom Filter grows as needed to keep the error rate small
# The default error_rate=0.001
sbf = ScalableBloomFilter()

for word in word_set:
    sbf.add(word)  # populate the filter with every distinct word

test_words = ['banana', 'artist', 'Dublin', 'masochist', 'Obama']

for word in test_words:
    print(word, word in sbf)

banana True
artist True
Dublin True
masochist False
Obama False

### Check against the exact word set

for word in test_words:
    print(word, word in word_set)

banana True
artist True
Dublin True
masochist False
Obama False
{"url":"https://people.duke.edu/~ccc14/sta-663-2018/notebooks/S14F_Data_Sketches.html","timestamp":"2024-11-12T07:15:16Z","content_type":"text/html","content_length":"74442","record_id":"<urn:uuid:80150394-d66b-4229-a63c-750902347518>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00325.warc.gz"}
Improper Fractions to Mixed Numbers Calculator

An online improper fractions to mixed numbers calculator is specifically designed to convert improper fractions to mixed numbers in no time. Not only this, you can also resolve proper fractions by using the fraction to mixed number calculator. Go through the article below to understand how to convert a fraction into a mixed number. Just stay focused!

What Is An Improper Fraction?

In the context of mathematics:
"A fraction in which the numerator is greater than its denominator is known as an improper fraction"

For example, the ratios given below are examples of improper fractions:
$$ \frac{3}{2} \hspace{0.25in} \frac{9}{7} \hspace{0.25in} \frac{65}{34} $$
All the above fractions are improper ones and can be turned into mixed number form by using our improper fractions to mixed numbers calculator.

Steps Involved For Converting Improper Fractions To Mixed Numbers:

Let's recall the steps that you need to follow in order to convert an improper fraction to a mixed number:
• First, divide the numerator by the denominator
• After that, write down the whole number, which is the quotient
• At last, write the remainder as the new numerator over the same denominator as in the starting fraction

Rules To Simplify Fractions:

Whenever you want to simplify fractions, keep a couple of rules in mind:
• Look for a number that divides both the numerator and the denominator to simplify the fraction
• If you want to convert a fraction into a mixed number, make sure that the fraction is an improper one

How To Convert Improper Fractions To Mixed Numbers?

Here we will resolve a couple of examples converting improper fractions to mixed numbers. Let's move ahead!

Example # 01: Convert the improper fractions given below to mixed numbers:
$$ \frac{8}{3} \hspace{0.25in} and \hspace{0.25in} \frac{9}{2} $$

Converting the improper fractions to mixed numbers as follows:
$$ \frac{8}{3} $$
Step # 01: When we divide this fraction, we get a remainder of 2.
Step # 02: The quotient gives the whole number, which is also 2 in our case.
Step # 03: Now write the remainder as the numerator, keep the original denominator of 3 unchanged, and place the whole number (the quotient) in front of the fraction. The whole process is shown below:
$$ Quotient \frac{Remainder}{Denominator} $$
$$ 2\frac{2}{3} $$
This is the required mixed number form of the given fraction.

Now we have:
$$ \frac{9}{2} $$
Step # 01: Dividing the given fraction leaves a remainder of 1.
Step # 02: The second step is to find the whole number, which is none other than the quotient: 4.
Step # 03: At last, take the remainder as the numerator, leave the denominator unchanged at 2, and write the whole number in front of the fraction, as shown below:
$$ Quotient \frac{Remainder}{Denominator} $$
$$ 4\frac{1}{2} $$
You can also verify the results with the help of the improper fraction to mixed number calculator.

How Does The Improper Fractions To Mixed Numbers Calculator Work?

Get instant simplified results by using our free calculator in a single click. Let us guide you through its usage!
• Write the numerator in the upper designated field
• Write the denominator in the lower designated field
• Now tap the calculate button

The free converting improper fractions to mixed numbers calculator determines:
• The mixed number form of the given improper fraction
• All steps involved during the calculations

Is \(1\frac{0}{2}\) an improper fraction?
The simplified form of the given expression is \(\frac{2}{2}\), which is equal to 1. So it is not an improper fraction.

What is the improper fraction form of the mixed number \(2\frac{1}{2}\)?
The equivalent improper fraction of the given mixed number is:
$$ \frac{5}{2} $$

How do you convert mixed fractions to improper fractions?
You can determine the corresponding improper fraction of a mixed number with the help of the free online mixed numbers to improper fractions calculator.

Can both an improper fraction and a mixed number show the same value?
Yes, both of these quantities always give the same result. Changing improper fractions to mixed numbers makes it easy to apply arithmetic operations to them. By doing so, complex fractions can be simplified in a very short time. To make this process even faster, mathematicians use this free online improper fractions to mixed numbers calculator.

From the source of Wikipedia: Fraction, Forms of fractions, Arithmetic with fractions, Fractions in abstract mathematics, Algebraic fractions, Radical expressions, Typographical variations
From the source of Khan Academy: Improper fractions as mixed numbers
From the source of Lumen Learning: Converting Between Improper Fractions and Mixed Numbers, Equivalent Fractions
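The conversion rule described above is a one-liner in most languages. Here is a minimal Python sketch using divmod; the function name is illustrative.

# Convert an improper fraction to a mixed number: divide, keep the quotient,
# and the remainder becomes the new numerator over the original denominator.
def to_mixed_number(numerator, denominator):
    whole, remainder = divmod(numerator, denominator)
    return whole, remainder, denominator

for n, d in [(8, 3), (9, 2)]:
    w, r, d2 = to_mixed_number(n, d)
    print(f"{n}/{d} = {w} {r}/{d2}")  # 8/3 = 2 2/3, 9/2 = 4 1/2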
{"url":"https://calculator-online.net/improper-fractions-to-mixed-numbers/","timestamp":"2024-11-06T14:47:24Z","content_type":"text/html","content_length":"62044","record_id":"<urn:uuid:cb3ca240-2d33-4d2b-8175-133357b44419>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00370.warc.gz"}
Analysis of dental preparation scanning data
Examensarbete för masterexamen
Master Thesis

When a tooth has been damaged, there are several methods to repair it. One way is to remove the damaged part by grinding the tooth and replace it with a dental crown. In the manufacturing process, one of the steps is to scan the cast model of the ground tooth. Assume now that a mechanical probe scanner has registered a set of discrete points from the cast model. With the help of these points, a triangulated surface that represents the surface of the ground tooth is constructed. In this paper an analysis of the scanned data has been done to see how well the triangulated surface represents the original preparation. This is done by constructing two surfaces between which both the triangulated surface and the surface of the cast model sit, and then studying the distance between the two surfaces. In my calculations I have assumed that the hit between the probe and the surface of the preparation occurs at the top of the probe, in the direction of the probe axis. I have also approximated the volume (see page 17); this is done since the programming is otherwise very time-consuming. The calculations in this paper have been done on an upper premolar, which was represented by 39,747 triangles. When the volume between the triangles and the lower surface was divided by the area of the triangles, 92% of the triangles were approved; by this I mean that the average distance between the surfaces is not greater than the glue padding between the preparation and the crown. When the volume between the triangles and the upper surface was divided by the area of the triangles, 99% of the triangles were approved. In the last case, the volume between the two surfaces was divided by the area of the triangles, and 95% of the triangles were approved. This indicates that the volume between the triangles and the upper surfaces contributes a greater volume than the volume between the triangles and the lower surfaces. By visual analysis of Figure 21, convex surfaces give a smaller tolerance, which means a better fit than the concave surfaces with their larger tolerance.

Algebra och geometri, Medicinsk teknik, Oral protetik, Algebra and geometry, Medical technology, Oral prosthetics
{"url":"https://odr.chalmers.se/items/406581fc-adab-41bf-9ed2-26e61b5607d4","timestamp":"2024-11-03T13:01:13Z","content_type":"text/html","content_length":"340709","record_id":"<urn:uuid:ae3df1af-c9ea-4934-9fed-c152a54b223f>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00146.warc.gz"}
Data Structures & Algorithms - Altcademy Blog

Introduction

Calculating the all-pairs shortest paths in a graph is an essential problem in computer science, with applications in routing, social networks, and data analysis. This problem involves finding the shortest path between every pair of vertices in a graph. In this lesson, we will delve into the Floyd-Warshall algorithm.
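The excerpt cuts off before the algorithm itself, so here is a standard Floyd-Warshall sketch for reference; this is the textbook formulation, not necessarily the code from the lesson.

INF = float('inf')

def floyd_warshall(dist):
    """All-pairs shortest paths. `dist` is an n x n matrix where
    dist[i][j] is the edge weight (INF if no edge, 0 on the diagonal)."""
    n = len(dist)
    d = [row[:] for row in dist]  # work on a copy
    for k in range(n):            # allow vertex k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
print(floyd_warshall(graph))  # O(V^3) time, O(V^2) space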
{"url":"https://www.altcademy.com/blog/tag/algo/","timestamp":"2024-11-04T18:38:32Z","content_type":"text/html","content_length":"90544","record_id":"<urn:uuid:d4be907d-7dc7-47ed-a4fa-0583d603de60>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00565.warc.gz"}
Fundamentals of Heat and Mass Transfer, 6th Edition

Fundamentals of Heat and Mass Transfer, 6th Edition, Frank P. Incropera

Purchase options
Price: 154620 T
Availability: Cannot be supplied.

Frank P. Incropera
Title: Fundamentals of Heat and Mass Transfer, 6th Edition
ISBN: 0471457280
ISBN-13 (EAN): 9780471457282
Cover/Format: Hardback
Pages: 1024
Weight: 1.772 kg
Publication date: 01.09.2006
Language: English
Edition: 6th revised edition
Illustrations: Illustrations
Size: 25.76 x 21.23 x 3.86 cm
Readership: Professional & vocational
Ships from: England

Description: The de facto standard text for heat transfer - noted for its readability, comprehensiveness, and relevancy - has been revised to address new application areas of heat transfer, while continuing to emphasize the fundamentals. The sixth edition, like previous editions, continues to support four student learning objectives: learn the meaning of the terminology and physical principles of heat transfer; identify and describe appropriate transport phenomena for any process or system involving heat transfer; use requisite inputs for computing heat transfer rates and/or material temperatures; and develop representative models of real processes and systems and draw conclusions concerning process/system design or performance from the attendant analysis.

Additional description:
Pages: 960
Publication date: 2006
Readership: undergraduate; postgraduate; research, professional

New edition
Author: Incropera
Title: Principles of Heat and Mass Transfer, 7th Edition International Student Version
ISBN: 0470646152
ISBN-13 (EAN): 9780470646151
Publisher: Wiley
Price: 33620 T
Stock availability: Not in stock.
Description: Principles of Heat and Mass Transfer is the gold standard of heat transfer pedagogy for more than 30 years, with a commitment to continuous improvement by the authors. Using a rigorous and systematic problem-solving methodology pioneered by this text, it is abundantly filled with examples and problems that reveal the richness and beauty of the discipline. This edition maintains its foundation in the four central learning objectives for students and also makes heat and mass transfer more approachable, with an additional emphasis on the fundamental concepts as well as highlighting the relevance of those ideas with exciting applications to the most critical issues of today and the coming decades: energy and the environment. An updated version of Interactive Heat Transfer (IHT) software makes it even easier to efficiently and accurately solve problems.

Previous edition
Author: Oats, Jeremy J N, MBBS, DM, FRCOG, FRANZCOG (Chair, Victorian Consultative Council on Obstetric and Paediatric Mortality and Morbidity, Australia)
Title: Llewellyn-Jones Fundamentals of Obstetrics & Gynecology, 10E IE
ISBN: 0702060658
ISBN-13 (EAN): 9780702060656
Publisher: Elsevier Science
Price: 35670 T
Stock availability: Not in stock.
Description: The tenth edition of Llewellyn-Jones Fundamentals of Obstetrics and Gynaecology carries on the mission of Derek Llewellyn-Jones (encapsulated in the first edition of this book, published in 1969) to support the WHO's target to ensure women worldwide have good healthcare, safe deliveries and healthy children. In his words: 'It is our hope to continue to meet the needs of today's medical students and students of nursing and midwifery; and to encourage self-learning skills while providing essential information in a readable manner'.
Intervention rates continue to climb; women in the developed world embark on pregnancy later in life and with more complex medical disorders. The need to know and apply the best available standards of care is ever more critical. The tenth edition brings this highly regarded book completely up to date whilst retaining its characteristic concision and readability. New to this edition: now on StudentConsult; new ultrasounds; new material covering diabetes and obesity in pregnancy, advances in IVF, and complications arising for older first-time mothers. Highly illustrated in full colour throughout. Totally revised.

Author: Boothroyd, G.
Title: Fundamentals of Machining and Machine Tools
ISBN: 1574446592
ISBN-13 (EAN): 9781574446593
Publisher: Taylor & Francis
Price: 80490 T
Stock availability: In stock
Description: Reflecting changes in machining practice, Fundamentals of Machining and Machine Tools, Third Edition emphasizes the economics of machining processes and design for machining. This edition includes new material on super-hard cutting tool materials, tool geometries, and surface coatings. It describes recent developments in high-speed machining, hard machining, and cutting fluid applications such as dry and minimum-quantity lubrication machining. It presents analytical methods that outline the limitations of various approaches. This edition also features expanded information on tool geometries for chip breaking and control as well as improvements in cost modeling of machining processes.

Author: Zhang
Title: Fundamentals of Environmental Sampling and Analysis
ISBN: 0471710970
ISBN-13 (EAN): 9780471710974
Publisher: Wiley
Price: 112510 T
Stock availability: Available from the supplier; supplied to order.
Description: This book provides a comprehensive overview of the fundamentals of environmental sampling and analysis for students in environmental science and engineering as well as environmental professionals in sampling and analytical work. It uses a "know why" rather than a "know how" approach.

Author: Kelleher John D., Macnamee Brian, D'Arcy Aoife
Title: Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies
ISBN: 0262029448
ISBN-13 (EAN): 9780262029445
Publisher: MIT Press
Price: 71280 T
Stock availability: Not in stock.

A comprehensive introduction to the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. Machine learning is often used to build predictive models by extracting patterns from large datasets. These models are used in predictive data analytics applications including price prediction, risk assessment, predicting customer behavior, and document classification. This introductory textbook offers a detailed and focused treatment of the most important machine learning approaches used in predictive data analytics, covering both theoretical concepts and practical applications. Technical and mathematical material is augmented with explanatory worked examples, and case studies illustrate the application of these models in the broader business context. After discussing the trajectory from data to insight to decision, the book describes four approaches to machine learning: information-based learning, similarity-based learning, probability-based learning, and error-based learning. Each of these approaches is introduced by a nontechnical explanation of the underlying concept, followed by mathematical models and algorithms illustrated by detailed worked examples.
Finally, the book considers techniques for evaluating prediction models and offers two case studies that describe specific data analytics projects through each phase of development, from formulating the business problem to implementation of the analytics solution. The book, informed by the authors' many years of teaching machine learning and working on predictive data analytics projects, is suitable for use by undergraduates in computer science, engineering, mathematics, or statistics; by graduate students in disciplines with applications for predictive data analytics; and as a reference for professionals.

Author: Plawsky, Joel L.
Title: Transport Phenomena Fundamentals, Second Edition
ISBN: 1420062336
ISBN-13 (EAN): 9781420062335
Publisher: Taylor & Francis
Price: 38210 T
Stock availability: Not in stock.
Description: One of the foundations of chemical engineering is built upon transport phenomena. This text provides a unified treatment of heat, mass, and momentum transport based on a balance equation approach. It builds upon the balance equation description of diffusive transport by introducing convective transport terms.

Author: Incropera
Title: Principles of Heat and Mass Transfer, 7th Edition International Student Version
ISBN: 0470646152
ISBN-13 (EAN): 9780470646151
Publisher: Wiley
Price: 33620 T
Stock availability: Not in stock.
Description: Principles of Heat and Mass Transfer is the gold standard of heat transfer pedagogy for more than 30 years, with a commitment to continuous improvement by the authors. Using a rigorous and systematic problem-solving methodology pioneered by this text, it is abundantly filled with examples and problems that reveal the richness and beauty of the discipline. This edition maintains its foundation in the four central learning objectives for students and also makes heat and mass transfer more approachable, with an additional emphasis on the fundamental concepts as well as highlighting the relevance of those ideas with exciting applications to the most critical issues of today and the coming decades: energy and the environment. An updated version of Interactive Heat Transfer (IHT) software makes it even easier to efficiently and accurately solve problems.

Author: Maher, Michael
Title: Fundamentals of Cost Accounting
ISBN: 0071318356
ISBN-13 (EAN): 9780071318358
Publisher: McGraw-Hill
Price: 46950 T
Stock availability: Not in stock.
Description: Fundamentals of Cost Accounting provides a direct, realistic, and efficient way to learn cost accounting, integrated with new technology learning tools. Fundamentals is short (approximately 700 pages), making it easy to cover in one semester. The authors have kept the text concise by focusing on the key concepts students need to master. The Decision opening vignettes and Business Application boxes show realistic applications of these concepts throughout. All chapters conclude with a Debrief that links the topics in the chapter to the decision problem faced by the manager in the opening vignette. Comprehensive end-of-chapter material provides students with all the practice they need to fully learn each concept. McGraw-Hill Connect Accounting Plus provides students every advantage as they strive to understand the key concepts of cost accounting and its role in business. Connect Accounting Plus offers a complete digital solution with a robust online learning and homework management system, an integrated media-rich eBook, assignable end-of-chapter material, algorithmic functionality, and reporting capabilities.
Contained within Connect Accounting is McGraw-Hill's adaptive learning system, LearnSmart, which is designed to help students learn faster, study more efficiently, and retain more knowledge for greater success.
{"url":"https://www.logobook.kz/prod_show.php?object_uid=11068473","timestamp":"2024-11-07T14:03:48Z","content_type":"text/html","content_length":"46824","record_id":"<urn:uuid:b93748bf-621a-4fc7-890c-86dc959f0393>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00078.warc.gz"}
Maths with a Mouse - Number Bonds

KIRF NB 1

Fingers up and fingers down
Using two hands, show a random number of fingers. How many are up? How many are down? Sadly, I cannot use this method for number bonds to ten as I don't have 10 fingers - only 8!

Lego tens
Create blocks of ten in as many ways as possible using two colours.

Ten Frame Mania from Tang Math: click learn and all will become very clear.

Time to put those number bonds to 10 to use to Save the Whale. Don't forget to turn the handle to release the water!

Make 10 has a lot of arrows. What do they do? They simply move one digit onto another and find the sum of those digits. In the middle image above, if I press the arrow beneath the 9, the 1 moves down and I Make 10.

There are quite a few rules for playing Number Match, but if you read and understand them, it will be worth it!

Equalz from plays.org is one of those activities that looks basic and not much fun, but when you play it, you will enjoy it as much as I do! Although you will often be looking for a pair of numbers, sometimes three numbers might be needed to make the total (which is at the top of the screen).

I could not get the instructions for Tendo from calculators.org to work, but I felt that it could be a great game. When I discovered that any combo that makes ten in a row or column then disappears and gives me points, I knew that I loved this game! To make 10, the dice do not need to be next to each other; they simply need to be in the same row or column.

Connect from toytheater.com is a lot of fun, but sometimes it can be frustrating: that's all part of the fun! In this game, you can slide the blocks from left to right and even allow them to drop to a lower row if there is enough room to fall. Play a few times and you will get the hang of it.

Can't find a pack of cards easily? Use this great online tool to select random cards! Who can shout out the number bonds to 10/20 first?

KIRF NB 1-4
Using triangles with missing information is a great way to represent the relationships between number bonds - simply put the target number (the sum) at the top of the triangle. Perhaps you could have your own whiteboard and have two pieces of information, challenging the learner to work out the missing value. My YouTube video will help you to understand how this can be done.

Terrific Triangles - addition and subtraction
Want to practise this skill? Try this activity from Tang Math. The diagrams are great as they are similar to my Terrific Triangles. Alternatively, you might want to catch some aliens in Aliens Addends.

Numbots challenges - Challenges on Numbots are now accessible without completing every level preceding them. This means that the number bond challenges are a great way to practise number bonds to 10, 20 and 100 (all). Log-in details required.

Topmarks Hit the Button is a fast-paced game with quick-fire questions. Choose the skill you want to improve and then simply hit the button! Go! Go! Go!

Looking for more ways to practise recalling your knowledge? Click on Pick Your Skill. Check out my guide to see which games have options for the skills that you want to practise!
{"url":"https://www.mathswithamouse.co.uk/mental-maths-with-a-mouse-kirfs/number-bonds","timestamp":"2024-11-05T23:31:51Z","content_type":"text/html","content_length":"303018","record_id":"<urn:uuid:69e6c2bd-496b-4479-88fb-ae80f1606a72>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00238.warc.gz"}
SciPost Submission Page: Balanced Partial Entanglement and Mixed State Correlations
by Hugo A. Camargo, Pratik Nandy, Qiang Wen and Haocheng Zhong

This is not the latest submitted version. This Submission thread is now published as

Submission summary
Authors (as registered SciPost users): Qiang Wen · Haocheng Zhong
Submission information
Preprint Link: scipost_202202_00017v1 (pdf)
Date submitted: 2022-02-10 11:23
Submitted by: Wen, Qiang
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
Specialties: • Condensed Matter Physics - Theory • High-Energy Physics - Theory • Quantum Physics
Approach: Theoretical

Abstract: Recently in Ref.\cite{Wen:2021qgx}, one of the authors introduced the balanced partial entanglement (BPE), which has been proposed to be dual to the entanglement wedge cross-section (EWCS). In this paper, we explicitly demonstrate that the BPE could be considered as a proper measure of the total intrinsic correlation between two subsystems in a mixed state. The total correlation includes certain crossing correlations which are minimized under some balance conditions. By constructing a class of purifications from Euclidean path-integrals, we find that the balanced crossing correlations show universality and can be considered as the generalization of the Markov gap for canonical purification. We also test the relation between the BPE and the EWCS in three-dimensional asymptotically flat holography. We find that the balanced crossing correlation vanishes for the field theory invariant under BMS$_3$ symmetry (BMSFT) and dual to Einstein gravity, indicating the possibility of a perfect Markov recovery. We further elucidate these crossing correlations as a signature of tripartite entanglement and explain their interpretation in both AdS and non-AdS holography.

Current status: Has been resubmitted

Reports on this Submission

Report #2 by Anonymous (Referee 2) on 2022-3-20 (Invited Report)
• Cite as: Anonymous, Report on arXiv:scipost_202202_00017v1, delivered 2022-03-20, doi: 10.21468/SciPost.Report.4731

Strengths: Proposal of a measure of intrinsic quantum correlations which exhibits universal properties.

Weaknesses: No connection with an existing measure of quantum correlations in mixed states, the entanglement negativity.

Report: In this paper, the authors analyse a measure of the intrinsic correlations between two subsystems in a mixed state, the balanced partial entanglement (BPE) entropy. In particular, they prove that this quantity is independent of the purifications based on the Euclidean path-integral, even though it is still unclear if this holds for any kind of purification. They study the connection between the BPE and the reflected entropy, and this allows the authors to define a generalisation of the Markov gap from the canonical purification to more generic purifications. Finally, they study the gravity dual of the BPE, which is the entanglement wedge cross-section (EWCS). The paper is well-written and it contains some nontrivial results on a recently popular topic. The calculations are clearly explained and detailed. Moreover, the authors compare their results with new information-theoretic quantities, like the reflected entropy or the entanglement of purification. Therefore, I recommend the paper for publication.

Requested changes: Here is nevertheless a short list of comments/questions/typos:
- Another relevant measure of quantum correlations in mixed states is the negativity.
Could the authors clarify the difference between the two entanglement measures? There are proposals that claim that the EWCS is dual to the logarithmic negativity (see e.g. Phys. Rev. D 99 (2019) 106014): could the authors relate this to the duality between EWCS and BPE?
- What is the relation between the BPE and other entanglement measures, like entanglement negativity, in non-holographic systems?
- Before and in Eq. 10, why do the authors use $S_{AA'}(A)$ rather than $s_{AA'}(A)$? Is this a typo?
- Have the authors thought if a quantity similar to the partial entanglement entropy could be defined from the negativity contour rather than from the entanglement one?
- Some typos could be: pag. 4 (before Eq. 6) each degrees $\to$ each degree, pag. 25 einstein gravity $\to$ Einstein gravity, pag. 26 tripartire $\to$ tripartite.

The authors thank the referee very much for the comments and recommendation.

1. Another relevant measure of quantum correlations in mixed states is the negativity. Could the authors clarify the difference between the two entanglement measures? There are proposals that claim that the EWCS is dual to the logarithmic negativity (see e.g. Phys. Rev. D 99 (2019) 106014): could the authors relate this to the duality between EWCS and BPE?

Reply: Both the BPE and the negativity are measures of mixed state correlations which in holographic CFTs are captured by the same dual: the entanglement wedge cross section (EWCS). However, their field-theoretic definitions are very distinct and may therefore be seen a priori as independent correlation measures. The most evident difference between these quantities is the fact that while the negativity is defined intrinsically for mixed states via the partial transpose of the density matrix, the BPE is defined via certain purifications of the mixed state which satisfy the balance conditions, and is calculated by a certain linear combination of subset entanglement entropies. Of course, the striking aspect that these quantities, alongside others, have been conjectured (or shown) to have the same gravitational dual in holographic CFTs points to the fact that they share a certain amount of information about the intrinsic correlations of mixed states, which is highlighted in the holographic cases. It was demonstrated in [17] that both the logarithmic negativity and the Renyi reflected entropy can be calculated by the correlation functions of twist operators, and the results match each other. The coincidence between the negativity and the reflected entropy furthermore led to the duality between the negativity and the EWCS. Although the BPE is also closely related to the reflected entropy, its definition based on the density matrix of the mixed state is still not established. So far, the way we calculate the PEE or BPE is either the ALC proposal (a linear combination of subset entanglement entropies) or the extensive mutual information proposal (directly solving the 7 requirements for the PEE for certain theories), which are very different from the calculation using twist operators. Hence, a direct comparison between the BPE and negativity is hard to conduct currently. We hope to come back to this point in the future. We added this as a bullet point of section 6.

2. What is the relation between the BPE and other entanglement measures, like entanglement negativity, in non-holographic systems?

Reply: In non-holographic systems, the reflected entropy is also a special case of the BPE defined on the canonical purification.
Also, our discussion on the comparison between the BPE and the EoP extends to the non-holographic cases. As we mentioned previously, the relations between the BPE and other entanglement measures, such as the odd entropy and logarithmic negativity, in general CFTs or QFTs are currently unknown. In the set-up of thermal states or subsystems of ground states of non-holographic free QFTs, these quantities can be efficiently computed using numerical methods, which leads to a tractable comparison between them. We hope to return to this question in future work.

3. Before and in Eq. 10, why do the authors use $S_{AA’}(A)$ rather than $s_{AA’}(A)$? Is this a typo?

Reply: Yes, the referee is right. Thanks for pointing it out.

4. Have the authors thought if a quantity similar to the partial entanglement entropy could be defined from the negativity contour rather than from the entanglement one?

Reply: The entanglement negativity contour was previously examined in [54], where a version similar to the ALC proposal for the negativity was introduced. It would be possible to impose balance conditions on the negativity, which might also provide a version of the BPE for the negativity, possibly dual to the EWCS. We have added this comment to the main text as the final bullet point of section 6.

5. Reply: We have fixed all the typos pointed out by the referee. Thanks.

Report #1 by Anonymous (Referee 1) on 2022-3-2 (Invited Report)
• Cite as: Anonymous, Report on arXiv:scipost_202202_00017v1, delivered 2022-03-02, doi: 10.21468/SciPost.Report.4603

Strengths:
1. The authors propose a new quantity as the measure of the intrinsic correlation between two subsystems in a mixed state.
2. There are several positive examples supporting the proposal.

Weaknesses:
1. There is an example of puzzling negative partial entanglement entropy and mutual information without a good explanation.
2. It is still an open question whether the proposed balanced partial entanglement entropy is purification-independent.

Report: The authors study a special kind of the partial entanglement entropy (PEE), the balanced partial entanglement entropy (BPEE), as a measure of the intrinsic correlation between two subsystems $A$ and $B$ in a mixed state. For a fixed auxiliary system $A’\cup B’$ that purifies $A\cup B$, the authors vary the relative sizes of $A’$ and $B’$ and define the PEE under some balance condition as the BPEE. It is argued that the BPEE is independent of the purification and always equals the reflected entropy, which is defined in the canonical purification. It is also argued that in holographic theories the BPEE is dual to the entanglement wedge cross section (EWCS), as the reflected entropy is believed to be dual to the EWCS. Several examples are given to support the claim in this paper, while there also exist examples that contradict the claim.

Requested changes: Though whether the BPEE is purification-dependent is an open question, I think the proposal and checks in this paper are interesting and meaningful and the paper deserves publication. I have several comments for the authors, while it is up to the authors whether to follow these comments.

1. In the purification the authors consider, it is not required that the Hilbert space of $A’\cup B’$ has the same dimension as the Hilbert space of $A\cup B$. It is also not required that the Hilbert space of $A’$ has the same dimension as the Hilbert space of $A$ or the Hilbert space of $B’$ has the same dimension as the Hilbert space of $B$. There is another way to impose the balance condition.
One could consider the purifications $A’\cup B’$ with the condition that the Hilbert space of $A’$ has the same dimension as the Hilbert space of $A$ and the Hilbert space of $B’$ has the same dimension as the Hilbert space of $B$. There are an infinite number of such purifications, and one could choose the purification with the balance condition satisfied and define the PEE in such a purification as the BPEE. If needed, the minimal condition could also be imposed. The BPEE defined in this way is apparently intrinsic. Does this definition of the BPEE have any relation with the BPEE defined in this paper?

2. As the authors have stated for the negative PEE and mutual information, it is indeed puzzling. The Araki-Lieb inequality should hold for any reasonable quantum state. Could the authors comment more about the strange result? Is it because of the pathology of the model, the pathology of the state, or the problem of the calculation method?

3. When mentioning the entanglement negativity, the authors may consider mentioning the first works calculating the entanglement negativity in quantum field theory, 1206.3092 and 1210.5359.

4. There are several small grammatical errors. For example, above eq (71) on page 18, there are “… can be trace back to …”, “… does not reduces to …”, “… one needs to more careful about …”. I suggest the authors check and correct similar errors in the whole paper.

Though whether the BPEE is purification-dependent is an open question, I think the proposal and checks in this paper are interesting and meaningful and the paper deserves publication. I have several comments for the authors, while it is up to the authors whether to follow these comments.

Reply: The authors thank the Referee very much for the recommendation.

1. In the purification the authors consider, it is not required that the Hilbert space of A′∪B′ has the same dimension as the Hilbert space of A∪B. It is also not required that the Hilbert space of A′ has the same dimension as the Hilbert space of A or the Hilbert space of B′ has the same dimension as the Hilbert space of B. There is another way to impose the balance condition. One could consider the purifications A′∪B′ with the condition that the Hilbert space of A′ has the same dimension as the Hilbert space of A and the Hilbert space of B′ has the same dimension as the Hilbert space of B. There are an infinite number of such purifications, and one could choose the purification with the balance condition satisfied and define the PEE in such a purification as the BPEE. If needed, the minimal condition could also be imposed. The BPEE defined in this way is apparently intrinsic. Does this definition of the BPEE have any relation with the BPEE defined in this paper?

Reply: Thanks for the suggestion. We think the biggest disadvantage of the EoP and the reflected entropy is their restriction to certain specific purifications. This makes them automatically intrinsic, but also hard to evaluate (especially the EoP) and hard to investigate for other properties like monotonicity under inclusion. However, the information (density matrix) of the mixed state is entirely included in any purification, hence any intrinsic correlation in the mixed state could somehow be extracted in any purification. On the other hand, if someone gives a proposal that works for general purifications, then it comes with the problem of demonstrating its intrinsic nature, i.e., its independence from the purification. The referee suggested a way to demonstrate the purification-independence of the BPE.
If we require A and A’ (respectively B and B’) to have Hilbert spaces of the same dimension, and choose those configurations that satisfy the balance conditions, this should give us the BPE we have defined in this special class of purifications. According to our claim that the BPE is purification-independent, the BPE in these configurations should all equal the reflected entropy. One could arrive at such configurations by starting from the canonical purification and applying local unitary transformations in A’ and B’ respectively. The dimensions of the Hilbert spaces of A’ and B’, as well as the BPE, are invariant under such local unitary transformations. However, the right way to demonstrate the intrinsic nature of the BPE is to go beyond this class of purifications. We can test this property in more generic purifications, while a general proof is not clear to us. Let us also introduce some related facts about the EoP. In exploratory numerical computations done for the EoP using Gaussian techniques for lattice discretizations of free QFTs (SciPost Phys. 10 (2021) 3, 066), it was found that the value of the EoP is independent of the dimension of the purifying Hilbert space. This statement was verified for the most general Gaussian purifications of Gaussian mixed states defined in free bosonic and fermionic (1+1)-dimensional CFTs (e.g. free boson and critical transverse Ising). However, it is currently not known whether this result also applies to more general (i.e. non-Gaussian) purifications of Gaussian mixed states, or even if it holds for more general mixed states. It is an open problem to understand this situation for the EoP in quantum many-body systems. We believe the situation for the BPE should in principle be similar: i.e. invariant under changes of the dimension of the purifying Hilbert space. Of course, this should be verified numerically in free QFTs, especially since there could exist a large class of purifications which satisfy the balance condition.

2. As the authors have stated for the negative PEE and mutual information, it is indeed puzzling. The Araki-Lieb inequality should hold for any reasonable quantum state. Could the authors comment more about the strange result? Is it because of the pathology of the model, the pathology of the state, or the problem of the calculation method?

Reply: We admit that the negative PEE and MI in the vacuum state prepared by the optimized path integral look puzzling. One possible source of the puzzle may be the finite cutoff after optimization. As we can see, in the optimized path integral the cutoff scale becomes finite and is comparable to the lengths of the subintervals like A, B, B’. When the cutoff scale exceeds some critical value, the formula (29) used to evaluate entanglement entropy may become invalid. Another possible explanation for this puzzling observation may be given by looking at the gate-counting perspective of path-integral optimization and the quantum state obtained thereof. In (1904.02713), the authors studied whether it was possible to recover the Liouville action as a complexity cost functional for states constructed from Euclidean-time evolution in 2d CFTs from a quantum circuit perspective. Indeed, this is possible by the inclusion of non-unitary gates in the quantum circuit. As a consequence, the inclusion of said non-unitary operations in the circuit acting on the initial (2d CFT ground) state may be responsible for yielding a final state (obtained by the optimization of the Euclidean path-integral) which violates the inequality.
Also, one of the authors, in [70], suggested an explanation for the puzzle. The equation of motion of the Liouville action allows a constant shift of the scalar field. After a proper choice of the constant, the negative PEE or MI can become zero or positive. Nevertheless, under this shift, the entanglement entropy $S_{AA’}$ also changes and differs from the EWCS by a constant of $c/6 \log 2$. Indeed, this puzzle helps us get rid of one counter-example that would contradict the purification independence of the BPE. If the following two statements were true at the same time: (1) in the vacuum state from the optimized path-integral, $S_{AA’}=\text{EWCS}/4G$; (2) the PEE (or MI) cannot be negative; then the BPE in this state could not capture the EWCS, because it is smaller than $S_{AA’}$. Fortunately, the puzzle tells us that the above two statements cannot be true at the same time.

3. When mentioning the entanglement negativity, the authors may consider mentioning the first works calculating the entanglement negativity in quantum field theory, 1206.3092 and 1210.5359.

Reply: Thanks for pointing out the two references; we will add them in the next version.

4. There are several small grammatical errors. For example, above eq (71) on page 18, there are “… can be trace back to …”, “… does not reduces to …”, “… one needs to more careful about …”. I suggest the authors check and correct similar errors in the whole paper.

Reply: Thanks, we will fix these typos and further proofread the manuscript.
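For readers following the exchange, the ALC ("additive linear combination") proposal invoked repeatedly above can be sketched schematically for an ordered tripartition $A=\alpha\cup\beta\cup\gamma$ as follows; this is a paraphrase, not text from the submission, and the precise statement together with the balance conditions imposed on top of it is given in Ref.\cite{Wen:2021qgx} and in the paper itself:

% Schematic ALC proposal: the PEE of the middle region \beta in an ordered
% tripartition A = \alpha \cup \beta \cup \gamma is a linear combination
% of subset entanglement entropies
s_{A}(\beta) \;=\; \frac{1}{2}\left( S_{\alpha\beta} + S_{\beta\gamma} - S_{\alpha} - S_{\gamma} \right).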
{"url":"https://scipost.org/submissions/scipost_202202_00017v1/","timestamp":"2024-11-13T07:35:11Z","content_type":"text/html","content_length":"55374","record_id":"<urn:uuid:39632c50-10a9-4a7b-92bd-a6aec05d9c9e>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00819.warc.gz"}
Gcse algebra questions online and solutions

Related topics: famous mathimatical equasion | how to move a square root graph sideways | precalculus chapter 1 test | for each equation find 3 solutions | discrete mathematics and its applications instructor's resource guide download | roots of polynomial equations | Transforming Trig | factor 8m-104

rivsolj — Posted: Wednesday 27th of Oct 08:48
Hi folks. I am in deep trouble. I simply don’t know whom to approach. You know, I am having difficulties with my math and need urgent help with gcse algebra questions online and solutions. Anyone you know who can save me with adding exponents, equivalent fractions and perfect square trinomials? I tried hard to get a tutor, but failed. They are hard to find and also cost a lot. It’s also difficult to find someone quick enough, and I have this test coming up. Any advice on what I should do? I would very much appreciate a fast response.

IlbendF — Posted: Thursday 28th of Oct 13:59
Can you please be more detailed as to what sort of service you are expecting to get? Do you want to learn the principles and work on your math questions on your own, or do you need a utility that would offer you a step-by-step solution for your math problems?

MoonBuggy (Leeds, UK) — Posted: Friday 29th of Oct 20:27
Algebrator is used by almost every student in our class. Most of the students in my class work part-time. Our teacher introduced this tool to us and we all have been using it since then.

maodjalt — Posted: Saturday 30th of Oct 12:28
Ok, now I’ve heard quite a few good things about Algebrator, I think it’s definitely worth a try. How do I get hold of it?

nedslictis — Posted: Saturday 30th of Oct 21:46
Factoring expressions, inverse matrices and side-angle-side similarity were a nightmare for me until I found Algebrator, which is truly the best algebra program that I have come across. I have used it frequently through many algebra classes – Algebra 2, Remedial Algebra and Algebra 1. Simply typing in the algebra problem and clicking on Solve, Algebrator generates a step-by-step solution to the problem, and my algebra homework would be ready. I highly recommend the program.

MoonBuggy (Leeds, UK) — Posted: Sunday 31st of Oct 09:57
I'm sorry, I should have included the link the first time around: https://softmath.com/news.html. I don't have any knowledge about a test version, but the official proprietors of Algebrator, as opposed to some storefronts of imitation software, put up an out-and-out money-back guarantee. So, you can buy the official copy, test the software package and send it back should one not be gratified by the execution and features. Although I suspect you're gonna love it, I'm very interested in hearing from you or anyone should there be a feature where the program does not work. I don't desire to advocate Algebrator for something it cannot do. Only the next one encountered will likely be the initial one!
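For anyone who just wants free step-by-step answers to the kinds of problems mentioned in this thread (perfect square trinomials, adding exponents, equivalent fractions), a rough sketch using the open-source SymPy library — not Algebrator, which is a commercial product — gives comparable output:

import sympy as sp

x = sp.symbols('x')

# Perfect square trinomial: factor x^2 + 6x + 9
print(sp.factor(x**2 + 6*x + 9))   # (x + 3)**2

# Adding exponents: x^2 * x^3 simplifies to x^5
print(sp.simplify(x**2 * x**3))    # x**5

# Equivalent fractions: 6/8 reduces to 3/4
print(sp.Rational(6, 8))           # 3/4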
{"url":"https://www.softmath.com/algebra-software/exponential-equations/gcse-algebra-questions-online.html","timestamp":"2024-11-14T13:35:04Z","content_type":"text/html","content_length":"43219","record_id":"<urn:uuid:da5060a9-ca01-4fce-a3f5-c1a0faf0cfbe>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00498.warc.gz"}
American Mathematical Society

On the core of a cone-preserving map
by Bit Shun Tam and Hans Schneider
Trans. Amer. Math. Soc. 343 (1994), 479-524
DOI: https://doi.org/10.1090/S0002-9947-1994-1242787-6

Abstract: This is the third of a sequence of papers in an attempt to study the Perron-Frobenius theory of a nonnegative matrix and its generalizations from the cone-theoretic viewpoint. Our main object of interest here is the core of a cone-preserving map. If $A$ is an $n \times n$ real matrix which leaves invariant a proper cone $K$ in $\mathbb{R}^n$, then by the core of $A$ relative to $K$, denoted by $\operatorname{core}_K(A)$, we mean the convex cone $\bigcap_{i=1}^{\infty} A^i K$. It is shown that when $\operatorname{core}_K(A)$ is polyhedral, which is the case whenever $K$ is, then $\operatorname{core}_K(A)$ is generated by the distinguished eigenvectors of positive powers of $A$. The important concept of a distinguished $A$-invariant face is introduced, which corresponds to the concept of a distinguished class in the nonnegative matrix case. We prove a significant theorem which describes a one-to-one correspondence between the distinguished $A$-invariant faces of $K$ and the cycles of the permutation induced by $A$ on the extreme rays of $\operatorname{core}_K(A)$, provided that the latter cone is nonzero, simplicial. By an interplay between cone-theoretic and graph-theoretic ideas, the extreme rays of the core of a nonnegative matrix are fully described. Characterizations of $K$-irreducibility or $A$-primitivity of $A$ are also found in terms of $\operatorname{core}_K(A)$. Several equivalent conditions are also given on a matrix with an invariant proper cone so that its spectral radius is an eigenvalue of index one. An equivalent condition in terms of the peripheral spectrum is also found on a real matrix $A$ with the Perron-Schaefer condition for which there exists a proper invariant cone $K$ such that $\operatorname{core}_K(A)$ is polyhedral, simplicial, or a single ray. A method of producing a large class of invariant proper cones for a matrix with the Perron-Schaefer condition is also offered.

Bibliographic Information
• Journal: Trans. Amer. Math. Soc. 343 (1994), 479-524
• MSC: Primary 15A48; Secondary 47B65
• DOI: https://doi.org/10.1090/S0002-9947-1994-1242787-6
• MathSciNet review: 1242787
• © Copyright 1994 American Mathematical Society
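As a quick numerical illustration of the definition $\operatorname{core}_K(A)=\bigcap_{i=1}^{\infty} A^i K$ (ours, not from the paper): for a primitive nonnegative matrix acting on the orthant $K=\mathbb{R}^n_+$, the generators of $A^i K$ collapse onto the Perron ray, consistent with the abstract's statement that the core is generated by distinguished eigenvectors.

import numpy as np

# K = nonnegative orthant; the cone A^i K is spanned by the columns of A^i.
A = np.array([[0.5, 0.3],
              [0.2, 0.7]])

rays = np.eye(2)                    # generators (extreme rays) of K
for _ in range(50):
    rays = A @ rays
    rays /= rays.sum(axis=0)        # rays are scale-free, so renormalize each column

print(rays)                         # both columns approach the Perron eigenvector

w, v = np.linalg.eig(A)
perron = np.real(v[:, np.argmax(np.real(w))])
print(perron / perron.sum())        # matches the columns above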
{"url":"https://www.ams.org/journals/tran/1994-343-02/S0002-9947-1994-1242787-6/?active=current","timestamp":"2024-11-04T14:56:57Z","content_type":"text/html","content_length":"93138","record_id":"<urn:uuid:0d893ebb-208b-4005-b0ab-a2706f294ada>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00704.warc.gz"}
5 of Spades | Lost in the Shuffle

Don't be shy! Click here to begin hints for this puzzle.

Have you worked out what each of the circular messages is indicating? Each message contains three ordinal numerals (i.e. FIRST, SECOND, THIRD, etc.), each serving a different function. Have you identified five letters that can spell out the one-word answer?

The first ordinal indicates an aspect of the second ordinal. The First of First is "F". The Second of Second is "E". The first ordinal tells what letter to use from the second ordinal. Then the third ordinal tells what position that letter should be placed in within the final word. Doing so gives the letters O-R-D-E-R. The answer is ORDER.

That's all the hints I have for this puzzle. The next click will reveal the answer as well as a quick breakdown of the intended method for arriving at the answer.
{"url":"https://www.shuffle.spencerispuzzling.com/items/5-of-spades","timestamp":"2024-11-08T18:16:02Z","content_type":"text/html","content_length":"504850","record_id":"<urn:uuid:72445771-5bb2-44f8-80ae-6f7bcda87275>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00392.warc.gz"}
Fingerprint Positioning Method for Dual-Band Wi-Fi Based on Gaussian Process Regression and K-Nearest Neighbor Key Laboratory of Land Environment and Disaster Monitoring, MNR, China University of Mining and Technology, Xuzhou 221116, China School of Environmental Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China College of Surveying and Geo-Informatics, Shandong Jianzhu University, Jinan 250101, China Author to whom correspondence should be addressed. Submission received: 24 August 2021 / Revised: 7 October 2021 / Accepted: 13 October 2021 / Published: 15 October 2021 Since many Wi-Fi routers can currently transmit two-band signals, we aimed to study dual-band Wi-Fi to achieve better positioning results. Thus, this paper proposes a fingerprint positioning method for dual-band Wi-Fi based on Gaussian process regression (GPR) and the K-nearest neighbor (KNN) algorithm. In the offline stage, the received signal strength (RSS) measurements of the 2.4 GHz and 5 GHz signals at the reference points (RPs) are collected and normalized to generate the online dual-band fingerprint, a special fingerprint for dual-band Wi-Fi. Then, a dual-band fingerprint database, which is a dedicated fingerprint database for dual-band Wi-Fi, is built with the dual-band fingerprint and the corresponding RP coordinates. Each dual-band fingerprint constructs its positioning model with the GPR algorithm based on itself and its neighborhood fingerprints, and its corresponding RP coordinates are the label of this model. The neighborhood fingerprints are found by the spatial distances between RPs. In the online stage, the measured RSS values of dual-band Wi-Fi are used to generate the online dual-band fingerprint and the 5 GHz fingerprint. Due to the better stability of the 5 GHz signal, an initial position is solved with the 5 GHz fingerprint and the KNN algorithm. Then, the distances between the initial position and model labels are calculated to find a positioning model with the minimum distance, which is the optimal positioning model. Finally, the dual-band fingerprint is input into this model, and the output of this model is the final estimated position. To evaluate the proposed method, we selected two scenarios (A and B) as the test area. In scenario A, the mean error (ME) and root-mean-square error (RMSE) of the proposed method were 1.067 and 1.331 m, respectively. The ME and RMSE in scenario B were 1.432 and 1.712 m, respectively. The experimental results show that the proposed method can achieve a better positioning effect compared with the KNN, Rank, Coverage-area, and GPR algorithms. 1. Introduction Indoor positioning has attracted extensive attention due to its broad economic outlook. It can be realized with ultra-wideband (UWB) [ ], Bluetooth [ ], Wireless Fidelity (Wi-Fi) [ ], Radio-Frequency Identification (RFID) [ ], Computer Vision [ ], Ultrasonic [ ], Inertial Navigation System (INS) [ ], pseudolite [ ], the geomagnetic field [ ], etc. Among them, Wi-Fi positioning technology is a popular technology due to its lower cost. It mainly includes two categories: received signal strength (RSS) [ ] and Channel State Information (CSI) [ ]. The CSI-based positioning method requires special equipment, thus hindering its widespread use. In contrast, since most mobile devices, such as smartphones and computers, support the Wi-Fi protocol, the RSS measurement can be easily obtained. 
In addition, the widely used Wi-Fi routers also create a good environment for the acquisition of RSS. Hence, Wi-Fi RSS-based indoor positioning is the most used indoor localization method. The single-band Wi-Fi signal has been extensively studied in previous works, and the most used band is 2.4 GHz. However, Wi-Fi signals currently have two bands, 2.4 GHz and 5 GHz, and most Wi-Fi devices on the market can already transmit two-band signals. These changes in Wi-Fi technology introduce some challenges to Wi-Fi RSS-based positioning technology. It is thus necessary to study how to use dual-band Wi-Fi instead of single-band Wi-Fi to realize higher-accuracy indoor positioning. However, there are few positioning studies based on dual-band Wi-Fi. Thus, this paper proposes a fingerprint positioning method for dual-band Wi-Fi based on GPR and KNN. This method includes two stages: offline and online. In the offline stage, the RSS measurements of the 2.4 GHz and 5 GHz signals are collected to perform the normalization operation. The dual-band fingerprints are generated with the normalized RSS measurements. The dual-band fingerprints and their corresponding RP coordinates comprise a special fingerprint database for dual-band Wi-Fi, i.e., the dual-band fingerprint database. Then, based on the given neighborhood radius, the neighborhood fingerprints of each dual-band fingerprint are searched to construct the positioning model. Finally, all models are stored in the positioning model database, which is used for localization. In the online stage, the RSS measurements of the dual-band signals are collected at the same time to generate the online dual-band fingerprint and 5 GHz fingerprint. Based on the 5 GHz fingerprint and the KNN algorithm, an initial position is solved to obtain an optimal positioning model from the model database. Then, the online dual-band fingerprint is regarded as the input data of the selected positioning model. The output of the positioning model is the final estimated position.

The contributions of this work can be listed as follows:

In this paper, we propose a dedicated dual-band fingerprint for dual-band Wi-Fi, which is generated from the normalized values of the RSS measurements of the 2.4 and 5 GHz signals. The dual-band fingerprint can make full use of the existing positioning information and achieves a better positioning effect than the 2.4 GHz, 5 GHz, and hybrid fingerprints. The function of the normalization algorithm is to eliminate the metrics of the RSS measurements to avoid their influence on the value calculation.

A model construction method is used to build the positioning model of each dual-band fingerprint in this paper. In the proposed method, based on the GPR algorithm and the neighborhood fingerprints, the positioning model of each dual-band fingerprint can be constructed. The proposed model construction method can avoid the decrease in the positioning model's precision as the positioning area expands.

We propose a two-step positioning strategy considering the calculation amount and positioning effect. First, the 5 GHz fingerprint is used to solve a relatively high-precision initial position, because the 5 GHz signal is more stable than the 2.4 GHz signal, and KNN, with its low complexity, is chosen as the positioning algorithm. Then, the optimal positioning model is chosen based on the initial position, which ensures a more accurate final position.

2. Related Work

The RSS-based positioning method mainly includes ranging [ ] and fingerprinting [ ].
The ranging method applies the path loss model to estimate the distances between the access points (APs) and the positioning terminal. Localization algorithms, such as the least-squares (LS) [ ] and trilateration [ ] algorithms, then solve for the position using the estimated distances. However, ranging has a poorer positioning effect than fingerprinting, since the RSS measurement is easily disturbed. The fingerprinting method mainly includes two stages: offline and online. The main task of the offline stage is to construct the fingerprint database. In the online stage, the main purpose is to utilize positioning algorithms, such as the K-nearest neighbor (KNN) [ ], neural network (NN) [ ], Gaussian process regression (GPR) [ ], Horus [ ], Rank [ ], and Coverage-area [ ] algorithms, to estimate the position of the terminal. Many methods have been proposed to enhance positioning accuracy since the invention of Wi-Fi fingerprinting. The clustering algorithm was introduced to Wi-Fi fingerprint positioning to improve the positioning accuracy and decrease the computation. Machine learning and deep learning were applied to the field of Wi-Fi fingerprint positioning to improve the positioning effect. Interpolation and crowdsourcing were employed to assist the fast construction of the fingerprint database. However, most of the current research on Wi-Fi fingerprint positioning is based on 2.4 GHz Wi-Fi, while few studies have focused on 5 GHz and dual-band Wi-Fi. For example, the authors of [ ] proved that the 5 GHz RSS is more stable than the 2.4 GHz RSS. The performance of fingerprint positioning with 5 GHz Wi-Fi was obviously better than that with 2.4 GHz Wi-Fi [ ]. The authors of [ ] achieved indoor positioning with the path loss model and the RSS measurements of dual-band Wi-Fi. However, they only studied range-based positioning and neglected fingerprint-based positioning; more importantly, the ranging precision was not ideal. The authors of [ ] utilized the RSS measurements of the 5 GHz signals to construct the fingerprint database and obtained a better positioning effect than with the 2.4 GHz signals. However, this discarded the existing positioning information, namely the RSS measurements of the 2.4 GHz signals. The authors of [ ] distinguished NLOS and LOS environments using the 2.4 GHz and 5 GHz RSS measurements and employed capsule networks to derive the position.

3. Algorithm

3.1. Normalization Algorithm

The normalization algorithm is an approach that transforms a dimensional expression into a non-dimensional expression. It can adjust data with different metrics to the same metric. Generally, the normalization algorithm maps data between 0 and 1. Its computation method can be presented as:

$RSS_{nor} = \frac{RSS - RSS_{min}}{RSS_{max} - RSS_{min}}$  (1)

where $RSS_{nor}$, $RSS$, $RSS_{min}$, and $RSS_{max}$ denote the normalized RSS value, the measured RSS, the minimum RSS value, and the maximum RSS value, respectively.
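A minimal Python sketch of Equation (1); the dBm bounds below are illustrative assumptions, as the paper does not state which minimum and maximum values it uses:

def min_max_normalize(rss, rss_min=-100.0, rss_max=-30.0):
    # Eq. (1): map a raw RSS reading (in dBm) onto [0, 1]
    return (rss - rss_min) / (rss_max - rss_min)

print(min_max_normalize(-62.0))  # 0.5428..., i.e. (-62 + 100) / 70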
3.2. KNN Algorithm

The KNN algorithm is a common fingerprint positioning algorithm. It uses the distances between the online and offline fingerprints to find the K nearest reference points (RPs) and realize the location estimation. The Euclidean distance, Manhattan distance, cosine distance, and other measures can be used for the KNN algorithm. In this paper, we used the Euclidean distance to compute the distances between the online and offline fingerprints. The algorithm steps are as follows:

Step (1): Traverse the fingerprint database and calculate the distances between the online fingerprint and the fingerprints in the database.

Step (2): Sort these distances and find the K nearest RPs with the minimum distances.

Step (3): Calculate the mean of the K nearest RP coordinates as the positioning result. The calculation method can be expressed as:

$\left(X_{knn}, Y_{knn}\right) = \frac{1}{K}\sum_{i=1}^{K}\left(x_i, y_i\right)$  (2)

Here, $\left(X_{knn}, Y_{knn}\right)$ is the estimated location and $(x_i, y_i)$ denotes the position of the $i$th nearest RP.

3.3. GPR Algorithm

In this paper, we utilized the GPR algorithm to construct the positioning model of each dual-band fingerprint. The dual-band fingerprints are the training data, which can be expressed as:

$R^i = \left(RSS_1^i, RSS_2^i, RSS_3^i, \cdots, RSS_M^i\right)$  (3)

where $R^i$ represents the $i$th dual-band fingerprint, $RSS_j^i$ presents the feature of the $j$th dual-band Wi-Fi router, and $M$ is the number of dual-band Wi-Fi routers. In this paper, the ideal output of the model is the position that corresponds to the dual-band fingerprint, which can be expressed as:

$L = \left(l_1, l_2, l_3, \cdots, l_N\right), \quad l_i = \left(x_i, y_i\right)$  (4)

where $N$ is the number of RPs, which is also the number of dual-band fingerprints used to build the positioning model, $L$ denotes the collection of RP coordinates used in the model construction, and $(x_i, y_i)$ represents the coordinates of the $i$th RP. Following the GPR algorithm, there is a mapping between the input and output data, as illustrated in Equation (5):

$l_i = f\left(R^i\right) + \gamma$  (5)

where $\gamma$ follows a Gaussian distribution, namely $\gamma \sim N\left(0, \sigma^2\right)$, and $l_i$ represents the coordinates of the $i$th RP. The main purpose of model training is to find the potential relation between the input and output data, which the GPR algorithm can solve. A Gaussian process is a set of random variables that are subject to a joint Gaussian distribution, which is determined by a mean function and a covariance function, as shown in Equation (6):

$f(R) \sim \mathcal{GP}\left(m(R), K(R, R)\right)$  (6)

where $m(R)$ represents the mean function, which can be set to zero without loss of generality, $K(R, R)$ denotes the covariance matrix shown in Equation (7), $k\left(R^i, R^j\right)$ presents the covariance function, which can be calculated by Equation (8), and $f(R)$ represents the Gaussian process.

$K(R, R) = \begin{bmatrix} k(R^1, R^1) & k(R^1, R^2) & \cdots & k(R^1, R^N) \\ k(R^2, R^1) & k(R^2, R^2) & \cdots & k(R^2, R^N) \\ \vdots & \vdots & \ddots & \vdots \\ k(R^N, R^1) & k(R^N, R^2) & \cdots & k(R^N, R^N) \end{bmatrix}$  (7)

$k\left(R^i, R^j\right) = E\left[\left(f(R^i) - m(R^i)\right)\left(f(R^j) - m(R^j)\right)\right]$  (8)

Here, $E[\cdot]$ indicates the expectation operator. The main purpose of model training is to solve the hyperparameters

$\left(\sigma, \delta_f, \lambda_l\right)$  (9)

where $\delta_f$ represents the signal standard deviation and $\lambda_l$ is the length-scale parameter. In this paper, the Euclidean distance between fingerprints $R^i$ and $R^j$, represented by $\left\|R^i - R^j\right\|$, was selected, and $k\left(R^i, R^j\right)$ was calculated by the kernel function:

$k\left(R^i, R^j\right) = \delta_f^2 \exp\left(-\frac{\left\|R^i - R^j\right\|^2}{2\lambda_l^2}\right)$  (10)

The prediction position $l_*$ and the training positions $L$ jointly follow a multivariate Gaussian distribution:

$\begin{bmatrix} L \\ l_* \end{bmatrix} \sim \mathcal{N}\left(0, \begin{bmatrix} K(R, R) & K(R, R_*) \\ K(R_*, R) & K(R_*, R_*) \end{bmatrix}\right)$  (11)

where $R$ and $R_*$ are the training data and test data, respectively. The posterior distribution $P\left(l_* \mid L\right)$ can be denoted as:

$l_* \mid L \sim \mathcal{N}\left(K(R_*, R) K(R, R)^{-1} L,\; K(R_*, R_*) - K(R_*, R) K(R, R)^{-1} K(R, R_*)\right)$  (12)

In the positioning estimation stage, the collected RSS measurements of dual-band Wi-Fi are utilized to produce the dual-band fingerprint, which is regarded as the input data of the positioning model. The output of the model is the estimated position.
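A compact NumPy sketch of Equations (10)-(12), assuming the hyperparameters are already fixed (the paper trains them; typically this is done by maximizing the marginal likelihood) and adding a small diagonal noise term for numerical stability. Inputs are 2D arrays of shape (n_samples, n_features):

import numpy as np

def rbf_kernel(X1, X2, sf=1.0, ls=1.0):
    # Eq. (10): k = sf^2 * exp(-||xi - xj||^2 / (2 * ls^2))
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return sf ** 2 * np.exp(-d2 / (2 * ls ** 2))

def gpr_predict(R_train, L_train, R_test, sf=1.0, ls=1.0, noise=1e-2):
    # Posterior mean and covariance of Eq. (12)
    K = rbf_kernel(R_train, R_train, sf, ls) + noise * np.eye(len(R_train))
    Ks = rbf_kernel(R_test, R_train, sf, ls)    # K(R*, R)
    Kss = rbf_kernel(R_test, R_test, sf, ls)    # K(R*, R*)
    K_inv = np.linalg.inv(K)
    mean = Ks @ K_inv @ L_train                 # K(R*,R) K(R,R)^-1 L
    cov = Kss - Ks @ K_inv @ Ks.T               # K(R*,R*) - K(R*,R) K(R,R)^-1 K(R,R*)
    return mean, cov

Here L_train holds the RP coordinates (an N x 2 array), so the posterior mean is directly an estimated (x, y) position.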
3.4. Rank Algorithm

The Rank algorithm is a fingerprinting algorithm based on the rankings of the RSS measurements, which can avoid the influence of device heterogeneity. In this algorithm, the rankings of the RSS values in the offline and online stages are used to construct the fingerprint database and solve for the positioning results. Suppose that there is a set of RSS values, $R^i = \left(RSS_1^i, RSS_2^i, RSS_3^i, \cdots, RSS_M^i\right)$. The fingerprint of the Rank algorithm can be denoted as:

$R_{Rank}^i = \mathrm{Sort}\left(R^i\right) = \left(d_1, d_2, d_3, \cdots, d_M\right)$  (13)

where $R_{Rank}^i$ represents the fingerprint of the Rank algorithm, $d_j$ is the rank of the $j$th RSS measurement in descending order, and $M$ is the number of RSS measurements. Then, the similarities between the online and offline rank vectors are calculated to find the K nearest RPs, which are used to solve for the location. In the Rank algorithm, the Spearman distance, Jaccard coefficient, Hamming distance, Canberra distance, and other measures can be used to compute the similarity.

3.5. Coverage-Area Algorithm

The Coverage-area algorithm uses the MACs of the scanned APs to achieve the positioning. Its offline fingerprint can be expressed as:

$R_{coverage}^i = \left(x_i, y_i, MAC_1, MAC_2, MAC_3, \cdots, MAC_M\right)$  (14)

where $R_{coverage}^i$ represents the fingerprint of the Coverage-area algorithm, $MAC_j$ is the MAC of the $j$th AP, and $M$ is the number of scanned APs. In the online stage, there is a set of MACs, $MAC_{scan} = \left(MAC_1, MAC_2, MAC_3, \cdots, MAC_M\right)$, which is the online fingerprint of the Coverage-area algorithm. Based on the MACs scanned online, the shared MACs of the offline and online fingerprints can be obtained. The RPs with the most shared MACs are chosen to estimate the position.

4. The Proposed Method

4.1. Stability Analysis of RSS Measurements of Dual-Band Wi-Fi

Since dual-band Wi-Fi uses two different bands, there should be differences between the two sets of RSS measurements. In this section, we analyze the stability of the RSS measurements of the two-band signals. The RSS measurements of the 2.4 GHz and 5 GHz signals were gathered to study their stability, as shown in Figure 1. To conduct this experiment, the experimenter stood in a fixed position and gathered the RSS measurements of a dual-band Wi-Fi router from four directions. The collection frequency was 1 Hz, and the original RSS measurements in each direction were employed to analyze the stability of the 2.4 and 5 GHz signals. The original RSS measurements of each scan in the four orientations are shown in Figure 2. In each direction, the stability of the RSS measurements of the 2.4 GHz and 5 GHz signals differed: the RSS measurements of the 5 GHz signals were more stable than those of the 2.4 GHz signals. For example, the RSS measurements of the 2.4 GHz signals had a few strong jumps at orientations 1 and 2. Therefore, it is necessary to study how to appropriately utilize the RSS measurements of dual-band Wi-Fi to achieve a better positioning effect than with single-band Wi-Fi.

4.2. The Traditional Fingerprint

Dual-band Wi-Fi can transmit 2.4 GHz and 5 GHz signals, meaning that more positioning information (2.4 GHz and 5 GHz RSS) can be obtained when dual-band Wi-Fi is utilized for indoor positioning compared with single-band Wi-Fi. Generally, the RSS measurements of either band can be employed to construct a fingerprint database. In other words, we can utilize the RSS values of the 2.4 GHz and 5 GHz signals to generate the 2.4 GHz and 5 GHz fingerprints, respectively, as shown in Figure 3, where $MAC$ represents the media access control (MAC) address of an AP, the superscript and subscript denote the serial number and signal band, respectively, and $M$ denotes the number of perceived APs.

4.3. The Proposed Dual-Band Fingerprint

Given that the RSS measurements of the 2.4 GHz and 5 GHz signals have different metrics, it is necessary to adjust the two RSS measurements to the same metric. In this paper, the normalization algorithm was utilized to preprocess the two RSS measurements, as shown in Equation (15):

$RSS_{nor}^i = Nor\left(RSS_{2.4GHz}^i\right) + Nor\left(RSS_{5GHz}^i\right)$  (15)

where $Nor\left(RSS_{2.4GHz}^i\right)$ and $Nor\left(RSS_{5GHz}^i\right)$ are the normalized values of the 2.4 GHz and 5 GHz RSS of the $i$th dual-band Wi-Fi router, respectively, and $RSS_{nor}^i$ denotes the feature of the $i$th dual-band Wi-Fi router. In other words, the RSS measurements of the 2.4 GHz and 5 GHz signals of one dual-band Wi-Fi router generate one feature that represents this router. Multiple features make up the dual-band fingerprint, as shown in Figure 4.
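A sketch of Equation (15) that reuses the min-max normalization of Equation (1); whether the min/max bounds are taken per band, per AP, or globally is not specified in the paper, so shared illustrative bounds are assumed here:

import numpy as np

def dual_band_fingerprint(rss_24, rss_5, lo=-100.0, hi=-30.0):
    # Eq. (15): per-AP feature = Nor(RSS_2.4GHz) + Nor(RSS_5GHz);
    # the M features together form the dual-band fingerprint
    n24 = (np.asarray(rss_24, float) - lo) / (hi - lo)
    n5 = (np.asarray(rss_5, float) - lo) / (hi - lo)
    return n24 + n5

fp = dual_band_fingerprint([-55, -70, -81], [-60, -74, -88])  # 3 dual-band APs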
4.4. Overview of the Proposed Method

The proposed method includes two stages, offline and online, as shown in Figure 5. The offline stage consists of two steps: fingerprint database construction and positioning model establishment. First, the RSS measurements of the 2.4 GHz and 5 GHz signals at the RPs are collected. Then, these measurements are normalized and adjusted to the same metric. Based on these normalized RSS values, the dual-band fingerprints are generated, and the dual-band fingerprint database is built with the dual-band fingerprints and their corresponding RP coordinates. Meanwhile, a 5 GHz fingerprint database can also be built with the 5 GHz RSS measurements. Second, we construct the positioning models. Considering that a single positioning model trained by the GPR algorithm has poor accuracy in a larger indoor environment, this paper proposes a novel model construction method that establishes a positioning model for each dual-band fingerprint. In the proposed model construction method, the RP corresponding to the $i$th dual-band fingerprint $R^i$ is taken as a center to search for the fingerprints in its neighborhood, namely the neighborhood fingerprints, as shown in Figure 6. Then, the dual-band fingerprint $R^i$ and its neighborhood fingerprints are regarded as the training data to establish a positioning model, and its corresponding RP coordinates are the label of this positioning model. After the positioning model of $R^i$ is built, we establish the positioning model of the next fingerprint $R^{i+1}$, until all dual-band fingerprints have their positioning models, i.e., $i > N$, as shown in Figure 7. The positioning models of all dual-band fingerprints are stored in a database. In the online stage, the first step is to collect the RSS measurements of the 2.4 GHz and 5 GHz signals to generate the online dual-band and 5 GHz fingerprints. To generate the dual-band fingerprint, the RSS measurements must first be normalized; the sum of the two sets of normalized RSS values is the dual-band fingerprint. The second step is the location estimation. This step primarily aims to find an optimal positioning model to calculate the position of the terminal. The KNN algorithm is utilized to obtain an initial position based on the 5 GHz fingerprint. Then, the spatial distances between this initial position and the model labels are computed, i.e., the distances between this initial position and the RPs, which are employed to find an optimal positioning model. The model corresponding to the minimum distance is the final optimal localization model. The dual-band fingerprint is input into this model, and the output is the final location estimation result.
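The two stages can be summarized in a short sketch with hypothetical helper code; `radius` and `k` are tuning choices the paper does not fix numerically, and `train_gpr` stands for any routine returning a fitted predictor, e.g. a closure around the gpr_predict() sketch above:

import numpy as np

def build_models(db_dual, db_coords, train_gpr, radius=2.0):
    # Offline (Sec. 4.4): one GPR model per RP, trained on that RP's
    # dual-band fingerprint plus the fingerprints of RPs within `radius` metres
    models = []
    for c in db_coords:
        near = np.linalg.norm(db_coords - c, axis=1) <= radius
        models.append(train_gpr(db_dual[near], db_coords[near]))
    return models  # model i is labelled by db_coords[i]

def locate(fp_dual, fp_5ghz, db_5ghz, db_coords, models, k=3):
    # Online: KNN initial fix from the 5 GHz fingerprint (Eq. 2), then
    # refinement by the model whose label is closest to that fix
    d = np.linalg.norm(db_5ghz - fp_5ghz, axis=1)
    init = db_coords[np.argsort(d)[:k]].mean(axis=0)
    best = int(np.argmin(np.linalg.norm(db_coords - init, axis=1)))
    return models[best](fp_dual)  # the refined (x, y) estimate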
5. Experimental Analysis

5.1. Experimental Environment

To evaluate the performance of the proposed method, two scenarios, Scenario A and Scenario B, were chosen as the experimental areas, as shown in Figure 8 and Figure 9, respectively. Scenario A, as shown in Figure 8, had a length of 56.82 m and an area of approximately 154.37 m². The blue circles and red rectangles in the figure represent the RPs and test points (TPs), respectively. The distance between two adjacent RPs was 1.2 m. There were 10 dual-band Wi-Fi routers in Scenario A, all of which could transmit dual-band signals. There were 114 RPs and 28 TPs in Scenario A. A XIAOMI 8 smartphone was used to gather the data in this scenario. Scenario B was approximately 140.49 m², and its length was 16.8 m. The blue rectangles and red circles in Figure 9 represent the RPs and TPs, respectively. The distance between adjacent RPs was, at most, 0.7 m, and the distance between a small number of adjacent RPs was 0.6 m. There were five dual-band Wi-Fi routers with dual-band signal emission in Scenario B, along with 183 RPs and 24 TPs. In this scenario, a HUAWEI P20 smartphone was used to collect the data. The collection methods of the two scenarios shared the same acquisition frequency and time. The experimenters stood at each RP and collected the RSS measurements of the 2.4 GHz and 5 GHz signals. The acquisition time was 30 s, and the acquisition frequency was 1 Hz. The RSS measurements of the dual-band signals were used to generate the 2.4 GHz fingerprint, 5 GHz fingerprint, hybrid fingerprint, and dual-band fingerprint. To test the performance of the proposed positioning method, the RSS measurements at each TP were gathered with the same acquisition method as that of the RPs; however, the acquisition time was 10 s.
Compared with the GPR method, the model accuracy of the proposed method was improved by 46.78%. Thus, the constructed model of the proposed method is better than that of GPR. 5.4. Positioning Effect of the 2.4 GHz, 5 GHz, Hybrid, and Dual-Band Fingerprints in Scenario A In this section, Scenario A was used as a test area to study the positioning effect of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints. The GPR algorithm was chosen as the positioning algorithm to estimate the position. Figure 11 shows the positioning effect of the different fingerprints, mainly presenting the MEs and RMSEs of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints. When the GPR algorithm was the positioning algorithm, the MEs of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints were 2.318, 2.509, 2.238, and 2.020 m, respectively, and RMSEs were 2.793, 3.011, 2.922, and 2.404 m, respectively, as shown in Table 1 . We can conclude that the positioning stability and accuracy of hybrid and dual-band fingerprints are better than those of 2.4 GHz and 5 GHz fingerprints. The dual-band fingerprint had improvements of 0.66 and 0.218 m on the RMSE and on ME, respectively, compared to the hybrid fingerprint. The dual-band fingerprint was better than the hybrid fingerprint. The experimental results also indicate that the GPR algorithm would not have a good positioning effect in a large indoor area. 5.5. Positioning Effect of the 2.4 GHz, 5 GHz, Hybrid, and Dual-Band Fingerprints in Scenario B In this section, we introduce the positioning effect of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints in Scenario B. KNN was the positioning algorithm. Figure 12 represents the MEs of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints. The positioning effect of the dual-band fingerprint was better than that of the 4 GHz, 5 GHz, and hybrid fingerprints. Table 2 presents the statistical results of the positioning errors of the 2.4 GHz, 5 GHz, hybrid, and dual-band fingerprints in Scenario B. The ME and RMSE of the 2.4 GHz fingerprint were 2.366 and 2.589 m, respectively, and those of 5 GHz fingerprint were 2.352 and 2.559 m, respectively. The MEs of the hybrid and dual-band fingerprints were 2.12 m and 1.896 m, respectively, and RMSEs were 2.392 and 2.152 m, respectively. The ME of the dual-band fingerprint decreased by 0.47, 0.456, and 0.224 m, respectively. The RMSE of the dual-band fingerprint reduced by 0.435, 0.405 and 0.238 m, respectively. Thus, the positioning effect of the dual-band fingerprint is better than that of 2.4 GHz, 5 GHz, and hybrid fingerprints. 5.6. Positioning Effect of the Proposed Method in Two Scenarios In this section, the performance of the proposed method is analyzed by comparison with the other positioning algorithms (KNN, Rank, Coverage-area, and GPR). Scenario A and B are regarded as test areas to rate the accuracy and stability of the proposed method, respectively. At first, the proposed method and comparison algorithm were implemented in Scenario A. Then, the positioning errors of the above methods were analyzed, aiming to evaluate the localization performance of the positioning algorithm. Figure 13 shows the cumulative distribution functions (CDFs) of the positioning errors of KNN, GPR, Rank, Coverage-area, and the proposed method when the experimental area was Scenario A and the dual-band fingerprint was used. The proposed method had the best positioning effect among all methods, with a maximum error of 2.725 m. 
The maximum errors of KNN, Rank, Coverage-area, and GPR were 6.992, 11.724, 8.824, and 5.183 m, respectively, which indicates that the proposed method has a good ability to avoid positioning errors. Table 3 presents the statistical results of the positioning errors of KNN, GPR, and the proposed method. The results include the ME and RMSE of each positioning method, as well as the positioning errors corresponding to some important cumulative error probabilities, such as 50%, 70%, and 90%. The ME and RMSE of the proposed method were 1.067 m and 1.331 m, respectively. They decreased by 0.736 m and 0.931 m, respectively, compared with KNN. In addition, compared with GPR, the ME and RMSE decreased by 0.953 and 1.073 m, respectively. Compared with Rank, the accuracy of the proposed method was improved by 70.78%, and the stability was improved by 72.7%. The ME and RMSE of the proposed method were reduced by 66.22% and 65.31%, respectively, compared to the Coverage-area. Therefore, we can conclude that the proposed method is superior to KNN, Rank, Coverage-area, and GPR. Based on the experimental results of Scenario A, we can see that the ME of the proposed method decreased by 1.353 and 0.748 m compared with the 2.4 GHz and 5 GHz fingerprints using KNN. The ME of the proposed method was reduced by 1.251 and 1.442 m compared to that of the 2.4 GHz and 5 GHz fingerprints using GPR. The positioning effect of the proposed method was better than that of single-band Considering that Scenario A is a corridor and has a simple structure, we chose another scenario, Scenario B, to assess the proposed method. The positioning results of Scenario B are shown in Figure 14 , which presents the CDFs of the positioning errors of KNN, Rank, Coverage-area, GPR, and the proposed method. Their maximum errors are 4.692, 7.63, 6.874, 4.414, and 4.244 m, respectively. Table 4 shows the positioning effect in Scenario B. The proposed method had an ME of 1.432 m, which was decreased by 24.47%, 49%, 60.37%, and 27.71% compared to KNN, rank, coverage-area, and GRP, respectively. Meanwhile, compared with KNN, Rank, Coverage-area, and GRP, the RMSE of the proposed method had 20.48%, 48.7%, 57.7 %, and 25.27% reductions, respectively. The positioning results in Scenario B also prove that the proposed method is better than KNN, Rank, Coverage-area, and GPR. In this section, we evaluate the positioning performance of the proposed method according to the positioning error in two scenarios. However, the above analysis mainly focuses on a single scenario. Thus, we combined the positioning results of two scenarios to evaluate the positioning performance of the proposed method more comprehensively. Figure 15 shows the positioning effect of KNN, Rank, Coverage-area, GPR, and the proposed method in two scenarios. The lower quartile, median, upper quartile of positioning errors of the proposed method were better than those of the KNN, Rank, Coverage-area, and GPR algorithms. Both the positioning stability and accuracy of the proposed method were better than those of KNN, Rank, Coverage-area, and GPR. Based on the results presented in Table 5 , we can see that the positioning effect of the proposed method was better than that of KNN, Rank, Coverage-area, and GPR. The positioning stability of the proposed method was improved by 43.26%, 33%, 44.76%, and 35.44%, respectively, compared with KNN, Rank, Coverage-area, and GPR. 
The positioning accuracy of the proposed method was improved by 45.72%, 55.98%, 65.79%, and 38.26%, respectively, compared to that of KNN, Rank, Coverage-area, and GPR.

6. Conclusions

In this paper, we proposed a fingerprint positioning algorithm for dual-band Wi-Fi based on Gaussian process regression and the K-nearest neighbor algorithm, which has a better positioning effect than that of the 2.4 GHz fingerprint, the 5 GHz fingerprint, and the hybrid fingerprint. The proposed algorithm can make full use of the existing location information (2.4 GHz and 5 GHz RSS measurements), obtaining a better positioning accuracy compared with KNN, Rank, Coverage-area, and GPR. The experimental results also show that the proposed method is greatly improved compared with the traditional fingerprint positioning methods. However, the proposed method needs to determine the basic information of the dual-band Wi-Fi devices in advance to build the dual-band fingerprint database, which may limit the application of this method.

Author Contributions: Conceptualization, Hongji Cao; Data curation, Jingxue Bi, Hongji Cao, Hongxia Qi, Shenglei Xu, Meng Sun; Formal analysis, Hongji Cao and Jingxue Bi; Funding acquisition, Yunjia Wang; Methodology, Jingxue Bi; Project administration, Yunjia Wang; Software, Jingxue Bi; Supervision, Yunjia Wang; Validation, Jingxue Bi, Hongxia Qi; Visualization, Hongji Cao; Writing—original draft, Hongji Cao. All authors have read and agreed to the published version of the manuscript.

Funding: This research was funded by the National Key Research and Development Program of China [grant number 2016YFB0502102] and the National Natural Science Foundation of China [grant number 42001397]. The research was also funded by the State Key Laboratory of Satellite Navigation System and Equipment Technology (CEPNT-2018KF-03), the Doctoral Research Fund of Shandong Jianzhu University (XNBS1985), and the Key Laboratory of Surveying and Mapping Science and Geospatial Information Technology of the Ministry of Natural Resources (2020-3-4).

Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: Not applicable.

Acknowledgments: The authors thank the editors and reviewers of this paper for their comments, which improved its quality.

Conflicts of Interest: The authors declare no conflict of interest.

Figure 7. Schematic diagram of the model construction: (a) shows the model construction of dual-band fingerprint $R_i$; (b) shows the model construction of dual-band fingerprint $R_{i+1}$.

Table 1. MEs and RMSEs (m) of the four fingerprints in Scenario A (GPR as the positioning algorithm).

Fingerprint             ME     RMSE
2.4 GHz fingerprint     2.318  2.793
5 GHz fingerprint       2.509  3.011
Hybrid fingerprint      2.238  2.922
Dual-band fingerprint   2.020  2.404

Table 2. MEs and RMSEs (m) of the four fingerprints in Scenario B (KNN as the positioning algorithm).

Fingerprint             ME     RMSE
2.4 GHz fingerprint     2.366  2.589
5 GHz fingerprint       2.352  2.559
Hybrid fingerprint      2.120  2.392
Dual-band fingerprint   1.896  2.154

Table 3. Positioning errors (m) of the five methods in Scenario A at the 50%, 70%, and 90% cumulative error probabilities, with ME and RMSE.

Method           50%    70%    90%    ME     RMSE
KNN              1.393  2.220  3.042  1.803  2.262
Rank             2.425  4.573  7.328  3.651  4.876
Coverage-area    2.179  3.900  6.294  3.159  3.837
GPR              1.631  2.311  3.562  2.020  2.404
Proposed method  0.748  1.423  2.008  1.067  1.331

Table 4. Positioning errors (m) of the five methods in Scenario B at the 50%, 70%, and 90% cumulative error probabilities, with ME and RMSE.

Method           50%    70%    90%    ME     RMSE
KNN              1.504  1.876  3.204  1.896  2.153
Rank             2.453  2.608  5.042  2.808  3.337
Coverage-area    3.023  4.808  5.760  3.613  4.047
GPR              1.542  2.354  3.551  1.981  2.291
Proposed method  1.037  1.432  2.249  1.432  1.712

Table 5. MEs and RMSEs (m) of the five methods over both scenarios combined.

Method           ME     RMSE
KNN              2.278  2.677
Rank             3.262  4.235
Coverage-area    3.369  3.936
GPR              2.002  2.353
Proposed method  1.236  1.519
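As a side note for readers reproducing the ME and RMSE statistics in the tables above: both are computed from the per-test-point positioning errors, i.e. the Euclidean distances between estimated and true positions. The following is a minimal sketch of ours, not code from the paper.

import math

def me_rmse(errors):
    """Mean error and root-mean-square error of positioning errors, in metres."""
    me = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return me, rmse

# errors would be distances between estimated and ground-truth positions
print(me_rmse([1.2, 0.8, 2.5, 1.9]))  # (1.6, 1.727...) for this toy list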
{"url":"https://www.mdpi.com/2220-9964/10/10/706","timestamp":"2024-11-14T11:57:01Z","content_type":"text/html","content_length":"496899","record_id":"<urn:uuid:ef826f32-34dc-44a9-8620-15a8c2519d25>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00466.warc.gz"}
Separative work unit

The Separative Work Unit (SWU) is a unit that quantifies the effort required in the uranium enrichment process, in which uranium-235 and uranium-238 are separated. The separative work unit is measured in units of kg (kilograms) and can then be used to determine cost per SWU and kWh (kilowatt-hours) per SWU (it is not work in the traditional physics sense). Natural uranium is 0.711% ^235U and needs to be enriched to between 3 and 5% for use in a light water reactor (a BWR or a PWR). In order to do this, the uranium undergoes an enrichment process that requires an input of separative work. This is due to the second law of thermodynamics, which states that the entropy (disorder) of any isolated system must increase; because separated isotopes represent more order (lower entropy) than mixed isotopes, effort must be put in to enrich the uranium.^[2]

Calculating SWU

The amount of uranium required to be fed into the enrichment plant in order to obtain a desired amount of enriched product depends on the desired enrichment of the product, the original enrichment of the feed, and the enrichment of the depleted uranium or the 'tails'. Knowing this, a mass balance equation using these three masses and their respective percent concentrations can be written:^[2]

$x_F M_F = x_P M_P + x_T M_T$ ............. (Equation 1)

Where $x_F$ is the concentration of the feed uranium and $M_F$ is the mass of the feed uranium, $x_P$ is the concentration of the product uranium and $M_P$ is the mass of the product uranium, and $x_T$ is the concentration of the tails and $M_T$ is the mass of the tails. Applying conservation of mass ($M_F = M_P + M_T$) and manipulating the equation a little bit, we're left with:^[2]

$M_F = M_P \dfrac{x_P - x_T}{x_F - x_T}$ ............. (Equation 2)

The concentration of the feed uranium will almost always be 0.711%, which is the concentration of natural uranium; therefore, as long as the desired concentrations of product and tails along with the desired mass of product are known, the mass of feed can be determined. Separative work can be expressed in terms of a function $V(x)$ known as the value function, given by:^[2]

$V(x) = (1 - 2x)\ln\!\left(\dfrac{1-x}{x}\right)$ ............. (Equation 3)

Where $x$ is the enrichment concentration. The SWU can then be determined using the masses of each uranium concentration found using Equation 2 and the value function evaluated at each respective concentration:^[2]

$\text{SWU} = M_P V(x_P) + M_T V(x_T) - M_F V(x_F)$ ............. (Equation 4)

For a better understanding of SWU, an online SWU calculator can be used.

Example Calculation

You want to make 20 kg of uranium enriched to 3.8% ^235U by weight. Your tailing fraction is 0.2% by weight. You can produce 1 kg SWU for 50 kWh at a cost of $130.75 per 1 kg SWU.

How much feed uranium do you need? Here we use Equation 2:

$M_F = 20 \times \dfrac{0.038 - 0.002}{0.00711 - 0.002} \approx 140.9 \text{ kg}$

Therefore we would need 140.9 kg of feed uranium at a concentration of 0.711% in order to produce 20 kg of 3.8% enriched uranium.

How much energy does it take to make this uranium and what is the cost? Here we first use Equation 3 for each of the concentration percentages and plug what we get into Equation 4:

$V(x_P) = (1 - 2(0.038))\ln\!\left(\dfrac{1 - 0.038}{0.038}\right) \approx 2.986$

Do this for the other two concentrations and find: $V(x_F) \approx 4.869$, $V(x_T) \approx 6.188$. Then plug these three value functions into Equation 4 to find:

$\text{SWU} = 20(2.986) + 120.9(6.188) - 140.9(4.869) \approx 121.807 \text{ kg SWU}$

We can then multiply this number by both the kWh and cost per SWU to find our two answers:

$121.807 \times 50 \text{ kWh} \approx 6090.355 \text{ kWh}$ and $121.807 \times \$130.75 \approx \$15{,}926.27$

Therefore it would take 6090.355 kWh to produce this uranium at a cost of $15,926.27.
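To make the four equations concrete, here is a small Python sketch (not part of the original article; the function and variable names are ours) that reproduces the worked example above.

import math

def value_function(x):
    # Equation 3: V(x) = (1 - 2x) * ln((1 - x) / x)
    return (1 - 2 * x) * math.log((1 - x) / x)

def swu(product_kg, x_product, x_feed=0.00711, x_tails=0.002):
    # Equation 2: feed mass needed for the desired product mass
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg  # conservation of mass
    # Equation 4: separative work in kg SWU
    work = (product_kg * value_function(x_product)
            + tails_kg * value_function(x_tails)
            - feed_kg * value_function(x_feed))
    return feed_kg, work

feed, work = swu(20, 0.038)
print(f"feed: {feed:.1f} kg, separative work: {work:.1f} kg SWU")
print(f"energy: {work * 50:.0f} kWh, cost: ${work * 130.75:.2f}")
# feed ~140.9 kg and work ~121.8 kg SWU; the small difference from the
# article's 121.807 comes from the article rounding the value functions
# to three decimals before combining them.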
{"url":"https://energyeducation.ca/encyclopedia/Separative_work_unit","timestamp":"2024-11-13T17:31:50Z","content_type":"text/html","content_length":"22714","record_id":"<urn:uuid:6b4b2456-b1d8-4091-9814-b70b31526bd3>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00677.warc.gz"}
gotoAndPlay Flash Tutorials -- The art of scriptable skew

Flash MX added a function to distort "shapes", but it is only a "shape" that we can distort. We still cannot distort a "movieClip" or a "bitmap". So, when we need to skew a movieClip, we can only skew it into a regular diamond (quadrangle); no perspective skew can be applied to a movieClip either manually or by script, unless the target is a "shape" or something you break apart into a shape. To distort a movieClip, we need a trick: build the picture out of multiple stripes and control each stripe separately. However, this is not practical for a large movieClip. If a movieClip is 300 pixels wide and we make each stripe 3 pixels wide, we get 100 movieClips just to construct the original movieClip. The CPU will go mad. Below is a movie demonstrating the principles.

Download the source code

Isn't there any other way? Could we break the picture into two triangles, distort and skew each triangle, and combine them together? I tried this in many ways. After thinking deeply about the math, I reached a conclusion: it is not possible. Below is a movie I made. I stitched two triangles together. At first glance, the distortion skew does not look too bad. But if we click the button, we will realize that it is not an even distortion. The stitch line will sometimes be obvious.

The common way is to make a 180- or 360-frame tween to render the skew angle. The pitfall is that it is not accurate enough. If we want to construct a 3D object from multiple skewed faces, we cannot get a seamless model without cracks.

So, could we use math to skew a movieClip? Here is the secret. Skewing can be obtained easily by putting a movieClip inside a stretched movieClip. The movieClip itself does the rotation and the _parent does the stretching. Below is the swf showing a skew of a 100x100 square. The _parent can also do rotation and the movieClip can also do stretching. With various combinations of stretch and rotation we can skew the square movieClip into the shape we want. But how do we know how much _rotation, _xscale, and _yscale we should apply to the _parent and to the movieClip itself? I will explain.

Ok, we can first stretch our movieClip to 100x100 size and then rotate it -45 degrees. That gives what we see below. Now try to control 3 parameters: the _parent._yscale, and the _xscale and _yscale of the movieClip itself. You can click the buttons to increase and decrease the values. I suggest you observe the change of _parent._yscale first. That is the core of the skew math. You will realize that you can easily make a skew with the angle you want and the shape you want. After you get it, we will discuss the math.

Here is the diagram of the math that relates the skew angle to _parent._yscale; I won't explain it in detail. You should try to develop the equation yourself. Anyway, stretching _parent._yscale will increase the skew angle, and shrinking _parent._yscale will sharpen the skew angle. Once you figure this out, you will see that it is very simple math.

Ok, we will go step by step to make a movieClip of 200x100 conform to 3 draggable corners (you cannot change the 4th corner, because Flash allows only a diamond skew; as said before, we can't do a distortional skew such as a perspective skew). These are the steps. First, we calculate the skew angle: Angle(P2-P0-P1); we make a quadrangle with that angle; then we stretch the arm of "width" and then stretch the arm of "height". Then we put the prepared quadrangle at the place we want it to be, setting _x, _y and _rotation.
Before the function can be applied, some preparation should be done. First, the movieClip that is going to be skewed must have its registration point at the top-left corner, not at the center of the movieClip. Create an empty holder movieClip and put this movieClip inside it; the registration point should be at the (0,0) point. Now give your movieClip an instance name of "mc". If you prefer to give it a name you like, for example "mySkewableMC", then in the first frame of this holder movieClip, write this.mc = mySkewableMC.

The above setting is just for my skew function. You can develop your own skew equation and modify it. The function is:

function skewObj (obj, mcW, mcH, pt0, ptH, ptW) {
    function distance (pt1, pt2) {
        var dy = pt2.y - pt1.y;
        var dx = pt2.x - pt1.x;
        var side = Math.sqrt(dy*dy + dx*dx);
        return side;
    }
    obj._x = pt0.x;
    obj._y = pt0.y;
    obj._yscale = 100;
    var angleP2 = Math.atan2(ptW.y - pt0.y, ptW.x - pt0.x);
    var angleP1 = Math.atan2(ptH.y - pt0.y, ptH.x - pt0.x);
    var dAngle = (angleP1 - angleP2)/2;
    var arm = Math.sqrt(2)/2/Math.cos(dAngle); // originally a 100x100 model, now a 1x1 model
    obj._rotation = (180/Math.PI)*(angleP1 - dAngle);
    obj.mc._rotation = -45;
    obj._yscale = 100*Math.tan(dAngle);
    obj.mc._xscale = distance(ptW, pt0)*100/arm/mcW;
    obj.mc._yscale = distance(ptH, pt0)*100/arm/mcH;
}

Here, obj is the instance name of your holder movieClip. mcW and mcH are the original width and height of the movieClip inside the holder. pt0, ptH and ptW are point objects with the format {x:??, y:??}; they are in the same coordinate system as the holder movieClip. Ok, below is an example that makes a movieClip stretch to follow the 3 draggable corner movieClips p0, pH, pW.

H = obj.mc._height;
W = obj.mc._width;
function updateSkew () {
    pt0 = {x:p0._x, y:p0._y};
    ptH = {x:pH._x, y:pH._y};
    ptW = {x:pW._x, y:pW._y};
    skewObj(obj, W, H, pt0, ptH, ptW);
}
{"url":"http://gotoandplay.biz/_articles/2004/07/skew.php?PHPSESSID=88033e5f5ceab352cc1326bbab56f16d","timestamp":"2024-11-09T01:26:31Z","content_type":"text/html","content_length":"26349","record_id":"<urn:uuid:2411ac02-2268-4aa4-9919-becd90fec920>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00013.warc.gz"}
[molpro-user] hessian always calculated numerically for root=2 ump2, rccsd, rccsd(t) geometry opts
H.-J. Werner werner at theochem.uni-stuttgart.de
Sat Feb 17 09:19:54 GMT 2007

By default, the hessian is computed in the first step of transition state optimizations. Minimum searches do not compute the hessian by default but use a model hessian instead. If analytical gradients are available (e.g. uhf/rhf/mcscf) the hessian is computed by finite differences from gradients. If gradients are not available (ump2, rccsd, and rccsd(t)) the hessian is computed by finite differences from energies. This needs many more displacements. In order to avoid the numerical calculation of the rccsd(t) hessian, you might be able to compute the hessian using, e.g., rhf or mcscf, and then use this in the rccsd(t) optimization. See manual, READHESS option!
Joachim Werner

On Fr, 16 Feb 2007, Benj FitzPatrick wrote:
>I am trying to find transition states for a couple of open shell systems with
>2006.1 (patch level 34). When it gets to the first OPT where it calculates the
>hessian I get differing results based on which level of theory I use. If I use
>uhf/rhf/mcscf the hessian is computed using analytic gradients and it makes 9
>calculations. For ump2, rccsd, and rccsd(t) it wants to do 90 calculations to
>get the hessian. To me this looks like it is trying to use finite differences.
>This only happens when I'm looking at a transition state; if I use root=1 the
>behavior of ump2, etc. is the same as uhf. Is this behavior expected? If not,
>why is molpro doing this and is it possible to get around this? I included an
>input file below.
>Benj FitzPatrick
>University of Chicago
> ***,ump2 irc of c3h5o going from the epoxide to o having the radical ***
> memory,50,M;
> C;
> H, 1, B1;
> H, 1, B2, 2, A1;
> C, 1, B3, 3, A2, 2, D1;
> H, 4, B4, 1, A3, 3, D2;
> basis=avdz;
> wf,15,1,1}
> wf,15,1,1}

Prof. Hans-Joachim Werner
Institute for Theoretical Chemistry
University of Stuttgart
Pfaffenwaldring 55
D-70569 Stuttgart, Germany
Tel.: (0049) 711 / 685 64400
Fax.: (0049) 711 / 685 64442
e-mail: werner at theochem.uni-stuttgart.de

More information about the Molpro-user mailing list
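For what it's worth, the 9-versus-90 counts line up with the size of the Z-matrix above, which has 9 variable internal coordinates (B1, B2, A1, B3, A2, D1, B4, A3, D2). The back-of-the-envelope count below is our own illustration, not from the thread; the exact displacement scheme Molpro uses may differ.

n = 9  # variable internal coordinates in the Z-matrix above

# Hessian from finite differences of analytic gradients:
# roughly one displaced gradient per coordinate.
print(n)            # 9

# Hessian from finite differences of energies alone:
# central differences for the n diagonal elements (2n points) plus
# double displacements for the n(n-1)/2 off-diagonal elements.
print(2 * n + n * (n - 1))  # 90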
{"url":"https://www.molpro.net/pipermail/molpro-user/2007-February/002089.html","timestamp":"2024-11-02T11:38:42Z","content_type":"text/html","content_length":"5824","record_id":"<urn:uuid:c42f4393-9bc0-4333-b5d1-44a42c91710e>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00489.warc.gz"}
Quantitative Aptitude Quiz For IBPS RRB PO, Clerk Prelims 2021 - 25th June

Q1. A person bought some articles and sold 60% of them for the price he paid for all the articles. He sold the remaining articles at a 10% profit. Find his total profit percentage.
(a) 50%
(b) 46%
(c) 44%
(d) 10%
(e) 56%

Q2. The cost price of article A is 60% more than the selling price of another article B. If the discount allowed on article B is 20% and article A is sold at a 25% profit, then the marked price of article B is what percent less than the selling price of article A?
(a) 25%
(b) 37.5%
(c) 12.5%
(d) 7.5%
(e) 9%

Q3. The distance covered by a jeep in two hours is equal to the distance covered by a car in 3 hours. A train, which is 33⅓% faster than the car, covers 240 km in 3 hours. Find the distance covered by the jeep in 5 hours.
(a) 300 km
(b) 400 km
(c) 600 km
(d) 450 km
(e) 500 km

Q6. The ratio of the speed of a boat upstream to that downstream is 2 : x, and the distance travelled by the boat in still water in 4 hours is equal to the distance travelled by the boat upstream in 5 hours. Find the value of 'x'.
(a) 7
(b) 3
(c) 5
(d) None of these
(e) Can't be determined

Q7. The ratio between the lengths of two trains is 2 : 3, and the speeds of these two trains are 108 km/hr and 96 km/hr respectively. If the smaller train crosses a 360 m long platform in 16 sec, then find the time taken by the two trains to cross each other when running in the same direction.
(a) 90 sec
(b) 84 sec
(c) 96 sec
(d) 108 sec
(e) 112 sec

Q10. Sumit can swim 12 km upstream and 16 km downstream in 5 hours. If the ratio of the speed of the stream to the upstream speed of Sumit is 1 : 2, then find the speed of Sumit in still water.
(a) 6 km/hr
(b) 5 km/hr
(c) 4 km/hr
(d) 8 km/hr
(e) 7 km/hr

Q11. Two trains A and B started to move towards each other from P and Q respectively, and they meet after 4 hours. By the time A reaches Q, B has covered 100 km more distance after reaching P. The speeds of A and B are in the ratio 2 : 3. Find the speed of train A.
(a) 25 km/hr
(b) 20 km/hr
(c) 15 km/hr
(d) 18 km/hr
(e) 22 km/hr

Q12. Two boats A and B are 120 km apart in a river, and they start to move towards each other and meet after 6 hours. If the ratio of the speed of a boat in still water to its downstream speed is 5 : 6, then find the speed of a boat in still water. (Assume the speed of both boats in still water is the same.)
(a) 9 km/hr
(b) 7 km/hr
(c) 6 km/hr
(d) 8 km/hr
(e) 10 km/hr

Q13. Train 'X' takes 2 hours more than train 'Y' to cover a certain distance 'D', while train 'X' can cover (D + 160) km in 8 hours. If the speed of train 'Y' is 50% more than that of train 'X', then find the speed of train 'Y'.
(a) 80 km/hr
(b) 120 km/hr
(c) 16 km/hr
(d) 40 km/hr
(e) 60 km/hr
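As a quick sanity check of two of the answers (added for illustration; this code is not part of the original quiz), Q1 and Q3 can be verified with a few lines of Python:

# Q1: take a convenient total cost, say 100, for all the articles
cost = 100.0
revenue = cost                  # 60% of the articles sold for the full cost price
revenue += 0.4 * cost * 1.10    # remaining 40% sold at a 10% profit
print((revenue - cost) / cost * 100)  # 44.0 -> option (c)

# Q3: the train is 33 1/3% faster than the car and covers 240 km in 3 hours
train = 240 / 3            # 80 km/hr
car = train / (4 / 3)      # 60 km/hr
jeep = car * 3 / 2         # the jeep covers in 2 h what the car covers in 3 h
print(jeep * 5)            # 450.0 km -> option (d)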
{"url":"https://www.bankersadda.com/quantitative-aptitude-quiz-for-ibps-rrb-po-clerk-prelims-2021-25th-june/","timestamp":"2024-11-09T04:50:38Z","content_type":"text/html","content_length":"614600","record_id":"<urn:uuid:f3631b18-b7ae-4175-a67a-7551ec099297>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00360.warc.gz"}
Core percolation and the controllability of complex networks
Held by Márton Pósfai (Eötvös University, Budapest)
19.03.2013, 16:00
EW 731

Network science has emerged as a prominent field in complex system research, showing that complexity may arise from the connectivity patterns that underlie a system. Structural transitions in networks have been extensively studied due to their impact on numerous dynamical processes. Here we study such a transition, core percolation on complex networks. The core of a network is defined as the spanned subgraph which remains in the network after the following greedy leaf removal procedure: as long as the network has leaves, i.e. nodes of degree 1, choose an arbitrary leaf and its neighbour, and remove them together with all the adjacent links. Finally, we remove all isolated nodes. For low mean degree the core is small (zero asymptotically), whereas for mean degree larger than a critical value the core covers a finite fraction of all the nodes. We analytically solve the core percolation problem for networks with arbitrary degree distributions. We show that for undirected networks the transition is continuous, while for directed networks it is discontinuous (and hybrid) if the in- and out-degree distributions differ. As an application, we use core percolation to explain phenomena related to the controllability of networks. Recent advances in applying control theory to complex networks have offered the tools to identify the driver nodes, through which we can achieve full control of a system. These tools predict the existence of multiple control configurations, prompting us to classify each node in a network based on its role in control: a node is critical if the system cannot be controlled without it; intermittent if it acts as a driver node only in some configurations; and redundant if it does not play a role in control. We find that above the core percolation threshold networks fall into two classes: (i) either they are in a centralized control mode, being driven by only a tiny fraction of nodes, or (ii) a distributed control mode, when most nodes play some role in controlling the system. We show that the control mode of a network depends on the structure of the emerging core.
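The greedy leaf removal that defines the core is easy to state in code; here is a small sketch of ours (not from the talk) for an undirected graph given as an adjacency dict:

def core(adj):
    """Greedy leaf removal; adj maps node -> set of neighbours (undirected)."""
    adj = {v: set(ns) for v, ns in adj.items()}
    leaves = [v for v, ns in adj.items() if len(ns) == 1]
    while leaves:
        leaf = leaves.pop()
        if leaf not in adj or len(adj[leaf]) != 1:
            continue  # the node may have changed or vanished since it was queued
        (neigh,) = adj[leaf]
        # remove the leaf and its neighbour, together with all adjacent links
        for v in (leaf, neigh):
            for u in adj.pop(v, set()):
                if u in adj:
                    adj[u].discard(v)
                    if len(adj[u]) == 1:
                        leaves.append(u)
    return {v for v, ns in adj.items() if ns}  # finally, drop isolated nodes

# a path has leaves and is fully eaten; a 4-cycle has none and survives
print(core({1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}))        # set()
print(core({1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {3, 1}}))  # {1, 2, 3, 4}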
{"url":"https://www3.itp.tu-berlin.de/collaborative_research_center_910/sonderforschungsbereich_910/events/seminars/core_percolation_and_the_controllability_of_complex_networks/index.html","timestamp":"2024-11-08T03:15:08Z","content_type":"application/xhtml+xml","content_length":"15123","record_id":"<urn:uuid:62a3eed7-a538-4591-835f-216cd36e9dda>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00319.warc.gz"}
What are some good resources to learn about basic electricity?
August 11, 2005 6:21 PM   Subscribe

What are some good resources to learn about basic electricity? I have a device here that says "9VAC / 1300mA" under the power input. I just moved and the adapter that was packed with it says it outputs "DC 7.5V 1A". It's not exactly the same, so I'm afraid to try it out in case I destroy my beautiful equipment. This is just the latest electrical idiot moment for me. I seem to know zero about things like DC, AC, VAC, volts, watts, amps, milliamps, etc. I've looked up the individual terms, but it tends to lead to equations and science history. Is there a resource that will help me gain a basic working knowledge rather than treating electricity like scary witchcraft?

Don't plug an AC->DC adaptor into an AC appliance. This is a pretty good little slide show on basic electrical concepts.
posted by nicwolff at 6:41 PM on August 11, 2005

Best answer: google is always a good resource for this kind of stuff. there are a million pages out there with basic electronics tutorials. this one looks to be good, though i haven't read it really, but lots of pictures and not so much math. this stuff is difficult for beginners to learn because it's all invisible. however there are some everyday analogues that come in handy. *deep breath* a short but simple description of these circuit concepts goes like this: electronics is all about the way charge moves around in wires. charge comes in discrete units, called electrons - however an electron is very small, and so there are very, very many of them moving around in your typical household circuit. voltage describes the invisible force that makes the electrons move. another, geekier name for voltage is electro-motive force, which pretty accurately describes what it does. a volt (V) is the unit of voltage. voltage is two-directional - it can be an attractive force, or a repulsive force. so what happens when you get some electrons near a voltage? well, if they're inside a conductor like a copper wire, they start to move, either away from the voltage or towards the voltage, depending on its direction. this is where the notion of "positive" and "negative" voltage comes in - a positive voltage attracts an electron, a negative voltage repels it. you can talk about how many electrons move through your wire when you apply a voltage - this is called current, and is measured in units of Amperes (A), or Amps for short. a milliamp is 1/1000 of an amp. one Amp is an awful lot of electrons - around 62,000,000,000,000,000,000 of them. so what determines how much current you get? for most simple circuits, two things: the voltage you apply, and the material properties of the circuit. these material properties are neatly wrapped up in a quantity called resistance, which is measured in units called ohms. now, i know we don't like equations in this discussion, but bear with me. voltage, current, and resistance are related in an equation called Ohm's law: V = I * R, or if you do a little algebra, I = V / R. so, basically, if you apply a voltage to something with some resistance, you'll get some electrons to move around in the thing. if the resistance is increased, less current will flow; if the resistance is decreased, more current will flow. if the voltage is increased, more current will flow; if the voltage is decreased, less will. an easy-to-grasp analogue of this is water in hoses.
voltage can be thought of as something similar to the water pressure, current can be thought of as something similar to the amount of water that runs through the hose, and resistance is kind of like the inverse of the hose diameter. turn up the pressure, you get more flow; step on the hose and reduce the diameter, you get less. i've been talking about DC circuits, which means that the voltage is constant. however many circuits, for various reasons, use alternating current or AC. in an AC circuit, the voltage is constantly swinging back and forth between positive and negative. this means that the electrons in the circuit first go one way, then slow down, stop, go back the other way, slow down, etc. current is still flowing; it's just that it happens to be the "same" electrons that flow back and forth through a particular segment of your circuit. as for your devices: "9VAC / 1300 mA" means that the device you have requires 9 volts of AC (alternating current) and has an effective resistance (which you could easily do the math on and figure out) that makes 1300 mA flow. what this means is that it needs 1300 mA worth of current to work properly, and the little power supply thingy you plug in has to be capable of providing at least that much, and at the right voltage. the one you've got supplies 7.5V (not enough) and it's DC, and it can only put out 1 amp (also not enough). i don't think there would be any harm in plugging this in, since it's undervoltage, but your device is almost certain to not work. best bet is to shop for a "universal" AC adapter; the best ones are the kind that have a little switch that allows you to select the voltage they put out. make sure you get one that puts out AC if that's what your device needs. a BIG gotcha here is that there are like a million different sizes of plugs (the end that goes into your thing, not the wall end) and there's no apparent rhyme or reason to how they're selected. so if you get a universal one, get one of the kind that allows you to change the tips and that comes with a big variety. anyway, best of luck. man, this turned out to be really long. sweet jeezus. sorry.
posted by sergeant sandwich at 7:05 PM on August 11, 2005

oh, i should add - in general it's okay to plug a DC source into a circuit that's expecting AC (it won't damage the circuit, unless the voltage is higher than expected), but the opposite is not true. an AC source can easily damage a circuit that's expecting DC.
posted by sergeant sandwich at 7:20 PM on August 11, 2005

Best answer: [Heh, good stuff Sgt. I was typing up a summary as well, which is not as detailed as yours but might help, so here goes...] Volts measure the difference in potential between the sides of a circuit, and you can think of this as if the electrons were water flowing from one tank to another; raise one tank on high, and there's more pressure in the hose. Ohms measure the resistance in the circuit to the flow of electricity - how wide is that hose? Amps measure the flow of current, which is the voltage divided by the resistance - if you want more water out of a hose, you can raise one tank higher or get a bigger hose. Watts measure the power at some point in the circuit - how much work could the flowing water do? - and that's the product of the voltage and the amperage. If you wanted to turn a waterwheel faster, you could either increase the pressure coming out of the hose, or the speed of the flow.
Now if you just hook up a battery to some wires, all the electrons are coming at you all the time, from the negative terminal around the circuit to the positive terminal. (Remember, the charge of an electron is negative.) That's called direct current or DC, and there's nothing wrong with it - Edison wanted to distribute power that way - but it turns out that if you instead push the electrons back and forth in the circuit - "alternating" the current's direction - you can manage the voltage of the power with transformers, raising it for long-distance transmission and then lowering it for delivery into homes. So that's the situation; your home is getting 120 volts AC (in the US - maybe 220 or 240 abroad) from the power company, and the breakers in the box in the basement share out that power at different amperages to 120V outlets in different rooms. Most of the simple electrical devices in your house take 120V AC directly - lamps, the refrigerator &c. - but some small devices are designed to take less voltage, so they have a step-down transformer - in your case, the device wants 9 volts at 1.3 amps of AC power. On the other hand, complex electronic devices run on DC internally so they have either external adaptors or internal power supplies that convert the AC to DC. You've got one of the external adaptors there, which takes 120V AC and outputs 7.5V DC at 1 amp. So, it's both the wrong kind of power and not enough power to operate that device.
posted by nicwolff at 7:24 PM on August 11, 2005

I suggest that you turn to an actual book for learning the basics. Much of what you find on the web will be confusing to you if you are a beginner. I suggest this book.

As far as power adapters go, here is all it boils down to in a nutshell:
- match AC to AC and DC to DC, don't try to mix.
- the voltage rating of both should be the same. [*]
- the current rating of the power supply should be equal to or greater than the current rating of the device. [**]
- make sure the polarity of the plug is correct. Most devices will have a little picture showing whether ring or tip is positive.
That's it.

[*] There are some exceptions to this, however. In some cases you can easily use a DC power supply with a higher voltage than what is called for in the equipment if the equipment expects unregulated DC (and thus it will be stepping it down and regulating it anyway, so the exact input voltage is irrelevant.) Some equipment uses a DC-DC converter (most laptops) and here again the actual voltage of the input isn't really critical to be exactly right.

[**] It's fine for the power supply to have a much larger current rating than the device, as it will only supply as much current as the device will draw. The reverse is not true though.
posted by Rhomboid at 7:49 PM on August 11, 2005

The rule my electrician and electrical engineer friends told me to boil it down to the simplest terms: It takes one amp to move one volt across one ohm.
posted by lambchop1 at 8:44 PM on August 11, 2005

lambchop1: shouldn't that be, it takes one volt to move one amp across one ohm?
posted by delmoi at 9:17 PM on August 11, 2005

and good show, sergeant sandwich, nicwolff.
posted by fatllama at 12:06 AM on August 12, 2005

In addition to what's been said, an AC voltage can be specified in several ways:
1) "Peak voltage" is the highest voltage that there ever is. Once the voltage reaches the peak, it starts decreasing again.
2) "Peak-to-peak voltage" is the difference between the strongest positive and the strongest negative voltage, which is twice the peak voltage. 3) "RMS voltage" stands for Root-Mean-Square. If you take the AC voltage and reverse all the negative peaks, you get a voltage that varies between the peak voltage and zero. Then you smooth this out to DC. This is the RMS voltage, which is 0.7 times the peak voltage. For grid power and home appliances, the voltage specified is always RMS. In other words, an 9VAC adapter feeds out the equivalent of 9VDC but has a peak voltage of 12.7 volts and a peak-to-peak voltage around 25 volts. posted by springload at 4:20 AM on August 12, 2005 Just saw sergeant sandwich's comment linked on the sidebar on the front page. Great post overall, but I do have one correction: you can talk about how many electrons that move through your wire when you apply a voltage - this is called current, and is measured in units of Amperes (A), or Amps for short. a milliamp is 1/1000 of an amp. one Amp is an awful lot of electrons - around 62,000,000,000,000,000,000 of them. Current is not just how many electrons move through a wire--it's how many electrons, and how quickly. The unit of measure which is related to the number of electrons alone is the . One coulomb is equivalent to about 6,240,000,000,000,000,000 electrons. If you take 1 coulomb and push it through a wire, you could do it quickly, or you could do it slowly. If you push 1 coulomb through in 1 second, your current is 1 (a.k.a. amp). If you take 10 seconds to push that coulomb through, your current is only 1/10 amp. Thus, one amp is not any fixed number of electrons--you can't determine a specific number of electrons without knowing how long the current is applied for. posted by DevilsAdvocate at 1:51 PM on August 17, 2005 DevilsAdvocate is right - missed that one in the proofread. should have been electrons-per-second, and the right number of zeroes. posted by sergeant sandwich at 4:09 PM on August 18, 2005 « Older How do I convince corporate America that this... | Does "data" have mass? Newer » This thread is closed to new comments.
{"url":"https://ask.metafilter.com/22494/What-are-some-good-resources-to-learn-about-basic-electricity","timestamp":"2024-11-12T22:15:46Z","content_type":"text/html","content_length":"42197","record_id":"<urn:uuid:9f652202-5f69-4a78-8aa1-035c61198abc>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00494.warc.gz"}
#77 Brownian Carnot engine-I 微觀卡諾引擎-I

Hello everyone. Happy Chinese New Year! Today I am going to greet our readers with my humble gift, the Brownian Carnot engine model. If you are not familiar with the Carnot engine, please refer to our older post.

The Carnot cycle proposed in 1824 sets an upper limit to the efficiency of thermal engines. However, does that still hold in the microscopic world, in which most biological motors operate? In order to experimentally study how a microscopic Carnot engine would work, the authors of our suggested reading developed a way to effectively heat, cool, compress or expand a microsphere confined in optical tweezers, as shown below. (figure adapted from the supplementary figure 1 of ref [1].)

The optical tweezers act as an effective spring with force constant κ, which confines the range of motion of the microsphere. Thus, by adjusting the force constant κ, we can effectively compress or expand the volume of the system. Besides, by recording the position of the microsphere, we can deduce the temperature if we know the force constant κ, because of the equipartition theorem:

$\frac{1}{2}\kappa\langle x^2\rangle = \frac{1}{2}k_B T, \quad\text{i.e.}\quad \langle x^2\rangle = \frac{k_B T}{\kappa}$

This also implies that if we push the microsphere to increase its range of motion, we can effectively heat the system. This is feasible since the polystyrene beads have an inherent charge in polar solutions, and we can adjust their range of motion by applying a random voltage.

After introducing this system, we will now reformulate heat, work, and other key variables in our system. First we write down the Hamiltonian of the system:

$H(x, p; \kappa) = \frac{p^2}{2m} + \frac{1}{2}\kappa x^2$

The conjugate force associated with the external variable κ can be obtained by

$F_\kappa = \frac{\partial H}{\partial \kappa} = \frac{x^2}{2}$

and the work that must be done to change κ would be

$\delta W = \frac{\partial H}{\partial \kappa}\, d\kappa = \frac{x^2}{2}\, d\kappa$

and the heat can be calculated as

$\delta Q = dH - \delta W$

It's much easier now to figure out how we could perform an isothermal compression or expansion on this system. What about the adiabatic ones? In the microscopic regime, we can obtain a process during which

$\langle \delta Q \rangle = 0$

which implies

$d\langle H \rangle = \langle \delta W \rangle$

which means

$\frac{T^2}{\kappa} = \text{const}$

holds during an adiabatic process. (To see this, use $\langle H \rangle = k_B T$ and $\langle x^2 \rangle = k_B T/\kappa$; the condition $d\langle H\rangle = \langle\delta W\rangle$ then gives $k_B\,dT = \frac{k_B T}{2\kappa}\,d\kappa$, which integrates to $T^2/\kappa = \text{const}$.)

So now we know how to build and run a Brownian Carnot engine conceptually. In the next episode, contrary to what the authors did, we are going to simulate a Brownian Carnot engine (rather than really build one) and plot the results. Stay tuned!

Suggested reading: Martinez, I. A., Roldan, E., Dinis, L., Petrov, D., Parrondo, J. M. R., Rica, R. A. (2015). Brownian Carnot Engine. Nature Physics 12.
[1] Martínez, I. A., Roldán, E., Dinis, L., Petrov, D. and Rica, R. A. (2015). Adiabatic processes realized with a trapped Brownian particle. Phys. Rev. Lett. 114, 120601.
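Since the next episode will simulate the engine, here is a minimal sketch (our own illustration, in natural units) of the kind of simulation involved: an overdamped Langevin step for a bead in a harmonic trap, whose position variance should satisfy the equipartition relation above.

import numpy as np

rng = np.random.default_rng(0)
kB = T = gamma = 1.0          # natural units, purely for illustration
kappa = 2.0                   # trap stiffness
dt, n_steps = 1e-3, 200_000

# overdamped Langevin equation: gamma dx = -kappa x dt + sqrt(2 kB T gamma) dW
x = 0.0
xs = np.empty(n_steps)
sigma = np.sqrt(2 * kB * T * dt / gamma)
for i in range(n_steps):
    x += -(kappa / gamma) * x * dt + sigma * rng.standard_normal()
    xs[i] = x

# equipartition check: <x^2> should be close to kB*T/kappa = 0.5
print(xs.var(), kB * T / kappa)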
{"url":"http://www.threeminutebiophysics.com/2017/01/77-brownian-carnot-engine-i-i.html","timestamp":"2024-11-07T22:13:51Z","content_type":"application/xhtml+xml","content_length":"109018","record_id":"<urn:uuid:9b8a3e4a-78f4-4074-a674-f0b4e4b67003>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00327.warc.gz"}
Technical Mathematics, 2nd Edition

You will NOT need a calculator for this module.

The metric system was first implemented following the French Revolution; if we're overthrowing the monarchy, why should we use a unit of a "foot" that is based on the length of a king's foot? The metric system was designed to be based on the natural world, and different units are related to each other by powers of ten. The table below shows the most common metric prefixes. The prefixes are arranged in order so that we can convert between them simply by moving the decimal point the same number of places shown in the table.

kilo- (k) | hecto- (h) | deka- (da) | [base unit] | deci- (d) | centi- (c) | milli- (m)

Because deka- and deci- both start with d, the abbreviation for deka- is da.

Metric System: Measurements of Length

The base unit of length is the meter, which is a bit longer than a yard (three feet). Because the prefix kilo- means one thousand, a kilometer is 1,000 meters; because centi- means one hundredth, a centimeter is 1/100 of a meter; and because milli- means one thousandth, a millimeter is 1/1,000 of a meter.

From each of the four choices, choose the most reasonable measure.
1. The length of a car:
2. The height of a notebook:
3. The distance to the next town:
4. An adult woman's height:
5. An adult woman's height:
6. The thickness of a pane of glass:

kilo- (km) | hecto- (hm) | deka- (dam) | meter (m) | deci- (dm) | centi- (cm) | milli- (mm)

To convert metric units, you can simply move the decimal point left or right the number of places indicated in the table above. No calculator required!

A 2024 Chevrolet Silverado 1500 pickup truck is ^[1]
7. Convert
8. Convert

One mile is approximately 1.609 kilometers.
9. Convert
10. Convert

A sheet of A4 paper^[2] is 297 millimeters long.
11. Convert
12. Convert

The Burj Khalifa in Dubai is the world's tallest building, with a height of 828 meters.^[3]
13. Convert
14. Convert

Metric System: Measurements of Weight or Mass

The base unit for mass is the gram, which is about the mass of a paper clip. A kilogram is 1,000 grams, and a milligram is 1/1,000 of a gram.

From each of the three choices, choose the most reasonable measure.
15. The mass of an apple:
16. The mass of an adult man:
17. The amount of active ingredient in a pain relief pill:
18. The base vehicle weight of a Chevrolet Silverado 1500 pickup truck:

kilo- (kg) | hecto- (hg) | deka- (dag) | gram (g) | deci- (dg) | centi- (cg) | milli- (mg)

This table is identical to the previous table; the only difference is that the base unit "meter" has been replaced by "gram". This means that converting metric units of mass is exactly the same process as converting metric units of length.

A five-pound bag of flour weighs about
19. Convert
20. Convert

A 20-ounce bottle of Dr. Pepper contains
21. Convert
22. Convert
23. Convert

A 20-ounce bottle of Dr. Pepper contains
24. Convert
25. Convert

Metric System: Measurements of Volume or Capacity

The base unit of volume is the liter, which is slightly larger than one quart. The milliliter is also commonly used; of course, there are 1,000 milliliters in one liter.

1 liter is equivalent to a cube with sides of 10 centimeters.
Image adapted by Cristianrodenas on Wikimedia Commons.

In case you were wondering, the units of volume, length, and mass are all connected; one cubic centimeter (a cube with each side equal to one centimeter) has the same volume as one milliliter.
Image by nclm on Wikimedia Commons.

From each of the two choices, choose the more reasonable measure.
26. The capacity of a car's gas tank:
27. A dosage of liquid cough medicine:

kilo- (kL) | hecto- (hL) | deka- (daL) | liter (L) | deci- (dL) | centi- (cL) | milli- (mL)

Again, this table is identical to the previous tables; just move the decimal point left or right to convert the units.
A bottle of sparkling water is labeled 50 cl. Convert 50 centiliters to liters. 29. A “1-liter” bag of saline solution for intravenous use actually contains about 1.05 liters of solution.^[4] How many deciliters is this? 30. A carton of orange juice has a volume of 1.75 liters. Convert this into mL. 31. One cup (8 fluid ounces) is approximately 250 milliliters. Convert 250 milliliters into liters. 32. While being served drinks on IcelandAir, you notice that one mini bottle is labeled 50 mL, but another mini bottle is labeled 5 cL. How do the two bottles compare in size? 33. The engine displacement of a Yamaha Majesty scooter is 125 cc (cubic centimeters), and the engine displacement of a Chevrolet Spark automobile is 1.4 L (liters). What is the approximate ratio of these engine displacements? 34. How many 500-milliliter bottles of Coke are equivalent to one 2-liter bottle?
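The module itself needs no calculator, but if you'd like to check your answers with a computer afterward, here is a small sketch we've added (not part of the textbook); the prefix abbreviations follow the tables above.

# power of ten for each metric prefix, relative to the base unit
PREFIX = {"k": 3, "h": 2, "da": 1, "": 0, "d": -1, "c": -2, "m": -3}

def convert(value, from_prefix, to_prefix):
    """Move the decimal point by the difference in powers of ten."""
    return value * 10 ** (PREFIX[from_prefix] - PREFIX[to_prefix])

print(convert(828, "", "k"))   # the Burj Khalifa's 828 m is 0.828 km
print(convert(50, "c", "m"))   # a 50 cL bottle holds 500 mL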
{"url":"https://openoregon.pressbooks.pub/techmath2e/chapter/14-the-metric-system/","timestamp":"2024-11-04T02:11:22Z","content_type":"text/html","content_length":"158648","record_id":"<urn:uuid:a09e5e7e-f98f-435e-bd40-2e64aafe9966>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00238.warc.gz"}
Why you may need to reconsider your route selection criterium

You have a job interview in 20 minutes and you are in a hurry to arrive at your appointment in time. To make matters even more stressful, there are many routes to your destination, but you have no idea which one to select. Luckily, you have access to a navigation system that can help you in your route selection process. After entering your desired destination, your device prompts you to choose a route selection criterium. Surely, you tell your navigation system to select the fastest route. After all, it is the fastest route that is most likely to get you to your destination in time, right?

This may not always be the case. The travel time of your trip depends on many uncertain factors. For example, on your way to your job interview, you may get stuck in a traffic jam because of an accident, which will drastically increase your travel time. One could also think of smaller hindrances, such as having to wait for a bridge that is opening or repeatedly having to stop for red lights. Some routes will be more likely to cause delays than others. These routes bear a higher risk related to the travel time. Since travel times are not fixed upon departure, it is important to realise that the fastest route suggested by your navigation system is only the fastest route in expectation. It may be the case that this route consists of roads that are likely to cause delays. As a result, the actual travel time corresponding to this route could be very uncertain. It is because of this uncertainty that the fastest route may not always be the most desirable route.

Fortunately, there is a way to determine your most desirable route mathematically. This is where probability distributions come in. A probability distribution is a mathematical function that describes the likelihood of the actual outcomes of an uncertain event. Specifically, in the case of travel times, a probability distribution describes the probabilities of the actual travel times corresponding to a route. As a simple example, suppose there are only two possible routes to your job interview. Route 1 takes with certainty 20 minutes to traverse, whereas Route 2 has a shorter travel time of only 18 minutes. However, on Route 2, there is a 10% chance that you have to wait for an open bridge, causing a 4 minute delay. The probability distributions of these routes are shown in Figure 1.

Figure 1: The probability distributions of two routes to your job interview. Route 1 has a travel time of 20 minutes, whereas the travel time of Route 2 is either 18 minutes (with a probability of 90%) or 22 minutes (with a probability of 10%).

Of course, the probability distributions in Figure 1 do not give an accurate representation of the real world. It seems unrealistic to assume that a travel time can only equal a few distinct values. In fact, the number of possible travel times may be so large that every individual travel time has a probability of 0% of occurring. As a consequence, the probability distribution will not contain any useful information. This gives rise to probability density functions. These functions describe the relative likelihood of the actual travel times, while the absolute likelihood for a particular travel time is 0. Two examples of probability densities can be found in Figure 2.

Figure 2. $T_3$ has a normal distribution with mean 15 and variance 4 (left) and $T_4$ has a normal distribution with mean 12 and variance 16 (right).
Relative versus absolute likelihoods

As said before, the absolute likelihood of any particular travel time is 0. For example, your travel time will never be exactly equal to 17 minutes. But the probability that your travel time is between 17 minutes and 17 minutes and 1 second is quantifiable and non-zero. This probability can be estimated using the value of the probability density function at the point 17 minutes. For the probability density on the left this probability is equal to $p \approx 0.10 \cdot \frac{1}{60} = 0.0017$, while for the probability density on the right this probability is equal to $p \approx 0.07 \cdot \frac{1}{60} = 0.00117$. The probability density can be interpreted as the relative likelihood that the travel time is close to 17 minutes. Probability density functions can also say how much more likely some outcome is compared to another outcome. For example, in the probability density on the left, the value around 15 minutes is 0.20 and the value around 20 minutes is approximately 0.01. Hence in this probability density it is $\frac{0.20}{0.01} = 20$ times more likely to have a travel time close to 15 minutes than a travel time close to 20 minutes.

Choosing the best route

Routes that are expected to have a short travel time will have a probability density that is centred around a relatively low value. Unreliable routes, i.e. routes on which there is high uncertainty about the travel time, will have a probability density that is more spread out. As an example, suppose you get to choose between two more routes for which the travel times have probability densities as in Figure 2, where the travel times of Route 3 and Route 4 are denoted by $T_3$ and $T_4$. Since the probability density of $T_4$ is centred around a lower value, Route 4 has a lower expected travel time compared to Route 3. Consequently, if you ask your navigation system for the fastest route, it will recommend Route 4.

Having access to the probability densities means that it is possible to explicitly compute the probabilities of in time arrival. This is done by integrating the probability densities over the travel times that lead to in time arrival. If $f_3$ and $f_4$ denote the probability densities of $T_3$ and $T_4$, then the probability that you arrive within 20 minutes at your job interview is given by

$P(T_3 \le 20) = \int_{-\infty}^{20} f_3(t)\, dt \approx 0.994 \quad\text{and}\quad P(T_4 \le 20) = \int_{-\infty}^{20} f_4(t)\, dt \approx 0.977.$

It is remarkable that Route 4 has a lower mean travel time compared to Route 3, but at the same time this route is more likely to take more than 20 minutes to traverse. You may want to verify for yourself that we find the same phenomenon for Route 1 and Route 2 in Figure 1. It turns out that if you want to maximise the probability of arriving at your job interview within 20 minutes, the expected fastest route, which is the route that will be suggested by a navigation system, is not your best option!

The featured image of this article is due to waldryano and is taken from Pixabay.
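These arrival probabilities are easy to check numerically; here is a short sketch of ours (not from the article) using SciPy's normal distribution.

from scipy.stats import norm

# Route 3: mean 15, variance 4 (std 2); Route 4: mean 12, variance 16 (std 4)
p3 = norm.cdf(20, loc=15, scale=2)
p4 = norm.cdf(20, loc=12, scale=4)
print(f"P(T3 <= 20) = {p3:.3f}")  # 0.994
print(f"P(T4 <= 20) = {p4:.3f}")  # 0.977
# Route 4 is faster in expectation, yet more likely to miss the 20-minute deadline.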
{"url":"https://www.networkpages.nl/why-you-may-need-to-reconsider-your-route-selection-criterium/","timestamp":"2024-11-09T03:48:03Z","content_type":"text/html","content_length":"89023","record_id":"<urn:uuid:ff7116b5-6615-4907-ad3b-873b5d8aed48>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00377.warc.gz"}
Verifying distributed systems with Isabelle/HOL

We use distributed systems every day in the form of internet services. These systems are very useful, but also challenging to implement because networks are unpredictable. Whenever you send a message over the network, it is likely to arrive quite quickly, but it's possible that it might be delayed for a long time, or never arrive, or arrive several times. When you send a request to another process and don't receive a response, you have no idea what happened: was the request lost, or has the other process crashed, or was the response lost? Or maybe nothing was lost at all, but a message has simply been delayed and may yet arrive. There is no way of knowing what happened, because unreliable message-passing is the only way processes can communicate.

Distributed algorithms work with this model of unreliable communication and build stronger guarantees on top of it. Examples of such stronger guarantees include database transactions and replication (maintaining copies of some data on multiple machines so that the data is not lost if one machine fails). Unfortunately, distributed algorithms are notoriously difficult to reason about, because they must uphold their guarantees regardless of the order in which messages are delivered, and even when some messages are lost or some processes crash. Many algorithms are very subtle, and informal reasoning is not sufficient for ensuring that they are correct. Moreover, the number of possible permutations and interleavings of concurrent activities quickly becomes too great for model-checkers to test exhaustively.

For this reason, formal proofs of correctness are valuable for distributed algorithms. In this blog post we will explore how to use the Isabelle/HOL proof assistant to formally verify a number of distributed algorithms. Isabelle/HOL does not have any built-in support for distributed computing, but fortunately it is quite straightforward to model a distributed system using structures that Isabelle/HOL provides: functions, lists, and sets.

First, we assume each process (or node) in the system has a unique identifier, which could simply be an integer or a string. Depending on the algorithm, the set of process IDs in the system may be fixed and known, or unknown and unbounded (the latter is appropriate for systems where processes can join and leave over time). The execution of the algorithm then proceeds in discrete time steps. In each time step, an event occurs at one of the processes, and this event could be one of three things: receiving a message sent by another process, receiving user input, or the elapsing of a timeout.

datatype ('proc, 'msg, 'val) event
  = Receive (msg_sender: 'proc) (recv_msg: 'msg)
  | Request 'val
  | Timeout

Triggered by one of these events, the process executes a function that may update its own state, and may send messages to other processes. A message sent in one time step may be received at any future time step, or may never be received at all. Each process has a local state that is not shared with any other process. This state has a fixed initial value at the beginning of the execution, and is updated only when that process executes a step. One process cannot read the state of another process, but we can describe the state of the entire system as the collection of all the processes' individual states: a function of type 'proc ⇒ 'state, mapping each process ID to that process's current state.
One process cannot read the state of another process, but we can describe the state of the entire system as the collection of all the processes’ individual states: Even though in reality processes may run in parallel, we do not need to model this parallelism since the only communication between processes is by sending and receiving messages, and we can assume that a process finishes processing one event before starting to process the next event. Every parallel execution is therefore equivalent to some linear sequence of execution steps. Other formalisations of distributed systems, such as the TLA+ language, also use such a linear sequence of steps. We do not make any assumptions about which time step is executed by which process. It is possible that the processes fairly take turns to run, but it is equally possible for one process to execute a million steps while another process does nothing at all. By avoiding assumptions about process activity we ensure that the algorithm works correctly regardless of the timing in the system. For example, a process that is temporarily disconnected from the network is modelled simply by a process that does not experience any receive-message events, even while the other processes continue sending and receiving messages. In this model, a process crash is represented simply by a process that executes no more steps after some point in time; there is no need for a crash to be explicitly represented. If we want to allow processes to recover from a crash, we can add a fourth type of event that models a process restarting after a crash. When executing such a crash-recovery event, a process deletes any parts of its local state that are stored in volatile memory, but preserves those parts of its state that are in stable storage (on disk) and hence survive the crash. When reasoning about safety properties of algorithms, it is best not to assume anything about which process executes in which time step, since that ensures the algorithm can tolerate arbitrary message delays. If we wanted to reason about liveness (for example, that an algorithm eventually terminates), we would have to make some fairness assumptions, e.g. that every non-crashed process eventually executes a step. However, in our proofs so far we have only focussed on safety properties. We can now express a distributed algorithm as the step function, which takes three arguments: the ID of the process executing the current time step, the current local state of that process, and the event that has occurred (message receipt, user input, timeout, or crash recovery). The return value consists of the new state for that process, and a set of messages to send to other processes (each message tagged with the ID of the recipient process). type_synonym ('proc, 'state, 'msg, 'val) step_func = ‹'proc ⇒ 'state ⇒ ('proc, 'msg, 'val) event ⇒ ('state × ('proc × 'msg) set)› The current state of a process at one time step equals the new state after the previous step by the same process (or the initial state if there is no previous step). Assuming the step function is deterministic, we can now encode any execution of the system as a list of (processID, event) pairs indicating the series of events that occurred, and at which process they happened. The final state of the system is obtained by calling the step function one event at a time. To prove a distributed algorithm correct, we need to show that it produces a correct result in every possible execution, i.e. for every possible list of (processID, event) pairs. 
But which executions are possible? There is only really one thing we can safely assume: if a message is received by a process, then that message must have been sent to that process. In other words, we assume the network does not fabricate messages out of thin air, and one process cannot impersonate another process. (In a public network where an attacker can inject fake packets, we would have to cryptographically authenticate the messages to ensure this property, but let's leave that out of scope for now.) Therefore, the only assumption we will make is that if a message is received in some time step, then it must have been sent in a previous time step. However, we will allow messages to be lost, reordered, or received multiple times.

Let's encode this assumption in Isabelle/HOL. First, we define a function that tells us whether a single event is possible: (valid_event evt proc msgs) returns true if event evt is allowed to occur at process proc in a system in which msgs is the set of all messages that have been sent so far. msgs is a set of (sender, recipient, message) triples. We define that a Receive event is allowed to occur iff the received message is in msgs, and Request or Timeout events are allowed to happen anytime.

fun valid_event :: ‹('proc, 'msg, 'val) event ⇒ 'proc ⇒ ('proc × 'proc × 'msg) set ⇒ bool› where
  ‹valid_event (Receive sender msg) recpt msgs = ((sender, recpt, msg) ∈ msgs)› |
  ‹valid_event (Request _) _ _ = True› |
  ‹valid_event Timeout _ _ = True›

Next, we define the set of all possible event sequences. For this we use an inductive predicate in Isabelle: (execute step init procs events msgs states) returns true if events is a valid sequence of events in an execution of the algorithm, where step is the step function, init is the initial state of each process, and procs is the set of all processes in the system (which might be infinite if we want to allow any number of processes). The last two arguments keep track of the execution state: msgs is the set of all messages sent so far, and states is a map from process ID to the state of that process.

inductive execute ::
  ‹('proc, 'state, 'msg, 'val) step_func ⇒ ('proc ⇒ 'state) ⇒ 'proc set ⇒
   ('proc × ('proc, 'msg, 'val) event) list ⇒ ('proc × 'proc × 'msg) set ⇒
   ('proc ⇒ 'state) ⇒ bool› where
  ‹execute step init procs [] {} init› |
  ‹⟦execute step init procs events msgs states;
    proc ∈ procs;
    valid_event event proc msgs;
    step proc (states proc) event = (new_state, sent);
    events' = events @ [(proc, event)];
    msgs' = msgs ∪ {m. ∃(recpt, msg) ∈ sent. m = (proc, recpt, msg)};
    states' = states (proc := new_state)⟧
   ⟹ execute step init procs events' msgs' states'›

This definition states that the empty list of events is valid when the system is in the initial state and no messages have been sent. Moreover, if events is a valid sequence of events so far, and event is allowed in the current state, then we can invoke the step function, add any messages it sends to msgs, update the state of the appropriate process, and the result is another valid sequence of events.

And that's all we need to model the distributed system! Now we can take some algorithm (defined by its step function and initial state) and prove that for all possible lists of events, some property P holds. Since we do not fix a maximum number of time steps, there is an infinite number of possible lists of events. But that's not a problem, since we can use induction over lists to prove P. We use the List.rev_induct induction rule in Isabelle/HOL.
It requires showing two things: that the property P holds for the empty list of events (i.e. in the initial state of the system), and that if P holds for some valid sequence of events, it still holds after appending one further valid event to that sequence. In other words, we prove that P is an invariant over all possible states of the whole system. In Isabelle, that proof looks roughly like this (where step, init, and procs are appropriately defined):

theorem prove_invariant:
  assumes ‹execute step init procs events msgs states›
  shows ‹some_invariant states›
using assms proof (induction events arbitrary: msgs states rule: List.rev_induct)
  case Nil
  then show ‹some_invariant states› sorry
next
  case (snoc event events)
  then show ?case sorry
qed

The real challenge in verifying distributed algorithms is to come up with the right invariant that is both true and also implies the properties you want your algorithm to have. Unfortunately, designing this invariant has to be done manually. However, once you have a candidate invariant, Isabelle is very helpful for checking whether it is correct and whether it is strong enough to meet your goals.

For more detail on how to prove the correctness of a simple consensus algorithm in this model, I recorded a 2-hour video lecture that runs through a demo from first principles (no prior Isabelle experience required). The Isabelle code of the demo is also available.

If you want to work on this kind of thing, I will soon be looking for a PhD student to work with me on formalising distributed algorithms in Isabelle, based at TU Munich. If this sounds like something you want to do, please get in touch!

If you found this post useful, please support me on Patreon so that I can write more like it! To get notified when I write something new, follow me on Bluesky or Mastodon.
{"url":"https://martin.kleppmann.com/2022/10/12/verifying-distributed-systems-isabelle.html","timestamp":"2024-11-09T20:25:03Z","content_type":"application/xhtml+xml","content_length":"28361","record_id":"<urn:uuid:ef168e35-86b0-45f4-b217-8327edf1210b>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00483.warc.gz"}
Meditations on Mathematics

As a hint, recall that all the answers are integer days of the month. And the solution employs a technique familiar to these pages. See Autumn Sum for a solution.

Serious Series

This problem comes from a "Grade XIII" exam for 13th-year Ontario high school students seeking entrance and scholarships for the second year at a university: "If s[n] denotes the sum of the first n natural numbers, find the sum of the infinite series …"

Unfortunately, the "Grade XIII" exam problem sets were not provided with answers, so I have no confirmation for my result. There may be a cunning way to manipulate the series to get a solution, but I could not see it off-hand. So I employed my tried and true power series approach to get my answer. It turned out to be power series manipulations on steroids, so there must be a simpler solution that does not use calculus. I assume the exams were timed, so I am not sure how a harried student could come up with a quick solution. I would appreciate any insights into this. See Serious Series for a solution. (Update 1/18/2021, 9/6/2024) Other Solutions.

Number of the Beast

From Five Hundred Mathematical Challenges: "Problem 5. Calculate the sum …" It has a non-calculus solution, but that involves a bunch of manipulations that were not that evident to me, or at least I doubt I could have come up with them. I was able to reframe the problem using one of my favorite approaches, power series (or polynomials). The calculations are a bit hairy in any case, but I was impressed that my method worked at all. See the Number of the Beast for solutions.

Unexpected Sum

From the Maths Challenge website (mathschallenge.net): "Find the exact value of the following infinite series: 1/2! + 2/3! + 3/4! + 4/5! + …" See the Unexpected Sum for solutions.

Infinite Product Problem

From Mathematical Quickies (1967): "Evaluate the infinite product: …" I came up with a motivated solution using some standard techniques from calculus. Mathematical Quickies had a solution that did not employ calculus, but one which I felt used unmotivated tricks. See the Infinite Product Problem for solutions.

Point Set Topology

The essay turned out to have a surprising structure, more like a musical theme and variations. The theme was the geometric series. I found it to be a wonderful medium to show the evolution of ideas (acting as variations) from the early Greeks (Zeno's Paradoxes) through the development of calculus, decimal expansions of real numbers, to power series, metric spaces, and finally general topological spaces. There was an additional benefit to this series of transformations of an initial idea: one of the major aspects of true mathematics became evident, namely, the extension of an idea into new territories that reveal unexpected connections to other forms of mathematics. Treating complicated functions as points in a topological space was a wonderful idea developed over the end of the 19th and beginning of the 20th centuries, and it became the basis of the field of functional analysis. See Point Set Topology (revised).

(Update 6/3/2021) Slightly revised version. I happened to review this article and noticed I made a mistake in my integration example. I have no idea what I was thinking at the time, so I corrected it. As I reviewed the rest of the article, I noticed a bunch of "typos" that would make the text confusing, so I corrected those as well. And finally I rephrased wording in a couple of places to try to make things clearer.
{"url":"https://josmfs.net/tag/power-series/","timestamp":"2024-11-11T18:16:42Z","content_type":"text/html","content_length":"96421","record_id":"<urn:uuid:80dd6bc5-9e4f-42bb-957d-c79d13aa7b7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00059.warc.gz"}
Analysis of free vibration of nano plate resting on Winkler foundation

The present paper deals with the vibration analysis of a nano plate supported by a Winkler foundation, using Eringen's nonlocal elasticity theory together with classical plate theory. The Rayleigh-Ritz method has been employed in this study for finding the frequencies of the plate subjected to different edge conditions. The obtained results are first tested for convergence and then validated against the published literature. A further study is carried out to analyse the effect of various parameters on the natural nondimensional frequencies of the nano plate resting on the Winkler foundation. The study reveals that the nonlocal effect has a significant influence on the vibration behaviour of a nano plate resting on an elastic foundation. Observations show that on increasing the Winkler foundation modulus and the aspect ratio, the nondimensional frequency parameters increase.

1. Introduction

Nanoplates are the building blocks for various nanoscale objects which are called nanostructures. The recent development of nanotechnology enables a new generation of devices with enhanced functionality, hence playing vital roles in many engineering fields. These materials have superior properties compared to conventional materials [1]. In recent times, nanomaterials have become important components of many modern nanoelectromechanical systems [2]. It is thus necessary to analyse their vibration behaviour in order to design nanostructures. Experimental analysis at the nanoscale is difficult to carry out, hence efficient mathematical models for nanostructure analysis are needed. Vibration analysis of nanostructures with traditional models shows that material properties change at the nanoscale; this is called the size effect. When a system is reduced to the nanoscale, it can no longer be modelled as a classical continuum, since its dimensions become comparable to the intermolecular spacing. If small-scale effects in nano design are ignored, this may lead to faulty nano-system design. Hence, in order to achieve an accurate solution for correct and realistic design, small-scale effects must be taken into account. In order to overcome these shortcomings, different continuum models such as couple stress theory [3], modified couple stress theory [4], strain gradient theory [5] and nonlocal elasticity theory [6] were proposed. Among these theories, the nonlocal elasticity theory proposed by Eringen is the one most commonly considered. The nonlocal continuum mechanics theory put forward by Eringen [7] incorporates a small length-scale effect, which plays a key role when nanostructures are considered. This theory is widely used, as its results are found to be accurate and it permits quick analysis of nano plates. According to the nonlocal elasticity theory, the stress at a specific point depends not only on the strain at that point but also on the strains at all other points of the body. In the present work, the governing equations of classical plate theory are based on nonlocal elasticity theory. Natarajan et al. [8] analysed the size effect on the linear free vibration behaviour of nano plates using Eringen's nonlocal elasticity theory. Murmu and Pradhan [9] analysed the scaling effect on the vibration behaviour of nano plates using nonlocal continuum mechanics. Behera et al. [10] studied the vibration of rectangular isotropic nano plates. Daneshmehr et al. [11] studied the vibration of functionally graded nano plates with the help of nonlocal theory. Hence it can be seen that Eringen's nonlocal theory is widely used to analyse the vibration behaviour of different nano plate structures.
The previous literature suggests that most studies focus on the vibration behaviour of nano beams and nano plates of isotropic material without any interaction with a foundation. Very few studies have been conducted on the analysis of plates on foundations, even though in a number of applications nano plates rest on an elastic foundation. Different elastic-medium models for the analysis of plates on elastic foundations were proposed by a number of researchers. One of the reported models is the Winkler model [12], which is derived from Winkler's hypotheses. This model represents the elastic medium as closely spaced vertical linear springs. It is considered a building block and is easy to solve. According to the literature, many authors have tried to solve the problem using different methods, such as the finite element method, differential quadrature methods, etc. It is to be noted that when the plate rests on an elastic foundation these methods become very difficult to apply and an accurate result cannot easily be obtained; it can be concluded that such methods are not efficient for the analysis of a plate resting on a Winkler foundation. In the present article, the authors obtain an accurate solution for the natural frequency with the help of Kirchhoff's theory and the Rayleigh-Ritz method. A few studies have used this method for the analysis of large-scale plates [13]. The extant literature review suggests that no study highlights in detail the effect of various material and foundation parameters on nano plates resting on elastic support subjected to different combinations of boundary conditions.

2. Mathematical modelling

A nano plate of dimensions $a \times b$, as shown in Fig. 1, has been considered for the analysis. The edges are taken along the $x$ and $y$ axes for the mathematical modelling of the nano plate resting on the Winkler foundation. The $z$-axis is considered perpendicular to the $xy$-plane.

Fig. 1. Winkler foundation supporting nano plate

According to classical plate theory, the displacement of a point is expressed by the equation:

$\mathbf{u}=\left\{\begin{array}{c}u(x,y,z)\\ v(x,y,z)\\ w(x,y,z,t)\end{array}\right\}=\left\{\begin{array}{c}-z\dfrac{\partial w(x,y)}{\partial x}\\ -z\dfrac{\partial w(x,y)}{\partial y}\\ w(x,y,t)\end{array}\right\},$

where $u$, $v$ and $w$ denote the displacements along the $x$, $y$ and $z$ axes, and $w$ represents the deflection along the $z$-axis.
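The strains that this displacement field implies are not shown above; for completeness, they take the standard classical-plate-theory (Kirchhoff) form. This reconstruction follows the standard theory and is not quoted from the paper:

$\epsilon_{xx}=-z\frac{\partial^{2}w}{\partial x^{2}},\qquad \epsilon_{yy}=-z\frac{\partial^{2}w}{\partial y^{2}},\qquad \gamma_{xy}=-2z\frac{\partial^{2}w}{\partial x\,\partial y}.$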
Neglecting transverse shear, the in-plane stresses are related to these strains through the plane-stress constitutive matrix:

$\left\{\begin{array}{l}\sigma_{xx}\\ \sigma_{yy}\\ \tau_{xy}\end{array}\right\}=\left(\begin{array}{ccc}Q_{11} & Q_{12} & 0\\ Q_{21} & Q_{22} & 0\\ 0 & 0 & Q_{66}\end{array}\right)\left\{\begin{array}{l}\epsilon_{xx}\\ \epsilon_{yy}\\ \gamma_{xy}\end{array}\right\}.$

The strain energy of the elastic nano plate is formulated as:

$U=\frac{1}{2}\int_{\Omega}\left[D_{11}\left\{\left(\frac{\partial^{2}w}{\partial x^{2}}\right)^{2}+\left(\frac{\partial^{2}w}{\partial y^{2}}\right)^{2}\right\}+2D_{12}\frac{\partial^{2}w}{\partial x^{2}}\frac{\partial^{2}w}{\partial y^{2}}+4D_{33}\left(\frac{\partial^{2}w}{\partial x\,\partial y}\right)^{2}\right]dx\,dy,$

and the strain energy stored in the foundation is:

$U_{foundation}=\frac{1}{2}\int_{\Omega}k_{w}\left[w^{2}+\mu\left(\left(\frac{\partial w}{\partial x}\right)^{2}+\left(\frac{\partial w}{\partial y}\right)^{2}\right)\right]dx\,dy.$

The effective strain energy of the nano plate on the Winkler elastic foundation is then $U_{eff}=U+U_{foundation}$, where $\mu=(c_{0}l_{int})^{2}$ [6] is the nonlocal parameter, $l_{int}$ denotes the internal characteristic length, $c_{0}$ is a material constant, and $k_{w}$ is the stiffness of the Winkler foundation.

The kinetic energy of the nano plate on the Winkler foundation, in simplified form (after integration through the thickness), is:

$T=\frac{1}{2}m_{0}\omega^{2}\int_{\Omega}\left[w^{2}+\mu\left(\left(\frac{\partial w}{\partial x}\right)^{2}+\left(\frac{\partial w}{\partial y}\right)^{2}\right)\right]dx\,dy,$

where $m_{0}=\rho h$. Harmonic displacement components are assumed:

$w(x,y,t)=W(x,y)\cos\omega t,$

where $w(x,y,t)$, $W(x,y)$ and $\omega$ are the transverse deflection, the maximum deflection and the frequency of free vibration of the nano plate on the elastic foundation. Now, on equating the maximum strain energy $U_{eff_{max}}$ and the maximum kinetic energy $T_{max}$, we obtain the Rayleigh quotient:

$\omega^{2}=\frac{U_{eff_{max}}}{\frac{1}{2}m_{0}\int_{\Omega}\left[W^{2}+\mu\left(\left(\frac{\partial W}{\partial x}\right)^{2}+\left(\frac{\partial W}{\partial y}\right)^{2}\right)\right]dx\,dy}.$

The maximum deflection $W(x,y)$ is taken as the sum of a series of simple algebraic polynomials in $x$ and $y$:

$W(x,y)=\sum_{i=1}^{n}c_{i}f_{i}(x,y),\qquad i=1,2,\dots,n,$

where the $c_{i}$ are unknown constants and the $f_{i}$ are the generated functions. Each function $f_{i}$ is the product of a boundary polynomial $\varphi$ and a simple polynomial $g_{i}(x,y)$:

$f_{i}(x,y)=\varphi\, g_{i}(x,y),\qquad \varphi=x^{p}y^{q}(a-x)^{r}(b-y)^{s},$

where the exponents $p$, $q$, $r$ and $s$ encode the edge conditions of the different edges (taking the value 0, 1 or 2 according to whether the corresponding edge is free, simply supported or clamped). Moreover, taking the partial derivative of $\omega^{2}$ with respect to each unknown constant and equating it to zero gives:

$\frac{\partial\omega^{2}}{\partial c_{i}}=0,\qquad i=1,2,3,\dots,n.$

Further simplifying Eq.
(15), we get the following equation in terms of the stiffness and inertia matrices:

$\left(\left[K+K_{foundation}\right]-\lambda^{2}\left[M\right]\right)\left\{\Delta\right\}=0,$

$\lambda^{2}=\frac{\rho h a^{4}\omega^{2}}{D},\qquad D=\frac{Eh^{3}}{12\left(1-\upsilon^{2}\right)},$

where $\lambda$ is the nondimensional frequency parameter, $\left\{\Delta\right\}$ is the vector of unknown coefficients, and $K$, $K_{foundation}$ and $M$ are the stiffness, foundation-stiffness and mass matrices, respectively.

3. Results and discussions

3.1. Convergence and validation

The frequency parameter of the nano plate is analysed in detail in this section. A convergence test is carried out to assure the accuracy of the present results, and the obtained results are then compared with those available in the literature. For the convergence test and the comparison, the nano plate is considered to be under the SSSS edge condition without a foundation.

3.1.1. Convergence

The convergence of the frequency parameter of the nano plate subjected to various boundary conditions is shown in Fig. 2 with respect to the number of polynomials. The figure suggests that the frequency parameter converges to the exact frequency as the number of polynomials increases, becoming constant after a certain number of polynomials (40 in the present case).

Fig. 2. Convergence of fundamental frequency parameter of nano plate

3.1.2. Comparison

The non-dimensional natural frequency is compared with Aghababaei and Reddy [14] in Table 1. An excellent agreement between the results can be observed.

Table 1. Comparison of frequency parameter (λ_{m×n} = ω_{m×n} h √(ρ/G)) of an SSSS (simply supported) square plate (E = 30×10^6, ν = 0.3)

Frequency  | μ | Aghababaei and Reddy [14] | Present
λ_{1×1}    | 0 | 0.0963                    | 0.09636
           | 1 | 0.0880                    | 0.0880
λ_{2×2}    | 0 | 0.3853                    | 0.3874
           | 1 | 0.288                     | 0.2884
λ_{3×3}    | 0 | 0.8669                    | 0.8608
           | 1 | 0.5202                    | 0.51671

3.2. Parametric study

Table 2 illustrates the variation of the frequency parameter of the nano plate subjected to different boundary conditions and resting on the elastic foundation, for different aspect ratios ($a/b$), nonlocal parameters ($\mu$) and foundation parameters ($k_{w}$). On analysis of the table, it is observed that the nonlocal parameter plays a crucial role: with an increase in the nonlocal parameter $\mu$, the non-dimensional frequency decreases irrespective of the boundary conditions. This is because the plate becomes more flexible when the small-scale effect is introduced; hence this effect should not be neglected. Further, it is seen from Table 2 that, for a constant value of $\mu$ and a specific value of the foundation parameter, the frequency parameter increases as the aspect ratio increases for every boundary condition. This is because of the fact that, with an increase in $a/b$, the stiffness of the plate decreases, which increases the non-dimensional frequency. It is also evident from Table 2 that the frequency parameter increases with the coefficient of stiffness of the Winkler foundation for specified $\mu$ and $a/b$; however, the increment is very small. It can therefore be said that the Winkler parameter has an insignificant effect on the frequency parameter irrespective of the boundary condition, $\mu$ and $a/b$.
Table 2. Variation of frequency parameter of nano plate under different boundary conditions with different aspect ratio (a/b), foundation parameter (kw) and nonlocal parameter (μ)

                  |            μ = 1                     |            μ = 2
BC    kw  | a/b:   0.5      1       1.5      2      |    0.5      1       1.5      2
SSSS  0   |      10.0984  14.7555  21.2350  28.6250 |   8.7550  12.2911  16.9907  22.1922
      10  |      10.1073  14.7616  21.2393  28.6282 |   8.7653  12.2984  16.9960  22.1963
      100 |      10.1870  14.8162  21.2773  28.6564 |   8.8570  12.3640  17.0435  22.2327
SCCF  0   |      14.1635  13.7892  15.4657  17.8578 |  11.1517  11.6753  12.7460  14.3190
      10  |      14.1698  13.7951  15.4715  17.8629 |  11.1598  11.6830  12.7530  14.3253
      100 |      14.2268  13.8540  15.5237  17.9080 |  11.2320  11.7520  12.8163  14.3816
SFSF  0   |       2.0521   3.0454   4.2989   5.2961 |   1.4647   2.8003   3.8153   4.5305
      10  |       1.6108   3.0747   4.3198   5.3130 |   1.5248   2.8322   3.8388   4.5503
      100 |       2.0521   3.3272   4.5030   5.4630 |   1.9853   3.1044   4.0438   4.7246

4. Conclusions

The authors used classical plate theory along with Eringen's nonlocal elasticity theory to analyse the frequency parameter of a nano plate under different combinations of boundary conditions, resting on a Winkler elastic foundation. The effects of the aspect ratio, the stiffness of the Winkler foundation and the nonlocal parameter on the vibration behaviour of the nano plate were studied. It can be concluded from the results that the nonlocal parameter influences the vibration behaviour of the small-scale plate for all considered boundary conditions; hence it should always be considered when modelling small-scale plate structures. The study further reveals that the frequency parameter increases with increasing aspect ratio for every boundary condition. It is also concluded that the effect of the stiffness of the Winkler foundation on the vibration behaviour of the nano plate is insignificant.

• Dai H., Hafner J. H., Rinzler A. G., Colbert D. T., Smalley R. E. Nanotubes as nanoprobes in scanning probe microscopy. Nature, Vol. 384, 1996, p. 147.
• Chakraverty S., Behera L. Free vibration of rectangular nanoplates using Rayleigh-Ritz method. Physica E: Low-dimensional Systems and Nanostructures, Vol. 56, 2014, p. 357-363.
• Hadjesfandiari A. R., Dargush G. F. Couple stress theory for solids. International Journal of Solids and Structures, Vol. 48, Issue 18, 2011, p. 2496-2510.
• Nix W. D., Gao H. Indentation size effects in crystalline materials: a law for strain gradient plasticity. Journal of the Mechanics and Physics of Solids, Vol. 46, Issue 3, 1998, p. 411-425.
• Eringen A. C. Nonlocal polar elastic continua. International Journal of Engineering Science, Vol. 10, Issue 1, 1972, p. 1-16.
• Eringen A. C. On differential equations of nonlocal elasticity and solutions of screw dislocation and surface waves. Journal of Applied Physics, Vol. 54, Issue 9, 1983, p. 4703-4710.
• Eringen A. C. Nonlocal Continuum Field Theories. Springer Science & Business Media, New York, 2002.
• Natarajan S., Chakraborty S., Thangavel M., Bordas S., Rabczuk T. Size-dependent free flexural vibration behavior of functionally graded nanoplates. Computational Materials Science, Vol. 65, 2012, p. 74-80.
• Murmu T., Pradhan S. C. Small-scale effect on the free in-plane vibration of nanoplates by nonlocal continuum model. Physica E: Low-dimensional Systems and Nanostructures, Vol. 41, Issue 8, 2009, p. 1628-1633.
• Behera L., Chakraverty S. Effect of scaling effect parameters on the vibration characteristics of nanoplates. Journal of Vibration and Control, Vol. 22, Issue 10, 2016, p. 2389-2399.
• Daneshmehr A., Rajabpoor A., Hadi A.
Size dependent free vibration analysis of nanoplates made of functionally graded materials based on nonlocal elasticity theory with high order theories. International Journal of Engineering Science, Vol. 95, 2015, p. 23-35.
• Winkler E. The Theory of Elasticity and Strength. Dominicus, Prague, Czechoslovakia, 1867.
• Singh P. P., Azam M. S., Ranjan V. Vibration analysis of a thin functionally graded plate having an out of plane material inhomogeneity resting on Winkler-Pasternak foundation under different combinations of boundary conditions. Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science, 2018, https://doi.org/10.1177/0954406218796040.
• Aghababaei R., Reddy J. N. Nonlocal third-order shear deformation plate theory with application to bending and vibration of plates. Journal of Sound and Vibration, Vol. 326, Issues 1-2, 2009, p.

About this article — Modal analysis and applications. Keywords: nano plate, Rayleigh-Ritz method, frequency parameter, nonlocal elasticity theory, Winkler foundation.

Copyright © 2018 Piyush Pratap Singh, et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
{"url":"https://www.extrica.com/article/20406","timestamp":"2024-11-14T03:27:35Z","content_type":"text/html","content_length":"146882","record_id":"<urn:uuid:551422f3-507b-48ae-bf86-b106afad93f3>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00382.warc.gz"}
Measure-preserving dynamical system

In mathematics, a measure-preserving dynamical system is an object of study in the abstract formulation of dynamical systems, and ergodic theory in particular. A measure-preserving dynamical system is defined as a probability space and a measure-preserving transformation on it. In more detail, it is a system (X, 𝓑, μ, T) with the following structure:
• X is a set,
• 𝓑 is a σ-algebra over X,
• μ : 𝓑 → [0, 1] is a probability measure, so that μ(X) = 1, and μ(∅) = 0, and
• T : X → X is a measurable transformation which preserves the measure μ, i.e., μ(T⁻¹(A)) = μ(A) for every A ∈ 𝓑.

This definition can be generalized to the case in which T is not a single transformation that is iterated to give the dynamics of the system, but instead is a monoid (or even a group) of transformations T[s] : X → X parametrized by s ∈ Z (or R, or N ∪ {0}, or [0, +∞)), where each transformation T[s] satisfies the same requirements as T above. In particular, the transformations obey the rules T[0] = id, the identity function on X, and T[s] ∘ T[t] = T[s+t], so that composing two transformations corresponds to adding their parameters. The earlier, simpler case fits into this framework by defining T[s] = T^s for s ∈ N.

The existence of invariant measures for certain maps and Markov processes is established by the Krylov–Bogolyubov theorem. Standard examples include circle rotations with Lebesgue measure, Bernoulli shifts, and the doubling map x ↦ 2x mod 1 on the unit interval.

The concept of a homomorphism and an isomorphism may be defined. Consider two dynamical systems (X, 𝓐, μ, T) and (Y, 𝓑, ν, S). Then a mapping φ : X → Y is a homomorphism of dynamical systems if it satisfies the following three properties:
1. The map φ is measurable,
2. For each B ∈ 𝓑, one has μ(φ⁻¹B) = ν(B),
3. For μ-almost all x ∈ X, one has φ(Tx) = S(φx).
The system (Y, 𝓑, ν, S) is then called a factor of (X, 𝓐, μ, T). The map φ is an isomorphism of dynamical systems if, in addition, there exists another mapping ψ : Y → X that is also a homomorphism, which satisfies
1. For μ-almost all x ∈ X, one has x = ψ(φx),
2. For ν-almost all y ∈ Y, one has y = φ(ψy).
Hence, one may form a category of dynamical systems and their homomorphisms.

Generic points

A point x ∈ X is called a generic point if the orbit of the point is distributed uniformly according to the measure.

Symbolic names and generators

Consider a dynamical system (X, 𝓑, μ, T), and let Q = {Q[1], ..., Q[k]} be a partition of X into k measurable pair-wise disjoint pieces. Given a point x ∈ X, clearly x belongs to only one of the Q[i]. Similarly, the iterated point T^n x can belong to only one of the parts as well. The symbolic name of x, with regards to the partition Q, is the sequence of integers {a[n]} such that T^n x ∈ Q[a[n]]. The set of symbolic names with respect to a partition is called the symbolic dynamics of the dynamical system. A partition Q is called a generator or generating partition if μ-almost every point x has a unique symbolic name.

Operations on partitions

Given a partition Q = {Q[1], ..., Q[k]} and a dynamical system (X, 𝓑, μ, T), we define the T-pullback of Q as
T⁻¹Q = {T⁻¹Q[1], ..., T⁻¹Q[k]}.
Further, given two partitions Q = {Q[1], ..., Q[k]} and R = {R[1], ..., R[m]}, we define their refinement as
Q ∨ R = {Q[i] ∩ R[j] : i = 1, ..., k, j = 1, ..., m, μ(Q[i] ∩ R[j]) > 0}.
With these two constructs we may define the refinement of an iterated pullback,
Q ∨ T⁻¹Q ∨ ⋯ ∨ T^(−N)Q,
which plays a crucial role in the construction of the measure-theoretic entropy of a dynamical system.

Measure-theoretic entropy

The entropy of a partition Q is defined as^[1]^[2]
H(Q) = −Σ_{m=1}^{k} μ(Q[m]) log μ(Q[m]).
The measure-theoretic entropy of a dynamical system (X, 𝓑, μ, T) with respect to a partition Q = {Q[1], ..., Q[k]} is then defined as
h(T, Q) = lim_{N→∞} (1/N) H(Q ∨ T⁻¹Q ∨ ⋯ ∨ T^(−N)Q).
Finally, the Kolmogorov–Sinai or metric or measure-theoretic entropy of the dynamical system is defined as
h(T) = sup_Q h(T, Q),
where the supremum is taken over all finite measurable partitions. A theorem of Yakov Sinai in 1959 shows that the supremum is actually obtained on partitions that are generators.
Thus, for example, the entropy of the Bernoulli process is log 2, since almost every real number has a unique binary expansion. That is, one may partition the unit interval into the intervals [0, 1/2) and [1/2, 1]. Every real number x is either less than 1/2 or not; and likewise so is the fractional part of 2^n x. Indeed, for this partition Q under the doubling map T(x) = 2x mod 1, the refinement Q ∨ T⁻¹Q ∨ ⋯ ∨ T^(−N)Q consists of the 2^(N+1) dyadic intervals of length 2^(−(N+1)), so H(Q ∨ T⁻¹Q ∨ ⋯ ∨ T^(−N)Q) = (N + 1) log 2, and hence h(T, Q) = lim_{N→∞} (N + 1)/N · log 2 = log 2.

If the space X is compact and endowed with a topology, or is a metric space, then the topological entropy may also be defined.

1. ↑ Ya. G. Sinai (1959), "On the Notion of Entropy of a Dynamical System", Doklady of Russian Academy of Sciences 124, pp. 768–771.
2. ↑ Ya. G. Sinai (2007), "Metric Entropy of Dynamical System".

• Michael S. Keane, "Ergodic theory and subshifts of finite type" (1991), appearing as Chapter 2 in Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Tim Bedford, Michael Keane and Caroline Series, eds., Oxford University Press, Oxford (1991). ISBN 0-19-853390-X. (Provides an expository introduction, with exercises, and extensive references.)
• Lai-Sang Young, "Entropy in Dynamical Systems", appearing as Chapter 16 in Entropy, Andreas Greven, Gerhard Keller, and Gerald Warnecke, eds., Princeton University Press, Princeton, NJ (2003). ISBN 0-691-11338-6.
• T. Schürmann and I. Hoffmann, "The entropy of strange billiards inside n-simplexes", J. Phys. A28, p. 5033ff (1995).
{"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Measure-preserving_dynamical_system.html","timestamp":"2024-11-14T16:48:49Z","content_type":"text/html","content_length":"24501","record_id":"<urn:uuid:323f2e50-9d82-49c2-9eec-18e87c81a433>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00842.warc.gz"}
Gem #69: Let's SPARK! — Part 2
by Yannick Moy — AdaCore

Let's get started…

In the last Gem, we proved that procedure Linear_Search was free from uninitialized variable accesses and run-time errors, which are safety properties of Linear_Search. Now we can try to prove a specific behavioral property of Linear_Search, expressed as a contract between Linear_Search and its callers. A contract will consist of a precondition that callers of Linear_Search are responsible for establishing, before calling Linear_Search, and a postcondition that Linear_Search must establish, before returning to the caller. If not present, a default "true" pre- or postcondition is assumed.

Let's prove that when Linear_Search returns with Found = True, the value of Table at Index is Value. This can be expressed in SPARK as a postcondition on procedure Linear_Search that is located after its declaration in search.ads:

   procedure Linear_Search
     (Table : in IntArray;
      Value : in Integer;
      Found : out Boolean;
      Index : out Integer);
   --# post Found -> Table(Index) = Value;

Notice the implication symbol ->, which is only defined in SPARK annotations. Let's call the SPARK tools, as we did in the previous Gem: Examine, Simplify All, POGS ... This time, the column "TO DO" is not empty:

VCs for procedure_linear_search :
   |   |       |                  | -----Proved In----- |       |       |
   | # | From  | To               | vcg | siv | plg | prv | False | TO DO |
   | 1 | start | rtc check @ 11   |     | YES |     |     |       |       |
   | 2 | start | rtc check @ 13   |     | YES |     |     |       |       |
   | 3 | start | assert @ 13      |     | YES |     |     |       |       |
   | 4 | 13    | assert @ 13      |     | YES |     |     |       |       |
   | 5 | 13    | rtc check @ 14   |     | YES |     |     |       |       |
   | 6 | 13    | rtc check @ 16   |     | YES |     |     |       |       |
   | 7 | 13    | assert @ finish  |     | YES |     |     |       |       |
   | 8 | 13    | assert @ finish  |     |     |     |     |       | YES   |

If we right-click on this line and select SPARK/Show Simplified VC, GPS opens a file linear_search.siv, which shows that the unproved VC corresponds precisely to the postcondition we just added:

   C1: found -> element(table, [index]) = value .

This is the conclusion (C) that the prover tries to prove with the set of hypotheses (H) above it. If we look at the hypotheses, we see that the conclusion cannot indeed be proved. This has to do with the warning which we already saw in the previous Gem:

   Warning 402 - Default assertion planted to cut the loop

To prove a property of a procedure with a loop, we cannot unroll the loop an unbounded number of times. Therefore, SPARK "cuts" the loop with a loop invariant, which is a property that the loop maintains. Unless you provide such a loop invariant, SPARK assumes by default that nothing is maintained through the loop. If you provide one, SPARK will prove three things:

1) the loop invariant holds when control enters the loop for the first time
2) the loop invariant is maintained during an arbitrary run through the loop
3) the loop invariant is sufficient to prove the postcondition

What is missing in our case is the information that Found remains False throughout the loop. Let's add it, with the following syntax in search.adb:

   for I in Integer range Table'Range loop
      --# assert Found = False;

Let's do it again: Examine, Simplify All, POGS ... Notice that the warning about a default assertion disappeared. This time, the "TO DO" column is empty, so we have successfully proved Linear_Search's postcondition!

Finally, let's see how SPARK deals with global variables, by adding a counter to Linear_Search, which is incremented by one each time the call succeeds.
We need to state explicitly that Counter is part of the state of this package, which we do using an "own" annotation below. We also need to state explicitly that Counter is initialized, using an "initializes" annotation. The following declaration should go into search.ads:

   package Search
   --# own Counter;
   --# initializes Counter;
   is
      Counter : Natural := 0;

Now, we increment Counter in Linear_Search's body in search.adb:

   if Table(I) = Value then
      Counter := Counter + 1;

If we try to run the Examiner at this point, it flags an error:

   Semantic Error 1 - The identifier Counter is either undeclared or not visible at this point

This is because reads and writes of global variables are part of a subprogram specification in SPARK. Since Linear_Search does not declare in its specification that it reads or writes a global variable, it is not allowed to do so in its body. So let's add a SPARK annotation to state that Linear_Search reads and writes Counter:

   procedure Linear_Search
     (Table : in IntArray;
      Value : in Integer;
      Found : out Boolean;
      Index : out Integer);
   --# global in out Counter;
   --# post Found -> Table(Index) = Value;

Let's do it again: Examine, Simplify All, POGS ... We get a new unproved VC in the column "TO DO", which corresponds to the following conclusion:

   C1: counter <= 2147483646 .

This time, it is because SPARK has detected a problem during the proof that the update of Counter does not overflow. Indeed, it could overflow! The solution is to add a precondition to Linear_Search, which promises that it will never be called in a state where Counter is the largest integer value:

   procedure Linear_Search
     (Table : in IntArray;
      Value : in Integer;
      Found : out Boolean;
      Index : out Integer);
   --# global in out Counter;
   --# pre Counter < Integer'Last;
   --# post Found -> Table(Index) = Value;

As before, we must modify the loop invariant. Here, we just have to repeat this information in the loop invariant:

   for I in Integer range Table'Range loop
      --# assert Found = False and Counter < Integer'Last;

Let's do it again: Examine, Simplify All, POGS ... Everything is proved! Notice that the promise made by the precondition will have to be proved by callers of Linear_Search ...

Finally, let's add to the SPARK annotation of Linear_Search that Counter is incremented by one when Linear_Search returns Found = True. This can be expressed as a postcondition relating the value of Counter at procedure entry, denoted Counter~, and the value of Counter at procedure exit, denoted Counter:

   procedure Linear_Search
     (Table : in IntArray;
      Value : in Integer;
      Found : out Boolean;
      Index : out Integer);
   --# global in out Counter;
   --# pre Counter < Integer'Last;
   --# post Found -> (Table(Index) = Value and Counter = Counter~ + 1);

Notice that in the precondition we simply use Counter to denote the value at procedure entry, because the precondition is precisely evaluated at procedure entry. As before, we update the loop invariant to state that the current value of Counter is the same as the one at procedure entry:

   for I in Integer range Table'Range loop
      --# assert Found = False and Counter < Integer'Last
      --#    and Counter = Counter~;

Let's do it a last time: Examine, Simplify All, POGS ... Everything is proved! Remember though that we can only prove a contract we wrote, which may be very different from saying abstractly that procedure Linear_Search is "correct". Who knows what the correct behaviour of Linear_Search means for the human who programmed it?

Let's recap what we have seen so far.
SPARK is a language that combines a strict subset of Ada with annotations written inside stylized Ada comments. We have seen various kinds of SPARK annotations: preconditions introduced by pre; postconditions introduced by post; loop invariants introduced by assert; frame conditions introduced by global, own, and initializes. The SPARK tools allow us to check that a procedure is free from uninitialized variable accesses, that it executes without run-time errors, and that it respects a given contract, written next to its declaration, that callers can rely on. In later Gems we will explore in more depth the capabilities of SPARK and its integration with GPS. In the meantime, you can learn more about SPARK in the SPARK tutorial at http://www.adacore.com/ About the Author Yannick Moy’s work focuses on software source code analysis, mostly to detect bugs or verify safety/security properties. Yannick previously worked for PolySpace (now The MathWorks) where he started the project C++ Verifier. He then joined INRIA Research Labs/Orange Labs in France to carry out a PhD on automatic modular static safety checking for C programs. Yannick joined AdaCore in 2009, after a short internship at Microsoft Research. Yannick holds an engineering degree from the Ecole Polytechnique, an MSc from Stanford University and a PhD from Université Paris-Sud. He is a Siebel Scholar.
{"url":"https://www.adacore.com/gems/gem-69","timestamp":"2024-11-05T23:34:34Z","content_type":"text/html","content_length":"37294","record_id":"<urn:uuid:a9395518-16c4-490c-9b1c-c508e4d07a58>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00142.warc.gz"}
Denominators and Numerators Comparison Worksheets

This Fractions Worksheet is great for testing children in their comparison of fractions with similar denominators and numerators, to see if they are greater than, less than, or equal. You may select different denominators and have the problems produce similar denominators, or problems with similar numerators, or a mixture of both.

Type of Problems
• Denominators will be the same and the Numerators will vary
• Numerators will be the same and the Denominators will vary
• Mix both types of problems
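As a quick worked illustration of the two comparison rules these worksheets drill (the fractions below are our own example, not drawn from the worksheet generator): with like denominators, the larger numerator gives the larger fraction, while with like numerators, the smaller denominator gives the larger fraction:

$\frac{2}{5} < \frac{3}{5} \quad\text{(same denominator, compare numerators)}, \qquad \frac{2}{5} < \frac{2}{3} \quad\text{(same numerator, fifths are smaller than thirds)}.$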
{"url":"https://www.math-aids.com/Fractions/Denominator_Numerator_Comparison.html","timestamp":"2024-11-08T00:59:25Z","content_type":"text/html","content_length":"29083","record_id":"<urn:uuid:a21e5a43-77aa-4308-a763-4613dc1381cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00584.warc.gz"}
General Statistics: Ch 1 Test Flashcards

Determine whether the given description corresponds to an observational study or an experiment. A sample of fish is taken from a lake to measure the effect of pollution from a nearby factory on the fish.
Observational study: the fish are merely measured; no treatment is applied to them.

Determine whether the given description corresponds to an observational study or an experiment. A quality control specialist compares the output from a machine with a new lubricant to the output of machines with the old lubricant.
Experiment: a treatment (the new lubricant) is applied and its effect on the output is compared.

The personnel manager at a company wants to investigate job satisfaction among the female employees. One evening after a meeting she talks to all 30 female employees who attended the meeting. Does this sampling plan result in a random sample? Simple random sample? Explain.
No; no. The sample is not random because not all female employees have the same chance of being selected. Those that didn't attend the meeting have no chance of being selected. It is not a simple random sample because some samples are not possible, such as a sample containing female employees who did not attend the meeting.

Determine which of the four levels of measurement (nominal, ordinal, interval, ratio) is most appropriate. A sample of spheres categorized from softest to hardest.
Ordinal: the categories can be ranked, but differences between them are not meaningful.

Determine which of the four levels of measurement (nominal, ordinal, interval, ratio) is most appropriate. Salaries of college professors.
Ratio: differences between salaries are meaningful and there is a natural zero.

Identify the type of observational study. A statistical analyst obtains data about ankle injuries by examining a hospital's records from the past 3 years.
Retrospective: the data are collected from records of the past.

Identify which of these types of sampling is used: random, stratified, systematic, cluster, convenience.
• 49, 34, and 48 students are selected from the Sophomore, Junior, and Senior classes with 496, 348, and 481 students respectively. — Stratified: the population is divided into subgroups (the classes) and a sample is drawn from each.
• A sample consists of every 49th student from a group of 496 students. — Systematic: every k-th member of the population is selected.
{"url":"https://www.easynotecards.com/notecard_set/48442","timestamp":"2024-11-06T22:16:01Z","content_type":"text/html","content_length":"22917","record_id":"<urn:uuid:4c871245-4849-49ad-9aae-63ce088a55df>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00400.warc.gz"}
Countable set

In mathematics, a countable set is a set with the same cardinality (number of elements) as some subset of the set of natural numbers. A countable set is either a finite set or a countably infinite set. Whether finite or infinite, the elements of a countable set can always be counted one at a time and, although the counting may never finish, every element of the set is associated with a unique natural number. Some authors use countable set to mean countably infinite alone.^[1] To avoid this ambiguity, the term at most countable may be used when finite sets are included and countably infinite, enumerable,^[2] or denumerable^[3] otherwise. Georg Cantor introduced the term countable set, contrasting sets that are countable with those that are uncountable (i.e., nonenumerable or nondenumerable^[4]). Today, countable sets form the foundation of a branch of mathematics called discrete mathematics.

A set S is countable if there exists an injective function f from S to the natural numbers N = {0, 1, 2, 3, ...}.^[5] If such an f can be found that is also surjective (and therefore bijective), then S is called countably infinite. In other words, a set is countably infinite if it has one-to-one correspondence with the natural number set, N. As noted above, this terminology is not universal. Some authors use countable to mean what is here called countably infinite, and do not include finite sets. Alternative (equivalent) formulations of the definition in terms of a bijective function or a surjective function can also be given. See below.

In 1874, in his first set theory article, Cantor proved that the set of real numbers is uncountable, thus showing that not all infinite sets are countable.^[6] In 1878, he used one-to-one correspondences to define and compare cardinalities.^[7] In 1883, he extended the natural numbers with his infinite ordinals, and used sets of ordinals to produce an infinity of sets having different infinite cardinalities.^[8]

A set is a collection of elements, and may be described in many ways. One way is simply to list all of its elements; for example, the set consisting of the integers 3, 4, and 5 may be denoted {3, 4, 5}. This is only effective for small sets, however; for larger sets, this would be time-consuming and error-prone. Instead of listing every single element, sometimes an ellipsis ("...") is used, if the writer believes that the reader can easily guess what is missing; for example, {1, 2, 3, ..., 100} presumably denotes the set of integers from 1 to 100. Even in this case, however, it is still possible to list all the elements, because the set is finite.

Some sets are infinite; these sets have more than n elements for any integer n. For example, the set of natural numbers, denotable by {0, 1, 2, 3, 4, 5, ...}, has infinitely many elements, and we cannot use any normal number to give its size. Nonetheless, it turns out that infinite sets do have a well-defined notion of size (or more properly, of cardinality, which is the technical term for the number of elements in a set), and not all infinite sets have the same cardinality. To understand what this means, we first examine what it does not mean. For example, there are infinitely many odd integers, infinitely many even integers, and (hence) infinitely many integers overall.
However, it turns out that the number of even integers, which is the same as the number of odd integers, is also the same as the number of integers overall. This is because we can arrange things such that, for every integer, there is a distinct even integer: ..., −2→−4, −1→−2, 0→0, 1→2, 2→4, ...; or, more generally, n→2n. What we have done here is arranged the integers and the even integers into a one-to-one correspondence (or bijection), which is a function that maps between two sets such that each element of each set corresponds to a single element in the other set. However, not all infinite sets have the same cardinality. For example, Georg Cantor (who introduced this concept) demonstrated that the real numbers cannot be put into one-to-one correspondence with the natural numbers (non-negative integers), and therefore that the set of real numbers has a greater cardinality than the set of natural numbers.

A set is countable if: (1) it is finite, or (2) it has the same cardinality (size) as the set of natural numbers. Equivalently, a set is countable if it has the same cardinality as some subset of the set of natural numbers. Otherwise, it is uncountable.

Formal overview without details

By definition a set S is countable if there exists an injective function f : S → N from S to the natural numbers N = {0, 1, 2, 3, ...}. It might seem natural to divide the sets into different classes: put all the sets containing one element together; all the sets containing two elements together; ...; finally, put together all infinite sets and consider them as having the same size. This view is not tenable, however, under the natural definition of size. To elaborate this we need the concept of a bijection. Although a "bijection" seems a more advanced concept than a number, the usual development of mathematics in terms of set theory defines functions before numbers, as they are based on much simpler sets. This is where the concept of a bijection comes in: define the correspondence

a ↔ 1, b ↔ 2, c ↔ 3.

Since every element of {a, b, c} is paired with precisely one element of {1, 2, 3}, and vice versa, this defines a bijection. We now generalize this situation and define two sets as of the same size if (and only if) there is a bijection between them. For all finite sets this gives us the usual definition of "the same size". What does it tell us about the size of infinite sets?

Consider the sets A = {1, 2, 3, ...}, the set of positive integers, and B = {2, 4, 6, ...}, the set of even positive integers. We claim that, under our definition, these sets have the same size, and that therefore B is countably infinite. Recall that to prove this we need to exhibit a bijection between them. But this is easy, using n ↔ 2n, so that 1 ↔ 2, 2 ↔ 4, 3 ↔ 6, 4 ↔ 8, .... As in the earlier example, every element of A has been paired off with precisely one element of B, and vice versa. Hence they have the same size. This is an example of a set of the same size as one of its proper subsets, which is impossible for finite sets.

Likewise, the set of all ordered pairs of natural numbers is countably infinite, as can be seen by following a zigzag path through the grid of pairs, enumerating each diagonal in turn. The resulting mapping is like this: 0 ↔ (0,0), 1 ↔ (1,0), 2 ↔ (0,1), 3 ↔ (2,0), 4 ↔ (1,1), 5 ↔ (0,2), 6 ↔ (3,0), .... This mapping covers all such ordered pairs. In closed form, this particular enumeration is given by the Cantor pairing function π(m, n) = (m + n)(m + n + 1)/2 + n; for example, π(2, 0) = 3 and π(1, 1) = 4, matching the list above. If you treat each pair as being the numerator and denominator of a vulgar fraction, then for every positive fraction, we can come up with a distinct number corresponding to it.
This representation also includes the natural numbers, since every natural number is also a fraction N/1. So we can conclude that there are exactly as many positive rational numbers as there are positive integers. This is true also for all rational numbers, as can be seen below.

Theorem: The Cartesian product of finitely many countable sets is countable.

This form of triangular mapping recursively generalizes to vectors of finitely many natural numbers by repeatedly mapping the first two elements to a natural number. For example, (0,2,3) maps to (5,3), which maps to 39.

Sometimes more than one mapping is useful. This is where you map the set you want to show is countably infinite onto another set—and then map this other set to the natural numbers. For example, the positive rational numbers can easily be mapped to (a subset of) the pairs of natural numbers because p/q maps to (p, q).

What about infinite subsets of countably infinite sets? Do these have fewer elements than N?

Theorem: Every subset of a countable set is countable. In particular, every infinite subset of a countably infinite set is countably infinite.

For example, the set of prime numbers is countable, by mapping the n-th prime number to n:
• 2 maps to 1
• 3 maps to 2
• 5 maps to 3
• 7 maps to 4
• 11 maps to 5
• 13 maps to 6
• 17 maps to 7
• 19 maps to 8
• 23 maps to 9
• ...

What about sets being naturally "larger than" N? For instance, Z, the set of all integers, or Q, the set of all rational numbers, which intuitively may seem much bigger than N. But looks can be deceiving, for we assert:

Theorem: Z (the set of all integers) and Q (the set of all rational numbers) are countable.

In a similar manner, the set of algebraic numbers is countable.^[9] These facts follow easily from a result that many individuals find non-intuitive.

Theorem: Any finite union of countable sets is countable.

With the foresight of knowing that there are uncountable sets, we can wonder whether or not this last result can be pushed any further. The answer is "yes" and "no": we can extend it, but we need to assume a new axiom to do so.

Theorem: (Assuming the axiom of countable choice) The union of countably many countable sets is countable.

For example, given countable sets a, b, c, ..., we can enumerate their elements using a variant of the triangular enumeration we saw above:
• a[0] maps to 0
• a[1] maps to 1
• b[0] maps to 2
• a[2] maps to 3
• b[1] maps to 4
• c[0] maps to 5
• a[3] maps to 6
• b[2] maps to 7
• c[1] maps to 8
• d[0] maps to 9
• a[4] maps to 10
• ...

Note that this only works if the sets a, b, c, ... are disjoint. If not, then the union is even smaller and is therefore also countable by a previous theorem. Also note that we need the axiom of countable choice to index all the sets a, b, c, ... simultaneously.

Theorem: The set of all finite-length sequences of natural numbers is countable.

This set is the union of the length-1 sequences, the length-2 sequences, the length-3 sequences, each of which is a countable set (finite Cartesian product). So we are talking about a countable union of countable sets, which is countable by the previous theorem.

Theorem: The set of all finite subsets of the natural numbers is countable.

If you have a finite subset, you can order the elements into a finite sequence. There are only countably many finite sequences, so also there are only countably many finite subsets.

The following theorem gives equivalent formulations in terms of a bijective function or a surjective function.
A proof of this result can be found in Lang's text.^[3]

(Basic) Theorem: Let S be a set. The following statements are equivalent:
1. S is countable, i.e. there exists an injective function f : S → N.
2. Either S is empty or there exists a surjective function g : N → S.
3. Either S is finite or there exists a bijection h : N → S.

Corollary: Let S and T be sets.
1. If the function f : S → T is injective and T is countable then S is countable.
2. If the function g : S → T is surjective and S is countable then T is countable.

Cantor's theorem asserts that if A is a set and P(A) is its power set, i.e. the set of all subsets of A, then there is no surjective function from A to P(A). A proof is given in the article Cantor's theorem. As an immediate consequence of this and the Basic Theorem above we have:

Proposition: The set P(N) is not countable; i.e. it is uncountable.

For an elaboration of this result see Cantor's diagonal argument. The set of real numbers is uncountable (see Cantor's first uncountability proof), and so is the set of all infinite sequences of natural numbers.

Some technical details

The proofs of the statements in the above section rely upon the existence of functions with certain properties. This section presents functions commonly used in this role, but not the verifications that these functions have the required properties. The Basic Theorem and its Corollary are often used to simplify proofs. Observe that N in that theorem can be replaced with any countably infinite set.

Proposition: Any finite set is countable.

Proof: Let S be such a set. Two cases are to be considered: either S is empty or it isn't. 1.) The empty set is itself a subset of the natural numbers, so it is countable. 2.) If S is nonempty and finite, then by definition of finiteness there is a bijection between S and the set {1, 2, ..., n} for some positive natural number n. This function is an injection from S into N.

Proposition: Any subset of a countable set is countable.^[10]

Proof: The restriction of an injective function to a subset of its domain is still injective.

Proposition: If S is a countable set then S ∪ {x} is countable.^[11]

Proof: If x ∈ S there is nothing to be shown. Otherwise let f: S → N be an injection. Define g: S ∪ {x} → N by g(x) = 0 and g(y) = f(y) + 1 for all y in S. This function g is an injection.

Proposition: If A and B are countable sets then A ∪ B is countable.^[12]

Proof: Let f: A → N and g: B → N be injections. Define a new injection h: A ∪ B → N by h(x) = 2f(x) if x is in A and h(x) = 2g(x) + 1 if x is in B but not in A.

Proposition: The Cartesian product of two countable sets A and B is countable.^[13]

Proof: Observe that N × N is countable as a consequence of the definition because the function f : N × N → N given by f(m, n) = 2^m3^n is injective.^[14] It then follows from the Basic Theorem and the Corollary that the Cartesian product of any two countable sets is countable. This follows because if A and B are countable there are surjections f : N → A and g : N → B. So f × g : N × N → A × B is a surjection from the countable set N × N to the set A × B and the Corollary implies A × B is countable. This result generalizes to the Cartesian product of any finite collection of countable sets and the proof follows by induction on the number of sets in the collection.

Proposition: The integers Z are countable and the rational numbers Q are countable.
Proposition: The integers Z are countable, and the rational numbers Q are countable.

Proof: The integers Z are countable because the function f : Z → N given by f(n) = 2^n if n is non-negative and f(n) = 3^(−n) if n is negative is an injective function. The rational numbers Q are countable because the function g : Z × N → Q given by g(m, n) = m/(n + 1) is a surjection from the countable set Z × N to the rationals Q.

Proposition: The algebraic numbers A are countable.

Proof: By definition, every algebraic number (including complex numbers) is a root of a polynomial with integer coefficients. Given an algebraic number $\alpha$, let $a_0 x^0 + a_1 x^1 + a_2 x^2 + \cdots + a_n x^n$ be a polynomial with integer coefficients such that $\alpha$ is the k-th root of the polynomial, where the roots are sorted first by absolute value and then by argument, both in increasing order. We can define an injective (i.e. one-to-one) function f : A → Q given by

$$f(\alpha) = 2^{k-1} \cdot 3^{a_0} \cdot 5^{a_1} \cdot 7^{a_2} \cdots p_{n+2}^{\,a_n},$$

where $p_n$ is the n-th prime. (The coefficients may be negative, so the exponents may be negative too; this is why the codomain is Q rather than N. Since Q is countable, the Corollary still applies.)

Proposition: If A[n] is a countable set for each n in N, then the union of all A[n] is also countable.^[15]

Proof: This is a consequence of the fact that for each n there is a surjective function g[n] : N → A[n] (empty sets can simply be dropped from the union), and hence the function

$$G : \mathbf{N} \times \mathbf{N} \to \bigcup_{n \in \mathbf{N}} A_n,$$

given by G(n, m) = g[n](m), is a surjection. Since N × N is countable, the Corollary implies that the union is countable. We use the axiom of countable choice in this proof to pick, for each n in N, a surjection g[n] from the non-empty collection of surjections from N to A[n].

A topological proof of the uncountability of the real numbers is described at finite intersection property.

Minimal model of set theory is countable

If there is a set that is a standard model (see inner model) of ZFC set theory, then there is a minimal standard model (see Constructible universe). The Löwenheim–Skolem theorem can be used to show that this minimal model is countable. The fact that the notion of "uncountability" makes sense even in this model, and in particular that this model M contains elements that are:

• subsets of M, hence countable,
• but uncountable from the point of view of M,

was seen as paradoxical in the early days of set theory; see Skolem's paradox. The minimal standard model includes all the algebraic numbers and all effectively computable transcendental numbers, as well as many other kinds of numbers.

Total orders

Countable sets can be totally ordered in various ways, e.g.:

• Well-orders (see also ordinal number):
  □ The usual order of natural numbers (0, 1, 2, 3, 4, 5, ...)
  □ The integers in the order (0, 1, 2, 3, ...; −1, −2, −3, ...)
• Others (not well-orders):
  □ The usual order of integers (..., −3, −2, −1, 0, 1, 2, 3, ...)
  □ The usual order of rational numbers (cannot be explicitly written as an ordered list!)

Note that in both examples of well-orders here, any subset has a least element; and in both examples of non-well-orders, some subsets do not have a least element. This is the key property that determines whether a total order is also a well-order.

Notes

1. ^ Rudin 1976, Chapter 2
2. ^ Kamke 1950, p. 2
3. ^ Lang 1993, §2 of Chapter I
4. ^ Apostol 1969, Chapter 13.19
5. ^ Since there is an obvious bijection between N and N* = {1, 2, 3, ...}, it makes no difference whether one considers 0 a natural number or not.
In any case, this article follows ISO 31-11 and the standard convention in mathematical logic, which takes 0 as a natural number.
6. ^ Stillwell, John C. (2010), Roads to Infinity: The Mathematics of Truth and Proof, CRC Press, p. 10, ISBN 9781439865507: "Cantor's discovery of uncountable sets in 1874 was one of the most unexpected events in the history of mathematics. Before 1874, infinity was not even considered a legitimate mathematical subject by most people, so the need to distinguish between countable and uncountable infinities could not have been imagined."
7. ^ Cantor 1878, p. 242.
8. ^ Ferreirós 2007, pp. 268, 272–273.
9. ^ Kamke 1950, pp. 3–4
10. ^ Halmos 1960, p. 91
11. ^ Avelsgaard 1990, p. 179
12. ^ Avelsgaard 1990, p. 180
13. ^ Halmos 1960, p. 92
14. ^ Avelsgaard 1990, p. 182
15. ^ Fletcher & Patty 1988, p. 187

References

• Apostol, Tom M. (June 1969), Multi-Variable Calculus and Linear Algebra with Applications, Calculus, 2 (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-00007-5
• Avelsgaard, Carol (1990), Foundations for Advanced Mathematics, Scott, Foresman and Company, ISBN 0-673-38152-8
• Cantor, Georg (1878), "Ein Beitrag zur Mannigfaltigkeitslehre", Journal für die Reine und Angewandte Mathematik, 84: 242–248
• Ferreirós, José (2007), Labyrinth of Thought: A History of Set Theory and Its Role in Mathematical Thought (2nd revised ed.), Birkhäuser, ISBN 3-7643-8349-6
• Fletcher, Peter; Patty, C. Wayne (1988), Foundations of Higher Mathematics, Boston: PWS-KENT Publishing Company, ISBN 0-87150-164-3
• Halmos, Paul R. (1960), Naive Set Theory, D. Van Nostrand Company, Inc. Reprinted by Springer-Verlag, New York, 1974, ISBN 0-387-90092-6 (Springer-Verlag edition). Reprinted by Martino Fine Books, 2011, ISBN 978-1-61427-131-4 (paperback edition).
• Kamke, E. (1950), Theory of Sets, New York: Dover
• Lang, Serge (1993), Real and Functional Analysis, Berlin, New York: Springer-Verlag, ISBN 0-387-94001-4
• Rudin, Walter (1976), Principles of Mathematical Analysis, New York: McGraw-Hill, ISBN 0-07-054235-X
{"url":"https://static.hlt.bme.hu/semantics/external/pages/tenzorszorzatok/en.wikipedia.org/wiki/Countable.html","timestamp":"2024-11-03T23:31:56Z","content_type":"text/html","content_length":"160905","record_id":"<urn:uuid:93108e26-f465-4ebb-9d4a-07ddbb686782>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00465.warc.gz"}
Lowpass Resampler

Convert signal from one sample time to another

Since R2021a
Library: Mixed-Signal Blockset / Utilities

The Lowpass Resampler block converts either a fixed-step discrete or a variable-step discrete sample time at its input to a different sample time at its output. To calculate the output sample values, the block uses a lowpass filtering interpolation algorithm. The algorithm minimizes frequency aliasing at the output with respect to an output rise/fall time. If the output rise/fall time is inherited from a fixed-step discrete input, the cutoff frequency is the Nyquist rate of the input. Otherwise, the cutoff frequency is the Nyquist rate associated with a sample interval obtained by scaling the specified 20%−80% output rise/fall time to a value for 0%−100% rise/fall time.

Ports

in — Discrete time input signal
Discrete time input signal, specified as a fixed-step or variable-step scalar.
Data Types: single | double | int8 | int16 | int32 | uint8 | uint16 | uint32 | Boolean

out — Continuous time output signal
Continuous time output signal, returned as a fixed-step or variable-step scalar.
Data Types: single | double

Parameters

Inherit output rise/fall time — Inherit output rise/fall time from fixed-step input sample time
on (default) | off

Select to inherit the output rise/fall time from the fixed-step input sample time. In the case of a variable-step input, there is no rise/fall time to inherit.

Output rise/fall time — 20%−80% output rise/fall time
1e-10 (default) | positive real scalar

20%−80% output rise/fall time, specified as a positive real scalar.

To enable this parameter, deselect the Inherit output rise/fall time parameter.

Programmatic Use
Block parameter: OutputRiseFall
Type: character vector
Values: positive real scalar
Default: 1e-10

Number of samples of delay — Number of samples of propagation delay for fixed-step operation
1 (default) | positive real scalar

Number of samples of propagation delay for fixed-step operation, specified as a positive real scalar. If the Lowpass Resampler block inherits Output rise/fall time in fixed-step mode, the resampler conversion delay is given by NDelay·τ, where NDelay is the Number of samples of delay parameter and τ is the inherited input sample interval. In the variable-step input mode, the resampler conversion delay is given by 0.6τ[v] when NDelay equals one, and is (NDelay−0.5)·τ[v] when NDelay is greater than one. τ[v] is 5/3 times the Output rise/fall time parameter.

There is a tradeoff between the delay of the conversion and the suppression of out-of-band numerical artifacts, with greater delay producing better fidelity in band and greater suppression out of band. If you need more anti-aliasing filter rejection, set Number of samples of delay to 5 or higher.

To enable this parameter, deselect the Inherit output rise/fall time parameter.

Programmatic Use
Block parameter: NDelay
Type: character vector
Values: positive real scalar
Default: 1

Output sample time — Type of output sample time
Inherited (default) | Fixed step discrete | Variable step discrete

Type of the output sample time to be used by downstream blocks, specified as either fixed-step discrete or variable-step discrete. The output sampling rate must be higher than the input sampling rate. For more information, see Variable Step Sampling Scheme.
Programmatic Use
Block parameter: OutputTsType
Type: character vector
Values: Fixed step discrete | Variable step discrete
Default: Inherited

Samples out per rise/fall time — Number of output samples in single output rise/fall time
5 (default) | positive real scalar

Number of output samples in a single output rise/fall time, specified as a positive real scalar.

To enable this parameter, select either Fixed step discrete or Variable step discrete for the Output sample time parameter.

Programmatic Use
Block parameter: OutputTsRatio
Type: character vector
Values: positive real scalar
Default: 5

Enable large buffer — Enable extra buffer samples
on (default) | off

Select to enable extra buffer samples. This option is enabled by default.

Number of extra buffer samples — Number of extra buffer samples to support sample delays
10 (default) | positive real scalar

Number of extra buffer samples needed to support the number of samples of delay, specified as a positive real scalar.

Programmatic Use
Block parameter: NBuffer
Type: character vector
Values: positive real scalar
Default: 10

More About

Frequency and Step Response

The Lowpass Resampler block interpolates the incoming discrete signal using an anti-aliasing filter. The anti-aliasing filter introduces some delay, and the suppression of outputs above the specified signal bandwidth is not perfect. Introducing larger delays produces better fidelity in band and greater suppression out of band, at the cost of more delay inserted in the signal path. In all cases, the interpolation filter is designed to produce negligible group delay distortion in band.

In the time domain, the interpolation does introduce some ringing in the step response of the anti-aliasing filter, which could affect some digital switching applications. However, setting the Number of samples of delay parameter to 1 with an Output rise/fall time greater than zero is specifically designed to produce a smooth waveform transition with no ringing.

Variable Step Sampling Scheme

In some cases, it might be necessary for the Lowpass Resampler block to define its own sample time. This is particularly true if the block receives an unspecified output sample time at its input. If the Lowpass Resampler block receives a signal that changes its value at the input, the block generates some output samples to show the transition. The number of samples is defined by the Samples out per rise/fall time parameter.

Version History

Introduced in R2021a

R2022a: Updated in R2022a
Lowpass Resampler can define its own output sample times.

R2021a: Introduced in R2021a
Use to change the sampling time of a signal.
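For instance, the parameters above can be set from the MATLAB command line with set_param. A sketch, assuming a block at the hypothetical path 'myModel/Lowpass Resampler' (the parameter names are the ones listed in the Programmatic Use sections above; values are character vectors):

set_param('myModel/Lowpass Resampler', ...
    'OutputRiseFall','2e-9', ...              % 20%-80% output rise/fall time
    'NDelay','5', ...                         % more delay -> better alias rejection
    'OutputTsType','Fixed step discrete', ... % sample-time type for downstream blocks
    'OutputTsRatio','5', ...                  % samples out per rise/fall time
    'NBuffer','10');                          % extra buffer samples

Note that OutputRiseFall and NDelay take effect only when the Inherit output rise/fall time box is deselected in the block dialog; its programmatic name is not listed on this page, so it is not set here.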
{"url":"https://kr.mathworks.com/help/msblks/ref/lowpassresampler.html","timestamp":"2024-11-02T02:58:55Z","content_type":"text/html","content_length":"96571","record_id":"<urn:uuid:4df87420-4cdd-4548-b8f3-f3438b4aca31>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00231.warc.gz"}
Implementing Factorial in 6502

May 29, 2017 · Post #30 · Technical Deep Dive

Programming in 6502 has been an interesting adventure for me. I dabbled with it back 8 or 10 years ago. But I ultimately settled into coding most of my C64 projects in C. C felt sufficiently low level for my tastes. It was reasonably fast, much much faster than BASIC for example, and there feels like a close 1-to-1 relationship between what you type and what eventually comes out the far end of the compiler. And it was still quite a learning curve before I was able to do much of anything useful.

Now that I'm not targeting the SuperCPU, even C feels bloated. The 6502 was just not designed for the sorts of contortions that C tries to put it through. For example, every time a C function is called the arguments are put onto the stack. The C function itself then manipulates those values on the stack as its working local memory space. This is really great for recursion or pre-emptive multitasking, but it pains me to think of what the 6502 must be doing to accomplish the stack referencing.

Once I'd decided that the stock C64 is what I will target for this particular project, I became fully committed to learning to do everything I need to do in 6502 assembly. At first it was a pretty big shock. And to be frank, it's often still shocking. Without the KERNAL and BASIC roms a C64 has almost nothing there upon which to write your code. You are literally writing against the registers of the I/O chips directly, and the only resources at your disposal to get anything done are the 6502/6510's 56 simple instructions. That sounds crazy. Well, how crazy is it? I wanted to try my hand at something both simple and complicated at the same time. Somehow I decided on writing a routine to compute factorial. How hard could that be?

Implementing Factorial in Javascript

In case you don't remember high school math, I'll remind you what factorial is. It's the product of all the numbers from a given number multiplied together counting down to 1. So, for example, 5 factorial (written "5!") would be 5 x 4 x 3 x 2 x 1 = 120.

Writing an algorithm to compute the factorial of an arbitrary input is pretty simple, so I decided first to write it in Javascript, since I'm a Javascript programmer at work every day. It took me about a minute, maybe less, to see how it would work. I started with a recursive solution since that seemed the most elegant (a reconstruction appears below).

It's pretty basic: if the input is 1, return 1. But if the input is more than 1, then return the product of the input and the factorial of one less than the input. Recursion as we know can break our minds, but it is certainly elegant if nothing else. The whole function is only 5 lines, and very short lines at that. But more, the properties of recursion feel like they best capture the essence of factorial as a mathematical concept.

The problem with recursion is that it takes up stack space for each iteration. When I run this in the javascript console of Safari 10 on macOS, it craps out after fac(25368). If I try fac(25369), it gives a: RangeError: Maximum call stack exceeded.

However, this isn't the worst or the only problem, because long before the stack is exceeded the result of the algorithm falls into Infinity. Infinity is what Javascript returns when a number is bigger than the biggest number it is capable of handling.^1 It hits this after just fac(170), which it returns as: 7.257415615307994e+306, a very big number to be sure. fac(171) returns as Infinity.
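Here is what that recursive version plausibly looked like. This is my reconstruction from the description above; the original listing did not survive in this copy of the post.

function fac(n) {
  if (n == 1)
    return 1;
  return n * fac(n - 1);
}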
The second problem, hitting the largest number Javascript can represent, essentially crops up because factorial grows really really fast. On the other hand, many algorithms won't grow nearly so fast but still need lots of iterations to get the answer. You certainly don't want to crap out after just 25368 iterations because of a stack overflow. So my next stab at writing this was to use an iterative loop that involves no recursion (a reconstruction appears below).

Writing this version of factorial took me just one more minute, approximately. It's not much longer, only one line longer. It feels slightly less elegant to my eyes, but it is undoubtedly significantly more efficient. When I try to find the maximum number of times it can iterate before crapping out... I'm unable to find that number. What happens is that the computer just starts taking longer and longer to compute for larger and larger input numbers. But they all eventually finish without hitting weird limits like the stack overflowing. Of course, for the last however many iterations they aren't doing much besides computing Infinity * nextNumber.

It's fair to say that writing a little function that'll compute factorial for you, in Javascript, is stunningly simple. If you're a programmer who already knows how to write some code, there's nothing to it. It's also fair to say, as you'll see, that getting the equivalent results out of 6502 ASM was much much harder.

Implementing Factorial in BASIC 2.0

But let's be even more fair. The Commodore 64 as a commercial product isn't just a 6510 processor on a bus hooked up to some RAM and I/O chips. It is that, but it also includes the KERNAL rom and, more importantly for the purposes of writing a factorial algorithm, the BASIC rom. BASIC was included with the computer precisely because it made writing programs and doing some fairly sophisticated math relatively easy and accessible to the average person.

BASIC, at its core, of course is just a bunch of 6502 code. About 8K of it in fact, permanently etched into a rom. So, anything we could do in BASIC is in a very real way something that can be done with 6502 assembly. But to get there from scratch first you'd have to write BASIC, or significant parts of it. This is not what we're going to be doing in our own 6502 implementation. But, first, let's look at how one could write factorial in BASIC (also reconstructed below).

This also only took about a minute. It might have even taken less time than the Javascript version because I'd already primed my brain to know exactly what to do. Note that I didn't even attempt to do the recursive version in BASIC; that didn't make any sense, so I just skipped directly to replicating the iterative loop version. And it works very well.

I played around a bit to find the maximum number which BASIC could handle. It is quite small compared to what Javascript can handle. Turns out to be just 33. It spits out the result of 33! as 8.68331762e+36. As soon as I try to compute 34! I get an overflow error on the line that does the multiplication. That's okay though. Any number in scientific notation that goes to the power of 36 is a pretty big number. Most regular people doing regular math on their home computer aren't going to need numbers that big.

I wanted to check the accuracy of BASIC though, just to be sure. And what I noticed is that it's accurate but a bit less precise. BASIC gives us 8 decimal places of precision and correctly rounds the final digit, whereas the javascript algorithm gives us 14 decimals of precision.
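Reconstructions of the two iterative versions described above (again, the original listings were lost from this copy; the variable names and the hard-coded starting value are my own guesses):

function fac(n) {
  var result = 1;
  for (var i = n; i > 1; i--)
    result = result * i;
  return result;
}

10 N = 33
20 R = 1
30 FOR I = N TO 2 STEP -1
40 R = R * I
50 NEXT I
60 PRINT R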
This can be attributed entirely to the underlying floating point number implementation of both environments. I know nothing about javascript's implementation, or its relationship to the floating point features of the more advanced CPUs it's running on. But I've done a bit of reading in The Advanced Machine Language Book For The Commodore-64 by Abacus Software^2 on how floating point is implemented in BASIC on the C64.

Very briefly, a number in BASIC is stored as 5 bytes. 4 bytes are for precision, and store the most significant 32 bits' worth of the number. The 5th byte is signed, and represents the offset of the decimal. By convention, within the 4 bytes the decimal is always presumed to fall after the first decimal number, and the 5th byte is adjusted to normalize any value that exceeds 9. So for example, 8 is stored as "8.000000..." with a decimal offset of 0. But 12 is stored as "1.200000..." with a decimal offset of 1. (I'm goin' from memory here, don't flay me if I'm a bit off in the precise details. Instead, leave a comment below!)

This is the reason why BASIC's precision stops after 8 decimal places: "868331762" (bearing in mind that by convention the decimal falls immediately after the leftmost 8), when converted to hex, is $33 C1 B0 F2, and that my friends is exactly 4 bytes. Cool stuff.

Implementing Factorial in 6502 Assembly

There is nothing wrong with writing some math algorithm in BASIC. If you were in high school and you had a Commodore 64 and you wanted to use it to help you with your homework, as I used to do, writing a quick little program in BASIC is absolutely ideal. Often much better and much more expressive than a mere calculator. That is the reason it was included with the machine in the first place. Because it adds tremendous utility to a home computer. It turns a bunch of chips into a powerful programmable computational tool. Nothing wrong with that!

However, what I really wanted to know is, what is the minimum amount of code necessary to write an algorithm like factorial, using only the bare and essential features of the 6502 CPU. That's a very different question. If all I wanted to do was get the answer, and all I had was a C64, I'd use BASIC. But in 6502, where do we even begin? Well, let me show you. The essential requirements of factorial are:

• The ability to loop
• The ability to decrement a counter
• The ability to multiply two numbers together, and
• The ability to determine that the answer has been arrived at and that the loop should end.

That's enough to compute factorial. But there is one other step that I consider at least tangentially necessary to complete the picture. You need to be able to see the answer, and so even though human readable output is not strictly part of the computation of factorial, it is close enough that I have to include it. Otherwise, there would be no way of knowing if the answer it arrived at is even right.

Fortunately, looping, decrementing a counter, and comparing to see if a counter has reached 1 and leaving the loop are all standard fare. Those things are super easy on a 6502. But what about multiplying two numbers together? Oops, as it turns out, there are no instructions that support multiplication directly. Not even two 8-bit inputs with an 8-bit output. Nothing. In order to do multiplication of any kind^3 it is necessary to dig deep and figure out how multiplication fundamentally works and its relationship to addition and symbol shunting, such as bit shifting and rolling.
Whenever you plan to sit down and write an algorithm (or use someone else's hard work), you have to decide what precision and range you want to get out of it. In BASIC, that decision is effectively made for you, and the choice was made to use 32 bits of precision because that will handle a heck of a lot of normal use cases. One of the powers of writing the algorithm directly in 6502 is that if you wanted 64 or 128 bits of precision, you could do that. It puts the computer through more gyrations, but the possibility is there to do it. It is not possible to get more than 32 bits of precision out of BASIC.

It is safe to say that the multiplication routines for smaller bit widths are simpler than those for larger bit widths. And so two 8-bit inputs and an 8-bit result are definitely the simplest, but the range is so cripplingly limited as to be almost useless. 6! is 720; that's outside the range of an 8-bit result. The range of 16-bit doesn't take us much further. 9! is outside the range of 16-bit. However, as this is an experiment, I want to show how an 8-bit CPU can be used to do math beyond just 8 bits, but without unnecessarily bogging us down with the complexity of going to 32 or 64 bits. But rest assured, the same way the CPU has what it takes (namely, thanks to the carry) to extend math operations beyond 8-bit, the principle can be applied to extend to much larger bit widths too.

So here we have a 16-bit multiplication subroutine (a reconstruction appears below). It takes a 16-bit multiplier and a 16-bit multiplicand, and puts the product of the two in a 16-bit output. Note of course that there are lots and lots of combinations of two 16-bit inputs that will overflow a 16-bit output. The really safe way to deal with this problem is to use multiplication algorithms that have an output twice the width of the inputs: 16-bit by 16-bit input and a 32-bit output, which this routine doesn't do, so bear that in mind.

Next, I did not write this routine. Sadly, it would have been beyond my ability. Instead I pulled it from this forum post on forum.6502.org. I modified it slightly to preserve the original inputs, and to improve clarity (in my personal judgement of what is clear). There are other great examples of math algorithms with different levels of precision at Codebase64.

I don't want, nor am I well suited, to get into the mathematical details of how this works. I can say with some certainty though, that a multiplication can be achieved as successive additions. For example, if you have 13 times 5, you can initialize a running total as zero, and then in a loop add one of those two arguments to the total, decrement the other argument, and if it's not zero run the loop again. Thus adding 5 to a running total in a loop that loops 13 times. Or, the inverse, add 13 to a running total in a loop that loops 5 times over. There is indeed an addition stage in the routine, starting at the CLC inside the loop and ending just before the "skip" label.

However, people far smarter than I am discovered a long time ago that there are certain efficiencies to be gained by doing the additions in other ways than merely doing the same addition some large number of times. If you are multiplying two 16-bit numbers, the naïve way might mean you have to add 572 to the running total 65,000 times! Whereas, in this routine, the loop explicitly only loops 16 times, which is not coincidentally the number of bits in the output.
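A reconstruction of a routine in this style. This is a standard shift-and-add multiply, not the author's exact listing; his version also preserved the inputs, which this shorter variant does not.

mult16   lda #0           ; clear the 16-bit product
         sta product
         sta product+1
         ldx #16          ; one pass per bit of the multiplier
mloop    lsr multiplier+1 ; pull the multiplier's low bit into carry
         ror multiplier
         bcc skip         ; bit was 0: skip the addition stage
         clc              ; addition stage: product += multiplicand
         lda product
         adc multiplicand
         sta product
         lda product+1
         adc multiplicand+1
         sta product+1
skip     asl multiplicand ; double the multiplicand for the next bit
         rol multiplicand+1
         dex
         bne mloop
         rts

multiplier   .word 0      ; 16-bit input (destroyed by this variant)
multiplicand .word 0      ; 16-bit input (destroyed by this variant)
product      .word 0      ; 16-bit output (overflow beyond 16 bits is lost)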
Somehow by rolling the bits in the output, and rolling the bits in one of the arguments, it can be determined whether an addition need take place. Repeat 16 times and the correct answer magically appears. If you're into mathematical theory, you'd probably really enjoy analyzing this routine to death. For me, it is enough to know that it works, and it consists of 30 instructions. Or, 22 instructions if you don't care about corrupting one of the inputs in the process.

There is just one other thing I'd like to note about this routine. The inputs and the output are hardcoded references to what amount to global variables. The use of global variables is typically considered very bad form by modern computing standards. But it seems hard to avoid on the 6502. As it happens these global variables are in zero page; that was done so that all the access to them will be as fast as possible. But, I did test it and everything works just fine when they are not in zero page. Therefore, the product could be computed in a local variable, and could even be returned in the .X and .Y registers. However, these are not big concerns when literally the entire machine is at our disposal for just the one solitary purpose of computing factorial.

Factorial core routine

Now we can get to the core routine of factorial (reconstructed below, alongside the conversion routine introduced in the next section). Remember that the workspace variables for the multiplication have already been defined. Comparing this to the Javascript function, or the BASIC program, they are actually quite similar.

Before the "next" label we are simply initializing the factor. This is the number for which we want to get the factorial. Because a 16-bit output can only support factorial up to 8 (remember, 9! is over 350,000) we really only need an 8-bit initial factor. This gets loaded into the low byte of the multiplier, and the high byte of the multiplier is set to 0. So if the factor is 6, the multiplier becomes $0006.

We enter the top of the loop and decrement factor. If it is now zero we leave the loop by branching to done. If not, we load the newly decremented factor into the multiplicand and JSR to mult16. Thus 6 x 5 ends up in product. Next we transfer product to the multiplier and loop back to the top. On the next iteration multiplier is prepopulated with the result of 6x5 (30) and multiplicand is populated by the next decremented factor, 4. Thus 30 x 4 ends up in product, and so on. Finally feels like we're making some progress.

Human readable output

That's all well and good, but how do we know if product contains the right answer unless we can see the output? The final product is just two bytes. But the problem is that those two bytes cannot simply be shoved onto the screen, because they will represent some odd C64 Screencodes and we won't be able to interpret them as numbers. What we really want is to convert them to some base of numbers we can read. Number conversions to and from decimal is very cool, and it's something I'd like to get into in a future post. But conversion from binary to hexadecimal is significantly more straightforward. One byte of binary input always converts into exactly 2 bytes of hexadecimal output.

Here's a routine that I keep handy in my own coding repertoire while I'm working on C64 OS. I usually put it in an include file so that I can bring it into my source with a .include "inttohex.s" directive whenever I need it. It came out of a discussion with someone on #c64friends on IRC. This is a clever little routine. The hexits label is effectively a conversion table.
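Reconstructions of the two listings just described: first the factorial core routine from the previous section, then the inttohex routine. Both are rebuilt from the prose walkthroughs; the labels match the descriptions, but details may differ from the author's originals.

fac      lda #6           ; the initial factor (hardcoded; assumes factor >= 2)
         sta factor
         sta multiplier   ; low byte of multiplier = factor
         lda #0
         sta multiplier+1 ; high byte = 0
next     dec factor
         beq done         ; factor reached zero -> product holds the answer
         lda factor
         sta multiplicand
         lda #0
         sta multiplicand+1
         jsr mult16       ; product = multiplier * multiplicand
         lda product      ; transfer product back into the multiplier
         sta multiplier
         lda product+1
         sta multiplier+1
         jmp next
done     rts

factor   .byte 0

hexits   .text "0123456789abcdef"

inttohex ; A -> 8-bit integer to convert
         ; X <- PETSCII character for the low nybble
         ; Y <- PETSCII character for the high nybble
         pha              ; back up the original value
         lsr              ; shift the high nybble down into the low nybble
         lsr
         lsr
         lsr
         tax
         lda hexits,x     ; look up the high nybble's PETSCII character
         sta temp         ; stash it
         pla              ; restore the original value
         and #$0f         ; keep only the low nybble
         tax
         lda hexits,x     ; look up the low nybble's PETSCII character
         tax              ; low nybble character in X
         ldy temp         ; high nybble character in Y
         rts

temp     .byte 0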
The contents of the string are bytes that the assembler encodes in PETSCII. But the offset into the table is the conversion to the PETSCII representation of that value. So if you had the actual integer 7, and you index 7 bytes into hexits, you arrive at the PETSCII value "7" (which is actually the integer 55, but we never need to know that). Perfect.

At the calling label "inttohex" you see the documentation style that I've grown accustomed to. It shows the register and a right-pointing arrow to mean input, followed by a description of what it should hold. The registers that are outputs are followed by a left-pointing arrow and then a description of them. So, put an 8-bit integer in the accumulator and JSR inttohex. You get back two PETSCII characters in X and Y, lo and hi nybbles respectively.

To do that we push the accumulator onto the stack to back it up, then shift right 4 times. That takes the hi nybble (4 bits) and shifts it into the lower nybble. The upper nybble is replaced by zeros. Transfer that to the X register to be used as an index into hexits. Grab the hi nybble PETSCII character and stash it in a temp variable. Restore the original accumulator from the stack and, using AND, zero out the upper nybble, leaving only the lower nybble intact. Transfer that to the X register and use it to index into hexits to grab the PETSCII value for the lower nybble. Lastly, stick the low nybble in X, read the high nybble from the temp variable directly into the Y register and return. Bingo bango.

That's a fine way to convert one byte into two PETSCII characters that are an easily readable hexadecimal output. But how do we physically output them? Well, we could try shoving them into screen memory, and that would work. But I'm pretty comfortable at this stage using the KERNAL routines designed to output characters to the screen. In this case, CHROUT ($ffd2).

There are, however, two bytes in a 16-bit output. And although the CPU is little endian and wants the lo byte first followed by the hi byte, that's not the order we want to see them in. To make things easy and clear, I wrote a little macro "out16." You pass it the address of the low byte, and it converts the hi byte (low byte + 1) to PETSCII and chrout's the result, hi nybble lo nybble. Then it calls inttohex on the lo byte, and chrout's its result, hi nybble lo nybble. And finally the macro outputs a carriage return so multiple calls to out16 will put the results on separate lines.

Putting it all together

It took a long time, and a lot of effort, but we've got pretty much all the parts together we need to make a complete and running program. The final source code, altogether, in TurboMacroProX format, includes comments, section separators and a little something extra I'll talk about in a moment.

This whole thing took me hours to put together, mostly because of a few gotchas I'm going to describe now. It's safe to say it took a lot less code and a lot less time to write that little 5 line Javascript function from the first examples. But, it's also remarkable how easy it was to do the same thing in BASIC. The power and simplicity of a high level language certainly seems worth the relative slowness of its execution. If what you needed could be accomplished in BASIC you'd be masochistic to do it in ASM. Not to mention the fact that adding user input to the BASIC program would be trivial, and its output is in decimal.
Getting user input in ASM would be a whole new ordeal, and its output is readable but it's in nerdy hexadecimal rather than the much more common and user-friendly decimal.

I like to structure my code such that constants appear at the top, followed by the main code body, followed by subroutines that will be used and called by the main code body. I nearly tore my hair out trying to debug the meaning of "label not defined" on the calls to out16. Eventually I realized that the assembler wants macro definitions to appear in the code before anything attempts to use them. I'm not entirely sure why. So, I moved the out16 macro definition to appear just below the constants.

Next, you have to declare where in memory you want your code to assemble to. I chose $0801 because that's where BASIC programs are loaded into, and because it's an easy place that I know how to jump to with a SYS command. This decision may have been the most hair-pulling part of this experiment. As it happens, when you double click a .PRG file to open it in VICE (which is what I was testing in), it not only loads it, but it also runs it automatically, by issuing the RUN command. I totally get why it does this. It's so that if you're using VICE with a one file game you can just double click it, VICE will launch, load and run the program for you, and boom you're playing a C64 game in an emulator and you didn't have to know even a single arcane C64 "command."

The problem with RUN is that it attempts to execute my code as though it were a BASIC program, which it's not. The result of this is that BASIC was actually corrupting my code before I had a chance to run it. So, what I would do is double click the file, wait for BASIC to mess it up and return to the ready prompt, and then I'd issue a SYS2049 after that. It crashed immediately every time. This was very frustrating until I figured out what was happening.

The secret is to include a tiny prelude to your code that is itself a valid BASIC program which, when run, jumps to your real code. This trick is used all the time in C64 games and demos. I didn't have what that prelude would be handy, but it's surprisingly easy to figure out. I simply wrote a basic program:

10 SYS2049

Then I saved that to "disk", which in VICE put the .PRG basic program back onto my Mac's desktop. Opening that file with the excellent Mac utility, Hex Fiend, revealed the following BASIC program bytes:

01 08 0B 08 0A 00 9E 32 30 34 39 00 00 00

The first two bytes are the load address, 01 08 ($0801), so we can ignore them; they won't end up in memory. The next 2 bytes are a pointer to the start of the next BASIC line, 0B 08 ($080B). The next two bytes are the 16-bit program line number, 0A 00 ($000A, or 10), and that 9E must be the token for SYS. The next four bytes, 32 30 34 39, reveal very clearly the 2 0 4 9 conveniently visible in the lower nybble of each hex value. And the last 3 bytes, 00 00 00, are the basic program terminating sequence.

In other words, the basic program will occupy (2 + 2 + 1 + 4 + 3) 12 bytes in memory. Which means if the BASIC prelude starts at $0801 (2049), then my ASM code will start at 2049 + 12, or 2061. Thus we can just change the BASIC prelude so the SYS will go to 2061 instead of 2049, and we get a BASIC prelude like this:

0B 08 0A 00 9E 32 30 36 31 00 00 00

And so that is exactly what we put into the assembler as a sequence of bytes that immediately precedes the start of the main body code. And it works perfectly.
;Basic Prelude, 10 SYS 2061
.byte $0B,$08,$0A,$00,$9E,$32,$30,$36,$31,$00,$00,$00

And there we have it running in VICE. I added one more #out16 to the body code so we could see the output of the multiplier before beginning to compute its factorial, just so we can see in the screenshot the two values. The first output is 0008 (8 in decimal), and the result is printed as 9D80 (40320 in decimal), and that is the correct answer! Hurray! You can run that in VICE, or on a real C64, and see that it in fact works. Or analyze it with a Hex Editor. Note that it's hardcoded to compute 8!.

I'll be the first to admit that it took a lot of work, and a lot of effort, and the result is less capable than what I got with BASIC, but it feels good. It feels like we're at rock bottom. There is no code running that we are not explicitly aware of (with the exception of those final chrouts). It's like painting a picture by applying every single stroke of paint to the canvas by hand.

The other thing I should say before signing off is that in practice it doesn't feel quite this difficult when writing ASM code for C64 OS. First of all, I have yet to need multiplication even once. And second, after you've been working on your project for a while, all the surrounding structure such as where to put constants and macros, what address to load your code to, and even output, become solved problems that you stop thinking about on a day to day basis.

That's it for this post. I consider that worthy of the Technical Deep Dive category.

1. Actually Javascript is a bit wonky around big numbers. Number.MAX_VALUE returns 1.7976931348623157e+308. If you check the equality of Number.MAX_VALUE == Infinity it returns false. However, if you check the equality of Number.MAX_VALUE == Number.MAX_VALUE + 1 it returns true! That certainly feels like a behaviour unique to Infinity.
2. I'd like to point out just how incredibly awesome it is that these classic books are available for free at Archive.org. I read the Geos Programmers Reference Guide cover to cover on Archive.org. Although I have Advanced Machine Language in paperback, and do prefer a paper copy in most cases, sometimes it's nearly impossible to get your hands on a physical copy.
3. Actually, there is one kind of multiplication, and division, that can be done with the built in instructions. Multiplying by two can be accomplished by shifting all the bits left one position. And dividing by two can be done by shifting all the bits right one position. Using the carry, shifts can be done on higher bytes that shift in a bit that fell off the edge of a lower byte, thus enabling shifts that are 16-bit, 32-bit, 64-bit and more.

Greg Naçu — C64OS.com
{"url":"https://nacu.ca/post/factorialin6502","timestamp":"2024-11-04T07:53:25Z","content_type":"text/html","content_length":"54702","record_id":"<urn:uuid:5f97edce-4089-4ef8-b903-290317258c5b>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00696.warc.gz"}
Algebraic graph theory

Algebraic graph theory is a branch of mathematics in which algebraic methods are applied to problems about graphs. This is in contrast to geometric, combinatoric, or algorithmic approaches. There are three main branches of algebraic graph theory, involving the use of linear algebra, the use of group theory, and the study of graph invariants.

The first branch of algebraic graph theory involves the study of graphs in connection with linear algebra. In particular, it studies the spectrum of the adjacency matrix or the Laplacian matrix of a graph (this part of algebraic graph theory is also called spectral graph theory). For the Petersen graph, for example, the spectrum of the adjacency matrix is (−2, −2, −2, −2, 1, 1, 1, 1, 1, 3). Several theorems relate properties of the spectrum to other graph properties. As a simple example, a connected graph with diameter D will have at least D+1 distinct values in its spectrum. Aspects of graph spectra have been used in analysing the synchronizability of networks.

The second branch of algebraic graph theory involves the study of graphs in connection to group theory, particularly automorphism groups and geometric group theory. The focus is placed on various families of graphs based on symmetry (such as symmetric graphs, vertex-transitive graphs, edge-transitive graphs, distance-transitive graphs, distance-regular graphs, and strongly regular graphs), and on the inclusion relationships between these families. Some of these categories of graphs are sparse enough that lists of their members can be drawn up. By Frucht's theorem, all groups can be represented as the automorphism group of a connected graph (indeed, of a cubic graph). Another connection with group theory is that, given any group, symmetrical graphs known as Cayley graphs can be generated, and these have properties related to the structure of the group. This second branch of algebraic graph theory is related to the first, since the symmetry properties of a graph are reflected in its spectrum.
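The diameter bound mentioned above has a short standard proof, sketched here for completeness (this argument is an addition, not part of the original article). If vertices $u, v$ are at distance $k$, then

$$(A^k)_{uv} \neq 0 \quad\text{while}\quad (A^m)_{uv} = 0 \ \text{ for all } m < k,$$

because $(A^m)_{uv}$ counts walks of length $m$ from $u$ to $v$. Picking vertex pairs at distances $0, 1, \ldots, D$ shows that $I, A, A^2, \ldots, A^D$ are linearly independent, so the minimal polynomial of $A$ has degree at least $D+1$. Since $A$ is real symmetric, it is diagonalizable and its minimal polynomial has only simple roots; hence $A$ has at least $D+1$ distinct eigenvalues.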
{"url":"https://graphsearch.epfl.ch/en/concept/1620216","timestamp":"2024-11-08T07:41:04Z","content_type":"text/html","content_length":"123183","record_id":"<urn:uuid:10658b43-0011-4763-a5ed-6d64c72bb275>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00380.warc.gz"}
Perform 2-D filtering of a pixel stream

The visionhdl.BilateralFilter object filters images while preserving edges. Some applications of bilateral filtering are denoising while preserving edges, separating texture from illumination, and cartooning to enhance edges. The filter replaces each pixel at the center of a neighborhood by an average that is calculated using spatial and intensity Gaussian filters. The object determines the filter coefficients from:

• Spatial location in the neighborhood (similar to a Gaussian blur filter)
• Intensity difference from the neighborhood center value

The object provides two standard deviation parameters for independent control of the spatial and intensity coefficients.

To perform bilateral filtering of a pixel stream:

1. Create the visionhdl.BilateralFilter object and set its properties.
2. Call the object with arguments, as if it were a function.

To learn more about how System objects work, see What Are System Objects?

filt2d = visionhdl.BilateralFilter(Name,Value) returns a bilateral filter System object™. Set properties using name-value pairs. Enclose each property name in single quotes. For example (the second name-value pair here completes a line that was truncated in this copy):

filt2d = visionhdl.BilateralFilter('CoefficientsDataType','Custom',...
      'CustomCoefficientsDataType',numerictype(0,18,17));

Unless otherwise indicated, properties are nontunable, which means you cannot change their values after calling the object. Objects lock when you call them, and the release function unlocks them. If a property is tunable, you can change its value at any time. For more information on changing property values, see System Design in MATLAB Using System Objects.

Properties

NeighborhoodSize — Size of image region to average
'3×3' (default) | '5×5' | '7×7' | '9×9' | '11×11' | '13×13' | '15×15'

Size of the image region used to compute the average, specified as an N-by-N pixel square.

SpatialStdDev — Spatial standard deviation target
0.5 (default) | positive real number

Spatial standard deviation target used to compute coefficients for the spatial Gaussian filter, specified as a positive real number. This parameter has no limits, but recommended values are from 0.1 to 10. At the high end, the distribution becomes flat and the coefficients are small. At the low end, the distribution peaks in the center and has small coefficients in the rest of the neighborhood. These boundary values also depend on the neighborhood size and the data type used for the coefficients.

IntensityStdDev — Intensity standard deviation target
0.5 (default) | positive real number

Intensity standard deviation target used to compute coefficients for the intensity Gaussian filter, specified as a positive real number. This parameter has no limits, but recommended values are from 0.1 to 10. At the high end, the distribution becomes flat and the coefficients are small. At the low end, the distribution peaks in the center and has small coefficients in the rest of the neighborhood. These boundary values also depend on the neighborhood size and the data type used for the coefficients.

When the intensity standard deviation is large, the bilateral filter acts more like a Gaussian blur filter, because the intensity Gaussian has a lower peak. Conversely, when the intensity standard deviation is smaller, edges in the intensity are preserved or enhanced.

PaddingMethod — Method for padding boundary of input image
'Constant' (default) | 'Symmetric' | 'Replicate' | 'Reflection' | 'None'

Method for padding the boundary of the input image, specified as one of these values.
• 'Constant' — Interpret pixels outside the image frame as having a constant value.
• 'Replicate' — Repeat the value of pixels at the edge of the image.
• 'Symmetric' — Set the value of the padding pixels to mirror the edge of the image.
• 'Reflection' — Set the value of the padding pixels to reflect around the pixel at the edge of the image.
• 'None' — Exclude padding logic. The object does not set the pixels outside the image frame to any particular value. This option reduces the hardware resources that are used by the object and reduces the blanking that is required between frames. However, this option affects the accuracy of the output pixels at the edges of the frame. To maintain pixel stream timing, the output frame is the same size as the input frame. To avoid using pixels calculated from undefined padding values, mask off the n/2 pixels around the edge of the frame for downstream operations, where n is the size of the operation kernel. For more details, see Increase Throughput by Omitting Padding.

For more information about these methods, see Edge Padding.

PaddingValue — Value used to pad boundary of input image
0 (default) | integer

Value used to pad the boundary of the input image, specified as an integer. The object casts this value to the same data type as the input pixel.

This parameter applies when you set PaddingMethod to 'Constant'.

LineBufferSize — Size of line memory buffer
2048 (default) | positive integer

Size of the line memory buffer, specified as a positive integer. Choose a power of two that accommodates the number of active pixels in a horizontal line. If you specify a value that is not a power of two, the buffer uses the next largest power of two.

OverflowAction — Overflow mode used for fixed-point operations
'Saturate' (default) | 'Wrap'

Overflow mode used for fixed-point operations. When the input is any integer or fixed-point data type, the algorithm uses fixed-point arithmetic for internal calculations. This option does not apply when the input data type is single or double.

CoefficientsDataType — Method to determine data type of filter coefficients
'Same as first input' (default) | 'Custom'

Method for determining the data type of the filter coefficients. The coefficients usually require a data type with more precision than the input data type.

• 'Custom' — Sets the data type of the coefficients to match the data type defined in the CustomCoefficientsDataType property.
• 'Same as first input' — Sets the data type of the coefficients to match the data type of the pixelin argument.

CustomCoefficientsDataType — Data type for the filter coefficients
numerictype(0,16,15) (default) | numerictype(0,WL,FL)

Data type for the filter coefficients, specified as numerictype(0,WL,FL), where WL is the word length and FL is the fraction length in bits. Specify an unsigned data type that can represent values less than 1. The coefficients usually require a data type with more precision than the input data type.

The object calculates the coefficients based on the neighborhood size and the values of IntensityStdDev and SpatialStdDev. Larger neighborhoods spread the Gaussian function such that each coefficient value is smaller. A larger standard deviation flattens the Gaussian so that the coefficients are more uniform in nature, and a smaller standard deviation produces a peaked response. If, after quantization, more than half of the coefficients become zero, the object issues a warning.
If all the coefficients are zero after quantization, the object issues an error. These messages mean that the object was unable to express the requested filter by using the data type specified. To avoid this issue, choose a higher-precision coefficient data type or adjust the standard deviation parameters.

This property applies when you set CoefficientsDataType to 'Custom'.

OutputDataType — Method to determine data type of output pixels
'Same as first input' (default) | 'Custom'

Method to determine the data type of output pixels.

• 'Same as first input' — Sets the data type of the output pixels to match the data type of pixelin.
• 'Custom' — Sets the data type of the output pixels to match the data type defined in the CustomOutputDataType property.

CustomOutputDataType — Data type for the output pixels
numerictype(1,16,15) (default) | numerictype(S,WL,FL)

Data type for the output pixels, specified as numerictype(S,WL,FL), where S is 1 (true) for signed and 0 (false) for unsigned, WL is the word length, and FL is the fraction length in bits. The filtered pixel values are cast to this data type.

This property applies when you set OutputDataType to 'Custom'.

This object uses a streaming pixel interface with a structure for frame control signals. This interface enables the object to operate independently of image size and format and to connect with other Vision HDL Toolbox™ objects. The object accepts and returns a scalar pixel value and control signals as a structure containing five signals. The control signals indicate the validity of each pixel and its location in the frame. To convert a pixel matrix into a pixel stream and control signals, use the visionhdl.FrameToPixels object. For a description of the interface, see Streaming Pixel Interface.

Input Arguments

pixelin — Input pixel stream
Single image pixel in a pixel stream, specified as a scalar value representing intensity. Integer and fixed-point data types larger than 16 bits are not supported.

You can simulate System objects with a multipixel streaming interface, but you cannot generate HDL code for System objects that use multipixel streams. To generate HDL code for multipixel algorithms, use the equivalent Simulink® blocks.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: uint | int | fi | double | single

Output Arguments

pixelout — Output pixel stream
Single image pixel in a pixel stream, returned as a scalar value representing intensity. Integer and fixed-point data types larger than 16 bits are not supported.

You can simulate System objects with a multipixel streaming interface, but you cannot generate HDL code for System objects that use multipixel streams. To generate HDL code for multipixel algorithms, use the equivalent Simulink blocks.

The software supports double and single data types for simulation, but not for HDL code generation.

Data Types: uint | int | fi | double | single

Object Functions

To use an object function, specify the System object as the first input argument. For example, to release system resources of a System object named obj, use this syntax:

release(obj)

Common to All System Objects

step — Run System object algorithm
release — Release resources and allow changes to System object property values and input characteristics
reset — Reset internal states of System object
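As a quick illustration before the full example, a minimal configuration sketch (the property names and values are those documented above; the particular combination is illustrative, not from the original page):

% Stronger spatial smoothing, replicate padding, custom coefficient type.
filt = visionhdl.BilateralFilter( ...
    'SpatialStdDev',2, ...
    'IntensityStdDev',0.8, ...
    'PaddingMethod','Replicate', ...
    'CoefficientsDataType','Custom', ...
    'CustomCoefficientsDataType',numerictype(0,18,17));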
Create Bilateral Filter for HDL Generation

Load the input image and create serializer and deserializer objects. (Several continuation lines of this example were truncated in this copy; the reconstructed portions are marked with comments.)

frmOrig = imread('rice.png');
frmActivePixels = 48;
frmActiveLines = 32;
frmIn = frmOrig(1:frmActiveLines,1:frmActivePixels);
imshow(frmIn,'InitialMagnification',300)  % display call reconstructed
title 'Input Image'

frm2pix = visionhdl.FrameToPixels(...     % name-value arguments reconstructed
      'NumComponents',1,...
      'VideoFormat','custom',...
      'ActivePixelsPerLine',frmActivePixels,...
      'ActiveVideoLines',frmActiveLines);
[~,~,numPixPerFrm] = getparamfromfrm2pix(frm2pix);
pix2frm = visionhdl.PixelsToFrame(...     % name-value arguments reconstructed
      'NumComponents',1,...
      'VideoFormat','custom',...
      'ActivePixels',frmActivePixels,...
      'ActiveLines',frmActiveLines);

Write a function that creates and calls the System object™. You can generate HDL from this function.

function [pixOut,ctrlOut] = BilatFilt(pixIn,ctrlIn)
% Filters one pixel according to the default spatial and intensity standard
% deviation, 0.5.
% pixIn and pixOut are scalar intensity values.
% ctrlIn and ctrlOut are structures that contain control signals associated
% with the pixel.
% You can generate HDL code from this function.
persistent filt2d;
if isempty(filt2d)
    filt2d = visionhdl.BilateralFilter(...  % arguments reconstructed; these
          'SpatialStdDev',0.5,...           % are the documented defaults
          'IntensityStdDev',0.5);
end
[pixOut,ctrlOut] = filt2d(pixIn,ctrlIn);
end

Filter the image by calling the function for each pixel.

pixOutVec = zeros(numPixPerFrm,1,'uint8');
ctrlOutVec = repmat(pixelcontrolstruct,numPixPerFrm,1);
[pixInVec,ctrlInVec] = frm2pix(frmIn);
for p = 1:numPixPerFrm
    [pixOutVec(p),ctrlOutVec(p)] = BilatFilt(pixInVec(p),ctrlInVec(p));
end
[frmOut,frmValid] = pix2frm(pixOutVec,ctrlOutVec);
if frmValid
    imshow(frmOut,'InitialMagnification',300)  % display call reconstructed
    title 'Output Image'
end

Extended Capabilities

C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.

This System object supports C/C++ code generation for accelerating MATLAB® simulations, and for DPI component generation. For more information about acceleration, see Accelerate Pixel-Streaming Designs Using MATLAB Coder. For more information about DPI component generation, see Considerations for DPI Component Generation with MATLAB (HDL Verifier).

HDL Code Generation
Generate VHDL, Verilog and SystemVerilog code for FPGA and ASIC designs using HDL Coder™.

To generate HDL code from Vision HDL Toolbox System objects, see Design Hardware-Targeted Image Filters in MATLAB.

Version History

Introduced in R2017b

R2022a: Multipixel streaming
The object now supports multipixel streams. For multipixel streaming, the object supports input and output column vectors of 2, 4, or 8 pixels. The ctrl argument remains scalar, and the control signals in the pixelcontrol structure apply to all pixels in the vector. You can simulate System objects with a multipixel streaming interface, but you cannot generate HDL code for System objects that use multipixel streams. To generate HDL code for multipixel algorithms, use the equivalent Simulink blocks.

R2021b: Reflection padding
Pad the edge of a frame by reflecting around the edge-pixel value. This padding method helps reduce edge contrast effects and can improve results for machine learning while maintaining the original frame size.

R2020a: Option to omit padding
You can now configure the object to not add padding around the boundaries of the active frame. This option reduces the hardware resources used by the object and reduces the blanking interval required between frames, but affects the accuracy of the output pixels at the edges of the frame. To use this option, set the PaddingMethod property to 'None'.

R2018b: Improved line buffer
The internal line buffer in this object now handles bursty data, that is, noncontiguous valid signals within a pixel line. This implementation uses fewer hardware resources due to improved padding logic and native support for kernel sizes with an even number of lines. This change affects the visionhdl.LineBuffer object and objects that use an internal line buffer. The latency of the line buffer is now reduced by a few cycles for some configurations.
You might need to rebalance parallel path delays in your designs. A best practice is to synchronize parallel paths in your designs by using the pixel stream control signals rather than by inserting a specific number of delays.
{"url":"https://es.mathworks.com/help/visionhdl/ref/visionhdl.bilateralfilter-system-object.html","timestamp":"2024-11-13T09:03:03Z","content_type":"text/html","content_length":"131219","record_id":"<urn:uuid:5a5cef8d-7284-48a0-a59f-eaa3efeae506>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00218.warc.gz"}
Taking Smooth Steps Toward Equilibrium By combining continuous and discrete theoretical approaches, researchers show how to plot an optimal path from one nonequilibrium quantum state to another. Figure 1: The mathematical condition called thermomajorization indicates that there exists a thermal process that can change the state of the system from p to q. But the condition doesn’t indicate what the state-change process might look like in time. For the case of a “forgetful” thermal process, Korzekwa and Lostaglio’s continuous thermomajorization supplements this condition by providing an explicit recipe for the state-change process, which comprises a continuous sequence of small thermal processes, indicated here by arrows [1, 2]. No physical system is truly isolated. Even when a system is left “untouched,” interactions with its environment will affect its state and generically drive it toward thermal equilibrium, as encapsulated by the classical laws of thermodynamics. How exactly this “thermalization” occurs in complex systems remains an active area of research; what is clear is that thermalization as a dynamical process is ubiquitous. In a new theoretical study, Matteo Lostaglio at the University of Amsterdam and the Delft University of Technology, Netherlands, and Kamil Korzekwa at Jagiellonian University, Krakow, Poland, take this fact as a starting point to explore what happens to small systems en route to thermalization [1, 2]. Focusing on the regime in which a system interacts weakly with a large environment, Lostaglio and Korzekwa offer three findings: a complete set of constraints that all thermalization processes need to fulfill; a decomposition of a thermalization process into elementary components; and an algorithm that outputs a recipe for achieving any desired thermalizing transformation by piecing together these elements. The results could show physicists the optimal thermalizing paths between one thermodynamic state and another—a useful ingredient in heat engines and other applications. The earliest theoretical developments in thermodynamics uncovered the classical zeroth, first, second, and third laws, which sketched out certain basic features of the physical world that distinguish macroscopic thermodynamic phenomena from the microscopic motion and interaction of particles and fields. These features include the concepts of heat and thermal equilibrium as well as the increase of entropy (or disorder), which were vital to figuring out what levers of control were available for achieving a particular desired state. More recently, researchers have developed more fine-grained and sophisticated descriptions for quantum thermodynamics with a particular focus on small quantum systems (see, for example, Ref. [3]). A prominent treatment of thermodynamics describes the continuous evolution of a quantum system over time using a type of differential equation called a master equation [4]. An alternative approach, called thermal operations, abstracts away the continuity aspect and instead models processes as discrete transformations between two points in time [5, 6]. These operations can be engineered to achieve certain goals—such as optimizing the performance of a quantum-scale engine or refrigerator—by bringing a physical system in contact with a heat bath (a large system that maintains its own internal temperature) and making the two interact in carefully chosen ways. 
The resulting processes include thermalization—the system reaching thermal equilibrium with the bath—but also a rich variety of other possibilities, such as the counterintuitive outcomes that only quantum systems can produce. One may imagine all these possibilities as approaches to thermalization via any of a diverse collection of paths. While this discrete model has an advantage over the continuous approach in that its solutions are more general, the system-environment interactions necessary to change a system from one state to another can be complex and difficult to determine. (Here, “state” refers to the collection of energy-level populations of a quantum system, that is, the probabilities with which it can be found in various levels.) Subsequent work has identified precise mathematical conditions—called thermomajorization—for thermal operations to be feasible in different contexts [6–9]. However, these conditions reveal little about how to implement such transformations. The central contribution of Lostaglio and Korzekwa is providing a connection between the time-continuous and discrete-operational descriptions for a system weakly coupled to a large thermal environment. They also derive a family of entropy-production inequalities that supplant the standard second law by a more fine-grained requirement. The duo focus on a class of continuous processes where the system interacts with a thermal environment that is “forgetful”: the environment (initially in equilibrium) has a brief energy exchange with the system and then instantaneously relaxes back to equilibrium before its next exchange with the system. Mathematically, such processes are described by so-called Markovian master equations. Lostaglio and Korzekwa identify a version of thermomajorization that is compatible with such a continuous treatment, determining which pairs of states can be connected by feasible continuous Markovian processes. They provide efficient algorithmic methods to check this relation for any given pair of states, to chart out all the final states reachable from a given initial state, and to explicitly construct for each feasible pair a connecting process comprising a sequence of particularly simple operations called elementary thermalizations. Each of these basic steps consists of partially thermalizing two energy levels of the system at a time (Fig. 1) [10]. In addition to involving only two-level interactions, these operations are also more direct than general thermal operations, which can take the system on circuitous routes toward equilibrium. These properties make elementary thermalizations amenable to efficient numerical optimization. Going forward, it is worth noting that thermodynamics is ultimately a theory of control, describing what type of transformations nature allows when we have access only to macroscopic control variables. The picture described here so far is of a passive nature: it describes systems left to their own devices. The permitted control is limited to connecting and disconnecting parts of the system to the environment’s thermalizing influence. Ultimately, a more general modular theory could describe a larger set of elementary operations that are amenable to a continuous treatment but not limited to partial thermalization; some progress in this direction has already been achieved [11]. The appeal of Lostaglio and Korzekwa’s model lies in its explicit incorporation of time-continuous dynamics and its focus on modular operations. 
Thermalization features in thermodynamic protocols across the board, be they simplified theoretical toy models for information erasure, in the context of Landauer’s principle, or thermodynamic engine cycles [12]. Being able to modularize this ubiquitous process for small quantum systems is thus a valuable step forward. Besides the obvious question of including true “quantumness” in the description (in the form of quantum superposition between energy levels), it would also be interesting to see if a similar modularization can be applied when multiple heat baths are present at the same time but active control is still absent, as is the case for systems that support nonequilibrium steady states. Such explorations may pave the way toward incorporating the full power of general, non-Markovian processes in a continuous, modular framework.
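To make the notion of an elementary step concrete, here is a minimal Python sketch of a partial thermalization acting on two levels of a classical population vector, the kind of two-level building block described above. The Gibbs distribution g, the swap strength lam, and the numeric values are assumptions of this toy model, not parameters from the paper:

import numpy as np

def partial_thermalization(p, i, j, g, lam):
    # Move levels i and j of population vector p a fraction lam of the
    # way toward the Gibbs distribution g, leaving all other levels and
    # the total population of the pair unchanged.
    q = p.copy()
    pair_mass = p[i] + p[j]
    gibbs_share_i = g[i] / (g[i] + g[j])  # equilibrium share of level i
    q[i] = (1 - lam) * p[i] + lam * pair_mass * gibbs_share_i
    q[j] = pair_mass - q[i]
    return q

g = np.array([0.5, 0.3, 0.2])   # Gibbs distribution (assumed)
p = np.array([0.1, 0.1, 0.8])   # initial nonequilibrium state
p = partial_thermalization(p, 0, 2, g, 0.5)
print(p, p.sum())               # total probability is preserved

Composing such steps in sequence is exactly the kind of explicit recipe that continuous thermomajorization certifies for feasible state pairs.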
{"url":"https://physics.aps.org/articles/v15/110","timestamp":"2024-11-02T12:13:03Z","content_type":"text/html","content_length":"36203","record_id":"<urn:uuid:84c1eebb-c95c-4e2f-b660-84d183334bae>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00549.warc.gz"}
Function MACROEXPAND, MACROEXPAND-1

Syntax:

macroexpand form &optional env => expansion, expanded-p
macroexpand-1 form &optional env => expansion, expanded-p

Arguments and Values:

form---a form.
env---an environment object. The default is nil.
expansion---a form.
expanded-p---a generalized boolean.

Description:

macroexpand and macroexpand-1 expand macros. If form is a macro form, then macroexpand-1 expands the macro form call once. macroexpand repeatedly expands form until it is no longer a macro form. In effect, macroexpand calls macroexpand-1 repeatedly until the secondary value it returns is nil.

If form is a macro form, then the expansion is a macro expansion and expanded-p is true. Otherwise, the expansion is the given form and expanded-p is false.

Macro expansion is carried out as follows. Once macroexpand-1 has determined that the form is a macro form, it obtains an appropriate expansion function for the macro or symbol macro. The value of *macroexpand-hook* is coerced to a function and then called as a function of three arguments: the expansion function, the form, and the env. The value returned from this call is taken to be the expansion of the form.

In addition to macro definitions in the global environment, any local macro definitions established within env by macrolet or symbol-macrolet are considered. If only form is supplied as an argument, then the environment is effectively null, and only global macro definitions as established by defmacro are considered. Macro definitions are shadowed by local function definitions.

Examples:

(defmacro alpha (x y) `(beta ,x ,y)) => ALPHA
(defmacro beta (x y) `(gamma ,x ,y)) => BETA
(defmacro delta (x y) `(gamma ,x ,y)) => DELTA
(defmacro expand (form &environment env)
  (multiple-value-bind (expansion expanded-p)
      (macroexpand form env)
    `(values ',expansion ',expanded-p))) => EXPAND
(defmacro expand-1 (form &environment env)
  (multiple-value-bind (expansion expanded-p)
      (macroexpand-1 form env)
    `(values ',expansion ',expanded-p))) => EXPAND-1

;; Simple examples involving just the global environment
(macroexpand-1 '(alpha a b)) => (BETA A B), true
(expand-1 (alpha a b)) => (BETA A B), true
(macroexpand '(alpha a b)) => (GAMMA A B), true
(expand (alpha a b)) => (GAMMA A B), true
(macroexpand-1 'not-a-macro) => NOT-A-MACRO, false
(expand-1 not-a-macro) => NOT-A-MACRO, false
(macroexpand '(not-a-macro a b)) => (NOT-A-MACRO A B), false
(expand (not-a-macro a b)) => (NOT-A-MACRO A B), false

;; Examples involving lexical environments
(macrolet ((alpha (x y) `(delta ,x ,y)))
  (macroexpand-1 '(alpha a b))) => (BETA A B), true
(macrolet ((alpha (x y) `(delta ,x ,y)))
  (expand-1 (alpha a b))) => (DELTA A B), true
(macrolet ((alpha (x y) `(delta ,x ,y)))
  (macroexpand '(alpha a b))) => (GAMMA A B), true
(macrolet ((alpha (x y) `(delta ,x ,y)))
  (expand (alpha a b))) => (GAMMA A B), true
(macrolet ((beta (x y) `(epsilon ,x ,y)))
  (expand (alpha a b))) => (EPSILON A B), true
(let ((x (list 1 2 3)))
  (symbol-macrolet ((a (first x)))
    (expand a))) => (FIRST X), true
(let ((x (list 1 2 3)))
  (symbol-macrolet ((a (first x)))
    (macroexpand 'a))) => A, false
(symbol-macrolet ((b (alpha x y)))
  (expand-1 b)) => (ALPHA X Y), true
(symbol-macrolet ((b (alpha x y)))
  (expand b)) => (GAMMA X Y), true
(symbol-macrolet ((b (alpha x y)) (a b))
  (expand-1 a)) => B, true
(symbol-macrolet ((b (alpha x y)) (a b))
  (expand a)) => (GAMMA X Y), true

;; Examples of shadowing behavior
(flet ((beta (x y) (+ x y)))
  (expand (alpha a b))) => (BETA A B), true
(macrolet ((alpha (x y) `(delta ,x ,y)))
  (flet ((alpha (x y) (+ x y)))
    (expand (alpha a b)))) => (ALPHA A B), false
(let ((x (list 1 2 3)))
  (symbol-macrolet ((a (first x)))
    (let ((a x))
      (expand a)))) => A, false

Affected By:

defmacro, setf of macro-function, macrolet, symbol-macrolet

Exceptional Situations:

None.

See Also:

*macroexpand-hook*, defmacro, setf of macro-function, macrolet, symbol-macrolet, Section 3.1 (Evaluation)

Notes:

Neither macroexpand nor macroexpand-1 makes any explicit attempt to expand macro forms that are either subforms of the form or subforms of the expansion. Such expansion might occur implicitly, however, due to the semantics or implementation of the macro function.

The following X3J13 cleanup issues, not part of the specification, apply to this section:

Copyright 1996-2005, LispWorks Ltd. All rights reserved.
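The relationship between the two operators is easy to restate outside Lisp. The following Python sketch (all names hypothetical) mirrors how macroexpand drives a single-step expander like macroexpand-1 to a fixed point and returns the secondary expanded-p value:

def macroexpand(form, macroexpand_1):
    # Repeatedly apply a single-step expander until it reports that no
    # expansion took place, mirroring CL's MACROEXPAND; the second
    # return value plays the role of expanded-p.
    expanded_any = False
    while True:
        form, expanded_p = macroexpand_1(form)
        if not expanded_p:
            return form, expanded_any
        expanded_any = True

# Toy single-step expander over tagged tuples (hypothetical rules):
rules = {"alpha": "beta", "beta": "gamma"}
def step(form):
    if isinstance(form, tuple) and form[0] in rules:
        return (rules[form[0]],) + form[1:], True
    return form, False

print(macroexpand(("alpha", "a", "b"), step))  # (('gamma', 'a', 'b'), True)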
{"url":"http://clsnet.nl/HyperSpec/Body/f_mexp_.htm","timestamp":"2024-11-13T18:03:15Z","content_type":"text/html","content_length":"12029","record_id":"<urn:uuid:35426637-26c6-43ee-be12-3fa2abc3c4ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00327.warc.gz"}
Logarithms as Digit-Counters in ANY base! File this under “math I should have realized”. Even after all these years, I still find logarithms to be challenging, compelling, and surprising. This post is about an idea that snuck up on me this morning. And it snuck up even though: • I have written: Log[2] 8 = 3 because 2^3 = 8 on many whiteboards and many review sheets. • I have learned (and taught!) the classic collection of assorted log properties • I have read David Mermin’s lovely piece “Logarithms!” (and blogged about it here.) • I have written a chapter on exponential functions and logarithms in “Advanced Math for Young Students”. And yet, there is a basic idea about logarithms that I never really thought about (until this morning). So here it is: The integer part of the logarithm of an integer is one less than the number of digits the integer has when it is represented in the same base as the logarithm itself. You could write: Floor[Log[b] N] + 1 = the number of N’s digits when N is represented in base b. In other words, the logarithm counts the digits — and not just in base-10. You just have to round down and add 1. I would like to explain this, if for no other reason than to convince myself that it is true! Starting with base 10: Let’s look at some common (base 10) logarithms that we can find without a calculator: Log 1 = 0 …because 10^0 = 1 Log 10 = 1 …because 10^1 = 10 Log 100 = 2 …because 10^2 = 100 Log 1000 = 3 …because 10^3 = 1000 Log 10000 = 4 …because 10^4 = 10000 I think you get the idea. If we construct a number, say N, by writing a 1 followed by k zeroes, then the common (base 10) logarithm, log N = k. And the number will have k + 1 digits: the k zeroes and the 1 in front of them. So for each of these, adding 1 to the common logarithm gives you the number of digits. Now let’s look at one that you do need a calculator for. I’ll report three decimal places: Log 375 = 2.574 …because 10^2.574 = 374.973 ≈ 375 And it makes sense: 375 is more than 100 so its common logarithm is greater than 2. But 375 is also less than 1000 so its common logarithm is less than 3. It is more than the smallest 3 digit number and less than the smallest 4 digit number. And if you take the common logarithm, which was 2.574, round it down and then add 1, you get the number of digits. Nice. Let’s try another (again reporting 3 decimal places): Log (8787) = 3.944 … because 10^3.944 ≈ 8787 (though with a little more round-off error, but still…) And this makes sense too: 8787 is more than 1000 so its common log is greater than 3. But it is less than 10000 so its common log is less than 4. In fact, it is closer to 10000 so its log is close to 4, but still less. And if we round down 3.944 and add 1, again we get the number of digits in the number we started with. OK, I admit that I kind-of knew that already. But for some reason, this next step caught me by surprise…let’s move away from base 10 and see what happens. Other bases and other logarithms On and off this year, I have been playing with James Tanton’s invention, “Exploding Dots”. If you have not yet seen this, you really should take a look. You can get started here. But really, Google it – those dots are not just exploding, they are proliferating! One thing that Exploding Dots teaches you almost immediately and organically is how to express numbers in other bases. 
And something interesting happens when you look at the base-2 logarithms alongside the base-2 representations…

So if a number N happens to be an integer power of 2, say 2^k, then we know two things:

1. Its base-2 logarithm, log[2] N = k
2. Its base-2 representation will consist of k zeroes and a leading 1. And that makes k + 1 digits.

Now let’s get some help from a calculator.

Log[2] 5 = 2.322 …because 2^2.322 ≈ 5

Does it make sense? Well, since 5 is more than 4, we expect its base-2 logarithm to be more than 2. But 5 is less than 8, so we expect its base-2 logarithm to be less than 3. So this looks right. But we are not done! If we explode some dots, we can find the base-2 representation.

5 = 1 0 1

and that makes 3 digits – which is also what we get when we round down the base-2 logarithm and then add 1!

OK, let’s check a big number: 500 expressed in base 2 is 1 1 1 1 1 0 1 0 0 (which you might want to check…)

I see 9 digits.

Log[2] 500 = 8.966 (and yes, it checks out: 2^8.966 ≈ 500)

Round it down and add 1…YES!

So if you want to know the number of digits in the base-b representation of a number, find the base-b logarithm, round it down to its floor and then add one. The logarithm is the digit counter!
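The claim is easy to check by brute force; a short Python sketch (the sample values are the ones from the post, plus one extra):

import math

def digit_count(n, base):
    # Number of digits of n (a positive integer) written in the given base.
    digits = 0
    while n:
        n //= base
        digits += 1
    return digits

# Caution: floating-point log can land just below an integer at exact
# powers of the base (e.g. math.log(1000, 10)), so avoid those here.
for base in (2, 10):
    for n in (5, 375, 500, 8787):
        assert math.floor(math.log(n, base)) + 1 == digit_count(n, base)
        print(n, "in base", base, "has", digit_count(n, base), "digits")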
{"url":"https://advancedmathyoungstudents.com/blog/?p=15840","timestamp":"2024-11-12T00:22:15Z","content_type":"text/html","content_length":"43881","record_id":"<urn:uuid:55d0338a-eb21-4578-9411-b3598a82ac7d>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00679.warc.gz"}
KOS.FS - faculty addon
Technical Optics (E362502)
Departments: ústav přístrojové a řídící techniky (Department of Instrumentation and Control Engineering) (12110)
Abbreviation: Approved: 13.07.2011 Valid until: ?? Range: 2P+1.5C+0.5L Semester: * Credits: 3 Completion: KZ Language: EN

The course gives a thorough treatment of the principles of image formation by planar and spherical surfaces under the laws of geometric optics. Monochromatic and colour aberrations are also covered.

1. Light as electromagnetic radiation. Wavefronts and rays. Index of refraction. Optical path. Polarization.
2. Fermat's principle. Refraction and reflection at a plane surface. Total internal reflection. Plane-parallel plate. Handedness and parity.
3. Plane mirror and systems of mirrors. Refraction prisms: types, applications.
4. Crown and flint glass, dispersion, Abbe number. Dispersion prisms, minimum deviation position.
5. Curved optical surfaces. Transfer equations, ray tracing. Cardinal points and planes.
6. Focal length, optical power. Magnification: transverse, axial, angular.
7. Centred system of optical surfaces: cardinal points, focal length, magnification, transfer equations.
8. Thick lens: image forming, ray tracing, types.
9. Thin lens: image forming, ray tracing (graphically and algebraically).
10. Systems of thin lenses, system focus. Afocal system.
11. Aberrations of optical systems: monochromatic.
12. Aberrations of optical systems: chromatic. Corrections of aberrations. Doublets.
13. Colorimetry. Colour mixing, colour systems.

Born, M., Wolf, E., Principles of Optics, Pergamon Press, New York, 1970
Williams, Ch. S., Becklund, O. A.: Optics: A Short Course for Engineers and Scientists

Elaboration of given tasks.

Exam questions: Light as electromagnetic waves, polarization. Refraction index, dispersion, Abbe number. Optical path length. Fermat's principle. Reflection as a function of refraction index and angle of incidence. Reflection of polarized light. Deviating prisms - types. Handedness and parity. Unfolding. Dispersing prisms - principle and types. Definition of a paraxial space. Refraction equations for a spherical surface. Cardinal points and planes, sign convention, optical power, focal length, magnifications. System of spherical surfaces: cardinal points, transfer equations (Gauss, Newton), magnifications. Thick lens: types, basic shapes, locations of principal planes. Thin lens: definition, cardinal points. Thin lens combinations: effective focal length. Aperture and field stops: definition, location, function. F number. Pupils and windows: definition, location, function. Diffraction: Airy disc, angular resolution. Monochromatic aberrations: types, description. Chromatic aberration: description, correction. Achromat, apochromat, superachromat. Additive and subtractive colour mixing. Objective colour description.

Fermat's principle, prism, mirror, thin lens, thick lens, aberrations
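Two of the relations listed above are easy to sandbox numerically. A brief Python sketch (the real-is-positive sign convention for the thin lens, and all numeric values, are assumptions for illustration):

import math

def snell(n1, n2, theta1_deg):
    # Refraction angle from Snell's law, n1 sin(t1) = n2 sin(t2).
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None  # total internal reflection
    return math.degrees(math.asin(s))

def thin_lens_image(f, d_obj):
    # Image distance and transverse magnification for a thin lens,
    # using the real-is-positive convention: 1/d_obj + 1/d_img = 1/f.
    d_img = 1 / (1 / f - 1 / d_obj)
    return d_img, -d_img / d_obj

print(snell(1.0, 1.5, 30))       # air into crown glass: about 19.5 degrees
print(thin_lens_image(50, 200))  # f = 50 mm, object at 200 mm: ~66.7 mm, m = -1/3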
{"url":"https://kos.fs.cvut.cz/synopsis/course/E362502/en/en/cz/printA4wide","timestamp":"2024-11-11T19:38:15Z","content_type":"text/html","content_length":"6005","record_id":"<urn:uuid:24b8be8b-7c4f-4d24-91a6-cd655a6da526>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00328.warc.gz"}
Broadband light source interferometry - Pyroistech

The continual emergence of novel manufacturing techniques, either top-down or bottom-up [1], enables increasingly precise control over the growth of structures at the nanometric scale, which has motivated numerous new applications in recent decades [2]. In particular, these techniques are very useful for fabricating sensitive structures on optical fiber, since they allow very precise design of the structure's characteristics by adjusting a few parameters [3]. This has enabled the development of a wide variety of fiber optic devices, which have bridged the gap between optical and electronic devices in virtually all disciplines, such as chemical, physical, biochemical, or biological sensors [4]. These devices are based on different operating principles, such as fluorescence (see our previous blog post about fluorescence here), evanescent field (see our previous blog post about evanescent field here) or interferometry, among others.

Interferometric devices have commonly been used with high-coherence sources such as lasers (see our blog post about coherence here). However, novel manufacturing techniques such as Layer-by-Layer electrostatic self-assembly [2] make it possible to create ultra-thin structures of submicrometric thickness on the tip of the optical fiber, which act as an interferometer [4]. These nanocavities can be excited using a common broadband light source, because the thickness of the nanocavities formed at the tip of the optical fiber is smaller than the coherence length of common broadband light sources, such as LED light sources or SLED light sources (on the order of a few microns) [5].

Now, if we focus on a structure fabricated at the perpendicularly cleaved end of an optical fiber, we can observe that it is quite similar to the geometry of a Fabry-Perot interferometer (FPI) formed by two flat and parallel mirror surfaces separated from each other by a cavity of nanometric thickness ($d$), as depicted in Figure 1 [6].

Figure 1: Schematic representation of a Fabry-Perot interferometer formed by a nanocavity fabricated at the perpendicularly cleaved end of an optical fiber

Therefore, the waves reflected at the two mirrors ($R_1$ and $R_2$) will interfere with each other constructively or destructively as a function of the nanocavity size and the wavelength of the wave, among other parameters. Thus, for a given wavelength and optical constants it is possible to obtain the cavity length and, vice versa, for a given cavity length and optical constants it is possible to obtain the wavelength that produces the interference at maximum or minimum attenuation. This phenomenon can easily be exploited for sensor fabrication, provided that we have a nanocavity that changes its size as a function of the measurand.

Before mathematically analyzing the optical system formed at the tip of the fiber, it is assumed that the coherence length of the incident light is greater than the equivalent optical length (optical path length), which enables the generation of the interference phenomenon. In addition, it is considered that $n_2 > n_1 > n_3$, which holds in most fiber optic sensing applications. As previously indicated, it is important to remark that in the case of nanometer-thickness cavities it is possible to use low-coherence or broadband light sources.
Then, it can be assumed that the phase shift of the incident light is $\pi$ radians when reflected at the first mirror ($n_1 < n_2$) and null when reflected at the second mirror ($n_2 > n_3$) [7]. Following the previous assumptions, an incident light beam of amplitude $E_0$ at the interface between the fiber optic and nanocavity media will generate a refracted and a reflected beam. The refracted beam will be attenuated by $\sqrt{T_{12}}$, where $T_{12}$ is the transmission coefficient of the first mirror. The reflected beam will return to the optical fiber attenuated by $\sqrt{R_1}$ and with a phase offset of $\pi$ radians, $R_1$ being the reflection coefficient of the first mirror, as expressed in Eq. 1. In the same manner, the refracted beam generated earlier will generate a refracted and a reflected beam at the interface between the nanocavity and the external medium, as represented in the figure above.

If we now consider the effects associated with the attenuation losses in the nanocavity, we can express the intensity of the optical field that reaches the nanocavity / external medium interface after crossing the nanocavity as:

where $\alpha$ is the absorption coefficient of the medium, $d$ is the thickness of the nanocavity and $\phi$ is the round-trip phase shift in the interferometer expressed in Eq. 3. Thus, the optical field intensity reflected into the fiber can be obtained as the sum of all the reflections, as shown in Eq. 4.

Assuming without much error that $\sqrt{T_{12}} \cong \sqrt{T_{21}} \cong \sqrt{T_1}$, taking out the common factor and grouping the previous expression, we obtain the expression shown in Eq. 5.

Considering that:

and that the energy conservation theorem must be fulfilled for the total incident light power, which should be equal to the sum of all transmissions, reflections and absorptions produced, according to Eq. 6 (see also our previous post about transmission, reflection and absorption measurements here),

where $A_i$, $T_i$ and $R_i$ represent the absorption, transmission and reflection, respectively. Then, considering Eq. 6 and Eq. 7, we can transform Eq. 5 into Eq. 8 as follows:

Regarding the reflected optical power, it is obtained from:

Given the intensity of the incident field $I_0 = E_0^2/2$, we can express the reflected power coefficient of the Fabry-Perot as the relationship between the reflected optical power and the incident optical power, according to Eq. 10.

In order to obtain a reduced expression for the reflectivity, we can add some additional simplifications. For example, we can assume zero dispersion and absorption losses of the material ($A_1 = 0$, $\alpha = 0$), resulting in the expression of Eq. 11.

Considering the particular case of standard optical fiber and a liquid or air external medium ($n_2 > n_1 > n_3$) allows us to assume, with some error, that $R_1, R_2 \ll 1$, reducing the prior expression to Eq. 12 [7].

In view of the previous equation, it can be deduced that the maximum and minimum attenuation response will occur when $\cos\phi = -1$ or when $\cos\phi = 1$, respectively. Taking into account Eq.
3, the wavelength at maximum and minimum attenuation can be expressed as a function of the size of the nanocavity and satisfies:

The previous equation reveals the relation between the nanocavity size ($d$) and the interference wavelength, as mentioned at the beginning of this article. This makes it possible to fabricate optical fiber sensors that rely on this phenomenon and to use broadband light sources, which is theoretically demonstrated in the interference pattern of Figure 2. This image shows the variation of the optical power as a function of the wavelength and the nanocavity size (number of bilayers) and has also been validated experimentally for a device that is later used as a pH sensor [8].

Figure 2: Theoretical evolution of the optical reflectance from a Fabry–Perot interferometer as the distance between the two mirrors is varied, using the model shown in Eq. (10)

More information: LED light source

[1] J. Chen, "Novel patterning techniques for manufacturing organic and nanostructured electronics," Ph.D. Thesis, Massachusetts Institute of Technology, Dept. of Materials Science and Engineering.
[2] F. J. Arregui, Sensors Based on Nanostructured Materials, Springer, Berlin, Heidelberg, 2009.
[3] C. R. Zamarreño, I. R. Matias, F. J. Arregui, "Nanofabrication techniques applied to the development of novel optical fiber sensors based on nanostructured coatings," IEEE Sensors Journal, vol. 12(8), pp. 2699-2710, 2012.
[4] C. Elosua, F. J. Arregui, I. Del Villar et al., "Micro and nanostructured materials for the development of optical fibre sensors," Sensors, vol. 17(10), pp. 2312, 2017.
[5] Y. Deng, D. Chu, "Coherence properties of different light sources and their effect on the image sharpness and speckle of holographic displays," Sci. Rep., vol. 7, pp. 5893, 2017.
[6] F. J. Arregui, I. R. Matias, "Optical fiber nanometer-scale Fabry-Perot interferometer formed by the ionic self-assembly monolayer process," Optics Letters, vol. 24(9), pp. 596-598, 1999.
[7] F. L. Pedrotti, Introduction to Optics, London: Prentice Hall, 2017.
[8] C. R. Zamarreño, J. Goicoechea, I. R. Matias, F. J. Arregui, "Utilization of white light interferometry in pH sensing applications by means of the fabrication of nanostructured cavities," Sensors and Actuators B, vol. 138(2), pp. 613-618, 2009.
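As a numerical illustration of the low-reflectivity, two-beam behavior discussed above (not the authors' exact model: the sign convention for the interference term, the refractive indices, and the cavity sizes below are assumptions), a short Python sketch of the reflectance spectrum:

import numpy as np

def fpi_reflectance(lam_nm, d_nm, n1=1.45, n2=1.60, n3=1.00):
    # Two-beam (low-finesse) Fabry-Perot reflectance. R1 and R2 come
    # from the normal-incidence Fresnel equations; phi is the round-trip
    # phase. The pi shift at the first mirror (n1 < n2) gives the minus
    # sign in the interference term under this convention.
    R1 = ((n1 - n2) / (n1 + n2)) ** 2
    R2 = ((n2 - n3) / (n2 + n3)) ** 2
    phi = 4 * np.pi * n2 * d_nm / lam_nm
    return R1 + R2 - 2 * np.sqrt(R1 * R2) * np.cos(phi)

lam = np.linspace(400, 1700, 1000)   # wavelength in nm
for d in (300, 600, 900):            # growing nanocavity thickness
    R = fpi_reflectance(lam, d)
    print(d, "nm cavity: extrema near",
          round(lam[np.argmin(R)]), "and", round(lam[np.argmax(R)]), "nm")

Sweeping d shifts the extrema in wavelength, which is the effect Figure 2 shows as bilayers are added.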
{"url":"https://www.pyroistech.com/broadband-light-source-interferometry/","timestamp":"2024-11-05T15:33:03Z","content_type":"text/html","content_length":"108736","record_id":"<urn:uuid:0d36a079-68cb-421d-a084-46a487ebffb3>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00782.warc.gz"}
N-Body Methods

The problem involves updates to a system where each element of the system rigorously depends on the state of every other element of the system. How do we compute (or approximate) these updates in an efficient and scalable fashion?

This problem often occurs in the domain of natural sciences, where the objects with properties represent physical bodies or elementary particles that affect each other through physical forces stemming from gravity or electromagnetic fields. The object's instantaneous change in position, velocity, acceleration, and momentum can be computed based on the positions and intrinsic properties, such as mass or charge, of all other particles in the system. Each iteration of such a computation corresponds to a period in time. The users of the computation are typically interested in running this simulation in the time domain for some period of time or until some condition is met, such as achieving some steady-state behavior. All or some intermediate time steps are typically of interest.

In a given iteration, object property updates are independent of each other. The parallel implementation typically involves partitioning the objects between the PEs so that their properties can be updated concurrently.

The pattern refers to the following terms, which we define as follows:

Object – an independent entity whose properties for the current iteration can be computed based on the properties of all objects in the previous iteration.
Property – an annotation of the object that has a value.
Distance – a measure of separation between the objects in a geographical space (usually the physical xyz coordinates, but could be more abstract).
Force – the interaction between the objects that changes the property (usually a decreasing function of the distance between the objects).

Slight variations on the pattern include cases where:
1. The interactions are between one set of objects and another, i.e., each object in set A depends on all objects in set B.
2. Force is independent of distance (all objects affect each other equally).

Due to the often relatively light computational requirements of the property update procedure, the compute-to-communication ratio for each PE can be quite low. Therefore, the number of objects that are allocated to one PE has to be balanced carefully with communication. However,
1. When too many objects are allocated per PE, the computation may be unnecessarily serial, but the compute and communication will be better balanced.
2. When too few objects are allocated per PE, the computation is well parallelized, but will likely be communication dominated.
3. Space partitioning needs to be done without destroying memory locality. Good locality implies objects closer in geographical space should reside closer in memory. This implies data movement overhead as the particles move (change location) due to the forces acting on them.

The following are some of the typical implementation strategies for the N-body interaction problem in physics. The approximation strategies, described below, benefit from particular properties of the force equations used to update the acceleration, velocities, and positions of objects in space: the force equation is associative with respect to its inputs, the object positions.
Before going into the details of specific implementation choices, the following flowchart might be needed to convince yourself that the N-body pattern really applies to the problem. If not, other related patterns that might be useful are Structured grid and Geometric partitioning.

Naive algorithm

Naive algorithm – a straightforward O(n^2) algorithm that simulates the interaction of each object with every other object in the system. It is parallelized by evenly distributing the objects between processing elements. Each PE is responsible for updating only its objects. To do so, the PE receives object properties from all other PEs and uses them to perform the update. Then the PE sends the updated values to other PEs, and the process is repeated.

The exchange of object properties between PEs that happens between update iterations cannot occur until all the PEs complete their updates. A typical N-body computation waits on a barrier to ensure data consistency. The communication of the data between PEs has to be carefully orchestrated to avoid overwhelming the communication fabric. One approach is to organize all PEs in a pipelined cycle, such that the data goes around the cycle to each PE, enabling each PE to perform the necessary updates.

In the absence of numerical problems and with a sufficiently small time step, the naive method returns the optimal numerical results. The naive method will also have to be used in cases where the interactions between the particles cannot be assumed to decay with distance. In cases where it is not possible to approximate the interactions at large distances, the naive method is a good way of parallelizing the problem. For pseudo-code, refer to the example section of the pattern.

Barnes-Hut

This O(n log n) approximation to the naive algorithm clusters the objects into "geographical" partitions and summarizes a partition in a way that enables distant objects to use the compact partition summary instead of detailed object properties. For example, a partition of objects far away (the distance to be determined by thresholds in the algorithm) can be summarized simply as a center of mass. This one quantity is used to update forces and positions of individual objects far away from this cluster center of mass.

The most important data structure for implementing Barnes-Hut is a quad-tree (2D) or oct-tree (3D). This refers to the partitioning of the particles in the geographical space. This partitioning could be static or dynamic. Even in the static case, it could be independent of the data (complete trees) or dependent on the distribution of the data (adaptive trees).

Figures: courtesy [4]

At a high level, the Barnes-Hut algorithm is as described below:
1) Build the quadtree.
2) For each subsquare in the quadtree, compute the center of mass and total mass for all the particles it contains.
3) For each particle, traverse the tree to compute the force on it.

The second step of the algorithm is accomplished by a simple "post order traversal" of the quadtree (post order traversal means the children of a node are processed before the node). Note that the mass and center of mass of each tree node n are stored at n; this is important for the next step of the algorithm. Given a list of particle positions, this quadtree can be constructed recursively as follows:

procedure QuadtreeBuild
    Quadtree = {empty}
    for i = 1 to n              ... loop over all particles
        QuadInsert(i, root)     ... insert particle i in quadtree
    end for
    ... at this point, the quadtree may have some empty
    ... leaves, whose siblings are not empty
    Traverse the tree (via, say, breadth first search),
        eliminating empty leaves

procedure QuadInsert(i,n)
    ... Try to insert particle i at node n in quadtree
    ... By construction, each leaf will contain either
    ... 1 or 0 particles
    if the subtree rooted at n contains more than 1 particle
        determine which child c of node n particle i lies in
        QuadInsert(i,c)
    else if the subtree rooted at n contains one particle
        ... n is a leaf
        add n's four children to the Quadtree
        move the particle already in n into the child in which it lies
        let c be the child in which particle i lies
        QuadInsert(i,c)
    else if the subtree rooted at n is empty
        ... n is a leaf
        store particle i in node n

The pseudo-code for step 2 of the algorithm is as follows:

... Compute the center of mass and total mass
... for particles in each subsquare
( mass, cm ) = Compute_Mass(root)       ... cm = center of mass

function ( mass, cm ) = Compute_Mass(n)
    ... Compute the mass and center of mass (cm) of
    ... all the particles in the subtree rooted at n
    if n contains 1 particle
        ... the mass and cm of n are identical to
        ... the particle's mass and position
        store ( mass, cm ) at n
        return ( mass, cm )
    else
        for all four children c(i) of n (i=1,2,3,4)
            ( mass(i), cm(i) ) = Compute_Mass(c(i))
        end for
        mass = mass(1) + mass(2) + mass(3) + mass(4)
            ... the mass of a node is the sum of
            ... the masses of the children
        cm = ( mass(1)*cm(1) + mass(2)*cm(2)
             + mass(3)*cm(3) + mass(4)*cm(4) ) / mass
            ... the cm of a node is a weighted sum of
            ... the cm's of the children
        store ( mass, cm ) at n
        return ( mass, cm )

The algorithm for step 3 of Barnes-Hut is as follows:

... For each particle, traverse the tree
... to compute the force on it.
for i = 1 to n
    f(i) = TreeForce(i,root)
end for

function f = TreeForce(i,n)
    ... Compute gravitational force on particle i
    ... due to all particles in the box at n
    f = 0
    if n contains one particle
        f = force computed using formula (*) above
    else
        r = distance from particle i to center of mass of particles in n
        D = size of box n
        if D/r < theta
            compute f using formula (*) above
        else
            for all children c of n
                f = f + TreeForce(i,c)
            end for
        end if
    end if

The correctness of the algorithm follows from the fact that the force from each part of the tree is accumulated in f recursively. [4]

Fast Multipole Method

This is a faster O(n), more accurate method. It differs from Barnes-Hut by computing potentials at every point (rather than forces), and by using a fixed set of boxes with more information rather than a varying number of boxes with fixed information.

A high-level description of FMM is as follows:
(1) Build the quadtree containing all the points.
(2) Traverse the quadtree from bottom to top, computing Outer(n) for each square n in the tree.
(3) Traverse the quadtree from top to bottom, computing Inner(n) for each square in the tree.
(4) For each leaf, add the contributions of nearest neighbors and particles in the leaf to Inner(n).

Here, Outer(n) refers to the force created by particles inside square n on the outside of the square. Inner(n) refers to the force created by particles outside square n inside it. For more details, refer to [5].

Parallelization strategies

For the naive method, the most common parallelization strategy follows a pipeline pattern. Each PE/thread holds onto a few particles whose updates are controlled by it. In a pipeline fashion, other particles are passed from one PE to another (in a cycle) until each PE has updated its particles after interacting with all other particles in the system.
This is a strategy that works well and should be the first choice when implementing a parallel N-body problem using the O(N^2) method.

Load balancing is one of the most critical problems for hierarchical algorithms such as Barnes-Hut and the Fast Multipole Method. First, distributing the objects among processing elements is nontrivial, since the objects are partitioned based on their geographical location. Second, the objects change position over computation iterations, and thus it might be necessary to repartition and redistribute the objects among PEs.

When looking at the framework design, we could consider the following components, provided by the programmer/user, that enable implementation of the naive as well as hierarchical methods:

Partitioner – Distributes the objects into hierarchical regions and creates a corresponding mapping to PEs. The Partitioner communicates with the infrastructure to (1) specify PE mappings, and (2) specify conditions for the infrastructure to force a repartitioning.
Summarizer – Exports data from one hierarchical region to other regions. For example, in a naive algorithm, this would just export all the data, but a more sophisticated algorithm might export a summary: a center of mass and a force vector for objects in the region.
Local and nonlocal update procedures – Executed by PEs to combine properties of objects in this PE/hierarchical region with those in other PEs/regions.

Variants on classic N-body

There are other variations on the classic N-body (Particle-Particle) problem. Some of them include Particle-Mesh, Particle-Multiple-Mesh, Tree-Code-Particle-Mesh, etc. [3]. A very simple variation involves cases where the interactions are not between all elements of one set, but between every element of one set (A) and every element of another set (B). A naive implementation of this problem would be an O(NM) solution. Better strategies are possible if every particle in set A is only influenced by a much smaller number of particles in set B. This approach also implies geographical partitioning strategies (similar to quad-trees, oct-trees, etc.) to reduce the computations. Ray tracing is an example of this kind of interaction between rays and objects in a scene.

Preconditions:
A set of uniform objects, and uniform interactions between them.
Each computation iteration preserves the number of objects in the system and their properties as required by their update procedure (equivalent to conservation of mass/energy/momentum).
Laws of interactions between the objects do not change over the iterations.

Postconditions: All data has been updated for each iteration.

For all the examples, we assume that we have N bodies, each with mass m[i], at position (x[i], y[i]) and moving with velocity (vx[i], vy[i]). The force acting on the particles is gravitational, i.e., the force between bodies i and j is F(i,j) = G m[i] m[j]/r[i,j]^3 * (x[i]-x[j], y[i]-y[j]), where r[i,j] = sqrt( (x[i]-x[j])^2+(y[i]-y[j])^2 ). The objective is to simulate the movement of the bodies over a series of time steps dt. The code for executing one time step is given below.

Naive implementation

Let v(i) = (vx(i), vy(i)) be the velocity of particle i at any time step.
for i = 0:N-1
    (vx_new(i), vy_new(i)) = (vx(i), vy(i))
    (x_new(i), y_new(i)) = (x(i), y(i)) + v(i)*dt
    for j = 0:N-1
        if (j != i)
            F(i,j) = G*m(i)*m(j)/r(i,j)^3 * (x(i)-x(j), y(i)-y(j))
            (vx_new(i), vy_new(i)) = (vx_new(i), vy_new(i)) + F(i,j)/m(i)*dt
for i = 0:N-1
    (x(i), y(i)) = (x_new(i), y_new(i))
    (vx(i), vy(i)) = (vx_new(i), vy_new(i))

Let there be M processors/threads, among which the N particles are divided. For the sake of simplicity, assume that M divides N. Each processor has two sets of arrays – one local set, for which the processor is responsible for updating the x and v values, and another that is passed among the threads.

NP = N/M
m_pass = m(threadId*NP : threadId*NP+NP-1)
x_pass = x(threadId*NP : threadId*NP+NP-1)
y_pass = y(threadId*NP : threadId*NP+NP-1)
for i = threadId*NP : threadId*NP+NP-1
    (vx_new(i), vy_new(i)) = (vx(i), vy(i))
    (x_new(i), y_new(i)) = (x(i), y(i)) + v(i)*dt
for k = 1:M
    for i = threadId*NP : threadId*NP+NP-1
        for j = 0:NP-1
            if (j != i)   ... guard against self-interaction; only relevant
                          ... in the round where a thread holds its own block
                F(i,j) = G*m(i)*m_pass(j)/r(i,j)^3 * (x(i)-x_pass(j), y(i)-y_pass(j))
                (vx_new(i), vy_new(i)) = (vx_new(i), vy_new(i)) + F(i,j)/m(i)*dt
    nextId = (threadId+1) modulo M
    m_pass = m(nextId*NP : nextId*NP+NP-1)
    x_pass = x(nextId*NP : nextId*NP+NP-1)
    y_pass = y(nextId*NP : nextId*NP+NP-1)
for i = threadId*NP : threadId*NP+NP-1
    (x(i), y(i)) = (x_new(i), y_new(i))
    (vx(i), vy(i)) = (vx_new(i), vy_new(i))

Known Uses

Simulation of astronomical bodies under gravity: a collection of interacting masses in free space (no outside fields). The masses affect each other's acceleration and thus velocity and position, as described by the Universal Law of Gravitation. Given the initial positions and velocities of all objects in the system, the computation can update these parameters using the Law, one time step (iteration) at a time.

Other particle codes in physics.

Related Patterns

Particle-in-Cell is a related computation that effectively implements N-body and grid-like computations: objects interacting in a field.

The following PLPP patterns are related to the N-body Interactions pattern:

Dataflow/Pipeline – a naive O(n^2) implementation of the N-body simulation can use a pipeline to exchange (essentially, rotate) the object properties between the PEs to orchestrate the n^2 matchings between the parameters that are used in the update procedure.
Data/Task Parallelism – each object can be updated independently.
Geometric partitioning – the set of bodies has to be partitioned between the PEs geometrically.
Structured/Unstructured Grid – a computational pattern that is useful if the interaction between particles is very local (zero after a small fixed distance).

[1] "Rapid Solution of Integral Equations of Classical Potential Theory", V. Rokhlin, J. Comp. Phys., v. 60, 1985
[2] "A Fast Algorithm for Particle Simulations", L. Greengard and V. Rokhlin, J. Comp. Phys., v. 73, 1987
[3] "N-Body / Particle Simulation Methods", http://www.amara.com/papers/nbody.html – Has a good discussion of variants on n-body methods, including particle-mesh, particle-multiple-mesh, tree-code
[4] "Fast Hierarchical Methods for the N-body Problem, Part 1", http://www.cs.berkeley.edu/~demmel/cs267/lecture26/lecture26.html – Has a detailed discussion of the Barnes-Hut method
[5] "Fast Hierarchical Methods for the N-body Problem, Part 2", http://www.cs.berkeley.edu/~demmel/cs267/lecture27/lecture27.html – Has a detailed discussion of the Fast Multipole Method and parallelization for N-body methods

Yury Markovsky, Erich Strohmaier
Edit 02/24 Narayanan Sundaram
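As a concrete, runnable counterpart to the naive pseudocode above, here is a minimal Python (NumPy) sketch of one O(N^2) time step. The softening term eps (added to avoid division by zero at close encounters) and the numeric constants and initial conditions are illustrative assumptions:

import numpy as np

def nbody_step(x, v, m, dt, G=6.674e-11, eps=1e-9):
    # One naive O(N^2) step: accumulate gravitational accelerations,
    # then update positions from old velocities and velocities from
    # the accumulated accelerations.
    n = len(m)
    a = np.zeros_like(x)
    for i in range(n):
        d = x - x[i]                                  # vectors from body i
        r3 = (np.sum(d * d, axis=1) + eps**2) ** 1.5
        r3[i] = np.inf                                # no self-interaction
        a[i] = G * np.sum((m / r3)[:, None] * d, axis=0)
    return x + v * dt, v + a * dt

# Two equal masses on the x-axis (illustrative values):
x = np.array([[0.0, 0.0], [1.0, 0.0]])
v = np.zeros_like(x)
m = np.array([1e10, 1e10])
x, v = nbody_step(x, v, m, dt=1.0)
print(x, v)

The inner loop is exactly the all-pairs interaction the pipeline version distributes: each iteration of its k loop corresponds to one block of the j range here.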
{"url":"https://patterns.eecs.berkeley.edu/?page_id=193","timestamp":"2024-11-09T05:59:45Z","content_type":"text/html","content_length":"38213","record_id":"<urn:uuid:962ad8a2-3cdd-4548-a46d-a032170e761b>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00439.warc.gz"}
2 cubes each of volume 64 cm³ are joined end to end. Find the surface area of the resulting cuboid. To find the surface area of the resulting cuboid formed by joining two cubes each of volume 64 cm³ end to end, we first determine the dimensions of each cube. The volume of a cube is given by V = a³, where a is the length of each side of the cube. Given that the volume of each cube is 64 cm³, we find the side length by solving 64 = a³, which gives a = 4 cm. When two cubes of side 4 cm are joined end to end, the resulting cuboid will have dimensions 4 cm x 4 cm x 8 cm (since the length is the sum of the lengths of the two cubes, and the width and height remain the same as those of each cube). The surface area SA of a cuboid is given by SA = 2(lw+lh+wh), where l, w, and h are the length, width, and height of the cuboid, respectively. Substituting the dimensions of the cuboid, we get SA = 2(4×4+4×8+4×8) = 2(16+32+32) = 2×80 = 160 cm². Therefore, the surface area of the resulting cuboid is 160 cm². Cuboid Formation from Cubes When we delve into the realm of geometry, particularly in the creation of complex shapes from simpler ones, we encounter intriguing scenarios. One such case is the formation of a cuboid by joining two cubes end to end. This process not only exemplifies the beauty of geometric transformations but also provides a practical application of mathematical concepts. Each cube in our scenario has a volume of 64 cm³. To understand the surface area of the resulting cuboid, we must first comprehend the properties of the individual cubes and how they contribute to the formation of the new shape. This exploration not only enhances our understanding of spatial dimensions but also sharpens our problem-solving skills. Understanding Cube Dimensions The journey begins with deciphering the dimensions of each cube. Given that the volume of a cube is calculated as the cube of its side (expressed as V = a³), we can reverse-engineer this formula to find the side length of our cubes. With each cube having a volume of 64 cm³, we deduce that the side length (a) is 4 cm, as 64 = a³ simplifies to a = 4 cm. This dimension is crucial as it remains constant for the width and height of the resulting cuboid, and it forms the basis for further calculations. Understanding these dimensions is not just a mathematical exercise but also a step towards visualizing how smaller units configure to form larger structures. Formation of the Cuboid The formation of the cuboid from two cubes is a fascinating geometric transformation. By joining the cubes end to end, we effectively create a new shape whose length is the sum of the lengths of the two cubes, while its width and height are equal to the side of a cube. This results in a cuboid with dimensions 4 cm x 4 cm x 8 cm. This process is a perfect example of how geometry can be dynamic and transformative, changing the properties and dimensions of shapes through simple operations. It also illustrates the concept of dimensional addition, where the length of the cuboid is the cumulative result of the lengths of the individual cubes. Calculating the Surface Area The calculation of the surface area of the resulting cuboid is the next step. The formula for the surface area of a cuboid is SA = 2(lw+lh+wh), where l, w, and h represent the length, width, and height, respectively. Substituting the dimensions of our cuboid (4 cm x 4 cm x 8 cm) into this formula, we arrive at SA = 2(4×4+4×8+4×8) = 2(16+32+32) = 2×80 = 160 cm². 
This calculation not only provides the surface area but also reinforces the understanding of how geometric properties are interrelated and how changes in dimensions affect overall properties. Implications and Applications The implications of this exercise extend beyond mere calculation. It demonstrates how basic geometric shapes can be manipulated to form more complex structures, a principle that finds applications in various fields such as architecture, engineering, and design. Understanding the transformation from cubes to a cuboid and the subsequent calculation of surface area equips us with the ability to estimate materials needed for construction, packaging, and manufacturing. This example serves as a testament to the practicality of geometric principles in real-world scenarios. The Beauty of Geometric Transformations In conclusion, the process of forming a cuboid from two cubes and calculating its surface area is a splendid example of the elegance and utility of geometry. It showcases how simple shapes can be combined to form more complex ones and how their properties are intricately linked. This exercise not only enhances our understanding of geometric concepts but also highlights their practical applications in everyday life. It is a reminder of how mathematics is not just about numbers and formulas but also about understanding the world around us in a more structured and logical way. Last Edited: June 12, 2024
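The arithmetic is easy to verify programmatically; a short Python check of the values derived above:

def cuboid_surface_area(l, w, h):
    # Surface area of a cuboid: 2(lw + lh + wh).
    return 2 * (l * w + l * h + w * h)

side = round(64 ** (1 / 3))   # cube side from its volume; rounding guards
assert side ** 3 == 64        # against floating-point error in the cube root
print(cuboid_surface_area(side, side, 2 * side))   # 160 (cm^2)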
{"url":"https://www.tiwariacademy.com/ncert-solutions/class-10/maths/chapter-12/exercise-12-1/2-cubes-each-of-volume-64-cm%C2%B3-are-joined-end-to-end-find-the-surface-area-of-the-resulting-cuboid/","timestamp":"2024-11-12T23:44:37Z","content_type":"text/html","content_length":"241093","record_id":"<urn:uuid:4fb1f2c0-a047-40ec-b6b9-d4d1089fc79e>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00686.warc.gz"}
{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Chemical Reaction Networks\n", "\n", " \n", "\n", "The course contains material derived from the course [Biological Circuit Design by Michael Elowitz and Justin Bois, 2020 at Caltech](http://be150.caltech.edu/2020/content/index.html).\n", "\n", "The original course material has been changed by [Matthias Fuegger](http://www.lsv.fr/~mfuegger/) and [Thomas Nowak](https://www.thomasnowak.net)." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "This lecture covers:\n", "\n", "**Concepts**\n", "\n", "- Chemical reaction networks (CRNs) to describe species and reactions among them\n", "- Determinstic ODE kinetics of CRNs\n", "- Stochastic Markov chain kinetics of CRNs\n", "- Their link via the Langevin equation\n", "\n", "" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Chemical Reactions Networks\n", "\ n", "\n", "A *chemical reaction network* is described by a set $\\mathcal{S}$ of species and a set $\\mathcal{R}$ of reactions.\n", "A *reaction* is a triple $(\\mathbf{r}, \\mathbf{p}, \\alpha)$ where $\\mathbf{r}, \\mathbf{p}\\in \\mathbb{N}_0^{\\mathcal{S}}$ and $\\alpha\\in\\mathbb{R}_{\\geq0}$.\n", "The species with positive count in $\\mathbf{r}$ are called the reaction's *reactants* and those with positive count in $\\mathbf{p}$ are called its *products*.\n", "The parameter $\\alpha$ is called the reaction's *rate constant*." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The dynamical equations\n", "\n", "For simple protein production, we have the following species\n", "* DNA. We assume there is a single promoter followed by a protein-coding gene in the cell\n", "* mRNA, where $m$ is the current number of mRNA corresponding to the above gene\n", "* protein, where $p$ is the current number of proteins corresponding to the above gene\n", "\n", "as well as the following reactions among them:\n", "\\n", "\\text{DNA} &\\rightarrow \\text{mRNA} + \\text{DNA} &\\text{(transcription)}\\\\\n", "\\text{mRNA} &\\rightarrow \\emptyset &\\text{(mRNA degradation and dilution)}\\\\\n", "\\text{mRNA} &\\rightarrow \\text{protein} + \\text{mRNA} &\\text{(translation)}\\\\\n", "\\text{protein} &\\rightarrow \\emptyset &\\text{(protein degradation and dilution)}\n", "\" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Macroscale equations (deterministic, ODE semantics)\n", "As we've seen before, the deterministic dynamics, which describe mean concentrations over a large population of cells, are described by the ODEs\n", "\n", "\\n", "\\frac{\\mathrm{d}m}{\\mathrm{d}t} &= \\beta_m - \\gamma_m m, \\\\[1em]\n", "\\frac{\\mathrm {d}p}{\\mathrm{d}t} &= \\beta_p m - \\gamma_p p.\n", "\\n", "\n", "\n", "In general, the rate of a reaction $(\\mathbf{r}, \\mathbf{p}, \\alpha)$ at time $t$ is $\\alpha \\prod_{s\\in \\mathcal{S}} s (t)^{\\mathbf{r}(s)}$ where $s(t)$ is the concentration of species $s$ at time $t$. The differential equation for a species $s \\in \\mathcal{S}$ is:\n", "$$\n", "\\frac{ds}{dt} = \\sum_{(\\mathbf {r}, \\mathbf{p}, \\alpha) \\in \\mathcal{R}} (\\mathbf{p}(s) - \\mathbf{r}(s)) \\cdot \\alpha \\prod_{\\tilde{s}\\in \\mathcal{S}} \\tilde{s}(t)^{\\mathbf{r}(\\tilde{s})}\n", "$$" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### The Chemical Master equation (stochastic, Markov chain semantics)\n", "\n", "We can write a master equation for these dynamics. 
{ "cell_type": "markdown", "metadata": {}, "source": [ "### The Chemical Master equation (stochastic, Markov chain semantics)\n", "\n", "We can write a master equation for these dynamics. In this case, each state is defined by an mRNA copy number $m$ and a protein copy number $p$.\n", "States can transition to other states with certain rates.\n", "We assume that the state fully describes the system.\n", "That is, the probability of transitioning from a state $(m,p)$ to a state $(m',p')$ within an infinitesimal time $\\Delta t$\n", "is independent of how long the system has already been in state $(m,p)$.\n", "It is approximately $\\Delta t \\cdot \\gamma_i(m,p)$, where $\\gamma_i(m,p)$ is the rate at which reaction $i$ happens in state $(m,p)$.\n", "\n", "The figure referred to here (omitted) shows the state transitions and their corresponding reactions for large enough $m$ and $p$.\n", "Care has to be taken at the boundaries, e.g., if $m = 1$ or $m = 0$.\n", "\n", "Denote by **$P(m, p, t)$** the probability that the system is in state $(m,p)$ at time $t$.\n", "Then, letting $\\Delta t \\to 0$, we obtain\n", "\n", "\\begin{align}\n", "\\frac{\\mathrm{d}P(m,p,t)}{\\mathrm{d}t} &= \\beta_m P(m-1,p,t) & \\text{(from left)}\\\\\n", "&+ \\gamma_m (m+1) P(m+1,p,t) & \\text{(from right)}\\\\\n", "&+ \\beta_p m P(m,p-1,t) & \\text{(from bottom)}\\\\\n", "&+ \\gamma_p (p+1) P(m,p+1,t) & \\text{(from top)}\\\\\n", "&- \\gamma_m m P(m,p,t) & \\text{(to left)}\\\\\n", "&- \\beta_m P(m,p,t) & \\text{(to right)}\\\\\n", "&- \\gamma_p p P(m,p,t) & \\text{(to bottom)}\\\\\n", "&- \\beta_p m P(m,p,t)\\enspace. & \\text{(to top)}\n", "\\end{align}\n", "\n", "We implicitly define $P(m, p, t) = 0$ if $m < 0$ or $p < 0$. This is the master equation we will sample from using the stochastic simulation algorithm (SSA), also called the Gillespie algorithm." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The Gillespie algorithm\n", "\n", "### Propensity\n", "The terms on the right-hand side of the above equation are familiar to us: they almost look like reaction rates, with the single difference that they are functions not of *concentrations* but of *species counts*.\n", "For example, a reaction\n", "$$\n", "A + B \\rightarrow C\n", "$$\n", "with mass-action kinetics and rate constant $\\gamma$ in units of $\\text{L}\\,\\text{s}^{-1}$\n", "has **rate**\n", "$$\n", "\\gamma \\cdot [A] \\cdot [B]\n", "$$\n", "in units of $\\text{L}^{-2} \\cdot \\text{L}\\,\\text{s}^{-1} = \\text{L}^{-1}\\,\\text{s}^{-1}$ (with concentrations measured as counts per volume, i.e., in $\\text{L}^{-1}$).\n", "Concentrations may as well be given in [molar units](https://en.wikipedia.org/wiki/Molar_concentration).\n", "\n", "By contrast, the **propensity** of the reaction is\n", "$$\n", "\\gamma' \\cdot A \\cdot B\n", "$$\n", "in units of $\\text{s}^{-1}$, where\n", "$\\gamma' = \\gamma / \\text{vol}$ is in units of $\\text{s}^{-1}$\n", "and $\\text{vol}$ is the volume of the compartment in which the reactions happen.\n", "\n", "The propensity for a given transition/reaction, say indexed $i$, is denoted as $a_i$.\n", "The equivalence to the notation we introduced for master equations is that if transition $i$ results in the change of state from $n'$ to $n$, then $a_i = W(n\\mid n')$.\n", "\n", "In general, the propensity of a reaction $(\\mathbf{r}, \\mathbf{p}, \\alpha)$ at time $t$ is\n", "$$\n", "\\frac{\\alpha}{v^{o-1}} \\prod_{s\\in \\mathcal{S}} \\binom{s(t)}{\\mathbf{r}(s)}\n", "$$\n", "where $s(t)$ is the *count* of species $s$ at time $t$, $v$ is the volume, and $o = \\sum_{s\\in\\mathcal{S}} \\mathbf{r}(s)$ is the order of the reaction.\n", "Its effect is subtracting $\\mathbf{r}(s)$ from the count of species $s$ and adding $\\mathbf{p}(s)$ to the count of species $s$." ] },
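{ "cell_type": "markdown", "metadata": {}, "source": [ "As a small sketch (not part of the original notes; the rate constant, volume, and counts below are made-up values), this formula can be evaluated directly:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "from math import comb\n", "\n", "def propensity(alpha, volume, reactants, counts):\n", "    # alpha / v**(o - 1) times the product over species of binom(s(t), r(s))\n", "    order = sum(reactants.values())\n", "    a = alpha / volume ** (order - 1)\n", "    for s, r in reactants.items():\n", "        a *= comb(counts[s], r)\n", "    return a\n", "\n", "# Example: A + B -> C with assumed alpha = 0.5 and volume = 2.0\n", "print(propensity(0.5, 2.0, {'A': 1, 'B': 1}, {'A': 10, 'B': 4}))  # 0.25 * 10 * 4 = 10.0\n" ] },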
{ "cell_type": "markdown", "metadata": {}, "source": [ "### Switching states: transition probabilities and transition times\n", "To cast this problem for a Gillespie simulation, we can write each change of state (moving either the copy number of mRNA or protein up or down by 1 in this case) and their respective propensities:\n", "\n", "$$\n", "\\begin{array}{ll}\n", "\\text{reaction, }r_i & \\text{propensity, } a_i \\\\\n", "m \\rightarrow m+1,\\;\\;\\;\\; & \\beta_m \\\\[0.3em]\n", "m \\rightarrow m-1, \\;\\;\\;\\; & \\gamma_m m\\\\[0.3em]\n", "p \\rightarrow p+1, \\;\\;\\;\\; & \\beta_p m \\\\[0.3em]\n", "p \\rightarrow p-1, \\;\\;\\;\\; & \\gamma_p p\\enspace.\n", "\\end{array}\n", "$$\n", "\n", "We will not carefully prove that the Gillespie algorithm samples from the probability distribution governed by the master equation, but will state the principles behind it. The basic idea is that events (such as those outlined above) are rare, discrete, separate events, i.e., each event is an arrival of a Poisson process. The Gillespie algorithm starts with some state, $(m_0,p_0)$. Then a state change, *any* state change, will happen in some time $\\Delta t$ that has a certain probability distribution (which we will show is exponential momentarily).\n", "\n", "#### transition probabilities\n", "The probability that the state change that happens is because of reaction $j$ is proportional to $a_j$.\n", "That is to say, state changes with high propensities are more likely to occur.\n", "Thus, choosing which of the $k$ state changes happens in $\\Delta t$ is a matter of drawing an integer $j \\in [1,k]$, where the probability of drawing $j$ is\n", "$$\n", "\\frac{a_j}{\\sum_i a_i}\\enspace.\n", "$$\n", "\n", "#### transition times\n", "Now, how do we determine how long the state change took?\n", "Let $T_i(m,p)$ be the random variable for the time at which reaction $i$ occurs in state $(m,p)$, given that it is reaction $i$ that results in the next state.\n", "The probability density function $p_i$ of $T_i$ is\n", "$$\n", "p_i(t) = a_i\\, \\mathrm{e}^{-a_i t}\\enspace,\n", "$$\n", "for $t \\geq 0$, and $0$ otherwise.\n", "This is known as the [exponential distribution](https://www.randomservices.org/random/poisson/Exponential.html) with rate parameter $a_i$ (related, but not equal, to the rate of the reaction).\n", "\n", "The probability that it has *not* occurred by time $\\Delta t$ is thus\n", "$$\n", "P(T_i(m,p) > \\Delta t \\mid \\text{reaction } r_i \\text{ occurs}) = \\int_{\\Delta t}^\\infty p_i(t)\\, \\mathrm{d}t = \\mathrm{e}^{-a_i \\Delta t}\\enspace.\n", "$$\n", "\n", "However, in state $(m,p)$ there are several reactions that may make the system transition to the next state.\n", "Say we have $k$ reactions that arrive at times $t_1, t_2, \\ldots, t_k$.\n", "When does the first one of them arrive?\n", "\n", "The probability that *none* of them arrives before $\\Delta t$ is\n", "\\begin{align}\n", "P(t_1 > \\Delta t \\wedge t_2 > \\Delta t \\wedge \\ldots) &= P(t_1 > \\Delta t)\\, P(t_2 > \\Delta t) \\cdots\\\\\n", "&= \\prod_i \\mathrm{e}^{-a_i \\Delta t} = \\exp\\left(-\\Delta t \\sum_i a_i\\right)\\enspace.\n", "\\end{align}\n", "This is equal to $P(T(m,p) > \\Delta t \\mid \\text{reaction } R \\text{ occurs})$ for a reaction $R$ with propensity $\\sum_i a_i$.\n", "For such a reaction the occurrence times are exponentially distributed with rate parameter $\\sum_i a_i$.\n", "\n", "### The algorithm\n", "So, we know how to choose a state change and we also know how long it takes.\n", "The Gillespie algorithm then proceeds as follows (a minimal implementation is sketched right after this cell).\n", "\n", "1. Choose an initial condition, e.g., $m = p = 0$.\n", "2. 
Calculate the propensity for each of the enumerated state changes. The propensities may be functions of $m$ and $p$, so they need to be recalculated for every $m$ and $p$ we encounter.\n", "3. Choose how much time the reaction will take by drawing out of an exponential distribution with a mean equal to $\\left(\\sum_i a_i\\right)^{-1}$. This means that a change arises from a Poisson process.\n", "4. Choose what state change will happen by drawing a sample out of the discrete distribution where $P_i = \\left.a_i\\middle/\\left(\\sum_i a_i\\right)\\right.$. In other words, the probability that a state change will be chosen is proportional to its propensity.\n", "5. Increment time by the time step you chose in step 3.\n", "6. Update the states according to the state change you chose in step 4.\n", "7. If $t$ is less than your pre-determined stopping time, go to step 2. Else stop.\n", "\n", "Gillespie proved that this algorithm samples the probability distribution described by the master equation in his seminal papers in [1976](https://doi.org/10.1016/0021-9991(76)90041-3) and [1977](http://doi.org/10.1021/j100540a008). (We recommend reading the latter.) You can also read a concise discussion of how the algorithm samples the master equation in [Section 4.2 of Del Vecchio and Murray](http://www.cds.caltech.edu/~murray/books/AM08/pdf/bfs-stochastic_14Sep14.pdf)." ] },
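{ "cell_type": "markdown", "metadata": {}, "source": [ "The following cell is a minimal sketch of steps 1-7 for the simple protein production model above (not part of the original notes; the rate constants are the same illustrative, made-up values used earlier):" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "rng = np.random.default_rng(0)\n", "\n", "# Illustrative rate constants (assumed values, not from the text)\n", "beta_m, gamma_m = 10.0, 1.0\n", "beta_p, gamma_p = 5.0, 0.1\n", "\n", "# Net state changes (dm, dp) for the four reactions in the table above\n", "updates = [(1, 0), (-1, 0), (0, 1), (0, -1)]\n", "\n", "def gillespie(t_stop, m=0, p=0):\n", "    # Step 1: initial condition; then iterate steps 2-7\n", "    t, times, states = 0.0, [0.0], [(m, p)]\n", "    while True:\n", "        # Step 2: propensities for m+1, m-1, p+1, p-1\n", "        a = np.array([beta_m, gamma_m * m, beta_p * m, gamma_p * p])\n", "        a_total = a.sum()\n", "        # Step 3: waiting time is exponential with mean 1 / sum_i a_i\n", "        t += rng.exponential(1.0 / a_total)\n", "        if t > t_stop:\n", "            break  # step 7: reached the stopping time\n", "        # Step 4: pick reaction j with probability a_j / sum_i a_i\n", "        j = rng.choice(4, p=a / a_total)\n", "        # Steps 5-6: time is already incremented; update the state\n", "        m, p = m + updates[j][0], p + updates[j][1]\n", "        times.append(t)\n", "        states.append((m, p))\n", "    return np.array(times), np.array(states)\n", "\n", "times, states = gillespie(50.0)\n", "print(states[-1])  # one sample of (m, p) at t = 50\n" ] },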
{ "cell_type": "markdown", "metadata": {}, "source": [ "## The Chemical Langevin Equation\n", "\n", "The Chemical Master Equation can be difficult to solve, which is why several approximations were developed. One of these approximations is the Chemical Langevin Equation, which we will now derive. It is perfectly useful on its own, in particular from a computational perspective. We present it here for another reason as well: to show that the ODE kinetics of a CRN are an approximation to the expected stochastic kinetics, at least for linear CRNs.\n", "\n", "It starts from the exact update equation\n", "$$\n", "X_i(t + \\tau) = X_i(t) + \\sum_{j=1}^M \\nu_{j,i} K_j(X(t), \\tau)\n", "$$\n", "where $\\nu_{j,i}$ is the net change in the count of species $i$ in reaction $j$ and $K_j(X(t), \\tau)$ is the random variable that specifies how many times reaction $j$ occurs in the time interval $[t, t+\\tau]$.\n", "\n", "**Assumption 1** of the chemical Langevin equation is that the propensity functions do not change significantly during the time interval $[t, t+\\tau]$, i.e., $a_j(X(t')) \\approx a_j(X(t))$ for all $t' \\in [t,t+\\tau]$.\n", "\n", "Then, we can rewrite the above equation as\n", "$$\n", "X_i(t + \\tau) = X_i(t) + \\sum_{j=1}^M \\nu_{j,i} \\mathcal{P}_j(a_j(X(t)) \\cdot \\tau)\n", "$$\n", "where the $\\mathcal{P}_j(a_j(X(t)) \\cdot \\tau)$ are independent Poisson random variables with parameter $a_j(X(t)) \\cdot \\tau$.\n", "\n", "**Assumption 2** of the chemical Langevin equation requires the expected number of occurrences of each reaction to be big, i.e., $a_j(X(t))\\cdot \\tau \\gg 1$.\n", "\n", "Note that the two assumptions require a trade-off: assumption 1 wants $\\tau$ to be small whereas assumption 2 wants $\\tau$ to be big. It may well be that no choice of $\\tau$ satisfies both assumptions for a given system.\n", "\n", "It is [well known](https://en.wikipedia.org/wiki/Poisson_distribution#General) that the Poisson distribution with parameter $\\lambda$ is well approximated by the normal distribution $\\mathcal{N}(\\lambda,\\lambda)$ with expected value $\\mu = \\lambda$ and variance $\\sigma^2 = \\lambda$ for large values of $\\lambda$.\n", "\n", "Assumption 2 thus suggests using this approximation to get\n", "$$\n", "\\mathcal{P}_j(a_j(X(t)) \\tau) \\approx a_j(X(t))\\tau + \\sqrt{a_j(X(t))\\tau} \\cdot \\mathcal{N}(0,1)\n", "$$\n", "where $\\mathcal{N}(0,1)$ is a standard normally distributed random variable.\n", "\n", "Now, switching to $X(t)$ being real-valued and setting $\\tau = \\mathrm{d}t$, in the limit $\\mathrm{d}t \\to 0$, we finally get the chemical Langevin equation:\n", "$$\n", "\\frac{\\mathrm{d}X_i(t)}{\\mathrm{d}t} = \\sum_{j=1}^M \\nu_{j,i} a_j(X(t)) + \\sum_{j=1}^M \\nu_{j,i} \\sqrt{a_j(X(t))}\\, \\Gamma_j(t)\n", "$$\n", "where the $\\Gamma_j$ are independent standard Gaussian white noise processes." ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## From Stochastic to Deterministic Kinetics\n", "\n", "If the assumptions of the chemical Langevin equation hold sufficiently well, it is easy to make the connection between the stochastic and the deterministic kinetics of CRNs. Taking the expected value (over an ensemble of sample paths) on both sides of the equation, and using that the white noise terms have zero mean, we get:\n", "$$\n", "\\frac{\\mathrm{d}\\,\\mathbb{E} X_i(t)}{\\mathrm{d}t} = \\sum_{j=1}^M \\nu_{j,i}\\, \\mathbb{E}\\, a_j(X(t))\n", "$$\n", "\n", "If the propensities follow a mass-action law, the expected value $\\mathbb{E}\\, a_j(X(t))$ is in general only an approximation to $a_j(\\mathbb{E} X(t))$. If there are only unary reactions, so that all propensities are linear, the two are equal, and the expected stochastic kinetics satisfy the deterministic ODEs exactly.\n", "\n", "The detour via the chemical Langevin equation is not the only way to arrive at the above formula. In a large-volume (or large-species-count) limit, it can be derived directly from the Chemical Master Equation." ] },
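{ "cell_type": "markdown", "metadata": {}, "source": [ "To close the loop, the sketch below integrates the chemical Langevin equation for the protein production model with a simple Euler-Maruyama scheme (our choice of discretization, not prescribed in the notes; the parameters are the same made-up values as above) and compares the sample mean to the deterministic fixed point $(10, 500)$:" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import numpy as np\n", "\n", "rng = np.random.default_rng(1)\n", "\n", "beta_m, gamma_m = 10.0, 1.0   # illustrative values, as above\n", "beta_p, gamma_p = 5.0, 0.1\n", "\n", "# nu[j] = net change (dm, dp) caused by reaction j\n", "nu = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)\n", "\n", "def langevin(t_stop, dt=0.01):\n", "    # Euler-Maruyama integration of the chemical Langevin equation\n", "    x = np.zeros(2)  # (m, p), now real-valued\n", "    for _ in range(int(t_stop / dt)):\n", "        a = np.array([beta_m, gamma_m * x[0], beta_p * x[0], gamma_p * x[1]])\n", "        a = np.clip(a, 0.0, None)  # guard against small negative excursions\n", "        noise = rng.standard_normal(4)\n", "        x = x + nu.T @ (a * dt) + nu.T @ (np.sqrt(a * dt) * noise)\n", "    return x\n", "\n", "samples = np.array([langevin(50.0) for _ in range(50)])\n", "print(samples.mean(axis=0))  # should be close to (10, 500)\n" ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.12.4" } }, "nbformat": 4, "nbformat_minor": 4 }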
A cone has a height of 24 cm and its base has a radius of 8 cm. If the cone is horizontally cut into two segments 3 cm from the base, what would the surface area of the bottom segment be? | HIX Tutor A cone has a height of #24 cm# and its base has a radius of #8 cm#. If the cone is horizontally cut into two segments #3 cm# from the base, what would the surface area of the bottom segment be? Answer 1 Total surface area of the bottom segment is #504.02 (2dp)# sq.cm The cone is cut at 3 cm from the base, so the upper radius of the frustum of the cone is #r_2=(24-3)/24*8=7# cm, and the base radius is #r_1=8# cm; Slant ht: #l=sqrt(3^2+(8-7)^2)=sqrt(9+1)=sqrt 10~~3.16# cm. Top surface area #A_t=pi*7^2 ~~153.94# sq.cm Bottom surface area #A_b=pi*8^2~~201.06# sq.cm Slant Area #A_s=pi*l*(r_1+r_2)=pi*3.16*(8+7)~~149.02# sq.cm. Total surface area of bottom segment is #=A_t+A_b+A_s=153.94+201.06+149.02~~504.02 (2dp)# sq.cm Answer 2 The bottom segment is a frustum with base radius ( r_1 = 8 ) cm and, by similar triangles, top radius [ r_2 = \frac{24-3}{24} \times 8 = 7 \text{ cm}. ] Its slant height is [ l = \sqrt{3^2 + (8-7)^2} = \sqrt{10} \approx 3.162 \text{ cm}. ] The total surface area of the frustum (base disk + cut face + lateral surface) is [ A = \pi r_1^2 + \pi r_2^2 + \pi (r_1 + r_2) l ] [ A = 64\pi + 49\pi + 15\pi\sqrt{10} \approx 201.06 + 153.94 + 149.02 \approx 504.02 \text{ cm}^2. ] So, the surface area of the bottom segment of the cone is approximately (504.02) square centimeters. Answer 3 To find the surface area of the bottom segment, subtract the lateral surface of the small cone above the cut from that of the whole cone, then add the two flat faces. The slant height of the whole cone is [ l = \sqrt{8^2 + 24^2} = \sqrt{640} \approx 25.30 \text{ cm}, ] so its lateral surface is [ \pi r l = \pi \times 8 \times 25.30 \approx 635.82 \text{ cm}^2. ] The small cone above the cut has height ( 24 - 3 = 21 ) cm and, by similar triangles, radius [ r' = \frac{8}{24} \times 21 = 7 \text{ cm}, ] hence slant height ( l' = \sqrt{7^2 + 21^2} = \sqrt{490} \approx 22.14 ) cm and lateral surface [ \pi r' l' \approx \pi \times 7 \times 22.14 \approx 486.80 \text{ cm}^2. ] The lateral surface of the bottom segment is therefore ( 635.82 - 486.80 \approx 149.02 \text{ cm}^2 ). Adding the base disk ( \pi \times 8^2 \approx 201.06 \text{ cm}^2 ) and the cut face ( \pi \times 7^2 \approx 153.94 \text{ cm}^2 ) gives [ A \approx 149.02 + 201.06 + 153.94 \approx 504.02 \text{ cm}^2. ] So, the surface area of the bottom segment of the cone is approximately 504.02 square centimeters.
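As a quick numerical cross-check of the answers above (a small Python sketch; not part of the original answers, and the variable names are ours):

import math

r1, h, cut = 8.0, 24.0, 3.0
r2 = r1 * (h - cut) / h             # radius at the cut, by similar triangles: 7.0
slant = math.hypot(cut, r1 - r2)    # slant height of the frustum: sqrt(10) ~ 3.162
area = math.pi * (r1**2 + r2**2 + (r1 + r2) * slant)
print(round(area, 2))               # prints 504.02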
DIEP | Dutch Institute for Emergent Phenomena >> DIEP seminars and talks - archives 2023 Below you can find more information on the DIEP seminars of 2023. >> DIEP Seminar: Eric Dignum (University of Amsterdam) Computational modelling of school choice and school segregation: from theoretical to data-driven agent-based models | 11am, 14th of December 2023 Many educational systems still exhibit substantial levels of school segregation along various lines, such as race, ethnicity, household income levels, parental educational attainment and ability. This means that, globally, pupils with similar characteristics cluster together in the same schools. This is widely acknowledged to reproduce or even exacerbate inequalities and result in unequal outcomes. Hence, despite decades of research and policies aimed at counteracting school segregation, it is still a persistent societal problem. Existing literature shows that factors affecting school segregation and individual components in the system of school choice interact with each other. These interactions are reasoned to be an important mechanism through which the levels of school segregation emerge on the macro-level, but commonly used qualitative and quantitative analysis methodologies (e.g. discrete choice models, interviews, surveys) often ignore these dependencies between the components and their consequences (e.g. feedback loops, non-linearity). Using methodologies that ignore these interactions could be one explanation of why school segregation is still plaguing our educational systems. However, tools from complex systems, such as Agent-Based Models (ABMs), can potentially offer a complementary methodology that explicitly accounts for such interactions and could therefore complement our understanding of their effect on school segregation. In this talk, two examples of ABMs are presented which illustrate that a complexity perspective on school choice dynamics can meaningfully contribute to the field of school choice and school segregation. Firstly, using a highly stylised ABM, an alternative explanation is given of why schools are often more segregated than neighbourhoods. Households do not have to be less tolerant of school compositions than of neighbourhood compositions. Instead, this ABM demonstrates that asymmetric preferences are not a requirement for excess school segregation, but that residential segregation combined with distance preferences and interacting components can also play a key role. However, these highly stylised models have limited applicability to reality and actual policy scenarios. Therefore, we also present ongoing work on a methodology to calibrate large-scale ABMs on empirical data in the context of school choice. We show that this methodology is able to retrieve the (artificial) ground truth within reasonable accuracy. Additionally, (computational) challenges and open questions are discussed, focusing on the usage of household-level register data. >> DIEP Seminar: Christian List (LMU Munich) Do group agents have free will? | 11am, 30th of November 2023 It is common to ascribe agency to some organized collectives, such as corporations, courts, and states, and to treat them as loci of responsibility, over and above their individual members. But since responsibility is often assumed to require free will, should we also think that group agents have free will? Surprisingly, the literature contains very few in-depth discussions of this question (a notable exception is Hess 2014).
I will argue that group agents can have free will not only in a relatively undemanding compatibilist sense, but also in a recognizably libertarian sense, which includes the possibility of doing otherwise. In developing this account of corporate free will, I will bring together recent work on group agency and recent work on free will. >> DIEP Seminar: Renee Hoekzema (VU Amsterdam) Multiscale methods for gene selection in single cell transcriptomics data | 11am, 23rd of November 2023 Single cell transcriptomics is a revolutionary technique in biology that allows for the measurement of gene expression levels in many individual cells simultaneously. Analysis of these large datasets reveals complex variation in expression patterns between cells. Current methods for analysis assume that cell types are discrete. However, in practice there is also continuous variation between cells: subtypes of subtypes, differentiation pathways, responses to environment or treatment, et cetera. The complexity found in modern single cell transcriptomics datasets calls for intricate methods to biologically interpret both discrete clusters and continuous variations. We propose topologically-inspired data analysis methods that identify coherent gene expression patterns on multiple scales, considering discrete and continuous patterns on equal footing. As well as finding new biologically meaningful genes, the methodology allows one to visualise and explore the space of gene expression patterns in the dataset. >> DIEP Seminar: Meike Wortel (University of Amsterdam) Eco-evolutionary mechanisms to explain species and strain diversity | 11am, 16th of November 2023 I will discuss a recent paper https://onlinelibrary.wiley.com/doi/full/10.1111/jeb.14158 where I investigated whether fluctuating nutrients can (partly) explain the diversity observed in microbial communities, such as oceans or the human gut. When nutrient levels fluctuate over time, one possibly relevant mechanism is coexistence between specialists on low and specialists on high nutrient levels. The relevance of this process is supported by observations of coexistence in the laboratory, and by simple models, which show that negative frequency dependence of two such specialists can stabilize coexistence. However, as microbial populations are often large and fast growing, they evolve rapidly. I will discuss how we determine what happens when species can evolve: whether evolutionary branching can create diversity or whether evolution will destabilize coexistence. We derive an analytical expression for the invasion fitness in fluctuating environments and use adaptive dynamics techniques to find that evolutionarily stable coexistence requires a special type of trade-off between growth at low and high nutrients. I will also discuss current work on a different mechanism, where one phenotype degrades a toxin, which allows another phenotype to grow (cross-protection), and ideas on how to generalize results from both of these studies to study the interaction of more mechanisms in larger networks. >> DIEP Seminar: Christian Hamster (University of Amsterdam) Journal club on modelling evolutionary dynamics | 11am, 9th of November 2023 Real world ecosystems can be very complex, with many species coexisting that are functionally very similar. For example, there are many different plankton species in the oceans competing for a small number of resources. Mathematical modelling, however, indicates that only a few species should survive, the 'most fit' species.
This discrepancy is known as the paradox of the plankton. Our approach starts with a simple resource-consumer model, and at a random timepoint, we introduce a new species that is a small random perturbation of an already existing species. This allows us to numerically mimic evolution and build complex ecosystems. This approach raises a lot of questions, both mathematically and biologically. From a mathematical perspective, can we understand and describe the numerical solutions? And from a biological perspective, what constitutes a species in our system? Is every perturbation a new species, or just genetic variation within a species, and should we look at clusters of species? We can discuss these and many other questions during the meeting. >> DIEP Seminar: Han van der Maas (University of Amsterdam) Sociophysics: an overview and new ideas | 11am, 2nd of November 2023 To develop a mathematical and computational toolbox for studying the interdependencies between polarization, segregation, and inequality, an overview of modeling in sociophysics is needed. In this talk I will try to provide such an overview. I will also present some new research directions in the field of polarization. >> DIEP Seminar: Liang Li (Max Planck Institute of Animal Behavior) Schooling fish: from biology to robotics and back | 11am, 26th of October 2023 With over half a billion years behind them, fish have evolved to swim with remarkable efficiency, agility, and stealth in their three-dimensional aquatic world. Given this, it's natural that engineers often look to fish for inspiration when developing underwater propulsion systems. Over the years, roboticists have been inspired by these biological marvels to design fish-like robots that mimic real fish in terms of morphology, locomotion, and movement. Interestingly, the trend has recently shifted from merely drawing inspiration from biology to using robotics as a tool for better understanding biological processes. In this talk, I will first discuss our approach to designing and controlling these robotic fish, rooted in the concept of bio-inspiration. I will then provide examples of how we employ both real and virtual robots to investigate the mechanisms of collective behaviour in schooling fish. To conclude, I'll offer a glimpse into my current and future endeavors in the realms of robotics and biology. >> DIEP Seminar: Alexandru Baltag & Sonja Smets (University of Amsterdam) Logic Meets Wigner’s Friend(s): the epistemology of quantum observers | 11am, 19th of October 2023 This presentation is about Wigner's Friend thought-experiment [1], and its more recent variations [2] and extensions such as the Frauchiger-Renner (FR) Paradox [3], which have recently reignited the debates in the foundations of quantum theory. Such thought experiments seem to indicate that, if quantum theory is assumed to be universally valid (and hence can be applied to multi-partite systems that may include classical observers), then different agents are rationally entitled to ascribe different (mutually inconsistent) states to the same system, and as a result they cannot share their information in a consistent manner. More precisely, the result in [3] is stated in the format of a no-go theorem.
To analyze this problem, we focus on a few questions: what is the correct epistemic interpretation of the multiplicity of state assignments in these scenarios?; under which conditions can one include classical observers into the quantum state descriptions, in a way that is still compatible with Quantum Mechanics?; under which conditions can one system be admitted as an additional ‘observer’ from the perspective of another (background) observer?; when can the standard axioms of multi-agent Epistemic Logic (that allow for “knowledge transfer” between agents) be applied to quantum-physical observers? After discussing some of the various answers to these questions proposed in the literature, we propose a new such answer, sketch a particular formal implementation of it, and apply it to obtain a principled solution to Wigner Friend-type paradoxes. The presentation is based on recent joint work [4]. [1] E.P. Wigner, Remarks on the mind-body question, in I.J. Good, The Scientist Speculates, London: Heinemann, 1961. [2] D. Deutsch, Quantum theory as a universal physical theory, International Journal of Theoretical Physics, 24, I, 1985. [3] D. Frauchiger and R. Renner, Quantum theory cannot consistently describe the use of itself, Nature Communications, 9(1):3711. Preprint, arXiv:1604.07422, 2018. [4] A. Baltag and S. Smets, Logic meets Wigner’s Friend (and their Friends), preprint, arXiv:2307.01713 [quant-ph], July 2023. >> DIEP Seminar: Charlotte Hemelrijk (University of Groningen) Self-organized motion and collective escape in schools of fish and flocks of birds | 11am, 12th of October 2023 It is mysterious how schools of fish and flocks of birds, such as the huge flocks of starlings, coordinate their members, particularly when the shape and density of the flock change miraculously, for instance during collective escape from a predator by collective turning, flock-splitting, flash expansion and the so-called wave of agitation. These fascinating questions need to be solved by combining computational models with empirical data. Empirical data have been collected on many aspects, such as the shapes of the flock, the degree of motion of individuals within a flock and the collective patterns of evasion of a predator, for different species of birds when they are chased by a robotic bird. In the present talk, we will use computational models based on self-organization, such as StarDisplay and others, because flocks and schools in these models resemble empirical data in many respects. We will show what causes the shape of a fish school to be on average oblong and that of huge flocks of starlings to be continuously changing; whether or not starlings keep close to a familiar neighbour; and what may underlie the internal motion, flock splitting and the wave of agitation in schools and flocks. >> DIEP Seminar: Alexandre Genin (University of Utrecht) Spatial self-organization and its consequences in simple and complex ecosystems | 11am, 5th of October 2023 Because species interact with each other, the spatial organisation of ecosystems is seldom random, but most often self-organized into specific spatial patterns. A seminal example is the case of positive interactions between plants producing emergent patches, bands, or fractal patterns in a landscape, with important consequences for its resilience to perturbations. In this seminar, we will first focus on simple but widespread examples of such ecological interactions leading to spatial self-organisation.
We will show how the non-random, emergent spatial structure can be quantified to help us understand and predict ecosystem collapse, showcasing a recent application to the coral reefs of Rapa Nui (Easter Island, Chile). We will then discuss how these relatively simple principles can be extended to species-rich systems, to understand the link between ecological interactions and spatial self-organisation, this time in more complex settings. >> DIEP Seminar: Conor Finn (Max Planck Institute for Mathematics in the Sciences) Pointwise information decomposition for complex systems | 11am, 28th of September 2023 The aim of information decomposition is to provide a mathematical framework that partitions the total information provided by a set of source variables about a target variable into i) the unique information that is provided by each individual source variable, ii) the shared information that is redundantly provided by two or more source variables, and iii) the synergistic information that is only attainable from simultaneous knowledge of two or more source variables. Such a decomposition has many potential applications in the sciences: for instance, quantifying the synergistic interaction between multiple incoming neural stimuli that are fused to create some output signal, or determining the extent to which a particular phenotypic trait depends uniquely on each individual source gene, redundantly on two or more genes, or is determined by some synergistic combination of several genes. Most approaches to information decomposition focus on decomposing the average information provided by the variables involved. In this talk, I will discuss why this is unsatisfactory, especially when it comes to analysing complex systems. I will provide a brief overview of the pointwise perspective on information theory and then discuss the challenges associated with determining a pointwise information decomposition. I will then discuss the pointwise partial information decomposition that we introduced, before closing by demonstrating how this theory can be used to quantify emergent intrinsic computation in canonical complex systems, namely elementary cellular automata. >> DIEP Seminar: Daniel Miranda (Federal University of Pernambuco) Phenomenological renormalization group analysis of cortical spiking data | 10am, 21st of September 2023 The critical brain hypothesis has emerged in the last decades as a fruitful theoretical framework for understanding collective neuronal phenomena. Lending support to the idea that the brain operates near a phase transition, Beggs and Plenz were the first to report experimentally recorded neuronal avalanches, whose distributions coincide with the mean-field directed percolation (DP) universality class, which comprises a variety of models in which a phase transition occurs between an absorbing (silent) and an active phase. However, this hypothesis is highly debated, as neuronal avalanche analyses and other common statistical mechanics tools may struggle with challenges ubiquitous in living systems, such as subsampling, long-range correlations and the absence of an explicit model for the complete neuronal dynamics. In this context, Meshulam et al. recently proposed a phenomenological renormalization group (PRG) method to deal with neural networks' typical long-range interactions via a model-independent analysis. The procedure consists of repeatedly manipulating the data, obtaining an increasingly coarse-grained description of the activity after each iteration.
Under a critical regime, non-trivial correlations and scale-free behavior should be unveiled as we simplify our description. This can be inferred from a series of statistical features of the data, which lead us to different scaling relations. Here, we apply this phenomenological renormalization group (PRG) to different experimental setups. Additionally, we investigate how the scaling exponents found via PRG behave as we parse our data by its coefficient of variation (CV); this measurement has appeared in recent literature as a means of tracking different cortical states through spiking variability. >> DIEP Seminar: Ro Jefferson (Utrecht University) Physics ∩ Machine Learning | 11am, 14th of September 2023 Machine learning has become both powerful and ubiquitous, but remains a black box whose internal workings are still largely unclear. In this talk, I will discuss some interesting connections between ideas in physics (in particular QFT and the renormalization group) and deep neural networks, which collectively motivate a physics-based approach towards a theory of deep learning. >> DIEP Seminar: Daniele Marinazzo (University of Ghent) (Higher-order) informational interactions: ideas, implementations, and applications in neurosciences and behavioral sciences | 11am, 7th of September 2023 Systems composed of many units, whose behavior goes beyond the sum of the behaviors of the single units, are ubiquitous. Examples relevant to what we do are the brain, the body as a whole, and the social systems we live in. When it comes to analyzing collective behavior we are often stuck with pairwise dependencies (often correlations). In this talk, I will describe a framework rooted in information theory to mine multiplets of variables sharing common information about the variability of complex systems, and provide some examples in neuroscience, physiology, and psychometrics. >> DIEP Seminar: Pierre Baudot (Median Technologies) Information cohomology, higher-order statistical interactions, complexity and deep networks | 11am, 29th June 2023 Information theory, probability and statistical dependencies, and algebraic topology provide different views of a unified theory still in development, where uncertainty goes as deep as Galois's ambiguity theory, topos theory and motives. I will review some foundations that uniquely characterize entropy as the first cohomology group on complexes of random variables and probability laws. This framework allows one to retrieve most of the usual information functions, like the KL divergence and cross entropy, and was extended by Juan Pablo Vigneaux to Tsallis entropies and differential entropy. Multivariate interaction/mutual informations (I_k and J_k) appear as coboundaries, and their negative minima, also called synergy, correspond to homotopical link configurations which, like Borromean links, illustrate what purely collective interactions or emergence can be. Those functions refine and characterize statistical independence in the multivariate case, in the sense that (X1,...,Xn) are independent iff all the I_k=0 (with 1<k<n+1; whereas for total correlations G_k, it is sufficient that G_n=0), generalizing the correlation coefficient.
Concerning data analysis, restricting to the simplicial random variable structure sub-case, the application of the formalism to genetic transcription or to some classical benchmark datasets using the open-access infotopo library reveals that higher-order statistical interactions are nonetheless omnipresent but also constitutive of biologically relevant assemblies. On the side of deep networks, information cohomology provides a topological and combinatorial formalization of deep networks' supervised and unsupervised learning, where the depth of the layers is the simplicial dimension and derivation-propagation is forward (co-homological). Recently, Leon Lang generalized higher-order mutual informations, notably to Tsallis entropy and Kolmogorov complexity, and Tom Mainiero further established a general associated index, or “Euler characteristic”, given by the Tsallis mutual informations on a weighted simplicial complex whose topology retains information about the correlations between various subsystems. Tsallis mutual informations hence open a new and unexplored territory for the study of higher-order interactions, both theoretically and in applications to data and the natural sciences. >> DIEP Seminar: Abel Jansma (University of Edinburgh) The information theory of higher-order interactions | 11am, 22nd June 2023 Information-theoretic quantities reveal dependencies among variables in the structure of joint, marginal, and conditional entropies, but leave some fundamentally different systems indistinguishable. Furthermore, there is no consensus on how to construct and interpret a higher-order generalisation of mutual information (MI). In this talk, I will show that a recently proposed model-free definition of higher-order interactions amongst binary variables (MFIs), like mutual information, is a Möbius inversion on a Boolean algebra, but of surprisal instead of entropy. This gives an information-theoretic interpretation to the MFIs, and by extension to Ising interactions. We will study the dual objects to MI and MFIs on the order-reversed lattice, and find that dual MI corresponds to conditional mutual information, while dual interactions (outeractions) are interactions with respect to a different background state. Unlike mutual information, in- and outeractions uniquely identify all six 2-input logic gates, the dy- and triadic distributions, and different causal dynamics that are identical in terms of their Shannon-information content. >> DIEP Seminar: Mike Lees (UvA) Measuring, Modelling and Simulating Crowd Dynamics: Mobility to Epidemics | 11am, 15th June 2023 In this talk I will present the challenges of understanding and modelling the multi-disciplinary problem of human mobility and crowd dynamics. I'll highlight our attempts to conduct empirical experiments on human crowds and demonstrate the technological and technical challenges that this presents. I'll show the classical approaches used by computational scientists when modelling crowds and the challenges of connecting measurement to models. I'll showcase two ongoing projects where we attempt to measure and model crowd dynamics to understand the spread of infectious disease. Firstly, the Kumbh Mela Experiment, where we measure and model pilgrims in Ujjain, India, to estimate the spread of tuberculosis. Secondly, a project in the Johan Cruyff Arena where we use Wi-Fi data to try and connect human movement ecology (e.g., Lévy flight dynamics) to the exposure and spread of infectious diseases.
>> DIEP Seminar: Jay Armas (DIEP) Risk aversion promotes cooperation | 11am, 8th June 2023 Cooperation is at the heart of many phenomena in living and complex systems, including multicellularity, eusociality in insect societies, human communities and financial markets. Many mechanisms that lead to the emergence of cooperation in small groups have been proposed and studied in depth in the past decades. Yet, little is known about the existence of potential mechanisms that can sustain large-scale cooperation involving a very large group of individuals. I will combine chemical reaction networks with evolutionary game theory and stochastic methods to study simple models of interacting groups and show, under certain assumptions, that if individuals are risk averse, cooperation can emerge in large groups. >> DIEP Seminar: Diego Garlaschelli (University of Leiden and the IMT School of Advanced Studies, Lucca, Italy) Multiscale network renormalization: scale-invariance without geometry | 11am, 1st June 2023 Systems with lattice geometry can be renormalized exploiting their coordinates in metric space, which naturally define the coarse-grained nodes. By contrast, complex networks defy the usual techniques, due to their small-world character and lack of explicit geometric embedding. Current network renormalization approaches require strong assumptions (e.g. community structure, hyperbolicity, scale-free topology), thus remaining incompatible with generic graphs and ordinary lattices. Here we introduce a graph renormalization scheme valid for any hierarchy of coarse-grainings, thereby allowing for the definition of `block-nodes' across multiple scales. This approach reveals a necessary and specific dependence of network topology on additive hidden variables attached to nodes, plus optional dyadic factors. Renormalizable networks turn out to be consistent with a unique specification of the fitness model, while they are incompatible with preferential attachment, the configuration model or the stochastic blockmodel. These results highlight a deep conceptual distinction between scale-free and scale-invariant networks, and provide a geometry-free route to renormalization. If the hidden variables are annealed, they lead to realistic scale-free networks with density-dependent cut-off, assortativity and finite local clustering, even in the sparse regime and in the absence of geometry. If they are quenched, they can guide the renormalization of real-world networks with node attributes and distance-dependence or communities. As an application, we derive an accurate multiscale model of the International Trade Network applicable across hierarchically nested geographic partitions. >> DIEP Seminar: Leonardo di Gaetano (Central European University) Percolation and Topological properties of temporal higher-order networks | 11am, 25th May 2023 Hypergraphs provide a more accurate representation of complex systems with non-pairwise interactions, such as social networks and cellular networks. However, analyzing and characterizing hypergraphs remains a challenge. To address this, we present a hidden variables formalism to analyze higher-order networks. We apply this framework to a higher-order activity-driven model and provide analytical expressions for the main topological properties of the time-integrated hypergraphs. Our analysis demonstrates the importance of considering higher-order interactions and shows that neglecting them can lead to underestimating the percolation threshold.
Overall, our work contributes to a better understanding of the interplay between group dynamics and the unfolding dynamical processes over them in complex systems. >> DIEP Seminar: Ana Millan Vidal (U. Granada) The role of epidemic spreading in seizure propagation and epilepsy surgery | 11am, 11th May 2023 Computational models of brain dynamics can provide new insights into the prognosis of neurological disorders such as epilepsy. Epilepsy surgery is the treatment of choice for drug-resistant epilepsy patients, but up to 50% of the patients continue to have seizures one year after the resection. The propagation of seizures over the brain can be regarded as an epidemic spreading process taking place on the patient's connectome. We show in a retrospective study (N=15) that this simple dynamic, namely the Susceptible-Infected-Recovered (SIR) model, is enough to reproduce the main aspects of seizure propagation as recorded via invasive electroencephalography (iEEG) [2,3]. Remarkably, the SIR model parameters that best describe the iEEG seizure patterns correspond to the critical transition between the percolating and absorbing phases of the SIR model [2,3,4], and the similarity between the iEEG and modelled seizures predicted surgical outcome (area under the curve AUC = 0.73). We validated the use of the model in the clinic with a blind, independent pseudo-prospective study (N=34), using the same parameters as in the retrospective study to avoid over-fitting. As a consequence, iEEG data (highly invasive and not always part of the presurgical evaluation) were not required. Using the model to find optimal resection strategies [1], we found smaller optimal resections (AUC = 0.65) for patients with good outcome, indicating intrinsic differences in the presurgical data of patients with good and bad outcome [4]. The actual resection also overlapped more with the optimal one (AUC = 0.64) and had a larger effect in decreasing modelled seizure propagation (AUC = 0.78) for patients with good outcome [4]. Individualised computational models may inform surgical planning by suggesting optimal resection strategies and informing on the likelihood of a good outcome after a proposed resection. This is the first time that such a model has been validated on a fully independent cohort without the need for iEEG recordings. >> DIEP Seminar: Swarnendu Banerjee (U. Utrecht) Rethinking tipping points in spatial ecosystems | 11am, 4th May 2023 The theory of alternative stable states and tipping points has garnered a lot of attention in the last decades. It predicts potential critical transitions from one ecosystem state to a completely different state under increasing environmental stress. However, ecosystem models that predict tipping typically do not resolve space explicitly. As ecosystems are inherently spatial, it is important to understand the effects of incorporating spatial processes in models, and how those insights translate to the real world. Moreover, spatial ecosystem structures, such as vegetation patterns, are important to predict ecosystem response in the face of environmental change. Models and observations from real savanna ecosystems and drylands have suggested that they may exhibit both tipping behavior and spatial pattern formation. Hence, in this talk, I will use mathematical models of humid savannas and drylands to illustrate several pattern formation phenomena that may arise when incorporating spatial dynamics in models that exhibit tipping.
I will argue that such mechanisms challenge the notion of large-scale critical transitions in response to global change and reveal a more resilient nature of spatial ecosystems. >> DIEP Seminar: Christian Hamster (Wageningen University) Understanding Stochastic Waves in Cell Movement Models, from Gillespie Algorithms to (S)PDEs | 11am, 20th April 2023 Single-cell organisms are remarkably good at sensing food, especially if you consider that they lack our sensing organs and have to measure a gradient in the food supply over the length of a single cell. The precise mechanisms behind this gradient sensing are not fully understood yet, but scientists have identified many molecules that are relevant to the motion of the cell, and we can see how these molecules are activated in wavelike patterns. These processes can be used to build stochastic models for cell movement, where individual molecules are modeled. These models are complex, both numerically and analytically, so we often summarise everything into 'simpler' PDEs. In this talk, I would like to introduce (and explain) an in-between option, so-called Chemical Langevin Equations, effectively a Stochastic PDE approximation of the underlying stochastic algorithms. This approach allows us to use all the insights from the deterministic PDE, without throwing away the stochastic nature of the underlying process. >> DIEP Seminar: Sandro Sozzo (University of Udine) Quantum nature of energy and entropy in cognition: Towards a Non-classical Thermodynamic Theory of Human Culture | 11am, 13th April 2023 Convincing evidence reveals that quantum structures model cognitive phenomena better and more efficiently than classical structures. This has led to the development of a novel research programme called quantum cognition. Inspired by two decades of research in the field, we extend here the range of applicability of quantum cognition and prove that the notions of energy and entropy can be consistently introduced in human language and, more generally, in human culture. More explicitly, if energy is attributed to words according to their frequency of appearance in a text, then the ensuing energy levels are distributed non-classically, namely, they obey Bose-Einstein, rather than Maxwell-Boltzmann, statistics, because of the genuine quantum indistinguishability of the words that appear in the text. Secondly, the quantum entanglement due to the way meaning is carried by a text reduces the (von Neumann) entropy of the words that appear in the text, a behaviour which cannot be explained within classical (thermodynamic or information) entropy. We claim that this 'quantum-type behaviour' holds in general in human language, namely, any text is conceptually more concrete than the words composing it, which entails that the entropy of the overall text decreases as a result of composition. In addition, we provide examples taken from cognition, where quantization of energy appears in categorical perception, and from culture, where entities collaborate, thus entangle, to decrease overall entropy. We use these findings to propose the development of a non-classical thermodynamic theory for human cognition, which also covers broad parts of human culture and its artefacts, and bridges concepts with quantum physics entities.
>> DIEP Seminar: Rodrigo Cofre (Paris-Saclay University) Novel perspectives on the structure-function dynamics of the primate and human brain under diverse consciousness states | 11am, 6th April 2023 The analysis of brain network dynamics provides an insightful perspective to analyze the dynamic reconfiguration in brain network structure across species and to investigate its role during the loss of consciousness. Recent work has highlighted the importance of the dynamical aspect in understanding the functional relevance of alterations in this network structure, to investigate how the brain supports consciousness. In this talk, my idea is to introduce the topic of consciousness research from a STEM perspective and then survey a set of ideas and recent results in order to discuss the problems associated with data analysis in the temporal dimension. I will show convergent and complementary results between different methods of investigating the dynamical aspects of the structure-function relationship during the loss of consciousness in the primate and human brain. If time permits, I would like to present different ideas and datasets that we may use in collaboration to explore in more depth the relationship between the different states of consciousness and recorded brain activity under those states. >> DIEP Seminar: Chase Broedersz (VU Amsterdam) The dynamics of cell migration in flat and curved geometries | 11am, 30th March 2023 In many biological phenomena, cells migrate through flat or curved confining environments. However, a quantitative framework to describe the stochastic dynamics of such multicellular confined cell migration remains elusive. We employ a data-driven approach to infer the dynamics of cell movement, morphology and interactions of cells confined in micropatterns. By inferring a stochastic equation of motion directly from the experimentally determined short time-scale dynamics, we show that cells exhibit intricate non-linear deterministic dynamics that adapt to the geometry of confinement. We extend this approach to interacting systems, by tracking the repeated collisions of confined pairs of cells. By inferring an interacting equation of motion for this system, we find that non-cancerous (MCF10A) cells exhibit repulsive and frictional interactions. In contrast, cancerous (MDA-MB-231) cells exhibit attraction and a novel and surprising anti-friction interaction, causing cells to accelerate upon collision. Based on the inferred interactions, we show how our framework may generalize to provide a unifying theoretical description of diverse cellular interaction behaviors. Finally, I will discuss the collective dynamics of cells migrating in 3D curved confining geometries in multicellular spheroids. >> DIEP Seminar: Ricard Solé (ICREA-Complex Systems Lab) Emergence, tinkering and universality in evolved networks | 11am, 23rd of March 2023 A common trait of complex systems is that they can be represented using a network of interacting parts. In fact, the network organization (more than the parts) largely conditions most higher-level properties, which are not reducible to the properties of the individual parts. Can the topological organization of these webs provide some insight into their evolutionary origins? Both biological and artificial networks share some common architectural traits. They are often heterogeneous and sparse, and most exhibit different types of correlations, such as nestedness, modularity or hierarchical patterns.
These properties have often been attributed to the selection of functionally meaningful traits. However, a proper formulation of generative network models suggests a somewhat different picture. Against the standard selection-optimization argument, some networks reveal the inevitable generation of complex patterns resulting from reuse and can be modelled using duplication-rewiring rules lacking functionality. In other examples, such as human language, information tradeoffs might be responsible for the presence of universal scaling laws. Both give rise to the observed heterogeneous, scale-free and modular architectures. Here, we examine the evidence for tinkering and universality in cellular, technological and ecological webs and its impact on shaping their architecture. We suggest that both tinkering and information constraints shape these graphs at the topological level. In biological systems, selection forces would take advantage of emergent properties. >> DIEP Seminar: Anshul Toshniwal (U. Amsterdam) Opinion Dynamics in Populations of Converging and Polarizing agents | 11am, 16th March 2023 Opinions determine individuals' attitudes and fundamentally influence collective decisions in societies. As a result, understanding the processes leading to the dynamic formation of opinions is a key research topic across multiple disciplines, from sociology and political sciences to multi-agent systems and statistical physics. Opinion dynamics have been simulated through different computational models where agents are assumed to interact over networks and be influenced through their social ties. Often, models assume that agents with opposing viewpoints converge in opinion when interacting with each other. This is at odds with evidence showing that individuals can also become further polarized when connected with individuals having opposing viewpoints, suggesting the existence of converging individuals (that become less radicalized when interacting with opposing agents) but also polarizing individuals (that become more radicalized in such settings). In this talk I will describe opinion dynamics when converging and polarizing nodes co-exist in a population. Through simulations and dynamic systems analysis we will try to understand 1) how radicalization depends on different combinations of such types of nodes and 2) how placing polarizing/converging agents in specific network locations impacts opinion radicalization. We observe that there is an optimal fraction of polarizing nodes that minimizes radicalization. Furthermore, we observe that placing polarizing nodes on specific network positions can strongly affect radicalization: assigning high-degree nodes as polarizing results in lower radicalization as compared to random assignment. Our results indicate that considering agents that are heterogeneous in their reaction to opposing viewpoints is fundamental to fully grasp the role of social networks in sustaining radical opinions. >> DIEP Seminar: Iain Couzin (Max Planck Institute of Animal Behaviour) The Geometry of Decision-Making | 11am, 9th March 2023 Running, swimming, or flying through the world, animals are constantly making decisions while on the move: decisions that allow them to choose where to eat, where to hide, and with whom to associate. Despite this, most studies have considered only the outcome of, and the time taken to make, decisions. Motion is, however, crucial in terms of how space is represented by organisms during spatial decision-making.
Employing a range of new technologies (automated tracking, computational reconstruction of sensory information, and immersive 'holographic' virtual reality for animals) in experiments with fruit flies, locusts and zebrafish (representing aerial, terrestrial and aquatic locomotion, respectively), I will demonstrate that this time-varying representation results in the emergence of new and fundamental geometric principles that considerably impact decision-making. Specifically, we find that the brain spontaneously reduces multi-choice decisions into a series of abrupt ('critical') binary decisions in space-time, a process that repeats until only one option—the one ultimately selected by the individual—remains. Due to the critical nature of these transitions (and the corresponding increase in 'susceptibility'), even noisy brains are extremely sensitive to very small differences between the remaining options.
>> DIEP Seminar: Vincent Buskens (Utrecht U.) Disease avoidance may come at the cost of social cohesion: Insights from a large-scale social networking experiment | 11am, 2nd March 2023 It is known that people tend to limit social contact during times of increased health risks, thus leading to the disruption of social networks and changing the course of epidemics. It is, however, less well known to what extent people show such avoidance reactions. To test the predictions and assumptions of an agent-based model on the feedback loop between avoidance behavior, social networks, and disease spread, we conducted a large-scale (2879 participants) incentivized experiment. The experiment rewards maintaining social relations and structures, and penalizes acquiring infections. We find that disease avoidance dominates networking decisions, despite relatively low penalties for infections, and that participants use more sophisticated strategies than expected to prevent infections, while neglecting to maintain a profitable network structure. Consequently, we observe lower numbers of infections than predicted, but also deterioration of network positions. These results imply that the focus on a more obvious signal (i.e., disease avoidance) may lead to unwanted side effects (i.e., loss of social cohesion).
>> DIEP Seminar: Fernando Rosas (Imperial College) Formal approaches to emergence: theory, practice, and opportunities | 11am, 9th February 2023 Emergence is a profound subject that straddles many scientific scenarios and disciplines, including how galaxies are formed, how flocks and crowds behave, and how human experience arises from the orchestrated activity of neurons. At the same time, emergence is a highly controversial topic, surrounded by long-standing debates and disagreements on how best to understand its nature and its role within science. A way to move forward in these discussions is provided by formal approaches to quantifying emergence, which give researchers new frameworks to guide discussions and advance theories, as well as quantitative tools to rigorously establish conjectures about emergence and test them on data. This talk presents an overview of the theory and practice of these formal approaches to emergence, and highlights the opportunities they open for practical data analysis. We elaborate on their unifying principles and their distinctive benefits, and present illustrative examples of their application.
We finish by discussing several interpretation issues and potential misunderstandings, and by presenting ideas on how to further develop these research efforts for the benefit of empirical research.
>> DIEP Seminar: Fabian Greimel (U. Amsterdam) Falling Behind: Has Rising Inequality Fueled the American Debt Boom? | 11am, 2nd February 2023 We evaluate the hypothesis that rising inequality was a causal source of the US household debt boom since 1980. The mechanism builds on the observation that households care about their social status. To keep up with the ever richer Joneses, the middle class substitutes status-enhancing houses for status-neutral consumption. These houses are mortgage-financed, creating a debt boom across the income distribution. Using a stylized model we show analytically that aggregate debt increases as top incomes rise. In a quantitative general equilibrium model we show that Keeping up with the Joneses and rising income inequality generate 60% of the observed boom in mortgage debt and 50% of the house price boom. Finally, we provide novel empirical evidence on the relationship between top incomes and household debt. Mortgage debt rose substantially more in US states that experienced stronger growth in top incomes. There is no such relationship between top incomes and non-mortgage debt. These findings support the importance of the comparisons channel.
>> DIEP Seminar: Marco Javarone (Centro Ricerche Enrico Fermi in Rome and University College London) Evolutionary Game Theory beyond Cooperation | 11am, 26th January 2023 Evolutionary Game Theory (EGT) provides a framework for facing the challenge of cooperation. For instance, EGT studies strategies and mechanisms able to trigger the emergence of cooperation in populations whose interactions rely on dilemma games, e.g. the Public Goods Game, whose Nash equilibrium is typically defection. Yet, although cooperation is a fundamental open challenge in science, this talk aims to show cross-disciplinary applications and results beyond this problem. To this end, after a brief general overview of the field, I will present some works that use EGT for analysing complex phenomena in ecology, epidemiology and blockchain dynamics.
>> DIEP Seminar: Anton Souslov (U. Bath) Topological fibre optics | 11am, 19th January 2023 A challenge in photonics is to create a scalable platform in which topologically protected light can be transmitted over large distances. I will talk about the design, modeling, and fabrication of photonic crystal fibre (PCF) characterised by topological invariants [1]. The fibre is made using a stack-and-draw technique in which glass capillaries are stacked, melted, and drawn to the desired size. Light propagates in glass cores, whose normal modes are analogous to atomic orbitals. Topological invariants emerge in the band structure of many coupled cores inside a periodic array, analogous to an atomic crystal. We directly measure the bulk winding-number invariant and image the associated boundary modes predicted to exist by bulk-boundary correspondence. The mechanical flexibility of the fibre allows us to reversibly reconfigure the topological state. As the fibre is bent, we find that the edge states first lose their localization and then become relocalized because of disorder. We envision fibre as a scalable platform to explore and exploit topological effects in photonic networks.
{"url":"https://www.d-iep.org/copy-of-archivessem22","timestamp":"2024-11-13T09:13:08Z","content_type":"text/html","content_length":"790438","record_id":"<urn:uuid:72bcc8f3-8158-4dcb-9b99-4eeb95a40eb7>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00091.warc.gz"}
Simplifying complex networks: from a clustering to a coarse-graining strategy
The framework of complex networks has been shown to adequately describe a wide class of complex systems made up of a large number of interacting units. In this framework, a node is associated with each unit and two nodes are connected by an edge if the two units interact with each other. Examples of such systems can be found in living organisms—the map of interactions between proteins or the network of neurons in the brain. Moreover, artificial systems such as the WWW, electrical grids or airplane connections have been studied using the tools of complex networks. Finally, networks have found many applications in the social sciences, for instance to characterize the different kinds of human interactions underlying the spread of an epidemic. For most of these systems, the complexity arises because of the large number of units and their intricate connection patterns. A natural approach is therefore to simplify the systems by decreasing their size. Different schemes can indeed be designed for each particular system, leading to effective but case-dependent methods. From a more global and statistical perspective, a promising alternative is to reduce the complexity of the corresponding networks. In order to simplify complex networks, two strategies are presented in this Thesis. The first approach refers to the well-known clustering paradigm. It aims at identifying groups of nodes densely connected to each other and only sparsely to the rest of the network. Those groups are referred to as clusters or communities. For most real systems, nodes within a community share some similarity or common feature. For instance, in a synonymy network where nodes are words and edges connect synonymous words, we have shown that finding communities allowed us to identify words corresponding to a single concept. We have also studied a network describing the dynamics of a peptide by associating a node with a microscopic configuration and an edge with a transition. The community structure of the network was shown to provide a new methodology to explore the main characteristics of the peptide dynamics and to unravel the large-scale features of the underlying free-energy landscape. Finally, we have designed a new technique to probe the robustness of the community structure against external perturbations of the network topology. This method allows us, among other things, to assess whether communities correspond to a real structure of the network, or are mere artifacts of the clustering algorithms. Community detection techniques have found a large number of practical applications as a method to simplify networks, since the number of clusters is often much smaller than the number of nodes. However, a crucial issue has often been disregarded: is the network of clusters truly representative of the initial one? In this Thesis, we show that for most networks this is not the case. For example, we have considered the evolution of random walks on the network of clusters and found that they behave quite differently from random walks on the initial network. This observation led us to develop a new strategy to simplify complex networks, ensuring that the reduced network is representative of the initial one. It is based on the idea of grouping nodes, akin to community detection. However, the aim is no longer to identify the "correct" clusters, but to find a smaller network which preserves the relevant features of the initial one, and especially the spectral properties.
We therefore refer to our method as Spectral Coarse Graining, by analogy with the coarse-graining framework used in Statistical Physics. Applying this method to various kinds of networks, we have shown that the coarse-grained network provides an excellent approximation of the initial one, while its size can easily be reduced by a factor of ten. Therefore, Spectral Coarse Graining provides a well-defined way of studying large networks and their dynamics by considering a much smaller coarse-grained version. Overall, we first discuss the use and the limits of the usual clustering approach to reduce the complexity of networks, and apply it to several real-world systems. In a second part, we develop a new coarse-graining strategy to approximate large networks by smaller ones and provide several examples to illustrate the power and the novelty of the method.
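To illustrate the general idea behind this kind of coarse graining, here is a small, self-contained Python sketch. It is not the exact algorithm of the Thesis: it merges the nodes of a toy graph whose components in a slow eigenvector of the random-walk matrix (nearly) coincide, lumps the walk with stationary-distribution weights, and checks that the slow eigenvalue survives. The graph, the grouping tolerance and all variable names are invented for illustration.

import numpy as np

# Toy network: two triangles (nodes 0-2 and 3-5) joined by the edge 2-3.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

deg = A.sum(axis=1)
P = A / deg[:, None]              # random-walk transition matrix (row-stochastic)
pi = deg / deg.sum()              # stationary distribution of the undirected walk

vals, vecs = np.linalg.eig(P)
order = np.argsort(vals.real)[::-1]
v2 = vecs[:, order[1]].real       # slowest non-trivial mode of the walk

# Group nodes whose eigenvector components (nearly) coincide.
labels = np.round(v2 / np.abs(v2).max(), 3)
groups = [np.where(labels == g)[0] for g in np.unique(labels)]

# Lump the walk: coarse transitions weighted by the stationary distribution.
m = len(groups)
Pc = np.zeros((m, m))
for I, gI in enumerate(groups):
    wI = pi[gI] / pi[gI].sum()
    for J, gJ in enumerate(groups):
        Pc[I, J] = wI @ P[np.ix_(gI, gJ)].sum(axis=1)

print("groups:", [list(g) for g in groups])
print("2nd eigenvalue, original network:", sorted(vals.real)[-2])
print("2nd eigenvalue, coarse network:  ", sorted(np.linalg.eigvals(Pc).real)[-2])

On this toy graph the grouped nodes are exactly equivalent by symmetry, so the slow eigenvalue of the coarse-grained walk matches the original one; on real networks the match is approximate, which is precisely the trade-off a spectral approach of this kind aims to control.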
{"url":"https://infoscience.epfl.ch/entities/publication/35db87cd-756f-42c5-b0a5-60f4de0eae69","timestamp":"2024-11-11T05:30:32Z","content_type":"text/html","content_length":"981063","record_id":"<urn:uuid:b28b730d-bf10-4ba6-bc2d-39699f28b5e2>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00881.warc.gz"}
Understanding Mathematical Functions: How To Make A Function From A Table
Mathematical functions are a crucial concept in the world of mathematics, serving as a fundamental building block for solving complex equations and analyzing data. A mathematical function is a relationship between a set of inputs and a set of possible outputs, where each input is related to exactly one output. Understanding how to make a function from a table is important as it helps us grasp the relationship between different variables and enables us to make predictions and analyze patterns within the data.
Key Takeaways
• Mathematical functions are crucial for solving complex equations and analyzing data
• Understanding how to make a function from a table helps in grasping the relationship between different variables
• Recognizing patterns in input and output values is important in identifying the correct function
• Testing the function is necessary to ensure it accurately represents the table data
• Real-world applications of functions derived from tables are found in various fields such as economics and engineering
Understanding Mathematical Functions
Mathematical functions are a fundamental concept in mathematics and are essential for understanding how different variables relate to each other. In this chapter, we will explore the definition of a mathematical function, the relationship between input and output values, and the different types of functions.
A. Definition of a mathematical function
A mathematical function is a relationship between a set of inputs (independent variables) and a set of outputs (dependent variables) where each input is related to exactly one output. In other words, a function assigns each input exactly one output.
B. Explanation of the relationship between input and output values in a function
In a mathematical function, the input values are the x-values or independent variables, and the output values are the y-values or dependent variables. The function describes how the input values are transformed to produce the output values. This relationship can be expressed using an equation or a table of values.
C. Overview of the different types of functions (linear, quadratic, exponential, etc.)
Functions can take many different forms, each with its own unique characteristics. Some common types of functions include linear functions, which have a constant rate of change; quadratic functions, which form a parabolic shape; and exponential functions, which grow or decay at a constant relative rate. Each type of function has its own set of properties and can be represented in various ways, such as equations, graphs, or tables.
Creating a Function from a Table
Understanding how to create a function from a table can be a valuable skill in mathematics. By following a step-by-step process, you can easily identify the input and output values and determine the function represented by the table.
Identifying the input and output columns in the table
When creating a function from a table, the first step is to identify the input and output columns. The input column represents the independent variable, while the output column represents the dependent variable. This distinction is crucial in determining the relationship between the input and output values.
• Input column: Look for the column in the table that contains the values you are inputting into the function.
• Output column: Identify the column in the table that contains the corresponding output values based on the inputs.
The short sketch below makes this workflow concrete.
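As a rough sketch of this workflow in code, the snippet below stores a small table as (input, output) rows, separates the two columns, and checks for a constant rate of change, which points to a linear rule. The table values and names are made up for illustration.

# A small sketch of turning a table into a candidate function.
table = [(1, 5), (2, 8), (3, 11), (4, 14)]      # (input x, output y) rows

xs = [row[0] for row in table]                  # input column (independent variable)
ys = [row[1] for row in table]                  # output column (dependent variable)

# Rate of change between consecutive rows; a constant rate suggests a linear rule.
diffs = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(len(table) - 1)]

if len(set(diffs)) == 1:                        # exact match; use a tolerance for noisy data
    m = diffs[0]                                # slope
    b = ys[0] - m * xs[0]                       # intercept
    f = lambda x: m * x + b
    print(f"Linear rule found: f(x) = {m}x + {b}")
else:
    print("Rate of change is not constant; try quadratic or exponential models.")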
Using the input and output values to determine the function
Once the input and output columns are identified, you can use the values in the table to determine the function. By examining the relationship between the input and output values, you can establish the mathematical rule that governs the function.
For example, if the output values change by a constant amount for equal steps in the input, the function may be linear. If the output values are related to the input values in a non-linear way, the function may be quadratic, exponential, or logarithmic. By analyzing the patterns and relationships within the table, you can effectively determine the function represented by the data.
Identifying Patterns in the Table
When creating a mathematical function from a table of values, it is essential to identify the patterns present in the input and output values. This step is crucial in understanding the relationship between the two sets of data and ultimately determining the nature of the function.
A. Recognizing patterns in the input and output values
• Consistency: Look for consistent increments or decrements in the output values as the input increases in equal steps. This could indicate a linear relationship.
• Repetitive sequences: Identify any repetitive sequences or cycles in the values, which may suggest a periodic function.
• Non-linear trends: Be mindful of any non-linear trends, such as exponential growth or decay, in the table that could signify a different type of function.
B. Using the patterns to determine the nature of the function
• Correspondence: Once the patterns are identified, use them to determine the nature of the function. For example, if the input and output values have a consistent linear relationship, the function may be linear.
• Testing possibilities: Consider different types of functions, such as linear, quadratic, exponential, and logarithmic, based on the observed patterns, and test them against the table to see which fits best.
C. The importance of thorough analysis in identifying the correct function
Thorough analysis is crucial in identifying the correct function from a table of values. Rushing through this process may lead to inaccuracies and errors in the function creation. By carefully analyzing the patterns and considering various possibilities, a more accurate and reliable function can be determined.
Testing the Function
After creating a mathematical function from a given table, it is important to test the function to ensure that it accurately represents the data in the table. Testing the function involves using the function to calculate output values for given input values, comparing the calculated output values with the actual values in the table, and adjusting the function if necessary to ensure accuracy.
A. Using the function to calculate output values for given input values
Once the function is derived from the table, it can be used to calculate output values for specific input values. This involves plugging the input values into the function and obtaining the corresponding output values. The function should be capable of accurately producing output values for the input values provided in the table.
B. Comparing the calculated output values with the actual values in the table
After obtaining the output values from the function, it is essential to compare these values with the actual values given in the table. This step ensures that the function accurately represents the given data.
Any discrepancies between the calculated output values and the actual values in the table need to be addressed in the next step.
C. Adjusting the function if necessary to ensure it accurately represents the table data
If there are differences between the calculated output values and the actual values in the table, adjustments to the function may be required. This could involve refining the function, identifying errors in the initial derivation, or revisiting the methodology used to create the function. The goal is to ensure that the function accurately represents the data in the table and can be used to make predictions or extrapolations with confidence. A small sketch of this testing step follows at the end of this article.
Real-World Applications
Understanding how to make a function from a table is not only a fundamental concept in mathematics, but it also has numerous real-world applications. In this section, we will discuss the relevance of this skill in various scenarios.
A. Discussing real-world scenarios where understanding how to make a function from a table is useful
One of the most common real-world scenarios where understanding how to make a function from a table is useful is in analyzing and predicting patterns in data. For example, businesses often use functions derived from tables to forecast sales, expenses, and other financial metrics. Similarly, scientists and researchers use these functions to model and predict the behavior of physical systems.
B. Examples of how functions derived from tables are used in various fields
Functions derived from tables are used in various fields such as economics, engineering, and physics. In economics, these functions are utilized to analyze demand and supply curves, compute cost and revenue functions, and make predictions about market trends. In engineering, functions derived from tables are used to model and predict the behavior of complex systems such as electrical circuits, mechanical structures, and chemical processes. In physics, these functions are used to describe and predict motion, energy, and forces in the natural world.
Understanding how to make a function from a table is crucial for grasping the concept of mathematical functions. It allows us to see the relationship between input and output values, and helps us make predictions and solve problems. I encourage all readers to practice creating functions from tables in order to strengthen their understanding of mathematical functions. The more we practice, the more proficient we become in recognizing patterns and making connections within mathematical functions. Keep practicing and happy math learning!
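Continuing the sketch from earlier, the testing step described above can be written as a short check that feeds each table input back through the candidate function and reports every row where the prediction disagrees with the recorded output. The names and the tolerance are illustrative only.

def check_function(f, table, tol=1e-9):
    """Return the rows where the function disagrees with the table."""
    return [(x, y, f(x)) for x, y in table if abs(f(x) - y) > tol]

table = [(1, 5), (2, 8), (3, 11), (4, 14)]
f = lambda x: 3 * x + 2                 # candidate rule from the earlier sketch

mismatches = check_function(f, table)
if not mismatches:
    print("The function reproduces every row of the table.")
else:
    for x, y, pred in mismatches:
        print(f"At x={x}: table says {y}, function predicts {pred}")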
{"url":"https://dashboardsexcel.com/blogs/blog/understanding-mathematical-functions-how-to-make-a-function-from-a-table","timestamp":"2024-11-09T03:08:31Z","content_type":"text/html","content_length":"212468","record_id":"<urn:uuid:511f3d58-ac8e-4ce6-bfb3-5461532ef1aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00367.warc.gz"}
Stop Obsessing About Qubits. Their Number Alone Doesn't Matter
Forty years ago, on 6–8 May 1981, a group of physicists and computer scientists got together at MIT's Endicott House. The event was the Physics of Computation conference. The hottest topic of discussion — the possibility of mimicking nature to design ever more powerful ways of computation. The possibility of building a quantum computer.
Fast-forward to today. Some of those physicists have gathered at a celebratory event, QC40, this week to chat about the future and the present of quantum computing. After all, several companies have been busy building these machines, using different approaches. So how can one tell which prototype quantum computer is in the lead?
Whatever the approach, all quantum computers rely on qubits to work. Short for 'quantum bits,' they are the fundamental information units of a quantum computer, analogous to the bits of a classical computer.
But the hype around qubits is misplaced: their number alone doesn't matter that much. Qubits don't make a quantum computer powerful. They don't even make it work. Just like a car with a 1,000-horsepower engine is useless if you can't corner or brake without crashing — a million qubits won't bring you an inch closer to building a fully-functional quantum computer. What matters is the machine's ability to run complex quantum algorithms, or circuits, that can't be simulated classically. That's the essence of a quantum computer of tomorrow.
In today's electronics, circuits are binary with only two possible states. These circuits direct the electron flow in a computer and make modern electronics work. A circuit of a quantum computer, by contrast, is a set of instructions on quantum data, an ordered sequence of operations played on different qubits. Running a quantum circuit requires the combined effort of the qubits, the hardware and the software.
How good are your qubits?
It's the ability to execute specific quantum circuits that defines just how powerful a quantum computer really is — not in terms of the number of qubits but in terms of how stable and interconnected they are. That's what companies should pay attention to when choosing a quantum computer for their specific task. That's the notion of 'quantum volume' — the quality, capacity, and variety of quantum circuits a machine can run.
At IBM, we aim to double quantum volume every year to eventually achieve quantum advantage — the moment when a quantum computer should outperform a classical machine in a meaningful task. From that point on, quantum computers and classical systems will be working together, leading to a significantly better performance than classical systems alone. We expect to reach a quantum advantage before the end of this decade.
The 1981 Physics of Computation Conference — the event that kickstarted quantum computing. See if you can spot Richard Feynman! (Credit: IBM Fellow Charlie Bennett)
Ultimately, we aim to keep up the so-called 'frictionless development' of quantum computing, bringing us to the point when you won't have to be a quantum expert to use these nature-mimicking machines. In the future, developers should be able to program in their familiar environment, without having to worry about learning the intricacies of quantum gates and circuits. To get there, three advances should come together first: better hardware, better software and improved algorithms.
Take the hardware, the beautiful gold-plated steampunk-like 'chandelier.' Researchers keep improving the cryostat, the control electronics, the amplifiers, all the cabling and the nanofabricated components on the chip that make up the qubits, kept at 15 millikelvin — nearly 200 times colder than outer space — and in total vacuum. Researchers manipulate the states of the qubits with ultra-precise microwave pulses they send into the cryostat through the snaking cables. As the pulses arrive, electrons flow through oscillators made of metals such as niobium or aluminum that, when cooled below one degree Kelvin, become superconducting. Electrons flowing through them make superconducting qubits act as atoms, obeying the laws of quantum mechanics. When two qubits reach the same resonant frequency, they get entangled. Just like the shape of a musical instrument sets the timbre of the notes it can play, a quantum machine has frequencies determined by its physical properties.
Keeping the 'noise' down
Researchers are constantly improving and calibrating the microwave pulses, making them ever more precise. They can be sent from anywhere in the world through the cloud, using the ever-improving software.
At IBM, it's the Quantum Composer, Quantum Lab and Qiskit, the open-source software development framework. Thanks to the software, developers control the placing of quantum gates — state manipulations — and run the measurements, improving the execution of their quantum programs and applications. All of this is part of quantum circuits.
The cold temperatures and the vacuum shield the qubits from the disturbances of the outside world, the so-called 'noise' — anything from physical vibrations of the scientists walking around, to heat, light, magnetic fields or a stray microwave pulse. Any of those external effects impact the qubits, yanking them out of their quantum states of superposition and entanglement — because only in these states can qubits perform meaningful calculations. Once they are out — once they decohere — scientists get computational errors and the quantum computer no longer computes. Researchers are working hard on trying to suppress the noise as much as possible.
At IBM, we have a prototype quantum computer that works with 65 qubits, kept in superposition for just a few fractions of a second before they decohere, or reset. Later this year, we aim to have one with 127 qubits. That's not enough to reach a quantum advantage, but that's already very promising. To maintain superposition for longer, we need to ensure that our qubits are very low noise. Then we'll be able to correct any remaining errors using classical computers. But this approach of error correction is still theoretical, as we can only apply it once we scale the number of qubits to hundreds or more and lower the error rate in their operations. When that happens, working together, these low-noise, error-corrected physical qubits will form one so-called logical qubit. And going forwards, we will need hundreds of these logical qubits for a quantum computer of the future to become better than a classical computer in at least one meaningful task. That will be the moment of achieving quantum advantage.
The future of error correction
Hardware will never be error-free, so we just need to make sure that the errors the hardware introduces are of the type that we can correct using error correction codes. And we have to make sure that the fridge — the cryostat — doesn't collapse.
That could happen if we were to continue adding, by brute force, more and more superconducting circuits to the bottom of the fridge. A cryostat with even one logical qubit made of, say, 500 physical qubits would be a structure of half a ton — and that is simply unfeasible. To reduce weight and cost, researchers are looking into different approaches. As microwave signals zip down from room temperature to the quantum processor, they get smaller and smaller, and need to be amplified to be measured. One approach is to make the cables smaller and lighter while amplifying the signals more efficiently.
And finally, there are algorithms. They are getting more and more elaborate, aimed at ever more complex problems of the future — from simulating large molecules to predicting financial risk. Soon, thousands upon thousands of quantum circuits will form quantum libraries, thanks to all the developers, researchers and enthusiasts of the Qiskit community, who keep designing new ones. In the future, you'll just have to choose the right circuit from an online open access library — call it a 'quantum app store' of sorts — and plug it into your application through the cloud. Companies like IBM will be helping organizations around the world to choose the best circuit to solve a specific problem. The quantum magic will happen in the background in the cloud all on its own. That's the future. And the roadmap of hardware, software and algorithm advances should get us there in no time.
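To make the circuit vocabulary above concrete, here is a minimal sketch using Qiskit, the open-source framework mentioned earlier. The two-qubit Bell-state circuit below is a generic textbook example, not a program from IBM's quantum libraries.

# A minimal quantum circuit in Qiskit (requires `pip install qiskit`).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)       # two qubits, two classical bits
qc.h(0)                         # put qubit 0 into superposition
qc.cx(0, 1)                     # entangle qubit 1 with qubit 0
qc.measure([0, 1], [0, 1])      # read both qubits out

print(qc.draw())                # ASCII diagram of the circuit

Run on ideal hardware, the two measured bits always agree (the entanglement described in the article), even though each individual outcome is random.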
{"url":"https://ibm-research.medium.com/stop-obsessing-about-qubits-their-number-alone-doesnt-matter-d60e9d247a4a?source=user_profile_page---------4-------------b21de552c2c5---------------","timestamp":"2024-11-04T07:06:33Z","content_type":"text/html","content_length":"120914","record_id":"<urn:uuid:50ca076d-d077-4510-95f6-f88d568e493d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00233.warc.gz"}
How I Rewired My Brain to Become Fluent in Math
I was a wayward kid who grew up on the literary side of life, treating math and science as if they were pustules from the plague. So it's a little strange how I've ended up now—someone who dances daily with triple integrals, Fourier transforms, and that crown jewel of mathematics, Euler's equation. It's hard to believe I've flipped from a virtually congenital math-phobe to a professor of engineering.
One day, one of my students asked me how I did it—how I changed my brain. I wanted to answer: Hell—with lots of difficulty! After all, I'd flunked my way through elementary, middle, and high school math and science. In fact, I didn't start studying remedial math until I left the Army at age 26. If there were a textbook example of the potential for adult neural plasticity, I'd be Exhibit A.
{"url":"https://www.schoolinfosystem.org/2016/09/22/how-are-you-rewired-my-brain-to-become-fluent-in-math/","timestamp":"2024-11-07T13:11:17Z","content_type":"text/html","content_length":"6395","record_id":"<urn:uuid:d77c6691-414b-485c-a64b-b884bdfdc187>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00557.warc.gz"}
Michelson Postdoctoral Lecture 1: Entangled Mechanical Oscillators and a Programmable Quantum Computer: Adventures in Coupling Two-Level Systems to Quantum Harmonic Oscillators – David Hanneke
Mon. May 10th, 2010, 12:30 pm-1:30 pm
Rockefeller 221
The two-level system and the harmonic oscillator are among the simplest systems analyzed with quantum mechanics, yet they display a rich set of behaviors. Quantum information science is based on manipulating the states of two-level systems, called quantum bits or qubits. Coupling two-level systems to harmonic oscillators allows the generation of interesting motional states. When isolated from the environment, trapped atomic ions can take on both of these behaviors. The two-level system is formed from a pair of internal states, which lasers efficiently prepare, manipulate, and read out. The ions' motion in the trap is well described as a harmonic oscillator and can be cooled to the quantum ground state. In this lecture, I will describe a complete set of methods for scalable ion-trap quantum information processing and their use in a programmable two-qubit quantum processor. The qubits are stored in two beryllium hyperfine states that are insensitive to magnetic field fluctuations. They have coherence times hundreds of times longer than a typical experiment lasts. Two beryllium ions are stored simultaneously with two magnesium ions, which allow recooling of the ions' motion without destroying any quantum information. Segmented trap electrodes allow separation of parts of the ion chain for quantum information transport and for individual laser addressing for single-qubit gates. An arbitrary quantum operation on two qubits can be described with 15 real numbers, and we implement a quantum circuit composed of one- and two-qubit gates with sufficient input parameters that it can be programmed to implement any such operation. Along the way, we use some of the above techniques to entangle spatially separated mechanical oscillators, consisting of the vibrational states of two pairs of ions held in different locations.
{"url":"https://physics.case.edu/events/michelson-postdoctoral-lecture-1-entangled-mechanical-oscillators-and-a-programmable-quantum-computer-adventures-in-coupling-two-level-systems-to-quantum-harmonic-oscillators-david-hanneke/","timestamp":"2024-11-02T23:57:30Z","content_type":"text/html","content_length":"158976","record_id":"<urn:uuid:101f0889-ec1b-49c0-b5f3-a358a49d5f27>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00275.warc.gz"}
Reference Meek Rule
Reference Rule: Meek's Method
Count multiple-seat elections as follows.
A. Initialize Election
Set each candidate's state to hopeful or withdrawn. Set each hopeful candidate's keep factor kf to 1, and each withdrawn candidate's keep factor to 0. Set omega to 0.000001 (1/10^6).
B. Rounds
B.1. Test count complete. Proceed to step C if all seats are filled, or if the number of elected plus hopeful candidates is less than or equal to the number of seats.
B.2. Iterate.
B.2.a. Distribute votes. For each ballot: set the ballot weight w to 1, and then for each candidate, in order of rank on that ballot: add w multiplied by the keep factor kf of the candidate (to 9 decimal places, rounded up) to that candidate's vote v, and reduce w by the same amount, until no further candidate remains on the ballot or until the ballot's weight w is 0.
B.2.b. Update quota. Set quota q to the sum of the vote v for all candidates (step B.2.a), divided by one more than the number of seats to be filled, truncated to 9 decimal places, plus 0.000000001 (1/10^9).
B.2.c. Find winners. Elect each hopeful candidate with a vote v greater than or equal to the quota (v ≥ q).
B.2.d. Calculate the total surplus s as the sum of the individual surpluses (v – q) of the elected candidates, but not less than 0.
B.2.e. Test for iteration finished. If step B.2.c elected a candidate, continue at step B.1. Otherwise, if the total surplus s is less than omega, or (except for the first iteration) if the total surplus s is greater than or equal to the surplus s in the previous iteration, continue at step B.3.
B.2.f. Update keep factors. Set the keep factor kf of each elected candidate to the candidate's current keep factor kf, multiplied by the current quota q (to 9 decimal places, rounded up), and then divided by the candidate's current vote v (to 9 decimal places, rounded up). Continue the iteration at step B.2.a.
B.3. Defeat low candidate. Defeat the hopeful candidate c with the lowest vote v, breaking any tie per procedure T, where each candidate c' is tied with c if the vote v' for c' is less than or equal to v plus the total surplus s. Set the keep factor kf of c to 0.
B.4. Continue. Proceed to the next round at step B.1.
C. Count Complete
C.1. Elect remaining. If any seats are unfilled, elect the remaining hopeful candidates.
C.2. Defeat remaining. Otherwise defeat the remaining hopeful candidates.
The election count is complete.
Breaking ties
Ties can arise in step B.3, when selecting a candidate for defeat. Use the defined tiebreaking procedure to select for defeat one candidate from the group of tied candidates.
Elect or defeat
"Elect or defeat a candidate" means "set the candidate's state to elected or defeated, respectively".
Multiple simultaneous defeats
In the interest of reducing the number of rounds and avoiding inconsequential ties, a sub-step may be added to defeat sure losers. At the end of step B.2.e, if the iteration is not otherwise complete:
Find the hopeful candidate c with the highest vote v such that the sum of the votes for that candidate and all candidates c' whose vote v' is less than or equal to v, plus the total surplus s, is less than the lowest vote v" greater than v, and such that the number of hopeful candidates with votes greater than v is greater than or equal to the number of unfilled seats. If such a candidate c is found, defeat candidate c and all candidates c', set the keep factor kf of the defeated candidate(s) to 0, and continue at step B.1.
Alternative formulation: Find the largest set of hopeful candidates meeting the following conditions:
• For each candidate c with vote v in the set, any hopeful candidate with vote less than or equal to v is also in the set.
• The sum of the votes for the candidates in the set plus the total surplus s [B.2.d] is less than the lowest vote for any hopeful candidate not in the set.
• The number of hopeful candidates not in the set is greater than or equal to the number of unfilled seats.
If the resulting set is not empty, defeat all candidates in the set, set the keep factor kf of the defeated candidate(s) to 0, and continue at step B.1.
5 Responses to Reference Meek Rule
1. Hello, I am trying to understand the difference between Meek and meek-prf and their workings (the math behind them). Would it be possible for you to share an example for Meek-prf? Thanks in advance
□ Meek-prf is meant to be a straightforward and readable implementation of a "good" set of parameters (precision and such), without unneeded complications such as batch defeats. Meek, on the other hand, supports a collection of options and implements batch defeats. It's useful for experimentation, and playing around with those options, but in return the code is significantly more complex and harder to follow. Especially if you're trying to understand a Meek count for the first time, stick to Meek-prf. It's also a good choice to use if you're specifying a Meek method for a real-world election.
2. […] complicated, and if you don't like math, it is definitely a little complex. Take a look at this guide to evaluate the difficulty for […]
3. Hello. What would be the effect of switching B.2.e and B.2.f? I am using a variant which allows for equal rankings and uses a different elimination method (eliminates by lowest Borda count, but elects by vote quota). To achieve this, only one candidate is elected per round, in B.2.c, being the quota-achieving candidate with the highest Borda score (consistent with elimination by lowest Borda score: it's as if all quota-reaching hopefuls are "temporarily eliminated" by lowest Borda score, and the last is elected). B.2.a handles multiple equal rankings by first carrying out partial transfers among all elected candidates ranked at a given rank, from highest to lowest keep factor; then transferring the full remaining weight of the ballot to all hopefuls at that rank, and setting the ballot's weight to zero if there are any hopefuls at that rank. This is why only one is elected per round, and why I need to immediately update keep factors.
□ I'll need to give that some thought (both the specific ordering question and your method of handling equal rankings). There's a problem with using the Borda score (in either case), though: the resulting method no longer observes later-no-harm, which in turn encourages strategic burial. In my view, that's a serious drawback.
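For readers who want to see the inner loop in running form, here is a compact Python sketch of steps B.2.a, B.2.b and B.2.f. It uses exact fractions instead of the rule's 9-decimal rounding, and it omits the quota epsilon, defeats, and tie-breaking, so it illustrates the flow of votes rather than a conforming implementation. The ballots are invented.

from fractions import Fraction

def distribute(ballots, keep):
    """Step B.2.a: pass each ballot down its ranking; each candidate keeps
    (weight * keep factor) and passes the remainder on."""
    votes = {c: Fraction(0) for c in keep}
    for ranking, count in ballots:
        w = Fraction(count)
        for c in ranking:
            kept = w * keep[c]
            votes[c] += kept
            w -= kept
            if w == 0:
                break
    return votes

def quota(votes, seats):
    """Step B.2.b, without the rule's 9-decimal truncation and epsilon."""
    return sum(votes.values()) / (seats + 1)

# Tiny example: 2 seats, candidates A, B, C.
ballots = [(("A", "B"), 12), (("A", "C"), 8), (("B", "C"), 7), (("C",), 3)]
keep = {c: Fraction(1) for c in "ABC"}
seats = 2

for step in range(6):                       # a few passes of step B.2
    votes = distribute(ballots, keep)
    q = quota(votes, seats)
    for c in keep:                          # step B.2.f for candidates at or above quota
        if votes[c] >= q:
            keep[c] = keep[c] * q / votes[c]
    print(step, {c: float(votes[c]) for c in keep}, "quota", float(q))

Watching the printed rounds, A's keep factor falls toward the point where A retains exactly one quota, the freed surplus flows to B and C, and the total surplus shrinks from one pass to the next, which is what the convergence test in step B.2.e relies on.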
{"url":"https://prfound.org/resources/reference/reference-meek-rule/","timestamp":"2024-11-10T16:21:07Z","content_type":"application/xhtml+xml","content_length":"49488","record_id":"<urn:uuid:bf40479c-9888-4288-a850-bb0e03bcbd92>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00437.warc.gz"}
AIXI tag
AIXI is a mathematical formalism for a hypothetical (super)intelligence, developed by Marcus Hutter (2005, 2007). AIXI is not computable, and so does not serve as a design for a real-world AI, but is considered a valuable theoretical illustration with both positive and negative aspects (things AIXI would be able to do and things it arguably couldn't do).
See also: Solomonoff induction, Decision theory, AI
The AIXI formalism says roughly to consider all possible computable models of the environment, Bayes-update them on past experiences, and use the resulting updated predictions to model the expected sensory reward of all possible strategies. This is an application of Solomonoff induction.
AIXI can be viewed as the border between AI problems that would be 'simple' to solve using unlimited computing power and problems which are structurally 'complicated'.
How AIXI works
Hutter (2007) describes AIXI as a combination of decision theory and algorithmic information theory: "Decision theory formally solves the problem of rational agents in uncertain worlds if the true environmental prior probability distribution is known. Solomonoff's theory of universal induction formally solves the problem of sequence prediction for unknown prior distribution. We combine both ideas and get a parameterless theory of universal Artificial Intelligence."
AIXI operates within the following agent model: There is an agent, and an environment, which is a computable function unknown to the agent. Thus the agent will need to have a probability distribution over the range of possible environments.
On each clock tick, the agent receives an observation (a bitstring/number) from the environment, as well as a reward (another number). The agent then outputs an action (another number).
To do this, AIXI guesses at a probability distribution for its environment, using Solomonoff induction, a formalization of Occam's razor: simpler computations are more likely a priori to describe the environment than more complex ones. This probability distribution is then Bayes-updated by how well each model fits the evidence (more precisely, by throwing out all computations which have not exactly fit the environmental data so far; for technical reasons this is a roughly equivalent model).
AIXI then calculates the expected reward of each action it might choose—weighting the likelihood of possible environments as mentioned. It chooses the best action by extrapolating its actions into its future time horizon recursively, using the assumption that at each step into the future it will again choose the best possible action using the same procedure. Then, on each iteration, the environment provides an observation and reward as a function of the full history of the interaction; the agent likewise chooses its action as a function of the full history.
The agent's intelligence is defined by its expected reward across all environments, weighting their likelihood by their complexity.
AIXI is not a feasible AI, because Solomonoff induction is not computable, and because some environments may not interact over finite time horizons (AIXI only works over some finite time horizon, though any finite horizon can be chosen). A somewhat more computable variant is the time-space-bounded AIXItl. Real AI algorithms explicitly inspired by AIXItl, e.g. the Monte Carlo approximation by Veness et al. (2011), have shown interesting results in simple general-intelligence test problems.
For a short (half-page) technical introduction to AIXI, see Veness et al.
(2011), pages 1-2. For a full exposition of AIXI, see Hutter (2007).
Relevance to Friendly AI
Because it abstracts optimization power away from human mental features, AIXI is valuable in considering the possibilities for future artificial general intelligence—a compact and non-anthropomorphic specification that is technically complete and closed; either some feature of AIXI follows from the equations or it does not. In particular, it acts as a constructive demonstration of an AGI which does not have human-like terminal values and will act solely to maximize its reward function (Yampolskiy & Fox 2012).
AIXI has limitations as a model for future AGI, for example, the Anvil problem: AIXI lacks a self-model. It extrapolates its own actions into the future indefinitely, on the assumption that it will keep working in the same way in the future. Though AIXI is an abstraction, any real AI would have a physical embodiment that could be damaged, and an implementation which could change its behavior due to bugs; the AIXI formalism completely ignores these possibilities.
References
• R.V. Yampolskiy, J. Fox (2012) Artificial General Intelligence and the Human Mental Model. In Amnon H. Eden, Johnny Søraker, James H. Moor, Eric Steinhart (Eds.), The Singularity Hypothesis. The Frontiers Collection. London: Springer.
• M. Hutter (2007) Universal Algorithmic Intelligence: A mathematical top-down approach. In Goertzel & Pennachin (Eds.), Artificial General Intelligence, 227-287. Berlin: Springer.
• M. Hutter (2005) Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Berlin: Springer.
• J. Veness, K.S. Ng, M. Hutter, W. Uther and D. Silver (2011) A Monte-Carlo AIXI Approximation. Journal of Artificial Intelligence Research 40, 95-142.
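To make the structure of the formalism concrete (a Bayesian mixture over environments plus expectimax planning), the toy sketch below replaces Solomonoff's universal mixture with a hand-picked, finite set of deterministic environments and a short horizon. Everything in it (the environments, the prior weights, the action set) is invented for illustration; real AIXI is incomputable.

# Toy illustration of AIXI's structure, not AIXI itself.
def env_constant(actions):        # always rewards action 1
    return (0, 1 if actions[-1] == 1 else 0)

def env_alternating(actions):     # rewards the pattern 1, 0, 1, 0, ...
    want = len(actions) % 2
    return (0, 1 if actions[-1] == want else 0)

HYPOTHESES = [(env_constant, 0.7), (env_alternating, 0.3)]   # arbitrary prior
ACTIONS = (0, 1)

def consistent(env, actions, percepts):
    """Has this environment produced exactly the percepts seen so far?"""
    return all(env(actions[:t + 1]) == percepts[t] for t in range(len(percepts)))

def value(actions, percepts, horizon):
    """Expectimax value over `horizon` steps, averaging over the hypotheses
    that survive Bayesian filtering on the interaction history."""
    if horizon == 0:
        return 0.0
    posterior = [(e, w) for e, w in HYPOTHESES if consistent(e, actions, percepts)]
    total = sum(w for _, w in posterior)
    best = 0.0
    for a in ACTIONS:
        v = 0.0
        for e, w in posterior:
            obs, rew = e(actions + [a])
            v += (w / total) * (rew + value(actions + [a],
                                            percepts + [(obs, rew)], horizon - 1))
        best = max(best, v)
    return best

print(value([], [], horizon=3))   # expected reward of acting optimally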
{"url":"https://www.greaterwrong.com/tag/aixi","timestamp":"2024-11-02T08:53:01Z","content_type":"text/html","content_length":"55412","record_id":"<urn:uuid:0fe575ee-f15e-4166-b0ef-9a6d2725b9b0>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00084.warc.gz"}
Prime Numbers and Prime Factorization
Prime numbers play a significant role in various fields of computer science, especially in algorithms and cryptography. They have an intriguing nature and offer unique properties that make them a fascinating subject to study. In this article, we will explore prime numbers, their characteristics, and the concept of prime factorization.
Prime Numbers
A prime number is a positive integer greater than 1 that has no factors other than 1 and itself. In simpler terms, a prime number cannot be evenly divided by any smaller number (except 1). For example, 2, 3, 5, 7, 11, etc., are some prime numbers.
Prime numbers have several noteworthy properties:
1. Uniqueness of Factorization: Every positive integer greater than 1 can be represented as a unique product of prime numbers. This property is known as the Fundamental Theorem of Arithmetic. For example, 48 can be expressed as the product of primes: 2 × 2 × 2 × 2 × 3 = 2^4 × 3.
2. Infinitude: There are infinitely many prime numbers, yet the gaps between consecutive primes can become arbitrarily large as we move further up the number line.
3. Building Blocks: Every positive integer greater than 1 can be built by multiplying prime numbers. This property is the basis for prime factorization and helps in solving various mathematical problems efficiently.
Prime Factorization
Prime factorization is the process of finding the prime numbers that multiply together to give a particular number. It is an essential concept in number theory and has significant applications in cryptography, coding theory, and primality testing algorithms.
Let's understand prime factorization with an example:
Example: Find the prime factorization of the number 84.
1. We start by dividing the number by the smallest prime, which is 2. 84 / 2 = 42. So, 2 is a prime factor of 84.
2. Next, we continue dividing the resulting quotient (42 in this case) by the smallest prime (2). 42 / 2 = 21. Again, 2 is a prime factor.
3. The quotient 21 is not divisible by 2, so we move to the next prime, 3. 21 / 3 = 7. So, 3 is a prime factor.
4. The remaining quotient, 7, is itself prime, so 7 is the final prime factor. 7 / 7 = 1, and as the quotient becomes 1, we stop.
Therefore, the prime factorization of 84 is: 2 × 2 × 3 × 7 = 2^2 × 3 × 7.
Prime factorization allows us to express a composite number in terms of its prime building blocks. It deeply influences computational number theory and helps in discovering divisors, calculating greatest common divisors (GCD), and solving problems involving the factors of a number.
Prime numbers and prime factorization are intriguing concepts with a wide range of applications in computer science and mathematics. Understanding prime numbers not only enhances our algorithmic thinking but also equips us with powerful tools to solve various computational problems efficiently. Additionally, prime factorization is a crucial technique that finds use in cryptography and number theory. Mastering these concepts can greatly benefit programmers and individuals interested in competitive programming.
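The worked example above translates directly into trial division, which factors n in O(√n) steps: divide out each prime completely before moving on, and whatever remains at the end is itself prime. A minimal Python version:

def prime_factors(n):
    """Return the prime factorization of n > 1 as a list of primes."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:      # divide out each prime factor completely
            factors.append(d)
            n //= d
        d += 1
    if n > 1:                  # whatever remains is itself prime
        factors.append(n)
    return factors

print(prime_factors(84))       # [2, 2, 3, 7]
print(prime_factors(48))       # [2, 2, 2, 2, 3]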
{"url":"https://noobtomaster.com/competitive-programming-c-plus-plus/prime-numbers-and-prime-factorization/","timestamp":"2024-11-09T04:32:59Z","content_type":"text/html","content_length":"26763","record_id":"<urn:uuid:9d90fb43-6e69-4550-9bf5-6c7044714097>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00392.warc.gz"}
Methods for Improving Generalization and Convergence in Artificial Neural Classifiers
Electrical Engineering
Artificial neural networks have proven to be quite powerful for solving nonlinear classification problems. However, the complex error surfaces encountered in such problems often contain local minima in which gradient-based algorithms may become trapped, causing improper classification of the training data. As a result, the success of the training process depends largely on the initial weight set, which is generated at random. Furthermore, attempting to analytically determine a set of initial weights that will achieve convergence is not feasible since the shape of the error surface is generally unknown. Another challenge which may be faced when using neural classifiers is poor generalization once additional data points are introduced. This can be especially problematic when dealing with training data that is poorly distributed, or in which the number of data points in each respective class is unbalanced. In such cases, proper classification may still be achieved, but the orientation of the separating plane and its corresponding margin of separation may be less than optimal.
In this dissertation, a set of methods designed to improve both the generalization and convergence rate of neural classifiers is presented. To improve generalization, a single-neuron pseudo-inversion technique is presented that guarantees optimal separation and orientation of the separating plane with respect to the training data. This is done by iteratively reducing the size of the training set until a minimal set is reached. The final set represents those points which lie on the boundaries of the data classes. Finally, a quadratic program formulation of the margin of separation is defined for the reduced data set, and an optimal separating plane is obtained. A method is then described by which the presented technique may be applied to non-linear classification by systematically optimizing each of the neurons in the network individually.
Next, a modified training technique is discussed, which significantly improves the success rate of gradient-based searches. To do this, the proposed method monitors the state of the gradient search in order to determine if the algorithm has become trapped in a false minimum. Once entrapment is detected, a set of desired outputs is defined using the current outputs of the hidden-layer neurons. The desired values of the remaining misclassified patterns are then inverted in an attempt to reconfigure the hidden-layer mapping, and the hidden-layer neurons are retrained one at a time. Linear separation is then attempted on the updated mapping using pseudo-inversion of the output neuron. The process is repeated until separation is achieved.
The second method is compared with other popular algorithms using a set of 8 nonlinear classification benchmarks, and the proposed method is shown to produce the highest success rate on all of the tested problems. Therefore, the proposed method does, in fact, achieve the desired result, which is to improve the rate of convergence of the gradient search by overcoming the challenge presented by local minima. Furthermore, the resulting improvement is shown to have a relatively low cost in terms of the number of required iterations.
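The dissertation's margin-optimal procedure is not reproduced here, but the generic idea of training a single linear neuron by pseudo-inversion can be sketched in a few lines: solve the least-squares problem w = pinv(X) t for targets in {-1, +1}. The toy data and all names below are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
# Two linearly separable point clouds (toy data).
X0 = rng.normal(loc=(-2.0, -2.0), scale=0.5, size=(20, 2))
X1 = rng.normal(loc=(2.0, 2.0), scale=0.5, size=(20, 2))
X = np.vstack([X0, X1])
t = np.array([-1.0] * 20 + [1.0] * 20)

# Augment with a bias column, then apply the Moore-Penrose pseudo-inverse.
Xa = np.hstack([X, np.ones((X.shape[0], 1))])
w = np.linalg.pinv(Xa) @ t              # weights and bias in one shot

pred = np.sign(Xa @ w)
print("training accuracy:", (pred == t).mean())
print("separating plane: %.3f x + %.3f y + %.3f = 0" % tuple(w))

Least squares of this kind gives a separating plane when one exists, but not necessarily the maximum-margin one; the dissertation's point is precisely to go beyond such a baseline.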
{"url":"https://etd.auburn.edu/handle/10415/2809","timestamp":"2024-11-13T17:52:43Z","content_type":"text/html","content_length":"27808","record_id":"<urn:uuid:9817dd52-49d3-416b-bcc9-5d60bd50f74e>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00485.warc.gz"}
Efficient characterisation of large deviations using population dynamics
We consider population dynamics, as implemented by the cloning algorithm, for the analysis of large deviations of time-averaged quantities. We use the simple symmetric exclusion process with periodic boundary conditions as a prototypical example and investigate the convergence of the results with respect to the algorithmic parameters, focussing on the dynamical phase transition between homogeneous and inhomogeneous states, where convergence is relatively difficult to achieve. We discuss how the performance of the algorithm can be optimised, and how it can be efficiently exploited on parallel computing platforms.
• driven diffusive systems
• exclusion processes
• large deviations in non-equilibrium systems
ASJC Scopus subject areas
• Statistical and Nonlinear Physics
• Statistics and Probability
• Statistics, Probability and Uncertainty
Associated dataset: Brewer, T. (Creator), Jack, R. (Creator), Clark, S. (Supervisor) & Bradford, R. (Supervisor), University of Bath, 8 May 2018. DOI: 10.15125/BATH-00457
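For readers unfamiliar with the cloning algorithm, the toy sketch below shows the population-dynamics idea on a two-state Markov chain rather than on the exclusion process studied here: propagate N copies, weight each copy by an exponential bias on the increment of the time-averaged observable, resample, and estimate the scaled cumulant generating function from the average weight per step. All parameters are invented for illustration.

import math
import random

P = [[0.9, 0.1], [0.2, 0.8]]   # transition probabilities of a toy chain
s = 0.5                        # biasing (tilting) parameter
N = 2000                       # population size
T = 500                        # number of time steps

random.seed(1)
walkers = [0] * N
log_mean_weight = 0.0

for _ in range(T):
    new_states, weights = [], []
    for x in walkers:
        y = 0 if random.random() < P[x][0] else 1
        new_states.append(y)
        weights.append(math.exp(s * (y == 1)))   # bias on time spent in state 1
    mean_w = sum(weights) / N
    log_mean_weight += math.log(mean_w)
    # Resample the population in proportion to the weights (the cloning step).
    walkers = random.choices(new_states, weights=weights, k=N)

print("psi(s) estimate:", log_mean_weight / T)

The convergence questions studied in the paper correspond to how such an estimate depends on the population size N, the trajectory length T, and the number of independent runs.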
{"url":"https://researchportal.bath.ac.uk/en/publications/efficient-characterisation-of-large-deviations-using-population-d","timestamp":"2024-11-11T13:37:03Z","content_type":"text/html","content_length":"57255","record_id":"<urn:uuid:f93c37d8-4ded-43f6-bf6c-ff96aecc7721>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00156.warc.gz"}
Bayes Theorem Formula: Statement, Proof and Uses
Bayes Theorem Formula – The Bayes theorem formula gives the probability of an event occurring under a given condition. It is a central tool for calculating conditional probability. The Bayes theorem is frequently referred to as the "causes" probability formula. For example, suppose we have drawn a white ball and need to determine the chance that it came from the second of three separate bags of balls, each containing balls of three different colours: red, white, and black. Conditional probability refers to the calculation of the probability of an occurrence based on other factors. This article will cover the statement and proof of the Bayes theorem, as well as its formula, derivation, and several solved questions.
Bayes Theorem Formula
Based on a past occurrence, the likelihood of an event may be ascertained using the Bayes theorem, which is named for the Reverend Thomas Bayes and applies throughout statistics and probability. The Bayes rule has several uses, such as Bayesian inference in the healthcare industry, which determines the likelihood of acquiring health problems at a given age, among others. The Bayes theorem is based on determining P(A | B) when P(B | A) is known. We will use examples to explain the usage of the Bayes rule in determining the likelihood of events, as well as its proof, formula, and derivation.
Also Read: Probability Formula
Bayes Theorem Statement
Bayes Theorem is a method for calculating the probability of an event depending on the occurrence of previous events. It is used to determine conditional probability. The Bayes theorem computes the probability based on a hypothesis. Let us now state and verify Bayes' Theorem.
According to the Bayes theorem formula, the conditional probability of an event A occurring given another event B is equal to the probability of B given A multiplied by the probability of A, divided by the probability of B. It is given as:
P(A|B) = P(B|A)P(A) / P(B)
Where,
• P(A|B) – The probability of event A occurring, provided that event B has happened
• P(B|A) – The probability of event B occurring, provided that event A has happened
• P(A) – The probability of event A
• P(B) – The probability of event B
Bayes Theorem Proof
Using the conditional probability formula,
P(A|B) = P(A⋂B) / P(B), where P(B) ≠ 0
Applying the probability multiplication rule,
P(A⋂B) = P(B|A) P(A) ………(1)
Applying the total probability theorem to the event B, with A_1, A_2, …, A_n a partition of the sample space,
P(B) = Σ (k = 1 to n) P(A_k) P(B|A_k) ………(2)
Putting the values of P(A⋂B) and P(B) from (1) and (2) into the conditional probability formula, we get, for any event A_i of the partition:
P(A_i|B) = P(B|A_i) P(A_i) / Σ (k = 1 to n) P(A_k) P(B|A_k)
Where,
P(A_i) = the prior probability of A_i
P(B|A_i) = the likelihood of B given A_i
P(A_i|B) = the posterior probability of A_i given B
Suggested Read: Mathematics Revision Notes for Probability – Free Download
Bayes Theorem Derivation
The concept of conditional probability is used to derive the Bayes theorem. There are two conditional probabilities in the formula for the Bayes theorem. Using the formula for conditional probability, the Bayes theorem can be derived as follows.
P(A|B) = P(A∩B) / P(B), where P(B) ≠ 0, so
P(A∩B) = P(A|B) P(B) ………(3)
P(B|A) = P(B∩A) / P(A), where P(A) ≠ 0, so
P(B∩A) = P(B|A) P(A) ………(4)
The joint probability of both events A and B being true satisfies
P(B∩A) = P(A∩B)
Substituting the values of P(B∩A) and P(A∩B) from equations (3) and (4) gives
P(B|A) P(A) = P(A|B) P(B)
P(A|B) = P(B|A) P(A) / P(B), where P(B) ≠ 0
Application of Bayes Theorem
Bayesian inference, a statistical inference approach, is one of the many applications of Bayes' theorem. It has been used in several fields, such as health, science, philosophy, engineering, sports, and law. For example, we can use Bayes' theorem to describe the accuracy of a medical test by taking into account the probability of any specific person having the illness as well as the overall accuracy of the test. Bayes' theorem uses prior probability distributions to derive posterior probabilities. Prior probability in Bayesian statistical inference refers to the likelihood of an event occurring before fresh evidence is obtained.
Bayes Theorem Questions With Solutions
Q.1. It is estimated that 50% of all emails are spam. There is software available that detects spam mail before it arrives at the inbox. It has a 99% accuracy rate for identifying spam mail and a 5% chance of misclassifying non-spam mail. Determine the probability that an email flagged as spam is in fact not spam.
Solution: Let
E1 = the email is spam
E2 = the email is not spam
A = the software flags the email as spam
Then, P(E1) = P(E2) = 1/2 = 0.5, P(A|E1) = 0.99, P(A|E2) = 0.05.
By the Bayes theorem formula,
P(E2|A) = P(A|E2) P(E2) / (P(A|E1) P(E1) + P(A|E2) P(E2))
P(E2|A) = (0.05)(0.5) / ((0.99)(0.5) + (0.05)(0.5)) ≈ 0.048
Therefore, the probability that an email flagged as spam is not spam is about 4.8%.
Q.2. Three urns contain white and black balls: the first urn contains three white and two black balls, the second urn has two white and three black balls, and the third urn has four white and one black ball. One urn is selected at random, and a ball drawn from it turns out to be white. What is the probability that it originated from the third urn?
Solution: Let
E1 = the ball is picked from the first urn
E2 = the ball is picked from the second urn
E3 = the ball is picked from the third urn
A = the selected ball is white
Then, P(E1) = P(E2) = P(E3) = 1/3, P(A|E1) = 3/5, P(A|E2) = 2/5, P(A|E3) = 4/5.
By the Bayes theorem formula,
P(E3|A) = P(A|E3) P(E3) / (P(A|E1) P(E1) + P(A|E2) P(E2) + P(A|E3) P(E3))
P(E3|A) = (4/5)(1/3) / ((3/5)(1/3) + (2/5)(1/3) + (4/5)(1/3)) = 4/9
Therefore, the probability that the white ball originated from the third urn is 4/9.
FAQs (Frequently Asked Questions)
1. Are the Bayes theorem and conditional probability the same?
Conditional probability is the probability of an event occurring given that previous events have occurred, and the Bayes theorem is derived from this concept. Bayes' law involves two conditional probabilities.
2. When is Bayes' theorem applicable?
Knowing the conditional probability of an event in one direction, we can apply the Bayes theorem to calculate the reverse conditional probability.
3. Where can the Bayes theorem be applied?
The Bayes rule can be employed to answer probabilistic questions based on evidence.
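As a quick check, the urn example (Q.2) can be reproduced with exact fractions in Python; this snippet is an illustrative addition, not part of the original article:

from fractions import Fraction

# P(E1) = P(E2) = P(E3) = 1/3 and the likelihoods P(A|Ei) from Q.2.
priors = [Fraction(1, 3)] * 3
likelihoods = [Fraction(3, 5), Fraction(2, 5), Fraction(4, 5)]

# Total probability of drawing a white ball, P(A).
evidence = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' theorem: P(E3|A) = P(A|E3) P(E3) / P(A).
posterior_third_urn = priors[2] * likelihoods[2] / evidence
print(posterior_third_urn)  # 4/9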
{"url":"https://www.extramarks.com/studymaterials/formulas/bayes-theorem-formula/","timestamp":"2024-11-06T20:20:40Z","content_type":"text/html","content_length":"629591","record_id":"<urn:uuid:a59c4c9b-a366-4a9a-95f7-9b48b0a3afc4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00284.warc.gz"}
Your Name Will Be In This Puzzle - Can You Find It?
To solve this puzzle, just look at the block of text we have provided, filled with seemingly random letters, and try to find your name. This puzzle is a lot harder than most people expect it to be, so don't try to solve it quickly.
Can you find your name in this puzzle picture? Here is the answer: you see what we did there? Most people are tricked by the very title of the article. Even though you're looking for your name, you should be looking for the words "your name".
Can you solve this math puzzle? You should share this puzzle with your friends if you enjoyed it.
If your sock drawer has 6 black socks, 4 brown socks, 8 white socks, and 2 tan socks, how many socks would you have to pull out in the dark to be sure you had a matching color pair?
Here is the answer! The answer is 5: there are only four colors, and four socks could all be different colors, so a fifth sock guarantees that two will be the same color.
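The sock answer is an instance of the pigeonhole principle: with c colors, c pulls could all be different, so c + 1 pulls force a repeat. A tiny illustrative check in Python (not part of the original puzzle page):

def pulls_for_matching_pair(num_colors):
    # Worst case: one sock of each color, then any further sock repeats a color.
    return num_colors + 1

print(pulls_for_matching_pair(4))  # 5, matching the answer above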
{"url":"https://wakeupyourmind.net/name-will-puzzle-can-find-2.html","timestamp":"2024-11-03T15:04:34Z","content_type":"text/html","content_length":"190888","record_id":"<urn:uuid:42eff39f-4381-4bc1-8cbb-791740994df6>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00876.warc.gz"}
Shapes with Given Area in context of base area 30 Aug 2024
Title: Geometric Analysis of Shapes with Given Base Area: A Theoretical Framework
Abstract: This article presents a theoretical framework for analyzing shapes with given base area, focusing on the geometric properties and relationships between different shapes. We derive formulas to calculate the height and perimeter of various shapes, including rectangles, triangles, trapezoids, circles, and ellipses.
Introduction: The concept of base area is fundamental in geometry, as it provides a measure of the size of a shape. In this article, we explore the geometric properties of shapes with given base area, examining the relationships between different shapes and deriving formulas to calculate their height and perimeter.
Rectangle: A rectangle with base area A has a length (l) and width (w) such that:
A = l × w
To find the width when the area and length are given, we rearrange the formula:
w = A / l
The perimeter (P) of a rectangle is given by:
P = 2(l + w)
Triangle: A triangle with base area A has a base length (b) and height (h) such that:
A = (1/2) × b × h
To find the height, we can rearrange the formula to get:
h = (2 × A) / b
The perimeter (P) of a triangle is given by:
P = a + b + c
where a, b, and c are the lengths of the sides.
Trapezoid: A trapezoid with base area A has two parallel bases of length a and b, and height h such that:
A = (1/2) × (a + b) × h
To find the height, we can rearrange the formula to get:
h = (2 × A) / (a + b)
The perimeter (P) of an isosceles trapezoid is given by:
P = a + b + 2c
where c is the common length of the two non-parallel sides.
Circle: A circle with base area A has radius r such that:
A = π × r^2
To find the radius, we can rearrange the formula to get:
r = √(A / π)
The circumference (C) of a circle is given by:
C = 2πr
Ellipse: An ellipse with base area A has semi-major axis a and semi-minor axis b such that:
A = π × a × b
To find the semi-major axis, we can rearrange the formula to get:
a = (A / π) / b
The perimeter (P) of an ellipse has no simple closed form; Ramanujan's approximation gives:
P ≈ π × [3(a + b) − √((3a + b)(a + 3b))]
where a and b are the semi-major and semi-minor axes, respectively.
Conclusion: This article has presented a theoretical framework for analyzing shapes with given base area, focusing on the geometric properties and relationships between different shapes. The formulas derived in this article provide a foundation for further research and applications in geometry and related fields.
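The formulas above translate directly into code. The following small Python helpers are an illustrative companion to the article (the function names are our own, not from the source):

import math

def rect_width(A, l):     return A / l                   # w = A / l
def tri_height(A, b):     return 2 * A / b               # h = 2A / b
def trap_height(A, a, b): return 2 * A / (a + b)         # h = 2A / (a + b)
def circle_radius(A):     return math.sqrt(A / math.pi)  # r = sqrt(A / pi)

def ellipse_perimeter(a, b):
    # Ramanujan's approximation, as stated above.
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

# Sanity check: for a = b the ellipse is a circle, so P = 2*pi*a.
print(ellipse_perimeter(1, 1))  # 6.283..., i.e. 2*pi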
{"url":"https://blog.truegeometry.com/tutorials/education/6894ea2c394c2b73dc7ed0ca99db65df/JSON_TO_ARTCL_Shapes_with_Given_Area_in_context_of_base_area_.html","timestamp":"2024-11-09T07:23:13Z","content_type":"text/html","content_length":"15410","record_id":"<urn:uuid:366d83e8-7016-43ad-a10c-e8572ac1e755>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00732.warc.gz"}
What is the formula for collision?
From the conservation of momentum, the equation for the collision between two objects is given by: m1v1 + m2v2 = m1v'1 + m2v'2. From this expression, the initial and final velocities can be derived.
How do you do collision problems in physics?
What is the formula of collision of momentum?
Before the collision, one car had velocity v and the other zero, so the velocity of the centre of mass of the system was v/2 before the collision. The total momentum is the total mass times the velocity of the centre of mass, so the total momentum, before and after, is (2m)(v/2) = mv.
What are the 3 types of collision?
Collisions are of three types: perfectly elastic collision, inelastic collision, and perfectly inelastic collision.
What is collision physics?
A collision, also called an impact, is, in physics, the sudden, forceful coming together in direct contact of two bodies, such as, for example, two billiard balls, a golf club and a ball, a hammer and a nail head, two railroad cars when being coupled together, or a falling object and a floor.
What is elastic collision physics 11?
An elastic collision is a collision in which there is no net loss in kinetic energy in the system due to the collision. Both momentum and kinetic energy are conserved in an elastic collision.
How do you calculate 2d collisions?
What is the formula for perfectly elastic collision?
An elastic collision is a collision where both the kinetic energy, KE, and momentum, p, are conserved. In other words, KE0 = KEf and p0 = pf. When we recall that KE = (1/2)mv², we will write (1/2)m1(v1i)² + (1/2)m2(v2i)² = (1/2)m1(v1f)² + (1/2)m2(v2f)².
How do you solve elastic collisions?
How do you find the mass of a collision?
1. Mass m1 = kg, v1 = m/s.
2. Mass m2 = kg, v2 = m/s.
3. Initial momentum p = m1v1 + m2v2 = kg m/s.
4. Initial kinetic energy KE = (1/2)m1v1² + (1/2)m2v2² = joules.
5. Then the velocity of mass m2 is v'2 = m/s,
6. because the final momentum is constrained to be p' = m1v'1 + m2v'2 = kg m/s.
What is momentum and collision?
Momentum is a vector quantity that depends on the direction of the object. Momentum is of interest during collisions between objects. When two objects collide, the total momentum before the collision is equal to the total momentum after the collision (in the absence of external forces).
How do you find total momentum before a collision?
1. Work out the total momentum before the event (before the collision): p = m × v.
2. Work out the total momentum after the event (after the collision):
3. Work out the total mass after the event (after the collision):
4. Work out the new velocity:
What are the 4 points of collision theory?
The collision energy must be greater than the activation energy for the reaction. The collision must occur in the proper orientation. The collision frequency must be greater than the frequency factor for the reaction. A collision between the reactants must occur.
What happens when 2 objects collide?
In a collision between two objects, both objects experience forces that are equal in magnitude and opposite in direction. Such forces often cause one object to speed up (gain momentum) and the other object to slow down (lose momentum).
What are the 2 types of collision?
There are two types of collisions. Inelastic collisions: momentum is conserved. Elastic collisions: momentum is conserved and kinetic energy is conserved.
What is simple collision theory?
Collision theory is a theory used to predict the rates of chemical reactions, particularly for gases.
The collision theory is based on the assumption that for a reaction to occur it is necessary for the reacting species (atoms or molecules) to come together or collide with one another.
What is e in elastic collision?
The coefficient of restitution (COR), also denoted by e, is the ratio of the final to the initial relative speed between two objects after they collide. It normally ranges from 0 to 1, where 1 would be a perfectly elastic collision.
What is head collision physics?
A head-on collision in physics is one in which the two objects move along the same straight line before impact. The analysis rests on conservation of momentum along that line, and it is often convenient to work in the centre-of-mass frame, in which the total momentum is zero.
What is inelastic collision example?
An inelastic collision occurs in a ballistic pendulum. Another example of an inelastic collision is a dropped ball of clay. A dropped ball of clay doesn't rebound; instead it loses kinetic energy through deformation when it hits the ground and changes shape.
What is perfectly inelastic collision Class 11?
Perfectly inelastic collision: it is defined as the collision between two bodies in which the maximum amount of kinetic energy of the system is lost.
What is KE formula?
Kinetic energy is directly proportional to the mass of the object and to the square of its velocity: K.E. = (1/2)mv². If the mass has units of kilograms and the velocity units of meters per second, the kinetic energy has units of kilogram-meters squared per second squared.
What is AABB collision detection?
AABB stands for Axis-Aligned Bounding Box. It is an algorithm to detect collision between two rectangles whose edges are parallel with the coordinate axes. Basically, we check whether the two rectangles overlap with each other or not.
What is a two dimensional collision?
A collision in two dimensions obeys the same rules as a collision in one dimension: total momentum in each direction is always the same before and after the collision, and total kinetic energy is the same before and after an elastic collision.
How do you find the collision between two rectangles?
One of the simpler forms of collision detection is between two rectangles that are axis aligned, meaning no rotation. The algorithm works by ensuring there is no gap between any of the 4 sides of the rectangles. Any gap means a collision does not exist.
What is the formula of collision frequency?
Show that the number of collisions a molecule makes per second, called the collision frequency, f, is given by f = v̄/l_m, where v̄ is the mean speed and l_m the mean free path, and thus f = 4√2 π r² v̄ N/V.
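The conservation laws above pin down one-dimensional collisions completely. A brief Python sketch of the two standard textbook cases (an illustrative addition; the function names are ours):

def perfectly_inelastic_1d(m1, v1, m2, v2):
    # The bodies stick together; only momentum is conserved.
    return (m1 * v1 + m2 * v2) / (m1 + m2)

def elastic_1d(m1, v1, m2, v2):
    # Momentum and kinetic energy are both conserved.
    v1f = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2f = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1f, v2f

# Equal masses exchange velocities in an elastic collision:
print(elastic_1d(1.0, 5.0, 1.0, 0.0))              # (0.0, 5.0)
# The two-car example above: the pair moves on at v/2 when they stick:
print(perfectly_inelastic_1d(1.0, 5.0, 1.0, 0.0))  # 2.5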
{"url":"https://physics-network.org/what-is-the-formula-for-collision/","timestamp":"2024-11-10T13:06:04Z","content_type":"text/html","content_length":"309562","record_id":"<urn:uuid:ed1edea3-7749-4ca4-a220-82a947459e36>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00767.warc.gz"}
The Lipschitz constant of perturbed anonymous games
The Lipschitz constant of a game measures the maximal amount of influence that one player has on the payoff of some other player. The worst-case Lipschitz constant of an n-player k-action δ-perturbed game, λ(n, k, δ), is given an explicit probabilistic description. In the case of k ≥ 3, it is identified with the passage probability of a certain symmetric random walk on ℤ. In the case of k = 2 and n even, λ(n, 2, δ) is identified with the probability that two i.i.d. binomial random variables are equal. The remaining case, k = 2 and n odd, is bounded through the adjacent (even) values of n. Our characterization implies a sharp closed-form asymptotic estimate of λ(n, k, δ) as δn/k → ∞.
Bibliographical note
Publisher Copyright: © 2021, The Author(s), under exclusive licence to Springer-Verlag GmbH Germany, part of Springer Nature.
We are grateful to MathOverflow for facilitating this collaboration. We thank the members of the Facebook group https://www.facebook.com/groups/305092099620459/ for pointing out references related to the reflection principle. We thank the anonymous editor and referee for many corrections and ideas for improvement that helped to shape the final version of the paper. Amnon Schreiber is supported in part by the Israel Science Foundation Grant 2897/20. Ron Peretz is supported in part by the Israel Science Foundation Grant 2566/20.
Funders: Israel Science Foundation, grants 2566/20 and 2897/20
• Anonymous games
• Approximate Nash equilibrium
• Large games
• Perturbed games
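For intuition about the k = 2 identification, the quantity "probability that two i.i.d. binomial random variables are equal" can be computed directly. The Python fragment below is purely illustrative; the specific parameters (m, p) entering the paper's identification of λ(n, 2, δ) are not reproduced here.

from math import comb

def prob_equal_binomials(m, p):
    # P(X = Y) = sum over k of P(X = k)^2 for i.i.d. X, Y ~ Binomial(m, p).
    pmf = [comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)]
    return sum(q * q for q in pmf)

print(prob_equal_binomials(10, 0.1))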
{"url":"https://cris.biu.ac.il/en/publications/the-lipschitz-constant-of-perturbed-anonymous-games","timestamp":"2024-11-03T20:06:11Z","content_type":"text/html","content_length":"55830","record_id":"<urn:uuid:b4bc8fb8-2390-4c84-99ef-ef81157c35b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00846.warc.gz"}
PR1ME Mathematics Practice Book 2A
The best way of learning a subject is through practice, practice, and more practice.
• Scholastic PR1ME Mathematics is a world-class, forward-thinking and innovative core curriculum math program, based on the curriculum standards and effective teaching and learning practices of the global top performers in mathematics: Singapore, the Republic of Korea and Hong Kong, consistent top performers in international studies such as TIMSS (the Trends in International Mathematics and Science Study, which evaluates the skills and knowledge of grade 4 and grade 8 students). Why PR1ME works: it is proven to be the world's best practice. PR1ME Mathematics has been adapted from the highly acclaimed and widely proven Primary Mathematics project developed by the Ministry of Education in Singapore; developed in collaboration with the Singapore Ministry of Education, its proven approach and consistent lesson design create a powerful learning ecosystem for premier instruction.
• PR1ME Mathematics covers five strands of mathematics across six grade/year levels: numbers and operations, measurement, geometry, data analysis and algebra. It teaches via problem solving through the systematic development of problem sets: students progress through different types of problem sets, including word problems, non-routine problems, problem-posing tasks and mathematical modeling. The two PR1ME practice books at each year level are designed to give students the opportunity to apply and consolidate their mathematical skills. Sample questions from such practice books include "Which of the following gives the best estimate for the volume of air in the balloon, in cubic inches?" and "Kylie needs to read a book with 247 pages in 3 weeks."
• Teacher's resources have been created to support teachers as they use PR1ME in their classrooms: practice tests for summative assessment, classroom posters for easy reference, an implementation guide to develop skills and concepts, and a PR1ME professional learning workbook for professional development. Each scenario is a description of an episode that occurred in a mathematics lesson. Sample pages of books for students and teachers from grade 1 to grade 6 show how PR1ME Mathematics works, and the program can be explored by grade/year level or by strand.
• PR1ME Mathematics Interactive Edition is a teacher resource for front-of-class teaching, practice and assessment. It can be installed on a computer or accessed online through Scholastic Learning Zone, and both versions can be viewed using computers running Windows XP/7/8 or Mac OS 10.
• Alpha Mathematics, which can likewise be explored by grade/year level or by strands, covers four strands of mathematics across five grade/year levels: numbers and operations, measurement, geometry and data analysis.
• The Primary Mathematics Standards Edition is a series of elementary math textbooks and workbooks from the publishers of Singapore's successful Primary Mathematics series. The main feature of this series is the use of the concrete-pictorial-abstract approach; two or three lessons each week use concrete objects and hands-on work. Extra practice books follow the topical arrangement in the textbooks and workbooks, and each workbook aims to consolidate and reinforce the math skills taught in the textbooks, helping students to master their lessons with confidence. Friendly notes at the beginning of each unit provide some reference.
• Related Singapore math listings: Primary Mathematics 2A, Workbook, Standards Edition (grade 2 math; ISBN 9780761469773 / 076146977X); Primary Math Intensive Practice 2A and 2B (Singapore Math Inc., paperback, January 1, 2016); the Primary Mathematics Intensive Practice books and the Primary Mathematics 6A textbook by Kho Tek Hong (sold as cheap used copies, one noted as a reprint paperback with 3 pages of problems filled in with pencil, the cover pulling from the lower staple, and clear tape along the stapled spine); and a placement/assessment test for Singapore Primary Mathematics 6A, U.S. Edition, which covers only material taught in Primary Mathematics 6A. Contributors listed include Foong Pui, Pearlyn Lim Li Gek, Hua Wong Oon and Dr. Hermanson.
• Other series and titles mentioned: the New General Mathematics series for junior secondary schools, which previously consisted of the student's book and the student's practice book, with a teacher's guide revised to align to the NERDC curriculum; Scholastic PR1ME Mathematics Practice Book 2A, listed at Booktopia, and PR1ME Mathematics Practice Book 2B, listed at School Essentials, the one-stop shop for educational and teaching resources that enable learning and literacy for school children Australia-wide; the Arkbird Prime Mathematics workbooks for classes I to V, intended to supplement the Prime Mathematics textbook with adequate practice exercises; a series of 12 workbooks specially written for pre-primary school pupils to build a solid foundation in mathematics; New Enjoying Mathematics (revised edition) Coursebook Primer A, whose revised edition promotes the approach of teaching mathematics by linking school knowledge with the child's everyday experience; STP Mathematics 1A teacher's notes and answers, a book that starts with a large section on arithmetic and includes notes and answers for individual exercises such as Exercise 2C; Beast Academy math books for elementary-aged learners, in which the guide book is written in an engaging comic-book style and is paired with a practice book, with BA Level 2A including chapters on place value, comparing, and addition; Bridges in Mathematics grade 3 practice book blacklines; reproducible math drill practice problems for grades 3-5 covering digits 0-12; and a grade 5 language arts practice book (Reach for Reading, Grade 5).
• Miscellaneous notes gathered with the listings: a book on the theory of prime numbers (touching on the Kummer theorem), written for both experts and non-experts, in which the reader can find a great many open problems, results and records, along with descriptions of a number of algorithms to test numbers for primality and, if composite, to factor them; a video from the Basic Math Video Lab series from the City College of San Francisco, featuring a mathematics instructor; and a note that daily math assignments are scheduled in the "learning the basics" part of Bigger Hearts for His Glory.
• Standards for mathematical practice quoted in the books: Mathematical Practice 1 - when presented with a problem, I can make a plan and carry out my plan; Mathematical Practice 4 - I can recognize math in everyday life, apply mathematics to problems in everyday life, identify quantities in a practical situation, and interpret results in the context of the situation and reflect on whether the results make sense; and consider the available tools when solving problems.
{"url":"https://paddkgalacstib.web.app/326.html","timestamp":"2024-11-07T03:30:10Z","content_type":"text/html","content_length":"16408","record_id":"<urn:uuid:fcb1b3c4-abb4-47ff-b0c4-7d9408f5eb7f>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00700.warc.gz"}
FPGA Lab 10: Counters Counters are fundamental components in digital design, playing a critical role in various applications such as time measurement, event counting, frequency division, and digital clocks. A counter is a sequential circuit that goes through a prescribed sequence of states upon the application of input pulses. The primary function of a counter is to count the number of occurrences of an input signal and represent this count in binary form. Types of Counters • Binary Counters: These counters count in binary numbers and can be either up-counters or down-counters. An up-counter increments the count with each input pulse, whereas a down-counter decrements the count. • Decade Counters: These are specialized counters that go through ten states per cycle, making them useful for decimal digit counting applications. • Ring Counters: These counters have a single high bit that circulates around the register, providing a straightforward way to generate timing signals. • Johnson Counters: Also known as twisted ring counters, these have a similar structure to ring counters but offer double the number of states for a given number of flip-flops. Synchronous vs. Asynchronous Counters • Synchronous Counters: All the flip-flops are clocked simultaneously, making these counters faster and more reliable for high-speed operations. They are easier to design to avoid timing issues such as glitches. • Asynchronous Counters: Also known as ripple counters, these have flip-flops that are clocked at different times, with the clock signal rippling through the counter stages. They are simpler to design but slower and prone to timing problems. Applications of Counters Counters are used extensively in digital systems for: • Timing operations: Keeping track of time intervals in clocks and timers. • Frequency division: Reducing the frequency of a clock signal for various purposes. • Event counting: Recording the number of occurrences of specific events in systems like digital odometers and production line monitoring. • Sequential control: Managing sequences of operations in finite state machines and control units. Implementing Counters in FPGA Field-Programmable Gate Arrays (FPGAs) provide an excellent platform for implementing counters due to their reconfigurability and parallel processing capabilities. In this lab, you will learn to design and implement various types of counters using Verilog, simulate their behavior, and test them on FPGA development boards such as the Intel DE10-Lite.
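As a concrete starting point for the lab exercises, a synchronous binary up-counter takes only a few lines of Verilog. The sketch below is generic; the module and signal names are illustrative rather than taken from the lab handout:

// Minimal synchronous 4-bit up-counter with an active-low synchronous reset.
module up_counter (
    input  wire       clk,
    input  wire       rst_n,
    output reg  [3:0] count
);
    always @(posedge clk) begin
        if (!rst_n)
            count <= 4'b0000;       // reset the count
        else
            count <= count + 1'b1;  // increment on every clock edge
    end
endmodule

A down-counter replaces the increment with a decrement, and a decade counter adds a comparison that wraps the count back to 0 after it reaches 9.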
{"url":"https://www.airsupplylab.com/verilog-fpga-lab/ee2440-fpga_lab-10-counters.html","timestamp":"2024-11-09T10:38:58Z","content_type":"text/html","content_length":"27568","record_id":"<urn:uuid:7ee98b3e-b364-40c4-afd9-e637ea3da9ec>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00475.warc.gz"}
Quilt Math Part I - calculating binding
So let's start the Quilt Math series. Calculating binding. How much fabric do you need to bind your quilt? Before we get started, here's a button for your blogposts or sidebar.
<div align="center"><a href="http://lilysquilts.blogspot.co.uk/2012/09/quilt-math-1-2-3.html" title="Lily's Quilts Math"><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEifKp1cBaIWIYq-meiodx2ZnK56dvJPn_2EH26EgxUmgMp11kukqZotBoDY4nL7pjpWNRi6BfmHgzXHi4delMQqOe_zZISBAADO3JVOlSYB07znPKXbeqEdLwFPsOZ64yahMMlHvf9w9keP/s170/QMB.png" alt="Lily's Quilts Math" style="border:none;" /></a></div>
I make binding from WOF strips (i.e. cut from one selvedge to the other). I make straight binding, not bias binding. I usually make 2 1/2" double fold binding but we can also calculate for narrower binding. And the fabric I am using is usually 44" wide. So this tutorial is based on 2 1/2" double fold WOF binding from 44" fabric.
1. How long does the binding need to be? The binding needs to be as long as the outside perimeter of your quilt plus a little bit for corners (see step 2). So, if we have a quilt that is 70" x 90", we calculate 2 x 70" = 140". Then 2 x 90" = 180". Then add those two together. So 140" + 180" = 320". So we've added up the outside edges of the quilt.
2. Then add 1" per corner since mitring corners does take a little bit of extra fabric. So add on 4". 320" + 4" = 324".
3. If you're using fabric that is 44" wide, I take off 2" to account for selvedges and pre-washing shrinkage. I then take off an additional 4" to deal with the fact that, when making a diagonal join on a length of binding, some fabric is lost in those seams. So we are making calculations with fabric that is 38" wide.
4. To work out how many 38" WOF strips are needed to make 324" of binding, you need to divide 324" by 38". So 324" ÷ 38" = 8.53. Round 8.53 up to 9. Always round up, never down, or you will end up with less than you need. So we need 9 WOF strips.
5. If cutting 2 1/2" binding, you need to multiply the number of WOF strips (9) by the width of the binding (2 1/2" or 2.5"). So, 9 x 2.5" = 22.5". So for a quilt 70" x 90", you need 22.5" of fabric.
6. Since a yard is 36":
• 1/4yd is 9"
• 1/2yd is 18"
• 3/4yd is 27"
• 1 yd is 36"
So you will need 3/4 yd fabric for this binding.
7. Since a metre is roughly 39 1/2":
• 1/4 metre is roughly 9 3/4"
• 1/2 metre is roughly 19 1/2"
• 3/4 metre is roughly 29 1/2"
• 1 metre is roughly 39 1/2"
So you will need 3/4 metre fabric for this binding.
8. So here is a shortened version of how you calculate your binding requirements:
• Calculate 2 x width.
• Then calculate 2 x length.
• Add those two together and add on 4" for corners.
• Divide the whole lot by 38" (the width of 44" wide fabric after taking out selvedges and diagonal seams).
• Multiply that figure by 2.5 for a 2 1/2" binding.
• OR multiply that figure by 2.25 for a 2 1/4" binding.
• OR multiply that figure by 2 for a 2" binding.
9. Or in algebra, where W = width and L = length:
2.5 x ((2W + 2L + 4) ÷ 38) = the amount of fabric required to make a 2 1/2" double fold binding.
2.25 x ((2W + 2L + 4) ÷ 38) = the amount of fabric required to make a 2 1/4" double fold binding.
2 x ((2W + 2L + 4) ÷ 38) = the amount of fabric required to make a 2" double fold binding.
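And for the programmers among you, here is the same calculation as a little Python function (just an illustration of steps 8 and 9, with the same assumptions: 44" wide fabric with 38" usable, and remembering to round the number of strips up):

import math

def binding_fabric_inches(width, length, strip_width=2.5, usable_wof=38):
    perimeter = 2 * width + 2 * length + 4     # +4" for the mitred corners
    strips = math.ceil(perimeter / usable_wof) # always round up, never down
    return strips * strip_width

print(binding_fabric_inches(70, 90))  # 22.5 -> buy 3/4 yd
print(binding_fabric_inches(50, 50))  # 15.0, homework quilt A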
50" x 50" quilt. B. 60" x 80" quilt. C. 40" x 40" quilt. At the risk of sounding a wee bit of a #quiltmathnerd, I'd love to hear how you calculate binding. And please comment or email if you're struggling with any of the posts in this series. I know quilt math doesn't click with everyone so please let me know if you can't quite wrap your head around it yet. A. 15" B. 20" C. 12.5" 1. Thanks for this great tutorial, look forward to the rest of the series, I will link up a post and share on my facebook page as it is brilliant! 2. I do it your way without taking out selvage allowance. I have never been caught short, so to speak, but will take it into account from now on. Great tutorial Lynne. Di x 3. I calculate binding by going to this website http://quiltbug.com/articles/binding-calculator.htm and checking their chart!! :o) 4. Lynne, your method is great but I guess I have leaned to cheat. I have a free app on my iPad called The Quilter's Little Helper by Robert Kaufman fabrics...all I have to do is fill in a few boxes with WOF and width of binding and it spits out the number of strips I have to cut. But if my battery is charging, I will use your method! Thanks for sharing! 5. Thanks so much for doing this series Lynne. I will be referring back to this post next time I do binding. You may need to sit down if I tell you how I do it now? - I grab some fabric and cut it up until it looks about right. Then two things happen - either I find I'm short by about 6" and have to hastily try adding an extra strip whilst the rest is attached to the quilt or I end up making so much I have enough to wrap round my house about twice. I'm not joking about any of this. Sometimes patterns tell me how much to make. But I'm sure I don't do anything fancy like try and work it out for myself with proper maths and stuff. 6. Very nice binding post. I use this method...sort of. I measure the side of the quilt and then sort of guesstimate how much I need. I always add a bit extra so I don't run short. If I actually do the math (sometimes) I have less leftover at the end. 7. We think alike. This is exactly how I do the math. Being an accountant is handy at times. 8. I do basically what you do, but I skip the step of adding extra for the corners and all of that. If I come up with 8.124 or something similar I know that 9 strips will cover all of that anyway, if I come up with 8.983 I just do 10. 9. Great tutorial! I pretty much do the same but my general rule of thumb is just 2 x the sides + 2 x the length + 15 inches. Don't ask me why 15 inches but it just always seems to work. Your way is much more thought out :) 10. I like that you have set out nice algebra statements, very mathnerdy! I add 10" not 4" for corners and such but there is always extra, otherwise this is exactly how I do the calculations too. 11. I use a similar method, although I didn't think to use 38 inches instead of WOF. I used the actual WOF, then added an extra strip to cover contingencies. Your method is more exact! 12. I also use a similar method but add an additional 12" and have never been caught short, however this is great having the guess work taken out of just how much fabric to purchase when needed! Thank you very much for all the time you are putting into the mathematics for us. :) 13. I calculate the perimeter, add 12", divide by 40", and add an extra strip just in case. 14. I do it just like you do, and I was a math major. Your instructions were clear as day. Thanks. 15. Wonderfully clear instructions! 
I do it just like you but instead of working with a 38" width I assume 40" (but then add 20" to account for corners and the diagonal joins) because 40 is an easier number to work with. (I try not to use a calculator...) I make my binding 2 5/8" wide as I found that I was often struggling to pull the back over the stitches (from attaching it to the front) and that extra eighth of an inch makes a big difference - I guess my 1/4" is more generous when attaching binding than it is for piecing!! 16. I calculate binding pretty much the same way, but I am less careful. I don't add 4" for corners, and I figure I get 40" for WOF (but I don't prewash so no shrinkage). Oh, and if I need like 5.25 WOF strips, I will often just cut 5 and then use a scrap of something else. Because I like a scrappy binding. I like to make that scrappy section fall on a corner if possible. 17. I do it about the same way...only using 42" as wof and adding 12" to the perimeter rather than subtracting for all of the seams etc. Well done. 18. I do it just like Cindy Sharp, above, and have never run short. ' looking forward to your future posts. I have a friend who wants to learn to quilt and plan to refer her to your blog as a 19. Awesome! 20. Wonderful, but my head hurt... I will print this off and keep it for reference!!! Thanks x 21. Thank you for this well drawn out post on the math of binding! It will be so helpful in my future projects! 22. Thanks for the great start to the math series! I'm happy to see that I have been doing it pretty much like you! 23. Oh just like the Sunday nights of olde - flashback 1984! I work it out the same, just hadn't thought of it as maths x 24. Haha, I make scrappy binding and add a bit in at the end if I haven't made enough #lazymathquilter! 25. I use bias binding, so I usually use the charts at jaybird quilts. 26. Great tute. I do it basically the same, I add 12 for wiggle room and divide by 40, never short. 27. Lynne, Thank you so much! It is clear and concise. Have a great day. 28. Thanks for the quilt math :) 29. I actually made a Quilt Binding Calculator with the help of my husband two years ago! I think this will help mathematically challenged people out there. http://www.wambers-whimsies.com/ I have instructions to calculate for double fold binding.. http://www.wambers-whimsies.com/useful-tools/ (Didn't mean to steal your thunder Lynne!) 30. I forgot to add that my calculator adds 12 inches and rounds up to the nearest whole strip, so you'll almost always end up with more than enough binding to complete your quilt. 31. Thank you for all the work you have done to provide us with absolutely clear end precise instructions! I keep it as a valuable reference document. Until now I did some calculating and some guessing and I always ended up with too much binding ,so I appreciate this method that helps me to reduce my scraps! 32. I pretty much do the same thing when I calculate binding, but I add on about 12 extra inches so I have plenty of fabric to work with when I go to sew the two ends together. I hate wrestling with a quilt that only has an itty bitty bit of binding left to sew together. 33. Oh my gosh, this is SO geeky 34. I pretty much do the same when I'm in #quiltmathnerd mood, but nearly always I do it the way my mother measured things: If you are right handed: Hold quilt corner between thumb and fingers of right hand and stretch your arm out. 
Put fingers and thumb of left hand gripping further along length of quilt on tip of nose and measure in multiples of nose to outstretched finger grip! Remember to Count. Replicate that length with string or ready prepped binding. Add on a bit for fat thumbs ... then add on about half width of quilt. (if I'm doing that nifty leaving long ends for diagonal joining Works perfectly for me every time. I know it's not very scientific, but it would work on a desert island with no ruler! 35. Thank you for your quilt math. I'm just starting quilting and a new blog and I get very inspired by yours.
{"url":"https://lilysquilts.blogspot.com/2012/10/quilt-math-part-i-calculating-binding.html","timestamp":"2024-11-14T06:38:45Z","content_type":"text/html","content_length":"176234","record_id":"<urn:uuid:e92594b9-100b-443f-b423-d1f85abe52c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00716.warc.gz"}
Help Page
Creating additional Prismatic Elements: The Inflate Tool
Time to read: ~4 min
The Inflate tool allows Solid Elements to grow using Prismatic Elements, or Shell Elements to be converted to (multiple) Solid Elements. Inflation of Shell or Solid Elements is aimed at supporting workflows where either an existing mesh is to be augmented (such as where a solid mesh was generated for a structural FE solver but is now to be used with a solver that requires better resolution near the surface) or where a Shell/Sheet type mesh is to be used in an application that only supports Solid Elements. The workflow will depend on whether the source data consists of Solid Elements that are to be expanded or whether the data is made up of Shell Elements that are to be inflated/thickened into Solid Elements.
From Shell Body
If the starting data is a Shell Body, then the option to use (in the sidebar under the 'Inflate' tab) is 'Thicken Surfaces'. This method allows Prismatic elements to be created from the Shell elements along the normals of their faces. It is important that the Elements' face orientation is correct, otherwise the growth of the elements can be unpredictable.
If the number of Layers is set to an odd number and the option to 'Apply to Both Sides' is set, then the first layer generated extends halfway on either side of the originals:
Single Inflation Layer from Shell elements applied to both sides
For the case where more than one layer is to be generated (and the number of layers is still an odd number), the resultant layers are distributed equally on both sides:
Three Inflation Layers applied to both sides of Shell Elements
In the case where there is an even number of layers to be generated, the offset (applied to both sides) is equal, but the original Shell elements now lie between the first and second layer:
Two Inflation Layers equally applied to either side of Shell Elements
From Solid Body
For the case where the input data contains Solid Elements, the option 'Inflate Solid Surfaces' is appropriate. This approach generates prismatic elements on top of the exterior surfaces of a solid body, essentially growing into unused/undefined space:
Inflation using Inflate Solid Surfaces Option
In this view, the interior elements are made visible by generating a Mid-Plane through the body after inflation. Note that the inflation will always proceed along the normal vector, computed at each Node as the average over that Node's attached surfaces. In this case, the normal directions in the corners are the average of the top and side surfaces, and thus the extension is at an angle.
Number of Layers
The number of Layers is the number of iterations over which the mesh is iteratively expanded. A value of 1 means that one layer of elements will be added. After each layer has been added, the local normal vectors are recomputed, which helps (but does not necessarily prevent) internal pockets from becoming distorted or overlapping.
First Layer Thickness
The amount of offset applied to the newly formed Nodes is a multiple of the normal vector (which has a length equal to 1). Unlike the Thicken tool for Polygon-Body data, the amount of offset is uniform and not dependent on local effects. If only one layer is to be generated, then the thickness of this layer is equal to this value. If multiple layers are being generated, then each layer's thickness is a function of the first layer thickness, the layer growth rate and the number of layers (see below).
Growth Rate
The thickness of each layer is a function of the first layer thickness, the growth rate and the number of layers. The thickness of layer n (counting from n = 1) is 1st_layer_thickness*(growth_rate^(n-1)), so for the first layer (n = 1) this is simply 1st_layer_thickness. For the second layer (n = 2), this is 1st_layer_thickness*growth_rate; for the third layer (n = 3), this is 1st_layer_thickness*growth_rate^2. From this it can be seen that a value for Growth Rate between 0 and 1 makes each following layer thinner than the previous one, and values larger than 1 make each following layer thicker. A value of 1 means that each layer has the same thickness.
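The layer thicknesses can be tabulated with a short script (an illustrative calculation only, not part of the tool itself):

def layer_thicknesses(first_layer_thickness, growth_rate, num_layers):
    # thickness of layer n (1-based) = first_layer_thickness * growth_rate**(n - 1)
    return [first_layer_thickness * growth_rate ** (n - 1)
            for n in range(1, num_layers + 1)]

print(layer_thicknesses(1.0, 1.2, 4))  # [1.0, 1.2, 1.44, 1.728], growth rate > 1
print(layer_thicknesses(1.0, 1.0, 3))  # [1.0, 1.0, 1.0], every layer the same

The total inflation depth is simply the sum of the individual layer thicknesses.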
{"url":"https://www.solveering.com/instep/webhelp/elements/elements_inflate.aspx","timestamp":"2024-11-04T10:25:26Z","content_type":"text/html","content_length":"44731","record_id":"<urn:uuid:67c27746-2e6e-4739-8a4b-4cd0d0a1802a>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00818.warc.gz"}
• Morita equivalences of Zhu’s associative algebra and mode transition algebras
Abstract: The mode transition algebra 𝔄, and the d-th mode transition subalgebras 𝔄_d ⊂ 𝔄, are associative algebras attached to vertex operator algebras. Here, under natural assumptions, we establish Morita equivalences involving Zhu’s associative algebra 𝖠 and 𝔄_d. As an application, we obtain explicit expressions for higher level Zhu algebras as products of matrices, generalizing the result for the 1-dimensional Heisenberg vertex operator algebra from our previous work. This theory applies, for instance, to certain VOAs with commutative and connected Zhu algebra, and to rational vertex operator algebras. Examples are given.
• Conformal blocks on smoothings via mode transition algebras (with Chiara Damiolini and Daniel Krashen)
ABSTRACT: Here we define a series of associative algebras attached to a vertex operator algebra V, called mode transition algebras, showing they reflect both algebraic properties of V and geometric constructions on moduli of curves. One can define sheaves of coinvariants on pointed coordinatized curves from V-modules. We show that if the mode transition algebras admit multiplicative identities with certain properties, these sheaves deform as wanted on families of curves with nodes (so V satisfies smoothing). Consequently, coherent sheaves of coinvariants defined by vertex operator algebras that satisfy smoothing form vector bundles. We also show that mode transition algebras give information about higher level Zhu algebras and generalized Verma modules. As an application, we completely describe higher level Zhu algebras of the Heisenberg vertex algebra for all levels, proving a conjecture of Addabbo–Barron.
• Oberwolfach Report: Algebraic structures on moduli of curves from vertex operator algebras
ABSTRACT: This is an extended abstract for a talk given at the workshop “Recent Trends in Algebraic Geometry”, held at Oberwolfach during 18-23 June 2023. The presentation was about a series of associative algebras attached to a vertex operator algebra V, called mode transition algebras, defined in a recent paper with Damiolini and Krashen. Mode transition algebras reflect both the algebraic structure of V and the geometry of sheaves of coinvariants on the moduli space of curves derived from representations of V. On the geometric side, we show coherent sheaves of coinvariants are locally free when the mode transition algebras admit unities that act as identities on modules (we call these strong unities). Some questions are suggested.
• Factorization Presentations (with Chiara Damiolini and Daniel Krashen)
ABSTRACT. Modules over a vertex operator algebra V give rise to sheaves of coinvariants on moduli of stable pointed curves. If V satisfies finiteness and semi-simplicity conditions, these sheaves are vector bundles. This relies on factorization, an isomorphism of spaces of coinvariants at a nodal curve with a finite sum of analogous spaces on the normalization of the curve. Here we introduce the notion of a factorization presentation, and using this, we show that finiteness conditions on V imply the sheaves of coinvariants are coherent on moduli spaces of pointed stable curves without any assumption of semisimplicity.
• On global generation of vector bundles on the moduli space of curves from representations of vertex algebras (with C. Damiolini). In Algebraic Geometry. (arxiv, journal)
ABSTRACT.
We prove sheaves of coinvariants on the moduli space of stable pointed rational curves defined by simple modules over a vertex operator algebra V are globally generated when V is of CFT-type and generated in degree 1. Examples where global generation fails, and evidence of positivity, are given for more general V.
• On an equivalence of divisors on $\overline{M}_{0,n}$ from Gromov-Witten theory and Conformal Blocks (with L. Chen, L. Heller, E. Kalashnikov, H. Larson, and W. Xu), in Transformation Groups. (arxiv, journal)
ABSTRACT. We consider a conjecture that identifies two types of base point free divisors on $\overline{M}_{0,n}$. The first arises from Gromov-Witten theory of a Grassmannian. The second comes from first Chern classes of vector bundles associated to simple Lie algebras in type A. Here we reduce this conjecture on $\overline{M}_{0,n}$ to the same statement for n = 4. A reinterpretation leads to a proof of the conjecture on $\overline{M}_{0,n}$ for a large class, and we give sufficient conditions for the non-vanishing of these divisors.
• Vertex algebras of CohFT-type (with Chiara Damiolini and Nicola Tarasca), Vol. I, 164–189, London Math. Soc. Lecture Note Ser., 472, Cambridge Univ. Press, Cambridge, 2022; volume in honor of William Fulton on the occasion of his 80th birthday
ABSTRACT. Representations of certain vertex algebras, here called of CohFT-type, can be used to construct vector bundles of coinvariants and conformal blocks on moduli spaces of stable curves [DGT2]. We show that such bundles define semisimple cohomological field theories. As an application, we give an expression for their total Chern character in terms of the fusion rules, following the approach and computation in [MOPPZ] for bundles given by integrable modules over affine Lie algebras. It follows that the Chern classes are tautological. Examples and open problems are discussed.
• On Factorization and vector bundles of conformal blocks from vertex algebras (with Chiara Damiolini and Nicola Tarasca), to appear in Annales scientifiques de l’École normale supérieure. (arXiv)
ABSTRACT. Modules over conformal vertex algebras give rise to sheaves of coinvariants and conformal blocks on moduli of stable pointed curves. Here we prove the factorization conjecture for these sheaves. Our results apply in arbitrary genus and for a large class of vertex algebras. As an application, sheaves defined by finitely generated admissible modules over vertex algebras satisfying natural hypotheses are shown to be vector bundles. Factorization is essential to a recursive formulation of invariants, like ranks and Chern classes, and to produce new constructions of rational conformal field theories and cohomological field theories.
• Conformal blocks from vertex algebras and their connections on $\overline{M}_{g,n}$ (with Chiara Damiolini and Nicola Tarasca), Geometry and Topology (journal), (arxiv)
ABSTRACT. We show that coinvariants of modules over conformal vertex algebras give rise to quasi-coherent sheaves on moduli of stable pointed curves. These generalize Verlinde bundles or vector bundles of conformal blocks defined using affine Lie algebras studied first by Tsuchiya-Kanie, Tsuchiya-Ueno-Yamada, and extend work of a number of researchers. The sheaves carry a twisted logarithmic D-module structure, and hence support a projectively flat connection. We identify the logarithmic Atiyah algebra acting on them, generalizing work of Tsuchimoto for affine Lie algebras.
• Basepoint free cycles on $\overline{M}_{0,n}$ from Gromov-Witten theory (with Prakash Belkale), IMRN 2019 (arxiv, journal)
ABSTRACT. Basepoint free cycles on the moduli space of stable n-pointed rational curves, defined using Gromov-Witten invariants of smooth projective homogeneous spaces X, are studied. Intersection formulas to find classes are given, with explicit examples for X a projective space, and X a smooth projective quadric hypersurface. When X is projective space, divisors are shown equivalent to conformal blocks divisors for type A at level one, giving maps from $\overline{M}_{0,n}$ to birational models constructed as GIT quotients, parametrizing configurations of weighted points supported on (generalized) Veronese curves.
• On finite generation of the section ring of the determinant of cohomology line bundle (with Prakash Belkale), Transactions of the AMS, 2018 (arxiv, journal)
ABSTRACT. For C a stable curve of arithmetic genus g ≥ 2, and D the determinant of cohomology line bundle on Bun_{SL(r)}(C), we show the section ring for the pair (Bun_{SL(r)}(C), D) is finitely generated. Three applications are given.
• On higher Chern classes of vector bundles of conformal blocks (with Swarnava Mukhopadhyay) (arxiv)
ABSTRACT. Here we consider higher Chern classes of vector bundles of conformal blocks on the moduli space of stable pointed curves of genus zero, giving explicit formulas for them, and extending various results that hold for first Chern classes to them. We use these classes to form a full-dimensional subcone of the Pliant cone on $\overline{M}_{0,n}$.
• Scaling of conformal blocks and generalized theta functions over $\overline{M}_{g,n}$ (with Prakash Belkale and Anna Kazanova), Mathematische Zeitschrift, 2016, (arxiv, journal)
ABSTRACT. By way of intersection theory on the moduli space of curves, we show that geometric interpretations for conformal blocks, as sections of ample line bundles over projective varieties, do not have to hold at points on the boundary. We show such a translation would imply certain recursion relations for first Chern classes of these bundles. While recursions can fail, geometric interpretations are shown to hold under certain conditions.
• Nonvanishing of conformal blocks divisors on $\overline{M}_{0,n}$ (with Prakash Belkale and Swarnava Mukhopadhyay), Transformation Groups, 2016 (arXiv, journal)
ABSTRACT. We introduce and study the problem of finding necessary and sufficient conditions under which a conformal blocks divisor on the moduli space of curves is nonzero. We give necessary conditions in type A, which are sufficient when theta and critical levels coincide. We show that divisors are subject to additive identities, dependent on ranks of the underlying bundle. These identities amplify vanishing and nonvanishing results and have other applications.
• Vanishing and identities of conformal blocks divisors (with Prakash Belkale and Swarnava Mukhopadhyay), Algebraic Geometry, 2 (1) 2015 (arxiv, journal)
ABSTRACT. Conformal block divisors in type A on the moduli space of stable pointed rational curves are shown to satisfy new symmetries when levels and ranks are interchanged in non-standard ways. A connection with the quantum cohomology of Grassmannians reveals that these divisors vanish above the critical level.
• Higher level sl_2 conformal blocks divisors on $\overline{M}_{0,n}$ (with Valery Alexeev and David Swinarski), Proceedings of the Edinburgh Mathematical Society, 2014, (arxiv, journal)
ABSTRACT.
We study a family of semi-ample divisors on the moduli space of stable pointed rational curves defined using conformal blocks and analyze their associated morphisms.
• Veronese quotient models of $\overline{M}_{0,n}$ and conformal blocks (with Dave Jensen, Han-Bom Moon, and David Swinarski), Michigan Mathematical Journal, 2013, (arxiv, journal)
ABSTRACT. The moduli space of Deligne-Mumford stable n-pointed rational curves admits morphisms to spaces recently constructed by Giansiracusa, Jensen, and Moon that we call Veronese quotients. We study divisors associated to these maps and show that they arise as first Chern classes of vector bundles of conformal blocks.
• Conformal blocks divisors and the birational geometry of $\overline{M}_{g,n}$, Mathematisches Forschungsinstitut Oberwolfach, Moduli Spaces in Algebraic Geometry 2013, (Abstract)
ABSTRACT. This is an abstract of a talk given at Oberwolfach.
• The cone of type A, level one conformal blocks divisors (with Noah Giansiracusa), Advances in Mathematics, 2012, Pages 798-814, (arxiv, journal)
ABSTRACT. We prove that the type A, level one, conformal blocks divisors on the moduli space of stable pointed rational curves span a finitely generated, full-dimensional subcone of the nef cone. Each such divisor induces a morphism from the moduli space, and we identify its image as a GIT quotient parameterizing configurations of points supported on a flat limit of Veronese curves. We show how scaling GIT linearizations gives geometric meaning to certain identities among conformal blocks divisor classes. This also gives modular interpretations, in the form of GIT constructions, to the images of the hyperelliptic and cyclic trigonal loci under an extended Torelli map.
• On extensions of the Torelli Map, EMS Series of Congress Reports, Geometry and Arithmetic, October 2012, (arxiv, journal)
ABSTRACT. The divisors on the moduli space of stable curves of genus g that arise as the pullbacks of ample divisors along any extension of the Torelli map to any toroidal compactification of $A_g$ form a 2-dimensional extremal face of the nef cone of $\overline{M}_g$, which is explicitly described.
• sl_n level 1 conformal blocks divisors on $\overline{M}_{0,n}$ (with M. Arap, J. Stankewicz and D. Swinarski), International Maths. Research Notices, 2011, (arxiv, journal)
ABSTRACT. We study a family of semi-ample divisors on the moduli space of stable pointed rational curves that come from the theory of conformal blocks for the Lie algebra sl_n and level 1. The divisors we study are invariant under the action of the symmetric group. We compute their classes and prove that they generate extremal rays in the cone of symmetric nef divisors on $\overline{M}_{0,n}$. In particular, these divisors define birational contractions, which we show factor through reduction morphisms to moduli spaces of weighted pointed curves defined by Hassett.
• Lower and upper bounds on nef cones (with Diane Maclagan), International Maths. Research Notices, 2011, (arxiv, journal)
ABSTRACT. The nef cone of a projective variety Y is an important and often elusive invariant. In this paper we construct two polyhedral lower bounds and one polyhedral upper bound for the nef cone of Y using an embedding of Y into a toric variety. The lower bounds generalize the combinatorial description of the nef cone of a Mori dream space, while the upper bound generalizes the F-conjecture for the nef cone of the moduli space $\overline{M}_{0,n}$ to a wide class of varieties.
• Equations for Chow and Hilbert Quotients, (with Diane Maclagan), Algebra and Number Theory, 2010, (arxiv, journal) ABSTRACT. We give explicit equations for the Chow and Hilbert quotients of a projective scheme X by the action of an algebraic torus T in an auxiliary toric variety. As a consequence we provide GIT descriptions of these canonical quotients, and obtain other GIT quotients of X by variation of GIT quotient. We apply these results to find equations for the moduli space $\overline{M}_{0,n}$ of stable genus zero n-pointed curves as a subvariety of a smooth toric variety defined via tropical methods.
• Numerical criteria for divisors on $\overline{M}_g$ to be ample, Compositio Mathematica, 2009, (arxiv, journal) ABSTRACT. The moduli space of n-pointed stable curves of genus g is stratified by the topological type of the curves being parametrized: the closure of the locus of curves with k nodes has codimension k. The one-dimensional components of this stratification are smooth rational curves (whose numerical equivalence classes are) called F-curves. The F-conjecture asserts that a divisor on $\overline{M}_{g,n}$ is nef if and only if it nonnegatively intersects the F-curves. In this paper the F-conjecture on $\overline{M}_{g,n}$ is reduced to showing that certain divisors in $\overline{M}_{0,N}$ for $N \le g+n$ are equivalent to the sum of the canonical divisor plus an effective divisor supported on the boundary. As an application of the reduction, numerical criteria are given which, if satisfied by a divisor D on $\overline{M}_g$, show that D is ample. Additionally, an algorithm is described to check that a given divisor is ample. Using a computer program called Nef Wizard, written by Daniel Krashen, one can use the criteria and the algorithm to verify the conjecture for low genus. This is done for $g\le 24$, more than doubling the known cases of the conjecture, and showing it is true for the first genus such that the moduli space is known to be of general type.
• Pointed trees of projective spaces, (with L. Chen and D. Krashen), Journal of Algebraic Geometry, 2008, (arxiv, journal) ABSTRACT. We introduce a smooth projective variety $T_{d,n}$ which compactifies the space of configurations of n distinct points on affine d-space modulo translation and homothety. The points in the boundary correspond to n-pointed stable rooted trees of d-dimensional projective spaces, which, for d=1, are (n+1)-pointed stable rational curves. In particular, $T_{1,n}$ is isomorphic to $\overline{M}_{0,n+1}$, the moduli space of such curves.
• The Mori cones of moduli spaces of pointed curves of small genus, (with G. Farkas), Trans. Amer. Math. Soc., 2003, (arxiv, journal) ABSTRACT. We compute the Mori cone of curves of the moduli space of stable n-pointed curves of genus g in the case when g and n are relatively small. For instance, we show that for $g<14$ every curve in $\overline{M}_g$ is numerically equivalent to an effective sum of 1-strata (loci of curves with 3g-4 nodes). We also prove that the nef cone of $\overline{M}_{0,6}$ is composed of 11 natural subcones, all contained in the convex hull of boundary classes. We apply this result to classify the fibrations of the moduli space of rational curves with $n<7$ marked points.
• Towards the ample cone of $\overline{M}_{g,n}$, (with S. Keel and I. Morrison), J. Amer. Math. Soc. 2002, (arxiv, journal) ABSTRACT. In this paper we study the ample cone of the moduli space of stable n-pointed curves of genus g.
Our motivating conjecture is that a divisor on $\overline{M}_{g,n}$ is ample iff it has positive intersection with all 1-dimensional strata (the components of the locus of curves with at least 3g+n−4 nodes). This translates into a simple conjectural description of the cone by linear inequalities, and, as all the 1-strata are rational, includes the conjecture that the Mori cone is polyhedral and generated by rational curves. Our main result is that the conjecture holds iff it holds for g=0.
{"url":"http://www.angelagibney.org/","timestamp":"2024-11-14T10:04:50Z","content_type":"text/html","content_length":"69877","record_id":"<urn:uuid:cd06dfd6-5f7a-4094-ab0d-8df6774007d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00111.warc.gz"}
Iteration-Complexity of a Generalized Forward Backward Splitting Algorithm

In this paper, we analyze the iteration-complexity of a generalized forward-backward (GFB) splitting algorithm, recently proposed in~\cite{gfb2011}, for minimizing the large class of composite objectives $f + \sum_{i=1}^n h_i$ on a Hilbert space, where $f$ has a Lipschitz-continuous gradient and the $h_i$'s are simple (i.e. whose proximity operator is easily computable). We derive iteration-complexity bounds (pointwise and ergodic) for GFB to obtain an approximate solution based on an easily verifiable termination criterion. Along the way, we prove complexity bounds for relaxed fixed point iterations built from the composition of nonexpansive averaged operators. These results apply more generally to GFB when used to find a zero of a sum of $n > 0$ maximal monotone operators and a co-coercive operator on a Hilbert space. The theoretical findings are exemplified with experiments on video processing.

GREYC CNRS-ENSICAEN-University of Caen, 6, Bd du Maréchal Juin, 14050 Caen Cedex, France, 10/2013
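For readers unfamiliar with the iteration being analyzed, here is a minimal Python sketch of the GFB update structure (one gradient step on $f$ plus a relaxed proximal step on each $h_i$, followed by a weighted recombination). The problem instance, variable names, and parameter values below are illustrative assumptions and are not taken from the paper; only the update structure reflects the GFB scheme.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
b = rng.standard_normal(40)
lam = 0.1                                   # l1 regularization weight

grad_f = lambda x: A.T @ (A @ x - b)        # gradient of f = 0.5*||Ax - b||^2
L = np.linalg.norm(A, 2) ** 2               # Lipschitz constant of grad_f

# Proximity operators of the two "simple" terms h_1, h_2.
prox = [
    lambda v, s: np.sign(v) * np.maximum(np.abs(v) - s * lam, 0.0),  # lam*||.||_1
    lambda v, s: np.maximum(v, 0.0),        # indicator of the nonnegative orthant
]
w = np.array([0.5, 0.5])                    # positive weights summing to 1
gamma = 1.0 / L                             # step size in (0, 2/L)
relax = 1.0                                 # relaxation parameter

z = [np.zeros(100) for _ in prox]           # one auxiliary variable per h_i
x = np.zeros(100)
for _ in range(500):
    g = gamma * grad_f(x)
    for i, p in enumerate(prox):
        # Relaxed proximal step on h_i with step gamma / w_i.
        z[i] = z[i] + relax * (p(2 * x - z[i] - g, gamma / w[i]) - x)
    x = sum(wi * zi for wi, zi in zip(w, z))
```

With a single $h$ this reduces to the classical forward-backward (proximal gradient) iteration; the auxiliary variables $z_i$ are what allow several non-smooth terms to be handled with only their individual proximity operators.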
{"url":"https://optimization-online.org/2013/10/4093/","timestamp":"2024-11-11T11:27:35Z","content_type":"text/html","content_length":"84536","record_id":"<urn:uuid:e5e3d01e-03a4-4945-9082-dc6712f77a40>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00798.warc.gz"}
Microphone Calibration Pressure Chamber - MV Audio Labs

Microphone Calibration Pressure Chamber

This project involves high sound pressure levels inside a small volume of space. The generated sound pressure, which can reach up to 150dB, will easily destroy your small microphone membrane. Be very careful when connecting/disconnecting from the signal generator and setting generator voltage levels. Always connect the generator and check the voltage level before mounting the microphone and sealing the chamber.

Free-field microphone calibration has its limits. We need a very homogeneous sound field with plane or spherical sound waves. Otherwise we will have no repeatability, as even the smallest error in microphone positioning, or variance in the microphone's physical dimensions, will have a huge impact on our measurements. In practical terms that means a small (approaching a point) sound source on a big (compared to the wavelength) baffle. The lower the frequency – the longer the gating time and the larger the sound source baffle have to be. At some point it just becomes unreasonable or even unachievable. Something like a 3″ dome mounted on a 2×2.4 meter baffle will take you as low as 500Hz. But below that – you will run out of gating time before reflections and will need an even bigger baffle or an anechoic chamber. Something that is out of reach for the usual DIY'er.

Physics to the rescue

Fortunately, free-field calibration is not the only method. There are a couple of others. Well, maybe more than a couple if we take into account metallic membranes that can be excited electronically. But let's stick to acoustical excitation. So there are different microphone types, depending on the application's sound field. They have to have a flat frequency response when subjected to the corresponding sound field in which they were calibrated. A free-field mic presumes you will be measuring a single sound source straight on its axis. This is what you want for speaker measurement. A diffuse-field mic expects multiple random sound sources from all directions. This type of mic you would use to accurately record an event in a highly reverberant space (like a church). In the case of a pressure-field mic, only its membrane is subjected to the sound field. So you would use it to measure sound pressure in a closed chamber or pressure at some boundary (like a baffle).

Pressure microphone frequency response in free (red), diffused (blue) and pressure fields.

You wouldn't use a pressure-field calibrated microphone in free-field speaker measurements and expect it to have a fully linear frequency response, and vice versa. But as you can see from the graph above, all those fields converge at lower frequencies. And it makes perfect sense (I hope). As the microphone's dimensions get smaller and smaller in comparison to the excitation wavelengths, it no longer disturbs the sound field with its presence. We can use this to our advantage and construct a piston-driven closed chamber with an almost perfect pressure field. Then, at constant temperature, the pressure in this chamber will solely depend on the volume displaced by the excitation piston. Pressure is inversely proportional to volume. That's Boyle's law, and it's physics 101. Now the thing to watch out for is the frequency at which we move the piston. If we move it too fast – the pressure field will collapse and we will create standing waves. So we want our chamber's longest dimension to be between 1/6 and 1/8 of the wavelength at the highest piston frequency.
If we choose our cut-off frequency to be 500Hz (or a wavelength of 68cm), then our pressure chamber length should be between 11.5 and 8.6 cm. I'm oversimplifying this as much as I can. There is an excellent app-note from Ivo Mateljan, creator of the ARTA measuring software, with more in-depth explanations and some math to back it up.

Practical realization

I didn't find a Visaton FRS8 for sale locally, and all the COVID-19 pandemic restrictions were just kicking in. Shipping was taking forever, so I decided not to be lazy and use some other driver. It just so happens I had a modified DLS 424 coax driver without the tweeter (glued cap). It's a 4″ mid driver that I was experimenting with a long time ago. It's no longer produced, so I don't see any point in publishing my exact pressure chamber design. Instead I redesigned it for the 3.3″ Visaton driver based on datasheet dimensions. So blame them if it doesn't fit! Use a regular 75mm OD PVC waste pipe for the tube part. Or you can print it if you think that's a good idea. Its length should be 101mm, so you get the same volume of 0.39L as in the app-note. There is a 3mm relief in both end-plates. Use a rubber seal or just some silicone sealant to seal all parts and tighten them with 5mm rods.

The importance of driver parameters

In order to get a reliable mathematical model of the constructed pressure chamber we will need to know these parameters:
• Voltage level used to drive the excitation piston (driver).
• Total volume of the pressure chamber.
• Thiele-Small parameters of the driver.

We can measure voltage very accurately with a good multimeter or a calibrated sound card. And we can calculate the pressure chamber volume quite easily as πr²h, plus the small amount of driver membrane volume. In the case of the FRS8 it's negligible, but on some other, more conical drivers it can be substantial. So take that into account too. And finally we need to know the T/S parameters of the driver. That's where the biggest potential for uncertainty lies. I would not trust manufacturer datasheet values for a second. They are usually in the ballpark, but I have yet to measure a driver where they agree 100%. So even if using the FRS8 driver, I urge you to measure it yourself. It's quite a trivial measurement with free software like REW. Use 5g of modeling clay on the dust cap for added weight and make sure you have calibrated your sound card for impedance measurement.

DLS 424 free-air (red) and added-weight (blue) impedance and T/S parameters.

You'll have to input the voice coil DC resistance, which again is an easy multimeter measurement, and the membrane area Sd. The latter can be trickier to calculate. But you can just decompose it into a couple of simple geometric figures. The cone is just a conical frustum and the cap is just a… well, cap. Measure the relevant surfaces and use any online calculator to solve for surface area. The suspension is kind of a grey area. How much of it actually "plays" will depend on its construction. But solving for half of a torus surface area and then taking 30% will bring you into the ballpark.

Driver impedance and resonance frequency. Measured – blue, simulated – red.

To get a good idea of how reliable your measured T/S parameters are, we can do a cross-check. Just plug your T/S data into any box simulation software (like the free WinISD), select a closed box with a volume equal to your chamber's, and simulate the resulting driver impedance. Then measure the real driver impedance with the chamber sealed and check how well the resonance frequencies match. As you can see above, mine were within one Hertz.
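Before moving on to the model: the chamber sizing and volume arithmetic above is simple enough to script. The short sketch below, with an assumed speed of sound of 344 m/s and an assumed pipe inner diameter of 70 mm (a typical wall for 75mm OD PVC — check your own pipe), reproduces the numbers quoted earlier: a length window of roughly 8.6–11.5 cm for a 500Hz cutoff, and about 0.39 L for the 101mm tube.

```python
import math

c = 344.0                     # speed of sound, m/s (assumed)
f_cutoff = 500.0              # chosen pressure-field cutoff, Hz
wavelength = c / f_cutoff     # ~0.688 m

# Longest chamber dimension between 1/8 and 1/6 of the wavelength.
L_min, L_max = wavelength / 8, wavelength / 6
print(f"chamber length: {L_min*100:.1f} to {L_max*100:.1f} cm")

# Chamber volume: pi * r^2 * h (the small driver-cone volume is ignored here).
inner_d = 0.070               # assumed pipe inner diameter, m
h = 0.101                     # tube length, m
volume = math.pi * (inner_d / 2) ** 2 * h
print(f"volume: {volume*1000:.2f} L")   # ~0.39 L
```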
Mathematical model

I didn't try to reinvent the wheel and just used the MatLab script from the app-note. You can download it here. I increased the calculation points to 500 for better definition. MatLab is available as a 30-day trial (for free), so there is no problem running this script as a one-off experiment. Above is the MatLab-generated SPL graph for my pressure chamber. With a standard 2.83Vrms input we can expect some 150dB of pressure inside it! That's close to rocket launch sound levels, so you wouldn't want to put your head inside it. Or your microphone, for that matter. Instead I used a voltage of just 10mVrms. That way, at a frequency of 300Hz, I get 94dB. Very convenient for future mic sensitivity evaluation. The MatLab-generated resonance frequency is, again, basically spot-on with the measured one. This gives me a high confidence level in the overall accuracy of the experiment.

I didn't find a straightforward way to export MatLab variable values. So I just opened the variables f (frequency) and pudb (SPL inside the box), transposed them horizontally and copied them to Excel. Then I saved the file as comma-delimited for further import into measurement software. Now that we have theoretical pressure chamber SPL data, it can be used as a reference for comparison with SPL data measured with a microphone. The difference between the two will be the microphone calibration curve. Ideally we would want to first measure it with some reference microphone and make sure theory complies with reality. But I don't think many of us have such a $$$$ mic lying around "just in case". So if one was meticulous enough and did all the cross-checks mentioned above – there should be nothing to worry about.

First let's start with measurement setup calibration. We want to be sure that all the low-frequency roll-off of the sound card and amplifier is accounted for. This is really critical, otherwise results will be inaccurate. I'll try to walk you through it, so the next chapters will be more "how to" than just results. I will be using REW for this example, so the calibration procedure is documented here. I will just add that it's really preferable to use the ASIO interface and avoid any mixer nonsense. When done correctly – you expect to see a straight line for the measurement setup, as pictured above.

Next let's level-calibrate the rig. Connect a multimeter to the driver terminals, open the REW generator and set it to sine wave and -6dB level. Adjust the amplifier level so that the multimeter shows 1Vrms. Open the REW SPL meter and press "Calibrate", enter 0 and press "Finished". Now press "Measure", set the level to the same -6dB and press "Start Measuring". If everything was done correctly – you will end up with a graph like the one above. That means you are now level- and frequency-calibrated and 0dB is in fact 1Vrms, or 0dBV in short. Now let's set the output level to 10mV. To do so, we need to lower the level 100 times, or 40dB, so we lower the generator level from -6dB to -46dB. Check that you measure ~10mV. At this point we can start to introduce a microphone to the setup.

I made a rubber shim to seal the microphone in the chamber. Modeling clay works too but it's messier. I will not repeat the findings of app-note figure 10 regarding the importance of total air tightness. It's really critical for the lowest octave. Don't forget to set the measurement level to -46dB, as -6dB will definitely destroy your mic! Now connect your microphone straight to the sound card input. You can use this phantom power unbalancing circuit if your sound card doesn't have balanced inputs.
Select R_in such that the resulting R_total is equal to your pre-amp (or whatever you will be using) input impedance. Don't forget to account for your sound card input impedance R_s. The main idea here is that you know the exact impedance the mic is working into, for a proper sensitivity estimation. Another thing to watch out for is your mic output type. If it's fully balanced (there is an inverse signal on Pin 3) – you have to add +6dB to your measurement. That is, if you're planning on using it with a balanced input. Now hit "Start Measure" and listen to the sweep. Above are my measurements for the EMM-6 (red) and WM61A (blue) mics. There is no smoothing (or any other post-processing) applied. If you have sharp peaks or valleys in your graph – something went really wrong. Review your setup and find out what it was.

Extracting the information

The good news is that we already know the absolute sensitivity of our mics! As can be seen from the MatLab SPL simulation – that's the point on the graph at 300Hz. So for the EMM-6 mic that would be -42.66dBV, or 7.35mV/Pa. Here is an explanation of how we made the mental leap from 94dB to Pascals (in case you were wondering). With a little bit of faith (that our mic is linear from 300Hz to 1kHz) we can state that its absolute sensitivity is 7.35mV/Pa@1kHz into 1kΩ (my total setup impedance). This assumption is not far from the truth, as we will soon see.

Dayton EMM-6: 7.35 mV/Pa@1kHz into 1kΩ
Panasonic WM61A: 57.2 mV/Pa@1kHz into 1kΩ
UNI-T UT353 SPL Meter: 94.8dB (+0.8dB)

Above is the absolute sensitivity table for my measurement microphones. I also measured the pressure chamber with my SPL meter at the single 300Hz frequency. Then I added an A-weighting correction of +7.09dB and got 94.8dB. Which is +0.8dB over what it should measure. I firmly believe that this is an error of this cheap ±1.5dB device and not of my measuring setup.

We'll have to work a little bit more to extract our calibration curves. First open the saved MatLab SPL file and extend it to 2Hz (copy the same SPL value as at 10Hz). Then import this measurement into REW and add a negative offset. This offset should be equal to the difference @300Hz between your MatLab and measured SPLs. In my case it's 94dB+42.66dB=136.66dB. The idea here is to have a good match @300Hz. You should end up with something like this:

Measured EMM-6 SPL (red), MatLab generated SPL with offset (green).

Now open Controls, select your measured SPL for trace A and the MatLab-generated one for trace B, choose the A/B arithmetic function and press Generate.

EMM-6 calibration curve after applied arithmetic function.

And voilà – you have your microphone calibration curve. It should be valid up to 300Hz. Above that, microphone positioning inside the chamber starts to have a noticeable influence. Again, my findings are well in accordance with app-note figure 9 on this matter.

Dayton EMM-6 (blue) and WM61A (red) final calibration curves, 1/3 octave smoothing.

And finally – here are my full calibration curves. They're merged with my reference 1″ dome baffle measurement @300Hz. So the 300Hz-1kHz region is subject to some interpretation, but it's under 0.5dB of potential error. So I'm not planning to lose sleep over it.
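As a closing aside, the dBV-to-sensitivity conversion used above is a one-liner worth sanity-checking numerically. The sketch below assumes, as in the article, that the chamber produces 94 dB SPL (exactly 1 Pa) at 300Hz with the 10mVrms drive; the tiny difference from the quoted 7.35 mV/Pa is just rounding.

```python
level_dbv = -42.66                           # measured mic output at 300 Hz
sensitivity_v_per_pa = 10 ** (level_dbv / 20)  # 94 dB SPL = 1 Pa, so V/Pa directly
print(f"{sensitivity_v_per_pa * 1000:.2f} mV/Pa")   # ~7.36 mV/Pa
```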
{"url":"https://www.mvaudiolabs.com/diy/microphone-calibration-pressure-chamber/","timestamp":"2024-11-07T23:34:17Z","content_type":"text/html","content_length":"79444","record_id":"<urn:uuid:e206e771-b1cf-4c76-9a26-2d42f649f563>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00654.warc.gz"}
NASA Technical Reports Server (NTRS) 20170001312: Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data PDF

Characterization of a Method for Inverse Heat Conduction Using Real and Simulated Thermocouple Data

Michelle E. Pizzo^a
Old Dominion University, Norfolk, VA 23529, USA
Information Technology Infrastructure Branch, NASA Langley Research Center, Hampton, VA 23681, USA

David E. Glass^b
Structural Mechanics and Concepts Branch, NASA Langley Research Center, Hampton, VA 23681, USA

It is often impractical to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperatures can instead be measured internal to the structure using embedded thermocouples, and direct and inverse methods can then be used to estimate temperature and heat flux on the external surface. Two thermocouples embedded at different depths are required to solve direct and inverse problems, and filtering schemes are used to reduce noise in the measured data. Accuracy in the estimated surface temperature and heat flux is dependent on several factors. Factors include the thermocouple location through the thickness of a material, the sensitivity of the surface solution to the error in the specified location of the embedded thermocouples, and the sensitivity to the error in thermocouple data. The effect of these factors on solution accuracy is studied using the methodology discussed in the work of Pizzo, et. al.1 A numerical study is performed to determine if there is an optimal depth at which to embed one thermocouple through the thickness of a material assuming that a second thermocouple is installed on the back face. Solution accuracy will be discussed for a range of embedded thermocouple depths. Moreover, the sensitivity of the surface solution to (a) the error in the specified location of the embedded thermocouple and to (b) the error in the thermocouple data are quantified using numerical simulation, and the results are discussed.

Nomenclature
1-D = One-dimensional
$c_p$ = Specific heat capacity, J/(kg-K)
DHCP, IHCP = Direct (respectively, inverse) heat conduction problem
$e_q(t)$, $e_T(t)$ = Error in heat flux (respectively, temperature), %
$k$ = Thermal conductivity, W/(m-K) or W/(cm-K)
$L$ = Thickness of material specimen
$m$ = Superscript denoting the time step count
$n$ = Subscript denoting the nodal count
$N_D$, $N_I$ = Number of nodes for the DHCP (respectively, IHCP)
$q$ = Heat flux, W/m² or W/cm²
$T$ = Temperature, K
$t$ = Time, s
$x$ = Spatial location, m or cm
$\alpha$ = Thermal diffusivity $= k/\rho c_p$, m²/s
$\Delta t$ = Time step size, s
$\Delta x$ = Cell size, m or cm
$\in$ = Symbol denoting the phrase "is an element of"
$\rho$ = Mass density, kg/m³ or kg/cm³

^a Ph.D. Candidate, Pathways Student Trainee
^b AIAA Associate Fellow

I. Introduction

While traveling at hypersonic speeds, flight vehicles encounter high heating resulting in high surface temperatures. There is an interest in knowing these surface values, particularly in the fields of materials, structures, and aerothermodynamics research. Unfortunately, it is usually either impractical or impossible to instrument the external surface of high-speed vehicles due to the aerothermodynamic heating. Temperature histories are instead measured internally to the surface by embedding thermocouples or thermocouple plugs through the thickness of the vehicle skin or thermal protection system.
The measurements can then be used to solve an inverse heat conduction problem (IHCP) to estimate surface temperature and heat flux. The inverse problem is ill-posed. Therefore, the solution is extremely sensitive to measurement errors, i.e., small errors in the data yield large errors in the solution2-6. The study of inverse heat conduction started in the late 1950s with work published by Shumakov7 and Stolz8 in 1957 and 1960, respectively. In the years since, numerous methods for solving both linear (constant thermophysical properties) and nonlinear (temperature-dependent thermophysical properties) one-dimensional (1-D) transient inverse problems have emerged from researchers in the field such as Beck3,9,10,11, Murio6, Weber2, and Carasso5, among others.

In 2016, Pizzo, et. al.,1 estimated surface temperature and heat flux histories of high-temperature carbon/carbon materials using a 1-D inverse method (defined within the body of the paper) with temperature-dependent thermophysical properties (nonlinear problem), assuming negligible in-plane temperature gradients. The study was motivated by Frankel, et. al.4,12,13. In his work, Frankel solved the 1-D transient heat conduction problem using noisy data measured at internal depths $x_1$ and $x_2$ where $x_1 < x_2$. He reduced noise in the data with a global low-pass Gaussian filter by defining a cutoff frequency using discrete Fourier transforms of the data. Frankel then approximated the heating/cooling rate using finite differences and numerically integrated the integral relationship between temperature and heat flux to obtain an analytical heat flux solution at depth $x_2$. The surface temperature and heat flux were approximated from the measured temperature and calculated heat flux at depth $x_2$ using finite differences and Taylor series reconstruction. This method yielded accurate results.

The methodology in Pizzo, et. al.1 used Frankel's procedures as a guide, but made two slight modifications. First, local windowed-sinc filters were used to reduce noise in the measured data. Second, a technique developed by Carasso5 was used to solve the inverse problem. In this technique, the 1-D heat conduction equation is first written as a first-order system with temperature and heat flux as the unknowns, and is then discretized using forward differencing in space and central differencing in time. The methodology in [1] was validated using data provided by thermal vacuum chamber radiant heating tests (c.f. Blosser14). Pizzo demonstrated accuracy in estimating temperature histories using comparisons with test results (estimated solutions had error in the range of -5% to 9%). The estimated heat flux histories, however, had significantly larger error than temperature (estimated solutions had error in the range of -30% to 10%). The larger errors were expected, because for the methodology used, the rate of convergence of heat flux with mesh size is one order less accurate than temperature.

The purpose of this paper is to expand upon the work in [1] by characterizing the selected method for inverse heat conduction. For inverse methods, because small errors in the data yield large errors in the solution, accuracy in the estimated surface temperature and heat flux is dependent on several factors. Factors include the embedded thermocouple depth through the thickness of a material, the sensitivity of the surface solution to the error in the specified location of the embedded thermocouple, and the sensitivity of the solution to the error in thermocouple data. The effect of these factors on solution accuracy is presented in this paper. The inverse methodology is summarized in Section II, and the characterization of the method is discussed in Section III. Concluding remarks are discussed in Section IV.

II. Methodology Summary

The transient temperature distribution in a material is governed by the heat conduction equation. Assuming that the in-plane temperature gradients are negligible, the heat conduction equation reduces to its 1-D form:

$$\rho c_p(T)\frac{\partial T}{\partial t} = \frac{\partial}{\partial x}\left(k(T)\frac{\partial T}{\partial x}\right), \qquad (1)$$

where $\rho$, $c_p$, and $k$ are the density, specific heat, and thermal conductivity of the material, respectively. The initial temperature distribution through the thickness of the material is specified at time $t = 0$:

$$T(x, 0) = T_i(x). \qquad (2)$$
Factors include the embedded thermocouple depth through the thickness of a material, the sensitivity of the surface solution to the error in the specified location of the embedded thermocouple, and the sensitivity of the solution to the error in thermocouple data. The effect of these factors on solution accuracy is presented in this paper. The inverse methodology is summarized in Section II, and the characterization of the method is discussed in Section III. Concluding remarks are discussed in Section IV. II. Methodology Summary The transient temperature distribution in a material is governed by the heat conduction equation. Assuming that the in-plane temperature gradients are negligible, the heat conduction equation reduces to its 1-D form: π π π π π π π (π ) = (π (π ) ), (1) π π π ‘ π π ₯ π π ₯ where π , π , and π are the density, specific heat, and thermal conductivity of the material, respectively. The initial π temperature distribution through the thickness of the material is specified at time π ‘ =0: π (π ₯,0)=π (π ₯). (2) π 2 American Institute of Aeronautics and Astronautics Boundary conditions at the outer surfaces (π ₯ =0,π Ώ) are required to solve Eq. (1). For the classical initial-boundary value problem, referred to in this work as the direct heat conduction problem (DHCP), the boundary conditions are in the form of specified temperature or heat flux3. In situations where the boundary values are unknown, an inverse3,4,12 problem must be solved using known internal temperatures at two depths, denoted by π ₯ and π ₯ . The 1 2 ou source of the internal temperatures is typically data π ₯ =π Ώ from a set of embedded thermocouples as illustrated n in Figure 1. IHCP In [1], the surface temperature and heat flux T = T(x , t) π ₯ = π ₯ 2 2 2 histories at π ₯ =π Ώ are estimated from two internal Heat conducting temperature measurements, π and π , using a four- DHCP 1 2 material step process. The four-step process includes filtering T = (x , t) π ₯ = π ₯ noise from the measured data, solving for temperature 1 1 1 histories in the region between depths π ₯ and π ₯ 1 2 where 0β €π ₯1 <π ₯2 <π Ώ, solving for the heat flux π ₯ = 0 history at depth, π ₯ , and solving for the temperature Figure 1. Illustration of the direct and inverse problems 2 and heat flux histories in the region extending from and the embedded thermocouples, T and T . 1 2 depth π ₯ =π ₯ to π ₯ =π Ώ. 2 Solving for Temperature Histories Between Depths π and π (Direct Problem) π π The region between thermocouples embedded at depths π ₯ and π ₯ is treated as the DHCP with initial and boundary 1 2 values given by measured data, i.e. π (π ₯,0)=π , π ₯ β € π ₯ β €π ₯ 0 1 2 (3) π (π ₯ ,π ‘)=π , π ‘ >0 1 1 π (π ₯ ,π ‘)=π , π ‘ >0. 2 2 Equation (1) is discretized by dividing the domain into a uniform mesh of π total nodes with cell size of Ξ π ₯ = π · (π ₯ β π ₯ )/(π β 1). In the mesh, node π =1 is at depth π ₯ =π ₯ and node π =π is at depth π ₯ =π ₯ . Letting π π 2 1 π · 1 π · 2 π denote the temperature of node π at time π ‘ =π Ξ π ‘, the temperature of node π at time π ‘ =(π +1)Ξ π ‘ is obtained π π +1 using a vertex-based finite volume method with Crank-Nicholson time marching: π π +1β π π π Όπ π Όπ π π = π β 1,π (π π β π π +π π +1β π π +1)+ π ,π +1(π π β π π +π π +1β π π +1) (4) Ξ π ‘ 2Ξ π ₯2 π β 1 π π β 1 π 2Ξ π ₯2 π +1 π π +1 π where π Όπ (π Όπ ) denotes the average of the temperature-dependent thermal diffusivity evaluated at temperatures π β 1,π π ,π +1 π π and π π (π π and π π ). 
Since $T_1^{m+1} = T(x_1, (m+1)\Delta t)$ and $T_{N_D}^{m+1} = T(x_2, (m+1)\Delta t)$ are known from the filtered data, Eq. (4) yields temperature histories at nodes $n \in [2, N_D - 1]$, or equivalently, in the domain $x \in [x_1, x_2]$.

Solving for the Heat Flux History at Depth $x_2$ (Direct Problem)

Let $q_{N_D}^m$ denote the heat flux entering the domain at node $N_D$ at time $t = m\Delta t$. The heat flux entering the domain at node $N_D$ at time $t = (m+1)\Delta t$ is treated as an unknown and is obtained using a consistent approach based upon an energy balance for the half-cell $x \in [x_{N_D} - \Delta x/2,\, x_{N_D}]$ combined with Crank-Nicholson time marching:

$$\text{Node } N_D: \quad q_{N_D}^{m+1} = k_{N_D}^m\left[\frac{T_{N_D}^m - T_{N_D-1}^m + T_{N_D}^{m+1} - T_{N_D-1}^{m+1}}{2\Delta x}\right] + \rho\, c_{p,N_D}^m\left(\frac{\Delta x}{2}\right)\left[\frac{T_{N_D}^{m+1} - T_{N_D}^m}{\Delta t}\right] \qquad (5)$$

where $k_{N_D}^m$ and $c_{p,N_D}^m$ denote the temperature-dependent thermal conductivity and specific heat, respectively, evaluated at temperature $T_{N_D}^m$. The density is denoted by $\rho$ and is constant. Solving Eq. (5) yields the heat flux history at node $n = N_D$, or equivalently, at depth $x = x_2$.

Solving for Temperature and Heat Flux Histories Between Depths $x_2$ and $x = L$ (Inverse Problem)

The region extending from thermocouple depth $x = x_2$ to the external surface $x = L$ is treated as the IHCP. The temperature and heat flux distributions are calculated in the domain $x \in [x_2, L]$ using a mesh of $N_I$ nodes where node $n = 1$ is at depth $x = x_2$ and node $n = N_I$ is at $x = L$. The method used to solve the inverse problem is the space marching scheme S6 as outlined by Carasso5. The scheme has a truncation error of $O(\Delta t^2)$ and $O(\Delta x)$, and the scheme was found by Carasso to have the smallest amplification factor. Therefore, the S6 scheme was the most stable of the $O(\Delta t^2)$ schemes analyzed. The S6 scheme is derived by writing the 1-D heat conduction equation as a first-order system for the heat flux:

$$q = k(T)\frac{\partial T}{\partial x}, \qquad \frac{\partial q}{\partial x} = \rho c_p(T)\frac{\partial T}{\partial t}. \qquad (6)$$

The temperature and heat flux at node 1 are known since $x_2$ is a shared boundary between the domains of the direct and inverse problems. Equation (6) is discretized using forward differencing in space and central differencing in time. For each node, $n = 1, \ldots, N_I - 1$:

$$T_{n+1}^0 = T_n^0 \quad \text{and} \quad q_{n+1}^0 = q_n^0 \qquad (7)$$

$$T_{n+1}^m = T_n^m + \Delta x\left(q_n^m / k_n^m\right) \quad \text{and} \quad q_{n+1}^m = q_n^m + \Delta x\,\rho\, c_{p,n}^m\,\frac{T_n^{m+1} - T_n^{m-1}}{2\Delta t}, \quad m = 1, \ldots, M-1 \qquad (8)$$

$$T_{n+1}^M = 2T_{n+1}^{M-1} - T_{n+1}^{M-2} \quad \text{and} \quad q_{n+1}^M = 2q_{n+1}^{M-1} - q_{n+1}^{M-2} \qquad (9)$$

Equations (7) to (9) yield temperature and heat flux distributions in the domain $x \in [x_2, L]$ marching from node 1 to node $N_I$ using boundary values given by the DHCP solution at depth $x_2$, i.e.

$$T(x_2, t) = T_{\text{DHCP}}|_{x_2} \quad \text{and} \quad q(x_2, t) = q_{\text{DHCP}}|_{x_2} \qquad (10)$$

where $T_{\text{DHCP}}|_{x_2}$ denotes the filtered temperature at depth $x_2$ and $q_{\text{DHCP}}|_{x_2}$ denotes the heat flux solution at depth $x_2$ calculated from Eq. (5). Starting with node 1, the solution is marched in time before proceeding to the next node.
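To make the space-marching structure of Eqs. (7) to (9) concrete, the following minimal Python sketch implements the march for the linear, constant-property case (as in the model problem of Section III). The function name, array layout, and constant-property simplification are illustrative assumptions, not the authors' code; real use requires the filtered boundary data and the stabilization considerations discussed above, since the march amplifies noise.

```python
import numpy as np

def s6_style_march(T0, q0, k, rho, cp, dx, dt, n_space):
    """Space-march temperature and heat flux from the sensor depth x2
    toward the surface x = L, following the structure of Eqs. (7)-(9).
    T0, q0 : arrays of length M+1 with T(x2, t) and q(x2, t) at times
             t = 0, dt, ..., M*dt (from the DHCP solution, Eqs. (4)-(5)).
    Returns (T, q) of shape (n_space, M+1); the last row is x = L."""
    M = len(T0) - 1
    T = np.zeros((n_space, M + 1))
    q = np.zeros((n_space, M + 1))
    T[0], q[0] = T0, q0
    for n in range(n_space - 1):
        # Eq. (7): copy the initial time level to the next spatial node.
        T[n + 1, 0], q[n + 1, 0] = T[n, 0], q[n, 0]
        # Eq. (8): forward difference in space, central difference in time.
        for m in range(1, M):
            T[n + 1, m] = T[n, m] + dx * q[n, m] / k
            q[n + 1, m] = q[n, m] + dx * rho * cp * (
                T[n, m + 1] - T[n, m - 1]) / (2 * dt)
        # Eq. (9): linear extrapolation for the final time level.
        T[n + 1, M] = 2 * T[n + 1, M - 1] - T[n + 1, M - 2]
        q[n + 1, M] = 2 * q[n + 1, M - 1] - q[n + 1, M - 2]
    return T, q
```

Note how the entire time history at node $n$ is completed before advancing to node $n+1$, exactly as the text describes; this is what makes the central time difference in Eq. (8) available at every spatial step.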
III. Methodology Characterization

The methodology in Pizzo, et. al.1 is characterized by studying (a) the effects of embedded thermocouple depth through the thickness of a material on the accuracy of the estimated surface temperature and heat flux, (b) the sensitivity of the surface solution to the error in the specified location of the embedded thermocouples, and (c) the sensitivity to the error in the thermocouple data. For the characterization, it is assumed that one thermocouple is installed on the back face of a material and a second thermocouple is embedded through the thickness. The motivation for this work is to study the factors that affect solution accuracy and identify best practices when solving inverse problems using the methodology presented.

Model Problem

Each numerical study is performed using a model problem for which an analytical solution exists. The model assumes that one thermocouple is embedded through the thickness of a material and a second thermocouple is installed on the back face. Numerical solutions are compared to the exact analytical solution to assess accuracy of the method. Assuming constant thermophysical properties, the heat conduction equation (1) reduces to:

$$\frac{1}{\alpha}\frac{\partial T}{\partial t} = \frac{\partial^2 T}{\partial x^2}, \qquad (11)$$

where $\alpha = k/\rho c_p$. In each study, Eq. (11) is solved analytically in $0 \le x \le L$ for $t \in [0, 100]$ s, with initial and boundary conditions:

$$T(x, 0) = 294 \text{ K}, \quad 0 \le x \le L$$
$$k(\partial T/\partial x)(0, t) = q_0, \quad t > 0 \qquad (12)$$
$$k(\partial T/\partial x)(L, t) = q_L, \quad t > 0$$

and thermophysical properties:

$$k = 0.0575 \text{ W/(m-K)}, \quad \rho = 0.0015 \text{ kg/cm}^3, \quad c_p = 1833.33 \text{ J/(kg-K)}. \qquad (13)$$

Boundary conditions of $q_0 = 0$ W/cm², $q_L = 40$ W/cm² (herein identified as the Constant Case) and $q_0 = 0$ W/cm², $q_L = (40t/10)$ W/cm² (herein identified as the Linear Case) are considered in Eq. (12). The imposed heat flux boundary conditions at depth $x_L$ are graphed versus time in Figure 2 for both the (a) Constant Case and (b) Linear Case, where $t \in [0, 100]$ s. The corresponding analytical temperature solutions at depth $x_L$, denoted by $T_L$, are graphed versus time in Figure 3 for both the (a) Constant Case and (b) Linear Case.

[Figure 2. Illustration of the imposed heat flux $q_L$ graphed versus time: (a) Constant Case, (b) Linear Case.]
[Figure 3. Illustration of the analytical temperature $T_L$ graphed versus time: (a) Constant Case, (b) Linear Case.]

For each of the numerical studies presented in this paper, solution convergence was achieved by refining the spatial domain of the direct problem. With each successive refinement, the cell size was divided in half and the unbiased variance was calculated between consecutive refinements using the temperature of the node at the midpoint of the spatial domain. Once the variance fell below 0.01 K, the solution was considered converged using the refinement with the smaller cell size. Moreover, time-steps of $\Delta t = 0.1$ s and $\Delta t = 0.01$ s were considered in each study.

The estimated and exact temperature at $x = L$ are used to compute the error in temperature:

$$e_T(t) = 100\left(\frac{T^{\text{estimated}}(L, t) - T_L}{T_L}\right) \qquad (14)$$

where the exact temperature $T_L$ is obtained analytically. The graphical representations of the analytical temperatures are shown in Figure 3. The estimated and imposed heat flux at $x = L$ are used to compute the error in heat flux:

$$e_q(t) = 100\left(\frac{q^{\text{estimated}}(L, t) - q_L}{q_L}\right) \qquad (15)$$

where the imposed heat flux is given by $q_L = 40$ W/cm² in the Constant Case and $q_L = (40t/10)$ W/cm² in the Linear Case.
The graphical representations of the imposed heat flux are shown in Figure 2. The absolute values were removed from the standard error calculation in Eqs. (14) and (15) to examine when the numerical solution either overestimated ($e > 0$) or underestimated ($e < 0$) the solution.

Effects of Embedded Thermocouple Depth Through the Thickness of a Material

A numerical study was conducted to determine if there is an optimal depth to embed one thermocouple through the thickness of a material. Fifteen different depths were considered for the location of the embedded thermocouple, namely depths $x_1, x_2, \ldots, x_{14}, x_{15}$ as illustrated in Figure 4. Using the analytical temperature solutions of Eqs. (11) to (13) at depth $x_0$ paired with the solutions at depths $x_1, x_2, \ldots, x_{14}, x_{15}$ as inputs to the DHCP, heat flux histories are estimated at depths $x_1, x_2, \ldots, x_{14}, x_{15}$. The boundary values in Eq. (10) are then given by the analytical temperature and estimated heat flux at each depth, and are used to solve the IHCP to estimate the surface temperature and heat flux histories, $T_L$ and $q_L$, respectively.

[Figure 4. Illustration of the fifteen different depths considered for the embedded thermocouple.]

In both the Constant and Linear Case studies with a time-step of $\Delta t = 0.01$ s, the ill-posedness of the inverse problem resulted in growing oscillations as the solution marched from each simulated thermocouple depth to $x_L$. The oscillatory growth was greatest for depths $x_1$ to $x_6$, yielding meaningless results with embedded thermocouples nearest the back face. The oscillations are a direct result of the domain size in the inverse problem combined with a small time-step. With small time-steps, larger domain sizes for the inverse problem allow for greater oscillatory growth. On the other hand, in both the Constant and Linear Case studies with a time-step of $\Delta t = 0.1$ s, the effect of the ill-posedness of the inverse problem was negligible for all embedded thermocouple depths $x_1$ to $x_{15}$. Temperature and heat flux estimations are therefore only given for the Constant and Linear Case studies with a time-step of $\Delta t = 0.1$ s. Moreover, only the estimations from depths $x_7$ to $x_{15}$ are considered for assessment due to the growing oscillations observed for depths $x_1$ to $x_6$ with the smaller time-step of $\Delta t = 0.01$ s.

The Constant Case temperature and heat flux estimates are shown in Figure 5(a) and Figure 5(b) for $\Delta t = 0.1$ s. Neglecting depths $x_1$ to $x_6$, the smallest magnitude of error in temperature is obtained for depths $x_8$, $x_{11}$, $x_{12}$, $x_7$, and $x_{10}$ with error ranging from 0% to 4% between $t \in [0, 10]$ s, and from 0% to 2% by $t = 100$ s. The largest magnitude of error in temperature is obtained for depths $x_{13}$, $x_{15}$, $x_9$, and $x_{14}$ with error greater than 5% between $t \in [0, 10]$ s, and from 2% to 3% by $t = 100$ s. The smallest magnitude of error in heat flux is obtained for depths $x_{12}$, $x_{11}$, $x_{10}$, $x_7$, $x_{13}$, and $x_{15}$ with error ranging from −1% to 1% for $t \in [0, 100]$ s. The largest magnitude of error in heat flux is obtained for depths $x_9$, $x_8$, and $x_{14}$ with error greater than ±2% for $t \in [0, 100]$ s.

The Linear Case temperature and heat flux estimates are shown in Figure 6(a) and Figure 6(b) for $\Delta t = 0.1$ s. Neglecting depths $x_1$ to $x_6$, the smallest magnitude of error in both temperature and heat flux is obtained for depths $x_{15}$, $x_{14}$, $x_{13}$, $x_{12}$, $x_{11}$, and $x_{10}$ with error decreasing below −0.2% by $t = 100$ s for temperature, and decreasing below 0.05% by $t = 100$ s for heat flux.

In the Linear Case, depths $x_{13}$ to $x_{15}$ yield the smallest magnitude of error in both temperature and heat flux, followed closely by depths $x_{10}$ to $x_{12}$. However, in the Constant Case, depths $x_{13}$ to $x_{15}$ yield the largest magnitude of error in both temperature and heat flux, whereas depths $x_{10}$ to $x_{12}$ yield the smallest. Considering all of the numerical results, the studies demonstrate that when using the methodology presented in [1] the surface estimations for both temperature and heat flux are consistently the most accurate if one thermocouple is embedded between depths $x_{10}$ and $x_{12}$ ($5L/8$ and $3L/4$) when the second thermocouple is installed on the back face. This finding is based entirely off of the model problem previously discussed, and is independent of time-step. Further assessments should be conducted to conclusively make this assertion, as alluded to in the concluding remarks in Section IV.

[Figure 5. Error in the Constant Case estimations using a time-step of $\Delta t = 0.1$ s: (a) temperature error at $x = L$, (b) heat flux error at $x = L$, for depths $x_1$ through $x_{15}$ over $t \in [0, 100]$ s.]
[Figure 6. Error in the Linear Case estimations, comparing the numerical results to the analytical solution with a time-step of $\Delta t = 0.1$ s: (a) temperature error at $x = L$, (b) heat flux error at $x = L$. In (b), the curves for $x_1$, $x_2$, $x_4$, and $x_8$ are off-scale, outside the plotted range of $e_q(t)$.]
DHCP π ₯2 π Ώ π Ώ Surface estimations are compared against the error in temperature and heat flux, π and π respectively, obtained π π from the embedded thermocouple depth study. The known π and π from depth π ₯ , shown in Figure 5 and Figure π π 12 6, are used as a baseline in which case the perturbation, π Ώ, is equal to 0.00. Since the model problem is a linear problem, the difference in error in the estimated temperature and heat flux is expected to be proportional to the prescribed error in the simulated baseline data. The Constant Case temperature and heat flux estimates are shown in Figure 7(a) and Figure 7(b) for Ξ π ‘ =0.1 s. Additionally, the Linear Case temperature and heat flux estimates are shown in Figure 8(a) and Figure 8(b) for Ξ π ‘ = 0.1 s. Results demonstrate that for both the Constant and Linear Case, as π Ώ increases (decreases) towards +0.15 (β 0.15), the error in surface temperature and heat flux increases (decreases). Hence, the difference in error in the estimated temperature and heat flux is proportional to the prescribed baseline error, which was to be expected. The results also demonstrate that the heat flux error bounds remain constant with time whereas the temperature error bounds decrease with time. Moreover, while the error in temperature stays well within the bounds of the perturbation (Β±15%), the error in heat flux follows closely to the bounds of the perturbation. Sensitivity of Surface Solution to Error in Thermocouple Data A numerical study was also conducted to assess the sensitivity of the surface solution to the error in thermocouple data. For the sensitivity study, the embedded thermocouple is assumed to be at depth 3π Ώ/4. To simulate error in thermocouple data, the stated methodology is solved with inaccurate temperature data located at accurate depths. The temperature data are perturbed by a factor of π Ώ. For π Ώ = 0.01, 0.02, 0.03, 0.04, 0.05, 0.10, and 0.15, the DHCP is solved with analytical solutions of π (π ₯ ,π ‘) and π (π ₯ ,π ‘)Β±π Ώπ (π ₯ ,π ‘) and depth inputs of π ₯ =0 1 2 2 1 and π ₯ =3 /4. The solution yields π defined by Eq. (10). Boundary values π (π ₯ ,π ‘)Β±π Ώπ (π ₯ ,π ‘) and π 2 DHCP π ₯2 2 2 DHCP π ₯2 are then used to solve the IHCP to estimate the surface histories, π and π . π Ώ π Ώ Surface estimations are compared against the error in temperature and heat flux, π and π , respectively, obtained π π from the embedded thermocouple depth study. The known π and π from depth π ₯ , shown in Figure 5 and Figure π π 12 6, are used as a baseline in which case the perturbation, π Ώ, is equal to 0.00. Since the model problem is a linear problem, the difference in error in the estimated temperature and heat flux is expected to be proportional to the prescribed error in the simulated baseline data. The Constant Case temperature and heat flux estimates are shown in Figure 9(a) and Figure 9(b) for Ξ π ‘ =0.1 s. Additionally, the Linear Case temperature and heat flux estimates are shown in Figure 10(a) and Figure 10(b) for Ξ π ‘ =0.1 s. Results demonstrate that for both the Constant and Linear Case, as π Ώ increases (decreases) towards +0.15 (β 0.15), the error in surface temperature and heat flux increase (decrease). Hence, the difference in error in the estimated temperature and heat flux is proportional to the prescribed baseline error, which was to be expected. The results also demonstrate that the temperature error bounds remain constant with time, whereas the heat flux error increases with time. 
Moreover, while the error in temperature follows closely to the bounds of the perturbation (Β±15%), the error in heat flux is well outside the bounds of the perturbation. 9 American Institute of Aeronautics and Astronautics Perturbation of embedded thermocouple depth, π Ή (a) Temperature error at π =π ³ Perturbation of embedded thermocouple depth, π Ή (b) Heat flux error at π =π ³ Figure 7. Sensitivity of the surface estimations to the error in the specified location of the embedded thermocouple for the Constant Case using a time-step of π «π =0.1 s. 10 American Institute of Aeronautics and Astronautics See more
{"url":"https://www.zlibrary.to/dl/nasa-technical-reports-server-ntrs-20170001312-characterization-of-a-method-for-inverse-heat-conduction-using-real-and-simulated-thermocouple-data","timestamp":"2024-11-13T01:26:46Z","content_type":"text/html","content_length":"175004","record_id":"<urn:uuid:6fb8cc47-7878-4d03-a675-d5670cbd81cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00073.warc.gz"}
Rapid learning of predictive maps with STDP and theta phase precession

The predictive map hypothesis is a promising candidate principle for hippocampal function. A favoured formalisation of this hypothesis, called the successor representation, proposes that each place cell encodes the expected state occupancy of its target location in the near future. This predictive framework is supported by behavioural as well as electrophysiological evidence and has desirable consequences for both the generalisability and efficiency of reinforcement learning algorithms. However, it is unclear how the successor representation might be learnt in the brain. Error-driven temporal difference learning, commonly used to learn successor representations in artificial agents, is not known to be implemented in hippocampal networks. Instead, we demonstrate that spike-timing dependent plasticity (STDP), a form of Hebbian learning, acting on temporally compressed trajectories known as 'theta sweeps', is sufficient to rapidly learn a close approximation to the successor representation. The model is biologically plausible – it uses spiking neurons modulated by theta-band oscillations, diffuse and overlapping place cell-like state representations, and experimentally matched parameters. We show how this model maps onto known aspects of hippocampal circuitry and explains substantial variance in the temporal difference successor matrix, consequently giving rise to place cells that demonstrate experimentally observed successor representation-related phenomena including backwards expansion on a 1D track and elongation near walls in 2D. Finally, our model provides insight into the observed topographical ordering of place field sizes along the dorsal-ventral axis by showing this is necessary to prevent the detrimental mixing of larger place fields, which encode longer timescale successor representations, with more fine-grained predictions of spatial location.

This theoretical work is important in that it bridges neural mechanisms within the hippocampus with the abstract computations it is thought to support for reinforcement learning. The study offers a potential mechanism by which spike timing dependent plasticity and theta phase precession within spiking neurons in CA3 and CA1 can yield successor representations. The simulations are compelling in that they continue to hold even when some of the simple but less realistic assumptions are relaxed in support of more realistic scenarios consistent with biological data.

Knowing where you are and how to navigate in your environment is an everyday existential challenge for motile animals. In mammals, a key brain region supporting these functions is the hippocampus (Scoville and Milner, 1957; Morris et al., 1982), which represents self-location through the population activity of place cells – pyramidal neurons with spatially selective firing fields (O'Keefe and Dostrovsky, 1971). Place cells, in conjunction with other spatially tuned neurons (Taube et al., 1990; Hafting et al., 2005), are widely held to constitute a 'cognitive map' encoding information about the relative location of remembered locations and providing a basis upon which to flexibly navigate (Tolman, 1948; O'Keefe and Nadel, 1978). The hippocampal representation of space incorporates spike time and spike rate based encodings, with both components conveying broadly similar levels of information about self-location (Skaggs et al., 1996b; Huxter et al., 2003).
Thus, the position of an animal in space can be accurately decoded from place cell firing rates (Wilson and McNaughton, 1993) as well as from the precise time of these spikes relative to the background 8–10 Hz theta oscillation in the hippocampal local field potential (Huxter et al., 2003). The latter is made possible since place cells have a tendency to spike progressively earlier in the theta cycle as the animal traverses the place field – a phenomenon known as phase precession (O'Keefe and Recce, 1993). Therefore, during a single cycle of theta the activity of the place cell population smoothly sweeps from representing the past to representing the future position of the animal (Maurer et al., 2006), and can simulate alternative possible futures across multiple cycles (Johnson and Redish, 2007).

In order for a cognitive map to support planning and flexible goal-directed navigation, it should incorporate information about the overall structure of space and the available routes between locations (Tolman, 1948; O'Keefe and Nadel, 1978). Theoretical work has identified the regular firing patterns of entorhinal grid cells with the former role, providing a spatial metric sufficient to support the calculation of navigational vectors (Bush et al., 2015; Banino et al., 2018). In contrast, associative place cell – place cell interactions have been repeatedly highlighted as a plausible mechanism for learning the available transitions in an environment (Muller et al., 1991; Blum and Abbott, 1996; Mehta et al., 2000). In the hippocampus, such associative learning has been shown to follow a spike-timing dependent plasticity (STDP) rule (Bi and Poo, 1998) – a form of Hebbian learning where the temporal ordering of spikes between presynaptic and postsynaptic neurons determines whether long-term potentiation or depression occurs. One of the consequences of phase precession is that correlates of behaviour, such as position in space, are compressed onto the timescale of a single theta cycle and thus coincide with the time-window of STDP, $O(20\text{–}50\,\text{ms})$ (Skaggs et al., 1996b; Mehta et al., 2000; Mehta, 2001; Mehta et al., 2002). This combination of theta sweeps and STDP has been applied to model a wide range of sequence learning tasks (Jensen and Lisman, 1996; Koene et al., 2003; Reifenstein et al., 2021), and as such, potentially provides an efficient mechanism to learn from an animal's experience – forming associations between cells which are separated by behavioural timescales much larger than that of STDP.

Spatial navigation can readily be understood as a reinforcement learning problem – a framework which seeks to define how an agent should act to maximise future expected reward (Sutton and Barto, 1998). Conventionally, the value of a state is defined as the expected cumulative reward that can be obtained from that location with some temporal discount applied. Thus, the relationship between states and the rewards expected from those states are captured in a single value which can be used to direct reward-seeking behaviour. However, the computation of expected reward can be decomposed into two components – the successor representation, a predictive map capturing the expected location of the agent discounted into the future, and the expected reward associated with each state (Dayan, 1993).
Such segregation yields several advantages since information about available transitions can be learnt independently of rewards and thus changes in the locations of rewards do not require the value of all states to be re-learnt. This recapitulates a number of long-standing theories of hippocampal function which state that the hippocampus provides spatial representations that are independent of the animal's particular goal and support goal-directed spatial navigation (Redish and Touretzky, 1998; Burgess et al., 1997; Koene et al., 2003; Hasselmo and Eichenbaum, 2005; Erdem and Hasselmo, 2012).

A growing body of empirical and theoretical evidence suggests that the hippocampal spatial code functions as a successor representation (Stachenfeld et al., 2017). Specifically, that the activity of hippocampal place cells encodes a predictive map over the locations the animal expects to occupy in the future. Notably, this framework accounts for phenomena such as the skewing of place fields due to stereotyped trajectories (Mehta et al., 2000), the reorganisation of place fields following a forced detour (Alvernhe et al., 2011), and the behaviour of humans and rodents whilst navigating physical, virtual, and conceptual spaces (Momennejad et al., 2017; de Cothi et al., 2022). However, the successor representation is typically conceptualised as being learnt using the temporal difference learning rule (Russek et al., 2017; de Cothi and Barry, 2020), which uses the prediction error between expected and observed experience to improve the predictions. Whilst correlates of temporal difference learning have been observed in the striatum during reward-based learning (Schultz et al., 1997), it is less clear how it could be implemented in the hippocampus to learn a predictive map.

In this context, we hypothesised that the predictive and compression properties of theta sweeps, combined with STDP in the hippocampus, might be sufficient to approximately learn a successor representation. We simulated the synaptic weights learnt due to STDP between a set of synthetic spiking place cells and show they closely resemble the weights of a successor representation learnt with temporal difference learning. We found that the inclusion of theta sweeps with the STDP rule increased the efficiency and robustness of the learning, with the STDP weights being a close approximation to the temporal difference successor matrix. Further, we find no fine tuning of parameters is needed – biologically determined parameters are optimal to efficiently approximate a successor representation and replicate experimental results synonymous with the predictive map hypothesis, including the behaviourally biased skewing of place fields (Mehta et al., 2000; Stachenfeld et al., 2017) in realistic one- and two-dimensional environments. Finally, we use the simulation of STDP with theta sweeps to generate insight into the observed topographical ordering of place field sizes along the dorsal-ventral hippocampal axis (Kjelstrup et al., 2008), by observing that such organisation is necessary to prevent the detrimental mixing of larger place fields, which approximate longer timescale successor representations (Momennejad and Howard, 2018), with more fine-grained predictions of future spatial location.
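For readers unfamiliar with how the successor representation is conventionally learnt, the following minimal Python sketch shows the tabular temporal difference update for the successor matrix on a biased random walk. The environment, learning rate, and discount factor are illustrative assumptions; this is the standard textbook formulation rather than the feature-based rule derived in the model's Methods.

```python
import numpy as np

n_states, gamma, lr = 10, 0.95, 0.1
M = np.zeros((n_states, n_states))   # M[s, s'] ~ expected discounted future
                                     # occupancy of s' when starting from s
rng = np.random.default_rng(1)

s = 0
for _ in range(50_000):
    # Random walk on a ring, biased rightward (like runs along a track).
    s_next = (s + (1 if rng.random() < 0.8 else -1)) % n_states
    onehot = np.eye(n_states)[s]
    # TD update: move M[s] toward the immediate occupancy of s plus the
    # discounted prediction carried by the successor state.
    M[s] += lr * (onehot + gamma * M[s_next] - M[s])
    s = s_next
```

The behavioural bias shows up as asymmetry in the learnt matrix, which is exactly the signature the results below compare against the STDP-learnt weights.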
Our model, focussing on the role of theta sweeps and STDP in learning a hippocampal predictive map, is part of a growing body of recent work emphasising hippocampally plausible mechanisms for learning successor representations, such as using hippocampal recurrence (Fang et al., 2023) or synaptic learning rules which bootstrap long-range predictive associations (Bono et al., 2023). We set out to investigate whether a combination of STDP and phase precession is sufficient to generate a successor representation-like matrix of synaptic weights between place cells in CA3 and downstream CA1. The model comprises an agent exploring a maze where its position $\mathbf{x}(t)$ is encoded by the instantaneous firing of a population of $N$ CA3 basis features, each with a spatial receptive field $f_j^x(\mathbf{x})$ given by a thresholded Gaussian of radius 1 m and 5 Hz peak firing rate. As the agent traverses the receptive field, its rate of spiking is subject to phase precession $f_j^{\theta}(\mathbf{x},t)$ with respect to a 10 Hz theta oscillation. This is implemented by modulating the firing rate by an independent phase precession factor which varies according to the current theta phase and how far through the receptive field the agent has travelled (Chadwick et al., 2015) (see Methods and Figure 1a) such that, in total, the instantaneous firing rate of the $j^{\text{th}}$ basis feature is given by:

(1) $f_j(\mathbf{x},t) = f_j^x(\mathbf{x})\, f_j^{\theta}(\mathbf{x},t).$

Figure 1: STDP between phase precessing place cells produces successor representation-like weight matrices.

CA3 basis features $f_j$ then linearly drive downstream CA1 'STDP successor features' $\tilde{\psi}_i$ (Figure 1b):

(2) $\tilde{\psi}_i(\mathbf{x},t) = \sum_j \mathsf{W}_{ij}\, f_j(\mathbf{x},t).$

Using an inhomogeneous Poisson process, the firing rates of the basis and STDP successor features are converted into spike trains which drive learning in the weight matrix $\mathsf{W}_{ij}$ according to an STDP rule (see Methods and Figure 1c). The STDP synaptic weight matrix $\mathsf{W}_{ij}$ (Figure 1d) can then be directly compared to the temporal difference (TD) successor matrix $\mathsf{M}_{ij}$ (Figure 1e), learnt via TD learning on the CA3 basis features (the full learning rule is derived in Methods and shown in Equation 27). Further, the TD successor matrix $\mathsf{M}_{ij}$ can also be used to generate the 'TD successor features'

(3) $\psi_i(\mathbf{x}) = \sum_j \mathsf{M}_{ij}\, f_j^x(\mathbf{x}),$

allowing for direct comparison and analyses with the STDP successor features $\tilde{\psi}_i$ (Equation 2), using the same underlying firing rates driving the TD learning to sample spikes for the STDP learning. This abstraction of biological detail avoids the challenges and complexities of implementing a fully spiking network, although an avenue for correcting this would be the approach of Brea et al., 2016 and Bono et al., 2023. In our model, phase precession generates theta sweeps (Figure 1a, grey box) as cells successively visited along the current trajectory fire at progressively later times in each theta cycle. Theta sweeps take the current trajectory of the agent and effectively compress it in time. As we show below, these compressed trajectories are important for learning successor features.
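To make this pipeline concrete, here is a minimal sketch (not the authors' code; it assumes numpy and firing rates pre-computed on a discrete time grid) of Equation 2 and the rate-to-spike conversion described above:

```python
# Sketch: CA3 rates linearly drive CA1 'STDP successor feature' rates (Eq. 2),
# and both rate arrays are converted to spikes via an inhomogeneous Poisson
# process (the binned approximation is valid when rate * dt << 1).
import numpy as np

rng = np.random.default_rng(0)

def ca1_rates(W, f_ca3):
    # psi_i(x, t) = sum_j W_ij f_j(x, t); f_ca3 has shape (N_cells, N_timesteps)
    return W @ f_ca3

def poisson_spikes(rates, dt):
    # In each small time bin, P(spike) is approximately rate * dt
    return rng.random(rates.shape) < rates * dt
```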
The STDP learned synaptic weight matrix closely approximates the TD successor matrix

We first simulated an agent with $N=50$ evenly spaced CA3 place cell basis features on a 5 m circular track (a linear track with circular boundary conditions to form a closed loop, Figure 2a). The agent moved left-to-right at a constant velocity for 30 min, performing ∼58 complete traversals of the loop. The STDP weights learnt between the phase precessing basis features and their downstream STDP successor features (Figure 2b) were markedly similar to the successor representation matrix generated using temporal difference learning applied to the same basis features under the same conditions (Figure 2c; element-wise Pearson correlation between matrices, $R^2=0.87$). In particular, the agent's strong left-to-right behavioural bias led to the characteristic asymmetry in the STDP weights predicted by successor representation models (Stachenfeld et al., 2017), with both matrices dominated by a wide band of positive weight shifted left of the diagonal and negative weights shifted right.

Figure 2: Successor matrices are rapidly approximated by STDP applied to spike trains of phase precessing place cells.

To compare the structure of the STDP weight matrix $\mathsf{W}_{ij}$ and TD successor matrix $\mathsf{M}_{ij}$, we aligned each row on the diagonal and averaged across rows (see Methods), effectively calculating the mean distribution of learnt weights originating from each basis feature (Figure 2d). Both models exhibited a similar distribution, with values smoothly ramping up to a peak just left of centre, before a sharp drop-off to the right caused by the left-to-right bias in the agent's behaviour. In the network trained by TD learning, this is because CA3 place cells to the left of (i.e. preceding) a given basis feature are reliable predictors of that basis feature's future activity, with those immediately preceding it being the strongest predictors and thus conferring the strongest weights to its successor feature. Conversely, the CA3 place cells immediately to the right of (i.e. after) this basis feature are the furthest they could possibly be from predicting its future activity, resulting in minimal weight contributions. Indeed, we observed some of these weights even becoming negative (Figure 2d) – necessary to approximate the sharp drop-off in predictability using the smooth Gaussian basis features. With the STDP model, the similar distribution of weights is caused by the asymmetry in the STDP learning rule combined with the consistent temporal ordering of spikes in a theta sweep. Hence, the sequence of spikes emitted by different cells within a theta cycle directly reflects the order in which their spatial fields are encountered, resulting in commensurate changes to the weight matrix. So, for example, if a postsynaptic neuron reliably precedes its presynaptic cell on the track, the corresponding weight will be reduced, potentially becoming negative. We note that weights changing their sign is not biologically plausible, as it is a violation of Dale's Law (Dale, 1935); this could perhaps be corrected with the addition of global excitation or by recruiting inhibitory interneurons. Notably, the temporal compression afforded by theta phase precession, which brings behavioural effects into the millisecond domain of STDP, is an essential element of this process (Lisman and Grace, 2005; Koene et al., 2003).
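The row-alignment analysis described above is straightforward to sketch in code (a minimal illustration; rolling rows is valid here because of the track's periodic boundary conditions, and `aligned_row_average` is our hypothetical helper name, not the authors'):

```python
# Sketch of the 'align each row on the diagonal and average' analysis (Fig. 2d).
import numpy as np

def aligned_row_average(W):
    N = W.shape[0]
    # Roll each row so its diagonal element lands in the central column.
    aligned = np.array([np.roll(W[i], N // 2 - i) for i in range(N)])
    return aligned.mean(axis=0)  # mean weight profile around each basis feature
```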
When phase precession was removed from the STDP model, the resulting weights failed to capture the expected behavioural bias and thus did not resemble the successor matrix – evidenced by the lack of asymmetry (Figure 2d, dashed line; ratio of mass either side of the y-axis: 4.54 with phase precession vs. 0.99 without) and a decrease in the explained variance of the TD successor matrix (Figure 2e; $R^2=0.87\pm0.01$ vs. $R^2=0.63\pm0.02$ without phase precession). Similarly, without the precise ordering of spikes, the learnt weight matrix was less regular, having increased levels of noise, and converged over $4.5\times$ more slowly (Figure 2e; time to reach $R^2=0.5$: 2.5 min vs. 11.5 min without phase precession), still not having fully converged over the course of 1 hr (Figure 2—figure supplement 1a). Thus, the ability to approximate TD learning appears specific to the combination of STDP and phase precession. Indeed, the two are deeply linked – see Methods section 5.9 for a theoretical investigation of the connection between TD learning and STDP learning augmented with phase precession. This effect is robust to variations in running speed (Figure 2—figure supplement 1b) and field sizes (Figure 2—figure supplement 1c), as well as to scenarios where target CA1 cells have multiple firing fields (Figure 2—figure supplement 2a) that are updated online during learning (Figure 2—figure supplement 2b–d), or are fully driven by spikes in CA3 (Figure 2—figure supplement 2e); see Methods for more details. We also conducted a hyperparameter sweep to test whether these results were robust to changes in the phase precession and STDP learning rule parameters (Figure 2—figure supplement 3). The sweep range for each parameter contained, and extended beyond, the 'biologically plausible' values used in this paper (Figure 2—figure supplement 3a). We found that optimised parameters (those which result in the highest final similarity between the STDP and TD weight matrices, $\mathsf{W}_{ij}$ and $\mathsf{M}_{ij}$) were very close to the biological parameters already selected for our model from a literature search (Figure 2—figure supplement 3c,d; parameter references are also listed in the figure) and, when they were used, no drastic improvement was seen in the similarity between $\mathsf{W}_{ij}$ and $\mathsf{M}_{ij}$. The only exception was the firing rate, for which performance improved monotonically as it increased – something the brain likely cannot achieve due to energy constraints. In particular, the parameters controlling phase precession in the CA3 basis features (Figure 2—figure supplement 4a) can affect the CA1 STDP successor features learnt, with 'weak' phase precession resembling learning in the absence of theta modulation (Figure 2—figure supplement 4b,c), biologically plausible values providing the best match to the TD successor features (Figure 2—figure supplement 4d), and 'exaggerated' phase precession actually hindering learning (Figure 2—figure supplement 4e; see Methods for more details). Additionally, we find these CA1 cells go on to inherit phase precession from the CA3 population even after learning, when they are driven by multiple CA3 fields (Figure 2—figure supplement 4f), and that this learning is robust to realistic phase offsets between the populations of CA3 and CA1 place cells (Figure 2—figure supplement 4g). Next, we examined the correspondence between our model and the TD-trained successor representation in a situation without a strong behavioural bias.
Thus, we reran the simulation on the linear track without the circular boundary conditions, so that the agent turned and continued in the opposite direction whenever it reached either end of the track (Figure 2f). Again, the STDP and TD successor representation weight matrices were remarkably similar ($R^2=0.88$; Figure 2g,h), both being characterised by a wide band of positive weight centred on the diagonal (Figure 2i) – reflecting the directionally unbiased behaviour of the agent. In this unbiased regime, theta sweeps were less important, though they still conferred a modest shape, learning speed, and signal-strength advantage over the non-phase precessing model (Figure 2j) – evidenced by an increased amount of explained variance ($R^2=0.88\pm0.01$ vs. $R^2=0.76\pm0.02$) and faster convergence (time to reach $R^2=0.5$: 3 min vs. 7.5 min). To test whether the STDP model's ability to capture the successor matrix would scale up to open field spaces, we implemented a 2D model of phase precession (see Methods) in which the phase of spiking is sampled according to the distance travelled through the place field along the chord currently being traversed (Jeewajee et al., 2014). We then simulated the agent in an environment consisting of two interconnected 2.5 × 2.5 m square rooms (Figure 2k) using an adapted policy modelling rodent foraging behaviour that is biased towards traversing doorways and following walls (Raudies and Hasselmo, 2012; see Methods; a 10 min sample trajectory is shown in Figure 2k). After 2 hr of exploration, we found that the combination of STDP and phase precession was able to successfully capture the structure in the TD successor matrix (Figure 2l–m; $R^2=0.74$, TD successor matrix calculated over the same 2 hr trajectory).

Theta sequenced STDP place cells show behaviourally biased skewing, a hallmark of successor representations

We next wanted to investigate how the similarities in weights between the STDP and TD successor representation models are conveyed in the downstream CA1 successor features. One hallmark of the successor representation is that strong biases in behaviour (for example, travelling one way round a circular track) induce a reliable predictability of upcoming future locations, which in turn causes a backward skewing of the resulting successor features (Stachenfeld et al., 2017). Such skewing, opposite to the direction of travel, has also been observed in hippocampal place cells (Mehta et al., 2000). Under strongly biased behaviour on the circular track, the biologically plausible STDP CA1 successor features (Equation 2) had a very high correlation with the TD successor features (Equation 3) predicted by successor theory (Figure 3a; $R^2=0.98\pm0.01$). Both exhibited a pronounced backward skew, opposite to the direction of travel (mean TD vs. STDP successor feature skewness: $-0.39\pm0.01$ vs. $-0.24\pm0.07$). Furthermore, both the STDP and TD successor representation models predict that such biased behaviour should induce a backwards shift in the location of place field peaks (Figure 3a, left panel; TD vs. STDP successor feature shift in metres: $-0.28\pm0.00$ vs. $-0.38\pm0.03$) – this phenomenon is also observed in hippocampal place cells (Mehta et al., 2000), and our model accounts for the observation that more shifting and skewing is observed in CA1 place cells than in CA3 place cells (Dong et al., 2021). As expected, when theta phase precession was removed from the model, no significant skew or shift was observed in the STDP successor features.
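The skewness statistic quoted in these analyses can be computed directly from a field's rate map; here is a minimal sketch (assuming a 1D rate map uniformly sampled at positions `x`; the moment-based convention is our assumption, not necessarily the authors' exact estimator):

```python
# Sketch: moment-based skewness of a 1D place field, treating the normalised
# field as a probability density over track position. Negative values indicate
# a backward (opposite-to-travel) skew.
import numpy as np

def field_skewness(rate, x):
    dx = x[1] - x[0]                         # assume uniform position sampling
    p = rate / (rate.sum() * dx)             # normalise field to a density
    mu = (p * x).sum() * dx                  # field centre of mass
    var = (p * (x - mu) ** 2).sum() * dx     # second central moment
    m3 = (p * (x - mu) ** 3).sum() * dx      # third central moment
    return m3 / var ** 1.5
```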
Similarly, the skew in field shape and shift in field peak were not present when the behavioural bias was removed (Figure 3b) – in this unbiased scenario, the advantage of the STDP model with theta phase precession was modest relative to the same model without phase precession ($R^2=0.99\pm0.01$ vs. $R^2=\dots$).

Figure 3: Place cells (aka successor features) in our STDP model show behaviourally biased skewing resembling experimental observations and successor representation predictions.

Examining the activity of CA1 cells in the two-room open field environment, we found an increase in the eccentricity of fields close to the walls (Figure 3c,d; average eccentricity of STDP successor features near vs. far from a wall: $0.57\pm0.06$ vs. $0.33\pm0.07$). In particular, this increased eccentricity is facilitated by a shorter field width along the axis perpendicular to the wall (Figure 3e), an effect observed experimentally in rodent place cells (Tanni et al., 2021). The increased eccentricity of cells near the wall remained when the behavioural bias to follow walls was removed (Figure 3d; average eccentricity with vs. without wall bias: $0.57\pm0.06$ vs. $0.54\pm0.06$), indicating that it is primarily caused by the constraint extended walls inherently impose on behaviour rather than by an explicit policy bias. Note that our ellipse-fitting algorithm accounts for portions of the field that have been cut off by environmental boundaries (see Methods and Figure 3c), so this effect is not simply a product of basis features being occluded by walls. In a similar fashion, the bias in the motion model we used – which is predisposed to move between the two rooms – resulted in a shift of STDP successor feature peaks towards the doorway (Figure 3f,g; inwards shift in metres for STDP successor features near vs. far from the doorway: $0.15\pm0.06$ vs. $0.04\pm0.05$; with the doorway bias turned off: $0.05\pm0.08$ vs. $0.04\pm0.05$). At the level of individual cells, this was visible as an increased propensity for fields to extend into the neighbouring room after learning (Figure 3h). Hence, although basis features were initialised as two approximately non-overlapping populations – with only a small proportion of cells near the doorway extending into the neighbouring room – after learning many cells bind to those on the other side of the doorway, causing their place fields to diffuse through the doorway and into the other room (Figure 3f). This shift could partially explain why place cell activity is found to cluster around doorways (Spiers et al., 2015) and rewarded locations (Dupret et al., 2010) in electrophysiological experiments. Equally, it is plausible that a similar effect might underlie experimental observations that neural representations in multi-compartment environments typically begin heavily fragmented by boundaries and walls but, over time, adapt to form a smooth global representation (e.g. as observed in grid cells by Carpenter et al., 2015).
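For reference, the eccentricity measure used in the wall analysis above can be recovered from the semi-axes of a fitted ellipse; a minimal sketch follows (the authors' actual ellipse-fitting routine, which additionally handles fields cut off by boundaries, is described in their Methods):

```python
# Sketch: eccentricity of a place field from fitted ellipse semi-axes.
import numpy as np

def eccentricity(a, b):
    a, b = max(a, b), min(a, b)           # semi-major, semi-minor axes
    return np.sqrt(1.0 - (b / a) ** 2)    # 0 for a circle, -> 1 when elongated
```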
Multiscale successor representations are stored along the hippocampal dorsal-ventral axis by populations of differently sized place cells

Finally, we wanted to investigate whether the STDP learning rule was able to form successor representation-like connections between basis features of different scales. Recent experimental work has highlighted that place fields form a multiscale representation of space, which is particularly noticeable in larger environments (Tanni et al., 2021; Eliav et al., 2021), such as the one modelled here. Such multiscale spatial representations have been hypothesised to act as a substrate for learning successor features with different time horizons – large-scale place fields are able to make predictions of future location across longer time horizons, whereas place cells with smaller fields are better placed to make temporally fine-grained predictions. Agents could use such a set of multiscale successor features to plan actions at different levels of temporal abstraction, or to predict precisely which states they are likely to encounter soon (Momennejad and Howard, 2018). What is not known, however, is whether differently sized place fields will form associations when subject to STDP coordinated by phase precession, and what effect this would have on the resulting successor features. Hypothetically, consider a small basis feature cell with a receptive field entirely encompassed by that of a larger basis cell, with no theta phase offset between the entry points of the two fields. A potential consequence of theta phase precession is that the cell with the smaller field would phase precess through the theta cycle faster than the other cell – initially it would fire later in the theta cycle than the cell with the larger field, but as the animal moves towards the end of the small basis field it would fire earlier. These periods of potentiation and depression instigated by STDP could act against each other, and the extent to which they cancel each other out would depend on the relative placement of the two fields, their size difference, and the parameters of the learning rule. To test this, we simulated an agent learning according to our STDP model in the circular track environment, with three sets of differently sized basis features simultaneously ($\sigma = 0.5$, 1.0, and 1.5 m; Figure 4a). Such ordered variation in field size has been observed along the dorso-ventral axis of the hippocampus (Kjelstrup et al., 2008; Strange et al., 2014; Figure 4b), and has been theorised to facilitate successor representation predictions across multiple timescales (Stachenfeld et al., 2017; Momennejad and Howard, 2018).

Figure 4: Multiscale successor representations are stored by place cells with multi-sized place fields, but only when sizes are segregated along the dorso-ventral axis.

When we trained the STDP model on a population of homogeneously distributed multiscale basis features, the resulting weight matrix displayed binding across the different sizes regardless of the scale difference (Figure 4c, top). This in turn led to a population of downstream successor features with the same redundantly large scale (Figure 4c, bottom). The negative interaction between differently sized fields was not sufficient to prevent binding and, as such, the place fields of small features became dominated by contributions from bindings to larger basis features. Conversely, when these multiscale basis features were ordered along the dorso-ventral axis to prevent binding between the different scales – that is, cells of the three scales were processed separately (Figure 4d, top) – the multiscale structure was preserved in the resulting successor features (Figure 4d, bottom). We thus propose that place cell size can act as a proxy for the predictive time horizon, $\tau$ – related to the discount parameter, $\gamma = e^{-dt/\tau}$, of discrete Markov Decision Processes.
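For concreteness, using the simulation values given later in the Methods ($dt \approx 0.1$ s and $\tau = 4$ s), the correspondence between the continuous discount horizon and the discrete discount factor works out as

$\gamma = e^{-dt/\tau} = e^{-0.1/4} \approx 0.975,$

so a 4 s predictive horizon corresponds to a per-step (100 ms) discount factor very close to 1.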
However, for this effect to be meaningful, plasticity between cells of different scales must be minimised to prevent short timescales from being overwritten by longer ones; this segregation may plausibly be achieved by the observed size ordering along the hippocampal dorsal-ventral axis.

Successor representations store long-run transition statistics and allow for rapid prediction of future states (Dayan, 1993) – they are hypothesised to play a central role in mammalian navigation strategies (Stachenfeld et al., 2017; de Cothi and Barry, 2020). We show that Hebbian learning between spiking neurons, resembling the place fields found in CA3 and CA1, learns an accurate approximation to the successor representation when these neurons undergo phase precession with respect to the hippocampal theta rhythm. The approximation achieved by STDP explains a large proportion of the variance in the TD successor matrix and replicates hallmarks of successor representations (Stachenfeld et al., 2014; Stachenfeld et al., 2017; de Cothi and Barry, 2020), such as behaviourally biased place field skewing, elongation of place fields near walls, and clustering near doorways, in both one- and two-dimensional environments. That the predictive skew of place fields can be accomplished with an STDP-type learning rule is a long-standing hypothesis; in fact, the authors who originally reported this effect also proposed an STDP-type mechanism for learning these fields (Mehta et al., 2000; Mehta, 2001). Similarly, the possible accelerating effect of theta phase precession on sequence learning has been described in a number of previous works (Jensen and Lisman, 1996; Koene et al., 2003; Reifenstein et al., 2021). Until recently (Fang et al., 2023; Bono et al., 2023), SR models had largely not connected with this literature: they either remain agnostic to the learning rule or assume temporal difference learning, which has been well mapped onto striatal mechanisms (Schultz et al., 1997; Seymour et al., 2004) but whose implementation in the hippocampus remains unclear (Stachenfeld et al., 2014; Stachenfeld et al., 2017; de Cothi and Barry, 2020; Geerts et al., 2020; Vértes and Sahani, 2019). Thus, one contribution of this paper is to quantitatively and qualitatively compare theta-augmented STDP to temporal difference learning, and to demonstrate where these functionally overlap. This explicit link permits some insights about the physiology, such as the observation that the biologically observed parameters for phase precession and STDP resemble those that are optimal for learning the SR (Figure 2—figure supplement 3), and that the topographic organisation of place cell sizes is useful for learning representations over multiple discount timescales (Figure 4). It also permits some insights for RL, such as that the approximate SR learned with theta-augmented STDP, while provably theoretically different from TD (Section: A theoretical connection between STDP and TD learning), is sufficient to capture key qualitative phenomena. Theta phase precession has a dual effect: it not only enables learning by compressing trajectories to within STDP timescales, but also accelerates convergence to a stable representation by arranging the spikes from cells along the current trajectory to arrive in the order those cells are actually encountered (Jensen and Lisman, 1996; Koene et al., 2003). Without theta phase precession, STDP fails to learn a successor representation reflecting the current policy unless that policy is approximately unbiased.
Further, by instantiating a population of place cells with multiple scales, we show that the topographical ordering of these place cells by size along the dorso-ventral hippocampal axis is a necessary feature to prevent small discount timescale successor representations from being overwritten by longer ones. Last, performing a grid search over STDP learning parameters, we show that those values selected by evolution are approximately optimal for learning successor representations. This finding is compatible with the idea that the necessity to rapidly learn predictive maps by STDP has been a primary factor driving the evolution of synaptic learning rules in the hippocampus. While the model is biologically plausible in several respects, there remain a number of aspects of the biology that we do not interface with, such as different cell types, interneurons, and membrane dynamics. Further, we do not consider anything beyond the most simple model of phase precession, which directly results in theta sweeps in lieu of them developing and synchronising across place cells over time (Feng et al., 2015). Rather, our philosophy is to reconsider the most pressing issues with the standard model of predictive map learning in the context of the hippocampus (e.g. the absence of dopaminergic error signals in CA1 and the inadequacy of synaptic plasticity timescales). We believe this minimalism is helpful, both for interpreting the results presented here and for providing a foundation on which further work may examine these biological intricacies, such as whether the model's theta sweeps can alternately represent future routes (Kay et al., 2020), for example by the inclusion of attractor dynamics (Chu et al., 2022). Still, we show this simple model is robust to the observed variation in phase offsets between phase precessing CA3 and CA1 place cells across different stages of the theta cycle (Mizuseki et al., 2012). In particular, this phase offset is most pronounced as animals enter a field ($\sim 90^\circ$) and is almost completely reduced by the time they leave it ($\sim 0^\circ$; Figure 2—figure supplement 4g). Essentially, our model hypothesises that the majority of plasticity induced by STDP and theta phase precession will take place in the latter part of place fields, equating to earlier theta phases. Notably, this is in keeping with experimental data showing enhanced coupling between CA3 and CA1 in these early theta phases (Colgin et al., 2009; Hasselmo et al., 2002). However, as our simulations show (Figure 2—figure supplement 4g), even if these assumptions do not hold true, the model is sufficiently robust to generate SR-equivalent weight matrices for a range of possible phase offsets between CA3 and CA1. Our model extends previous work – which required successor features to recursively expand in order to make long-range predictions (e.g. as demonstrated in Brea et al., 2016; Bono et al., 2023) – by exploiting the existence of temporally compressed theta sweeps (O'Keefe and Recce, 1993; Skaggs et al., 1996b), allowing place cells with distant fields to bind directly without intermediaries or 'bootstrapping'. This configuration yields several advantages. First, learning with theta sweeps converges considerably faster than without them. Biologically, it is likely that successor feature learning via Hebbian learning alone (without theta precession) would be too slow to account for the rapid stabilisation of place cells in new environments at behavioural timescales (Bittner et al., 2017) – Dong et al.
observed place fields in CA1 to increase in width for approximately the first 10 laps around a 3 m track (Dong et al., 2021). This timescale is well matched by our model with theta sweeps, in which CA1 place cells reach 75% of their final extent after 5 min (or 9.6 laps) of exploration on a 5 m track, but learning is markedly slower without theta sweeps. Second, as well as extending previous work to large two-dimensional environments and complex movement policies, our model also uses realistic population codes of overlapping Gaussian features. These naturally present a hard problem for models of spiking Hebbian learning since, in the absence of theta sweeps, the order in which features are encountered is not reliably encoded in the relative timing or order of their spikes at synaptic timescales. Theta sweeps address this by tending to sequence spikes according to the order in which their originating fields are encountered. Indeed, our preliminary experiments show that when theta sweeps are absent, the STDP successor features show little similarity to the TD successor features. Our work is thus particularly relevant in light of a recent trend to focus on biologically plausible features for reinforcement learning (Gustafson and Daw, 2011; de Cothi and Barry, 2020). Other contemporary theoretical works have made progress on biological mechanisms for implementing the successor representation algorithm using somewhat different but complementary approaches. Of particular note are the works by Fang et al., 2023, who show that a recurrent network with weights trained via a Hebbian-like learning rule converges to the successor representation in steady state, and Bono et al., 2023, who derive a learning rule for a spiking feed-forward network which learns the SR of one-hot features by bootstrapping associations across time (see also Brea et al., 2016). Combined, the above models, as well as our own, suggest there may be multiple means of calculating successor features in biological circuits without requiring a direct implementation of temporal difference learning. Our theory makes the prediction that theta contributes to learning predictive representations but is not necessary to maintain them. Thus, inhibiting theta oscillations during exposure to a novel environment should impact the formation of successor features (e.g. the asymmetric backwards skew of place fields) and subsequent memory-guided navigation. However, inhibiting theta in a familiar environment in which experience-dependent changes have already occurred should have little effect on the place fields: that is, some asymmetric backwards skew of place fields should remain intact even with theta oscillations disrupted. To our knowledge, this has not been directly measured, but there are some experiments that provide hints. Experimental work has shown that power in the theta band increases upon exposure to novel environments (Cavanagh et al., 2012) – our work suggests this is because theta phase precession is critical for learning and updating stored predictive maps for spatial navigation. Furthermore, it has been shown that place cell firing can remain broadly intact in familiar environments even with theta oscillations disrupted by temporary inactivation or cooling (Bolding et al., 2020; Petersen and Buzsáki, 2020).
It is worth noting, however, that even with intact place fields, these theta disruptions impair the ability of rodents to reach a hidden goal location that had already been learned, suggesting theta oscillations play a role in navigation behaviours even after initial learning (Bolding et al., 2020; Petersen and Buzsáki, 2020). Other work has shown that muscimol inactivations of the medial septum can disrupt acquisition and retrieval of the memory of a hidden goal location (Chrobak et al., 1989; Rashidy-Pour et al., 1996), although these muscimol inactivations, as Bolding and colleagues show, also disrupt place-related firing, not just theta precession. The SR model has a number of connections to other models from the computational hippocampal literature that bear on the interpretation of these results. A long-standing property of computational models in the hippocampus literature is a factorisation of spatial and reward representations (Redish and Touretzky, 1998; Burgess et al., 1997; Koene et al., 2003; Hasselmo and Eichenbaum, 2005; Erdem and Hasselmo, 2012), which permits spatial navigation to rapidly adapt to changing goal locations. Even within RL, the SR is not unique in factorising spatial and reward representations, as purely model-based approaches do this too (Dayan, 1993; Sutton and Barto, 1998; Daw, 2012). The SR occupies a much narrower niche: factorising reward from spatial representations while caching long-term occupancy predictions (Dayan, 1993; Gershman, 2018). Thus, it may be possible to retain some of the flexibility of model-based approaches while preserving the rapid computation of model-free learning. A number of other models describe how physiological and anatomical properties of the hippocampus may produce circuits capable of goal-directed spatial navigation (Erdem and Hasselmo, 2012; Redish and Touretzky, 1998; Koene et al., 2003). These models adopt an approach more characteristic of model-based RL, searching iteratively over possible directions or paths to a goal (Erdem and Hasselmo, 2012) or replaying sequences to build an optimal transition model from which sampled trajectories converge toward a goal (Redish and Touretzky, 1998) (this model bears some similarities to the SR that are explored by Fang et al., 2023, which shows the dynamics converge to the SR under a similar form of learning). These models rely on dynamics to compute the optimal trajectory, while the SR realises the statistics of these dynamics in the rate code and can therefore adapt very efficiently. Thus, the SR retains some efficiency benefits. These models are very well grounded in known properties of hippocampal physiology, including theta precession and STDP, whereas until recently SR models have enjoyed a much looser affiliation with exact biological mechanisms. Thus, a primary goal of this work is to explore how hippocampal physiological properties relate to SR learning as well. More generally, in principle, any form of sufficiently ordered and compressed trajectory would allow STDP plasticity to approximate a successor representation. Hippocampal replay is a well-documented phenomenon whereby previously experienced trajectories are rapidly recapitulated during sharp-wave ripple events (Wilson and McNaughton, 1994), within which spikes show a form of phase precession relative to the ripple band oscillation (150–250 Hz; Bush et al., 2022).
Thus, our model might explain the abundance of sharp-wave ripples during early exposure to novel environments (Cheng and Frank, 2008) – when new 'informative' trajectories, for example those which lead to reward, are experienced, it is desirable to rapidly incorporate this information into the existing predictive map (Mattar and Daw, 2018). The distribution of place cell receptive field sizes in the hippocampus is not homogeneous. Instead, place field size grows smoothly along the longitudinal axis (from very small in dorsal regions to very large in ventral regions). Why this is the case is not clear – our model contributes by showing that, without this ordering, large and small place cells would all bind via STDP, essentially overwriting the short timescale successor representations learnt by small place cells with long timescale successor representations. Topographically organising place cells by size anatomically segregates place cells with fields of different sizes, preserving the multiscale successor representations. Further, our results exploring the effect of different phase offsets on STDP-successor learning (Figure 2—figure supplement 4g) suggest that the gradient of phase offsets observed along the dorso-ventral axis (Lubenov and Siapas, 2009; Patel et al., 2012) is insufficient to impair the plasticity induced by STDP and phase precession. The premise that such separation is needed to learn multiscale successor representations is compatible with other theoretical accounts of this ordering. Specifically, Momennejad and Howard, 2018 showed that exploiting multiscale successor representations downstream, in order to recover information which is 'lost' in the process of compiling state transitions into a single successor representation, typically requires calculating the derivative of the successor representation with respect to the discount parameter. This derivative calculation is significantly easier if the cells – and therefore the successor representations – are ordered smoothly along the hippocampal axis. Work in control theory has shown that the difficult reinforcement learning problem of finding an optimal policy and value function for a given environment becomes tractable if the policy is constrained to be near a 'default policy' (Todorov, 2009). When applied to spatial navigation, the optimal value function resembles the value function calculated using a successor representation for the default policy. This solution allows for rapid adaptation to changes in the reward structure since the successor matrix is fixed to the default policy and need not be re-learnt even if the optimal policy changes. Building on this, recent work suggested the goal of the hippocampus is not to learn the successor representation for the current policy but rather for a default diffusive policy (Piray and Daw, 2021). Indeed, we found that in the absence of theta sweeps, the STDP rule learns a successor representation close to that of an unbiased policy, rather than the current policy. This is because, without theta sweeps to order spikes along the current trajectory, cells bind according to how much their receptive fields overlap, that is, according to how close they are under a 'diffusive' policy. In this context, it is interesting to note that a substantial proportion of CA3 place cells do not exhibit significant phase precession (O'Keefe and Recce, 1993; Jeewajee et al., 2014).
One possibility is that these place cells with weak or absent phase precession might plausibly contribute to learning a policy-independent 'default representation', useful for rapid policy prediction when the reward structure of an environment is changed. Simultaneously, theta precessing place cells may learn a successor representation for the current (potentially biased) policy, in total giving the animal access to both an off-policy-but-near-optimal value function and an on-policy-but-suboptimal value function. Finally, we comment on the approximate nature of the successor representations learnt by our biologically plausible model. The STDP successor features described here are unlikely to converge analytically to the TD successor features. Potentially, this implies that a value function calculated according to Equation 31 would not be accurate and might prevent an agent from acting optimally. There are several possible resolutions to this point. First, the successor representation is unlikely to be a self-contained reinforcement learning system. In reality, it likely interacts with other model-based or model-free systems acting in other brain regions, such as the nucleus accumbens in the striatum (Lisman and Grace, 2005). Plausibly, errors in the successor features are corrected for by counteracting adjustments in the reward weights, implemented by some downstream model-free, error-based learning system. Alternatively, it is likely that the value function learnt by the brain is either fundamentally approximate or uses a different, less tractable, temporal discounting scheme. Ultimately, although in principle specialised and expensive learning rules might be developed to exactly replicate TD successor features in the brain, this may be undesirable if a simple learning rule (STDP) is adequate in most circumstances. Indeed, animals – including humans – are known to act sub-optimally (Zentall, 2015; de Cothi et al., 2022), perhaps in part because of a reliance on STDP learning rules to learn long-range associations.

General summary of the model

The model comprises an agent exploring a maze where its position $\mathbf{x}$ at time $t$ is encoded by the instantaneous firing of a population of $N$ CA3 basis features, $f_j(\mathbf{x},t)$ for $j \in \{1,\dots,N\}$. Each has a spatial receptive field given by a thresholded Gaussian with a peak firing rate of 5 Hz:

(4) $f_j^x(\mathbf{x}(t)) = \begin{cases} \text{Gaussian}(\mathbf{x}_j, \sigma) - c & \text{if } \|\mathbf{x}(t) - \mathbf{x}_j\| < 1\,\mathrm{m} \\ 0 & \text{otherwise} \end{cases}$

where $\mathbf{x}_j$ is the location of the field peak, $\sigma = 1$ m is the standard deviation, and $c$ is a positive constant that keeps $f_j^x$ continuous at the threshold. The theta phase of the hippocampal local field potential oscillates at 10 Hz and is denoted by $\phi_\theta(t) \in [0, 2\pi]$. Phase precession suppresses the firing rate of a basis feature for all but a short period within each theta cycle. This period (and consequently the time when spikes are produced, described in more detail below) precesses earlier in each theta cycle as the agent crosses the spatial receptive field.
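One concrete reading of Equation 4, as a minimal numpy sketch (matching the explicit form given in Equation 7 below, where the constant $c$ equals the Gaussian's value at the 1 m threshold):

```python
# Sketch: thresholded Gaussian receptive field with peak rate F and threshold
# radius sigma; subtracting exp(-1/2) keeps the field continuous at the 1 m
# threshold, and the prefactor rescales the peak back to F.
import numpy as np

def spatial_rate(x, x_j, F=5.0, sigma=1.0):
    r2 = np.sum((np.asarray(x, float) - np.asarray(x_j, float)) ** 2)
    g = np.exp(-r2 / (2 * sigma ** 2)) - np.exp(-0.5)
    return F / (1 - np.exp(-0.5)) * max(g, 0.0)
```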
Specifically, this is implemented by simply multiplying the spatial firing rate $f_j^x$ by a theta modulation factor which rises and falls according to a von Mises distribution in each theta cycle, peaking at a 'preferred phase', $\varphi_j^*$, which depends on how far through the receptive field the agent has travelled (hence the spike timings implicitly encode location):

(5) $f_j^{\theta}(\varphi_\theta(t)) = \text{VonMises}(\varphi_j^*, \kappa)$

where $\kappa = 1$ is the concentration parameter of the von Mises distribution. These basis features in turn drive a population of $N$ downstream 'STDP successor features' (Equation 2). Firing rates of both populations ($f_j(\mathbf{x},\varphi_\theta)$ and $\tilde{\psi}_i(\mathbf{x},\varphi_\theta)$) are converted to spike trains according to an inhomogeneous Poisson process. These spikes drive learning in the synaptic weight matrix, $\mathsf{W}_{ij}$, according to an STDP learning rule (details below). In summary, if a presynaptic CA3 basis feature fires immediately before a postsynaptic CA1 successor feature, the binding strength between these cells is strengthened; conversely, if they fire in the opposite order, their binding strength is weakened. For comparison, we also implement successor feature learning using a temporal difference (TD) learning rule, referred to as 'TD successor features', $\psi_i(\mathbf{x})$, to provide a ground truth against which we compare the STDP successor features. Like the STDP successor features, these are constructed as a linear combination of basis features (Equation 3). Temporal difference learning updates $\mathsf{M}_{ij}$ as follows:

(6) $\mathsf{M}_{ij} \leftarrow \mathsf{M}_{ij} + \eta\, \delta_{ij}^{\text{TD}}$

where $\delta_{ij}^{\text{TD}}$ is the temporal difference error, which we derive below. In reinforcement learning, the temporal difference error is used to learn discounted value functions (successor features can be considered a special type of value function). It works by comparing an unbiased sample of the true value function to the currently held estimate. The difference between these is known as the temporal difference error and is used to update the value estimate until, eventually, it converges on (or close to) the true value function.

Phase precession model details

In our hippocampal model, CA3 place cells, referred to as basis features and indexed by $j$, have thresholded Gaussian receptive fields. The threshold radius is $\sigma = 1$ m and the peak firing rate is $F = 5$ Hz. Mathematically, this is written as

(7) $f_j^x(\mathbf{x}(t)) = \frac{F}{1 - e^{-1/2}}\left[e^{-\frac{\|\mathbf{x}(t) - \mathbf{x}_j\|^2}{2\sigma^2}} - e^{-\frac{1}{2}}\right]_+$

where $[f(x)]_+ = \max(0, f(x))$, $\mathbf{x}_j$ is the centre of the receptive field, and $\mathbf{x}(t)$ is the current location of the agent. Phase precession is implemented by multiplying the spatial firing rate, $f_j^x(\mathbf{x})$, by a phase precession factor

(8) $f_j^{\theta}(\varphi_\theta(t)) = 2\pi f_{\text{VM}}\left(\varphi_\theta(t)\,\middle|\,\varphi_j^*(\mathbf{x}), \kappa\right)$

where $f_{\text{VM}}(x|\mu,\kappa)$ denotes the circular von Mises distribution on $x \in (0, 2\pi]$ with mean $\mu = \varphi_j^*(\mathbf{x})$ and spread parameter $\kappa = 1$.
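A minimal sketch of the phase precession factor of Equation 8, assuming scipy's von Mises density (`f_theta` is our hypothetical helper name):

```python
# Sketch: theta modulation factor, 2*pi times the von Mises density so that
# the factor averages to ~1 over a full theta cycle (kappa = 1 as in the text).
import numpy as np
from scipy.stats import vonmises

def f_theta(phi_theta, phi_star, kappa=1.0):
    return 2 * np.pi * vonmises.pdf(phi_theta, kappa, loc=phi_star)
```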
This factor is large only when the current theta phase,

(9) $\varphi_\theta(t) = 2\pi \nu_\theta t \pmod{2\pi},$

which oscillates at $\nu_\theta = 10$ Hz, is close to the cell's 'preferred' theta phase,

(10) $\varphi_j^*(\mathbf{x}(t)) = \pi + \beta \pi\, d_j(\mathbf{x}(t)).$

$d_j(\mathbf{x}(t)) \in [-1, 1]$ tracks how far through the cell's spatial receptive field, measured in units of $\sigma$, the agent has travelled:

(11) $d_j(\mathbf{x}(t)) = \frac{(\mathbf{x}(t) - \mathbf{x}_j) \cdot \frac{\dot{\mathbf{x}}(t)}{\|\dot{\mathbf{x}}(t)\|}}{\sigma}.$

In instances where the agent travels directly across the centre of a cell (as is the case in 1D environments), $(\mathbf{x}(t) - \mathbf{x}_j)$ and the normalised velocity (a vector of length 1, pointing in the direction of travel), $\frac{\dot{\mathbf{x}}(t)}{\|\dot{\mathbf{x}}(t)\|}$, are parallel, such that $d_j(\mathbf{x})$ progresses smoothly in time from its minimum, −1, to its maximum, 1. In general, however, this extends to any arbitrary curved path an agent might take across the cell, and matches the model used in Jeewajee et al., 2014. We fit $\beta$ and $\kappa$ to biological data in Figure 5a of Jeewajee et al., 2014 ($\beta = 0.5$, $\kappa = 1$). The factor of $2\pi$ normalises this term: although the instantaneous firing rate may briefly rise above the spatial firing rate $f_j^x(\mathbf{x})$, the average firing rate over an entire theta cycle is still given by the spatial factor $f_j^x(\mathbf{x})$. In total, the instantaneous firing rate of the basis feature is given by the product of the spatial and phase precession factors (Equation 1). Note that the firing rate of a cell depends explicitly on its location through the spatial receptive field (its 'rate code') and implicitly on location through the phase precession factor (its 'spike-time code'), where the location dependence is hidden inside the calculation of the preferred theta phase. Notably, the effect of phase precession is only visible on rapid 'sub-theta' timescales. Its effect disappears when averaging over any timescale $T_{av}$ substantially longer than the theta timescale of $T_\theta = 0.1$ s:

(12) $\frac{1}{T_{av}}\int_t^{t+T_{av}} f_j(\mathbf{x}(t'), \varphi_\theta(t'))\, dt' \approx \frac{1}{T_{av}}\int_t^{t+T_{av}} f_j^x(\mathbf{x}(t'))\, dt' \quad \text{for} \quad T_{av} \gg T_\theta.$

This is important since it implies that the effect of phase precession is only relevant for synaptic processes with very short integration timescales, for example STDP. Our phase precession model is 'independent' (essentially identical to Chadwick et al., 2015) in the sense that each place cell phase precesses independently of what the other place cells are doing. In this model, phase precession directly leads to theta sweeps, as shown in Figure 1. Another class of models, referred to as 'coordinated assembly' models (Harris, 2005), hypothesise that internal dynamics drive theta sweeps within each cycle because assemblies (i.e. place cells) dynamically excite one another in a temporal chain. In these models, theta sweeps directly lead to phase precession.
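Equations 9–11 translate directly into code; a minimal sketch (function names are ours, not the authors'):

```python
# Sketch: current theta phase (Eq. 9), distance travelled through the field
# along the direction of motion (Eq. 11), and the preferred phase (Eq. 10).
import numpy as np

def theta_phase(t, nu_theta=10.0):
    return (2 * np.pi * nu_theta * t) % (2 * np.pi)

def d_j(x, x_j, velocity, sigma=1.0):
    v_hat = velocity / np.linalg.norm(velocity)       # unit direction of travel
    return np.dot(np.asarray(x) - np.asarray(x_j), v_hat) / sigma

def preferred_phase(x, x_j, velocity, beta=0.5, sigma=1.0):
    return np.pi + beta * np.pi * d_j(x, x_j, velocity, sigma)
```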
Feng and colleagues draw a distinction between theta precession and theta sequences, observing that while independent theta precession is evident right away in novel environments, longer and more stereotyped theta sequences develop over time (Feng et al., 2015). Since we are considering the effect of theta precession on the formation of place field shape, the independent model is appropriate for this setting. We believe that considering how our model might relate to the formation of theta sequences, or what implications theta sequences have for this model, is an exciting direction for future work.

Synaptic learning via STDP

STDP is a discrete learning rule: if a presynaptic neuron $j$ fires before a postsynaptic neuron $i$, their binding strength $\mathsf{W}_{ij}$ is potentiated; conversely, if the postsynaptic neuron fires before the presynaptic one, the weight is depressed. This is implemented as follows. First, we convert the firing rates to spike trains. For each neuron, we sample an inhomogeneous Poisson spike train with rate parameter $f_j(\mathbf{x},t)$ (for presynaptic basis features) or $\tilde{\psi}_i(\mathbf{x},t)$ (for postsynaptic successor features). This is done over the period $[0,T]$ across which the animal is exploring:

(13) $\left(f_j(\mathbf{x},t), [0,T]\right) \overset{\text{Poisson}}{\longmapsto} \{t_j^{\text{pre}}\}, \qquad \left(\tilde{\psi}_i(\mathbf{x},t), [0,T]\right) \overset{\text{Poisson}}{\longmapsto} \{t_i^{\text{post}}\}.$

Asymmetric Hebbian STDP is implemented online using a trace learning rule. Each presynaptic spike from a CA3 cell, indexed $j$, increments an otherwise decaying memory trace, $T_j^{\text{pre}}(t)$, and likewise for an analogous trace of postsynaptic spikes from CA1, $T_i^{\text{post}}(t)$. We matched the STDP plasticity window decay times to experimental data: $\tau^{\text{pre}} = 20$ ms and $\tau^{\text{post}} = 40$ ms (Bush et al., 2010).

(14) $\tau^{\text{pre}} \frac{dT_j^{\text{pre}}(t)}{dt} = -T_j^{\text{pre}}(t) + \sum_{t' \sim \{t_j^{\text{pre}}\}} \delta(t - t')$

(15) $\tau^{\text{post}} \frac{dT_i^{\text{post}}(t)}{dt} = -T_i^{\text{post}}(t) + \sum_{t' \sim \{t_i^{\text{post}}\}} \delta(t - t').$

We simplify our model by fixing the weights during learning:

(16) $\tilde{\psi}_i(\mathbf{x},t) = \sum_j \mathsf{W}_{ij}^{\mathsf{A}} f_j(\mathbf{x},t) \quad \text{(during learning)}$

where we refer to $\mathsf{W}_{ij}^{\mathsf{A}}$ as the 'anchoring' weights which, up until now, have been set to the identity, $\mathsf{W}_{ij}^{\mathsf{A}} = \delta_{ij}$. Since the $f_j(\mathbf{x},t)$ are phase precessing features, $\tilde{\psi}_i(\mathbf{x},t)$ also inherits phase precession from these features, mapped through $\mathsf{W}_{ij}^{\mathsf{A}}$. Fixing the weights means that during learning the effect of changes in $\mathsf{W}_{ij}$ is not propagated to the successor features (CA1); their influence is only considered during post-learning recall, broadly analogous to the distinct encoding and retrieval phases that have been hypothesised to underpin hippocampal function (Hasselmo et al., 2002). We relax this assumption in Figure 2—figure supplement 2 and allow $\mathsf{W}_{ij}$ to be updated online, showing this simplification isn't essential.
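An online form of this trace-based rule can be sketched as follows (a minimal discrete-time approximation of Equations 14–15 and the weight update below, assuming spikes binned at step `dt`; not the authors' code):

```python
# Sketch: decaying pre/post traces (Eqs. 14-15) and the STDP weight update:
# potentiate at postsynaptic spikes, depress at presynaptic spikes.
import numpy as np

def stdp_step(W, tr_pre, tr_post, spk_pre, spk_post, dt,
              tau_pre=0.020, tau_post=0.040, a_pre=1.0, a_post=-0.4, eta=0.01):
    tr_pre += dt * (-tr_pre / tau_pre) + spk_pre / tau_pre       # Eq. 14
    tr_post += dt * (-tr_post / tau_post) + spk_post / tau_post  # Eq. 15
    W += eta * (a_pre * np.outer(spk_post, tr_pre)               # pre-before-post
                + a_post * np.outer(tr_post, spk_pre))           # post-before-pre
    return W, tr_pre, tr_post
```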
After a period $[0,T]$ of exploration, the synaptic weights are updated in aggregate to account for STDP:

(17) $\mathsf{W}_{ij}(T) = \mathsf{W}_{ij}(0) + \eta \int_0^T dt \left[ \underbrace{a^{\text{pre}} \sum_{t_i \sim \{t_i^{\text{post}}\}} \delta(t - t_i)\, T_j^{\text{pre}}(t)}_{\text{pre-before-post potentiation}} + \underbrace{a^{\text{post}} \sum_{t_j \sim \{t_j^{\text{pre}}\}} \delta(t - t_j)\, T_i^{\text{post}}(t)}_{\text{post-before-pre depression}} \right]$

where the second term accounts for the cumulative potentiation and depression due to STDP from spikes in the CA3 and CA1 populations. $\eta$ is the learning rate (here set to 0.01), and $a^{\text{pre}}$ and $a^{\text{post}}$ give the relative amounts of pre-before-post potentiation and post-before-pre depression, set to match experimental data from Bi and Poo, 1998 as 1 and −0.4, respectively. The weights are initialised to the identity: $\mathsf{W}_{ij}(0) = \delta_{ij}$. Finally, when analysing the successor features after learning, we use the updated weight matrix, not the anchoring weights (and turn off phase precession, since we are only interested in rate maps):

(18) $\tilde{\psi}_i(\mathbf{x}) = \sum_j \mathsf{W}_{ij}(T)\, f_j^x(\mathbf{x}) \quad \text{(after learning)}.$

Temporal difference learning

To test our hypothesis that STDP is a good approximation to TD learning, we simultaneously computed the TD successor features, defined as the total expected future firing of a basis feature:

(19) $\psi_i(\mathbf{x}) = \mathbb{E}\left[\int_t^{\infty} \frac{1}{\tau} e^{-\frac{t'-t}{\tau}} f_i^x(\mathbf{x}(t'))\, dt' \;\middle|\; \mathbf{x}(t) = \mathbf{x}\right].$

$\tau$ is the temporal discounting time horizon (related to $\gamma$, the discount factor used in reinforcement learning on temporally discretised MDPs, by $\gamma = e^{-dt/\tau}$), and the expectation is over trajectories initiated at position $\mathbf{x}$. This formula explains the one-to-one correspondence between CA3 cells and CA1 cells in our hippocampal model (Figure 1b): each CA1 cell, indexed $i$, learns to approximate the TD successor feature for its target basis feature, also indexed $i$. We set the discount timescale to $\tau = 4$ s to match relevant behavioural timescales for an animal exploring a small maze environment, where behavioural decisions, such as whether to turn left or right, need to be made with respect to optimising future rewards occurring on the order of seconds. We learn these successor features by tuning the weights of a linear decomposition over the basis feature set:

(20) $\psi_i(\mathbf{x}) = \sum_j \mathsf{M}_{ij}\, f_j^x(\mathbf{x}),$

this way we can directly compare $\mathsf{M}_{ij}$ to the STDP weight matrix $\mathsf{W}_{ij}$. Our TD successor matrix, $\mathsf{M}_{ij}$, should not be confused with the successor representation as defined in Stachenfeld et al., 2017 and denoted $M(\mathsf{s}_i, \mathsf{s}_j)$, although they are analogous. $\mathsf{M}_{ij}$ can be thought of as an analogue of $M(\mathsf{s}_i, \mathsf{s}_j)$ for spatially continuous (i.e. not one-hot) basis features; we show in the Methods that they are equal (strictly, $M(\mathsf{s}, \mathsf{s}') = \mathsf{M}^{\mathsf{T}}$) in the limit of discrete one-hot place cells. The temporal difference (TD) update rule is used to learn the TD successor matrix (Equation 20).
The standard TD(0) learning rule for a linear value function $\psi_i(\mathbf{x})$ with basis feature weights $\mathsf{M}_{ij}$ is (Sutton and Barto, 1998):

(21) $\mathsf{M}_{ij} \leftarrow \mathsf{M}_{ij} + \eta\, \delta_i\, f_j^x(\mathbf{x})$

where $\delta_i$ is the observed TD error for the $i^{\text{th}}$ successor feature and $\eta$ is the learning rate. Note that we consider only the spatial component of the firing rate, $f_j^x(\mathbf{x})$, not the phase modulation component, $f_j^{\theta}(\mathbf{x})$, which (as shown above) averages away over any timescale significantly longer than the theta timescale (100 ms). For now we drop the superscript and write $f_j^x(\mathbf{x}) = f_j(\mathbf{x})$. To find the TD error, we must derive a temporally continuous analogue of the Bellman equation. Following Doya, 2000, we take the derivative of Equation 19, which gives a consistency equation on the successor feature as follows:

(22) $\frac{d}{dt}\psi_i(\mathbf{x}(t)) = \frac{d}{dt} \int_t^{\infty} \frac{1}{\tau} e^{-\frac{t'-t}{\tau}} f_i(\mathbf{x}(t'))\, dt'$

(23) $= \frac{1}{\tau}\left(\psi_i(\mathbf{x}(t)) - f_i(\mathbf{x}(t))\right).$

This gives a continuous TD error of the form

(24) $\delta_i(t) = \frac{d}{dt}\psi_i(\mathbf{x}(t)) + \frac{1}{\tau}\left(f_i(\mathbf{x}(t)) - \psi_i(\mathbf{x}(t))\right)$

which can be rediscretised and rewritten by Taylor expanding the derivative ($\dot{\psi}_i(t) \approx \frac{\psi_i(t) - \psi_i(t - dt)}{dt}$) to give

(25) $\delta_i(t) = \frac{1}{dt}\left(\frac{dt}{\tau} f_i(\mathbf{x}(t)) + \left(1 - \frac{dt}{\tau}\right)\psi_i(\mathbf{x}(t)) - \psi_i(\mathbf{x}(t - dt))\right).$

This looks like a conventional TD error term (typically of the form $\delta_t = R_t + \gamma V_t - V_{t-1}$), except that we can choose $dt$ (the timestep between learning updates) freely. Finally, expanding $\psi_i(\mathbf{x}(t))$ using Equation 3 and substituting back into Equation 21 gives the update rule:

(26) $\mathsf{M}_{ij} \leftarrow \mathsf{M}_{ij} + \frac{\eta}{dt}\left[\frac{dt}{\tau} f_i(\mathbf{x}(t)) + \sum_k \mathsf{M}_{ik}\left[\left(1 - \frac{dt}{\tau}\right) f_k(\mathbf{x}(t)) - f_k(\mathbf{x}(t - dt))\right]\right] f_j(\mathbf{x}(t)).$

This rule does not stipulate a fixed time step between updates. Unlike traditional TD update rules on discrete MDPs, $dt$ can take any positive value. The ability to adaptively vary $dt$ has potentially underexplored applications for efficient learning: when information density is high (e.g. when exploring new or complex environments, or during a compressed replay event; Skaggs and McNaughton, 1996a), it may be desirable to learn regularly by setting $dt$ small. Conversely, when the information density is low (for example, in well-known or simple environments) or learning is undesirable (for example, the agent is aware that a change to the environment is transient and should not be committed to memory), $dt$ can be increased to slow learning and save energy. In practice, we set our agent to perform a learning update approximately every 1 cm along its trajectory ($dt \approx 0.1$ s). We add a small amount of $L_2$ regularisation by adding the term $-2\eta\lambda\mathsf{M}$ to the right-hand side of Equation 27.
This regularisation breaks the degeneracy in $\mathsf{M}_{ij}$ caused by having a set of basis features which is overly rich for constructing the successor features, and can be interpreted, roughly, as a mild energy constraint favouring smaller synaptic connectomes. In total, the full update rule for our TD successor matrix in matrix form is given by (27) $\mathsf{M} \leftarrow \mathsf{M} + \frac{\eta}{dt}\left[\frac{dt}{\tau}\mathbf{f}(\mathbf{x}(t)) + \mathsf{M}\left[\left(1 - \frac{dt}{\tau}\right)\mathbf{f}(\mathbf{x}(t)) - \mathbf{f}(\mathbf{x}(t - dt))\right]\right]\mathbf{f}^{\mathsf{T}}(\mathbf{x}(t)) - 2\eta\lambda\mathsf{M}.$ Successor features in continuous time and space Typically, as in Stachenfeld et al., 2017, the successor representation is calculated in discretised time and space. $M(\mathsf{s}_i, \mathsf{s}_j)$ encodes the expected discounted future occupancy of state $\mathsf{s}_j$ along a trajectory initiated in state $\mathsf{s}_i$: (28) $M(\mathsf{s}_i, \mathsf{s}_j) = \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, \delta(\mathsf{s}_t = \mathsf{s}_j) \;\Big|\; \mathsf{s}_0 = \mathsf{s}_i\right].$ There are two forms of discretisation here. Firstly, time is discretised: it increases by a fixed increment, $+1$, to transition the state from $\mathsf{s}_t \to \mathsf{s}_{t+1}$. Secondly, assuming this is a spatial exploration task, space is discretised: the agent can be in exactly one state at any given time. We loosen both these constraints, reinstating time and space as continuous quantities. Since, for space, we cannot hope to enumerate an infinite number of locations, we represent the state by a population vector of diffuse, overlapping, spatially localised place cells. Thus it is no longer meaningful to ask what the expected future occupancy of a single location will be. The closest analogue, since the place cells are spatially localised, is to ask how much we expect place cell $i$, centred at $\mathbf{x}_i$, to fire in the near (discounted) future. The continuous time constraint turns the sum over time into an integral over time. Further, the role of $\gamma$, which discounts state occupancy many time steps into the future, is replaced by $\tau$, which discounts firing a long time into the future. Thus the extension of the successor representation, $M(\mathsf{s}_i, \mathsf{s}_j)$, to continuous time and space is given by the successor feature, (29) $\psi_i(\mathbf{x}) = \mathbb{E}\left[\int_t^{\infty} \frac{1}{\tau} e^{-\frac{t'-t}{\tau}} f_i(\mathbf{x}(t'))\, dt' \;\Big|\; \mathbf{x}(t) = \mathbf{x}\right].$ Why have we chosen to do this? Temporally, it makes little sense to discretise time in a continuous exploration task: $\gamma$, the reinforcement learning discount factor, describes how many timesteps into the future the predictive encoding accounts for, and so undesirably ties the predictive encoding to the otherwise arbitrary size of the simulation timestep, $dt$. In the continuous definition, $\tau$ intuitively describes how long into the future the predictive encoding discounts over and is independent of $dt$. This definition allows for online flexibility in the size of $dt$, as shown in Equation 27. This relieves the agent of a burden imposed by discretisation; namely, that it must learn with a fixed time step, $+1$, all the time.
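This $dt$-independence is easy to check numerically. The sketch below evaluates the discounted integral of Equation 29 along one sampled trajectory at two different simulation timesteps; the trajectory and feature are invented for illustration:

    import numpy as np

    def discounted_future_firing(f_along_traj, dt, tau=4.0):
        """Numerically evaluate Equation 29 along one sampled trajectory:
        psi(t=0) ~ integral of (1/tau) exp(-t'/tau) f(x(t')) dt'.
        The result depends on tau, not on the simulation timestep dt."""
        t = np.arange(len(f_along_traj)) * dt
        weights = (1.0 / tau) * np.exp(-t / tau) * dt
        return np.sum(weights * f_along_traj)

    # the same trajectory sampled at two resolutions gives ~the same psi
    for dt in [0.1, 0.01]:
        t = np.arange(0, 40, dt)
        f = np.exp(-((t * 0.16 - 2.5) ** 2) / 2)  # a feature passed mid-trajectory
        print(dt, discounted_future_firing(f, dt))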
Now the agent potentially has the ability to choose the fidelity at which to learn, and this may come with significant benefits in terms of energy efficiency, as described above. Further, using the discretised form implicitly ties the definition of the successor representation (or any similarly defined value function) to the time step used in the simulation. When space is discretised, the successor representation is a matrix encoding predictive relationships between these discrete locations. TD successor features, defined above, are the natural extension of the successor representation to a continuous space where location is encoded by a population of overlapping basis features, rather than exclusive one-hot states. The TD successor matrix, $\mathsf{M}_{ij}$, can most easily be viewed as a set of driving weights: $\mathsf{M}_{ij}$ is large if basis feature $f_j(\mathbf{x})$ contributes strongly to successor feature $\psi_i(\mathbf{x})$. The two are closely related (for example, in the effectively discrete case of non-overlapping basis features, it can be shown that the TD successor matrix corresponds directly to the transpose of the successor representation, $\mathsf{M}_{ij}^{\mathsf{T}} = M(\mathsf{s}_i, \mathsf{s}_j)$; see below for proof) but we believe the continuous case has more applications in terms of biological plausibility; electrophysiological studies show the hippocampus encodes position using a population vector of overlapping place cells, rather than one-hot states. Furthermore, the continuous case maps neatly onto known neural circuitry, as in our case with CA3 place cells as basis features, CA1 place cells as successor features, and the successor matrix as the synaptic weights between them. In our case, the choice not to discretise space and to use a more biologically compatible basis set of large overlapping place cells is necessary: were our basis features not to overlap, they would not be able to reliably form associations using STDP, since often only one cell would ever fire in a given theta cycle. For completeness (although this is not something studied in this report), this continuous successor feature form also allows for rapid estimation of the value function in a neurally plausible way. Whereas for the discrete case value can be calculated as (30) $V(\mathsf{s}_i) = \sum_j M(\mathsf{s}_i, \mathsf{s}_j)\, R(\mathsf{s}_j)$ where $R(\mathsf{s}_j)$ is the per-time-step reward to be found at state $\mathsf{s}_j$, in the continuous successor feature setting (31) $\mathsf{V}(\mathbf{x}) = \sum_j \psi_j(\mathbf{x})\, \mathsf{R}_j$ where $\mathsf{R}_j$ is a vector of weights satisfying $\sum_j \mathsf{R}_j f_j(\mathbf{x}) = R(\mathbf{x})$, where $R(\mathbf{x})$ is the reward-rate found at location $\mathbf{x}$. Equation 31 can be confirmed by substituting Equation 29 into it. $\mathsf{R}_j$ (like $R(\mathsf{s}_j)$) must be learned independently of, and in addition to, the successor features, a process which is not the focus of this study, although correlates have been observed in the hippocampus (Gauthier and Tank, 2018).
$V(\mathbf{x})$ is the temporally continuous value associated with trajectories initialised at $\mathbf{x}$: (32) $\mathsf{V}(\mathbf{x}) = \mathbb{E}\left[\int_t^{\infty} \frac{1}{\tau} e^{-\frac{t'-t}{\tau}} R(\mathbf{x}(t'))\, dt' \;\Big|\; \mathbf{x}(t) = \mathbf{x}\right].$ Equivalence of the TD successor matrix to the successor representation Here, we show the equivalence between $M(\mathsf{s}_i, \mathsf{s}_j)$ and $\mathsf{M}_{ij}$. First we rediscretise time by setting $dt'$ to be constant and defining $\gamma = 1 - \frac{dt'}{\tau}$ and $\mathbf{x}_n = \mathbf{x}(n \cdot dt')$. The integral in Equation 29 becomes a sum, (33) $\psi_i(\mathbf{x}) = (1 - \gamma)\, \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t f_i(\mathbf{x}_t) \;\Big|\; \mathbf{x}_0 = \mathbf{x}\right].$ Next, we rediscretise space by supposing that CA3 place cells in our model have strictly non-overlapping receptive fields which tile the environment. For each place cell, $i$, there is a continuous area, $\mathcal{A}_i$, such that for any location within this area place cell $i$ fires at a constant rate whilst all others are silent. When $\mathbf{x} \in \mathcal{A}_i$ we denote this state $\mathsf{s}(\mathbf{x}) = \mathsf{s}_i$ (since all locations in this area have identical population vectors). (34) $f_i(\mathbf{x}) = \delta(\mathbf{x} \in \mathcal{A}_i) = \delta(\mathsf{s}(\mathbf{x}) = \mathsf{s}_i)$ Let the initial state be $\mathsf{s}(\mathbf{x}) = \mathsf{s}_j$ (i.e. $\mathbf{x} \in \mathcal{A}_j$). Putting this into Equation 33 and equating to Equation 3, the definition of our TD successor matrix, gives (35) $\psi_i(\mathbf{x}) = \sum_k \mathsf{M}_{ik}\, \delta(\mathsf{s}_j = \mathsf{s}_k) = (1 - \gamma)\, \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^t\, \delta(\mathsf{s}_t = \mathsf{s}_i) \;\Big|\; \mathsf{s}_0 = \mathsf{s}_j\right],$ confirming that (36) $\mathsf{M}_{ij}^{\mathsf{T}} \propto M(\mathsf{s}_i, \mathsf{s}_j).$ Simulation and analysis details In the 1D open loop maze (Figure 2a–e), the policy was to always move around the maze in one direction (left to right, as shown) at a constant velocity of 16 cm s⁻¹ along the centre of the track. Although figures display this maze as a long corridor, it is topologically identical to a loop; place cells close to the left or right sides have receptive fields extending into the right or left of the corridor respectively. Fifty Gaussian basis features of radius 1 m, as described above, are placed with their centres uniformly spread along the track. Agents explored for a total time of 30 min. In the 1D corridor maze, Figure 2f–j, the situation is changed in only one way: the left and right hand edges of the maze are closed by walls. When the agent reaches a wall it turns around and walks the other way until it collides with the other wall. Agents explored for a total time of 30 min. In the 2D two room maze, 200 basis features are positioned in a grid across the two rooms (100 per room) and their locations are then jittered slightly (Figure 2k). The cells are geodesic Gaussians. This means that the $\|\mathbf{x}(t) - \mathbf{x}_i\|^2$ term in Equation 7 measures the distance from the agent's location to the centre of cell $i$ along the shortest walk which complies with the wall geometry.
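A minimal sketch of such a geodesic Gaussian basis feature. The helper geodesic_distance is hypothetical: in practice it would be computed from the wall geometry, for example by a shortest-path search on a discretised map of the environment:

    import numpy as np

    def geodesic_gaussian(x, centre, sigma, geodesic_distance):
        """A geodesic Gaussian basis feature: the Euclidean distance in the
        Gaussian tuning curve is replaced by the shortest wall-respecting
        path length. `geodesic_distance` is a hypothetical callable."""
        d = geodesic_distance(x, centre)
        return np.exp(-d ** 2 / (2 * sigma ** 2))

    # in an open room the geodesic distance reduces to the Euclidean one
    euclidean = lambda a, b: np.linalg.norm(np.asarray(a) - np.asarray(b))
    print(geodesic_gaussian((1.0, 0.5), (0.0, 0.0), 0.5, euclidean))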
This explains the bleeding of the basis feature through the door in Figure 3d. Agents explored for a total time of 120 min. The movement policy of the agent is a random walk with momentum. The agent moves forward with a speed at each discrete time step drawn from a Rayleigh distribution centred at 16 cm s⁻¹. At each time step the agent rotates a small amount; the rotational speed is drawn from a normal distribution centred at zero with standard deviation $3\pi$ rad s⁻¹ ($\pi$ rad s⁻¹ for the 1D mazes). When the agent gets close to a wall (within 10 cm), the direction of motion is changed to run parallel to the wall, thus biasing towards trajectories which 'follow' the boundaries, as observed in real rats. This model was designed to closely match the behaviour of freely exploring rats and was adapted from the model initially presented in Raudies and Hasselmo, 2012. We add one additional behavioural bias: in the 2D two room maze, whenever the agent passes within 1 m of the centre point of the doorway connecting the two rooms, its rotational velocity is biased to turn it towards the door centre. This has the effect of encouraging room-to-room transitions, as is observed in freely moving rats (Carpenter et al., 2015). Analyses of the STDP and TD successor matrices For the 1D mazes, there exists a translational symmetry relating the $N = 50$ uniformly distributed basis features and their corresponding rows in the STDP/TD weight matrices. This symmetry is exact for the 1D loop maze (all cells around a circle are rotated versions of one another) and approximate for the corridor maze (broken only for cells near to the left or right bounding wall). The result is that much of the information in the linear track weight matrices (Figure 2b, c, g and h) can be viewed more easily by collapsing each matrix over its rows, centred on the diagonal entry (plotted in Figure 2d and i). This is done using a circular permutation of each matrix row by a count, $n_i$, equal to how many times we must shift cell $i$ to the right in order for its centre to lie at the middle of the track, $x_i = 2.5$ m: (37) $\mathsf{W}_{ij}^{\text{aligned}} = \mathsf{W}_{i,\,(j + n_i \;(\mathrm{mod}\; 50))}.$ This is the 'row aligned matrix'. Averaging over its rows removes little information thanks to the symmetry of the circular track. We therefore define the 1D quantity (38) $\langle\mathsf{W}\rangle_j := \frac{1}{N}\sum_{i=1}^{N} \mathsf{W}_{ij}^{\text{aligned}},$ which is a convenient way to plot, in 1D, only the non-redundant information in the weight matrices.
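A sketch of this row alignment and averaging (Equations 37 and 38), using a placeholder weight matrix; variable names are illustrative:

    import numpy as np

    def row_aligned_average(W, shifts):
        """Equation 37: permute row i so W_aligned[i, j] = W[i, (j + n_i) mod N],
        then Equation 38: average the aligned rows into a 1D profile."""
        aligned = np.stack([np.roll(W[i], -n) for i, n in enumerate(shifts)])
        return aligned.mean(axis=0)

    N = 50
    W = np.eye(N)                      # placeholder weight matrix
    n = np.arange(N) - N // 2          # n_i chosen to centre cell i mid-track
    avg = row_aligned_average(W, n)
    print(avg.argmax() == N // 2)      # True: diagonal mass collapses onto the centre bin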
A theoretical connection between STDP and TD learning Why does STDP between phase precessing place cells approximate TD learning? In this section, we attempt to shed some light on this question by analytically studying the equations of TD learning. Ultimately, comparisons between these learning rules are difficult since the former is inherently a discrete learning rule acting on pairs of spikes whereas the latter is a continuous learning rule acting on firing rates. Nonetheless, in the end we will draw the following conclusions: 1. In the first part, we will show that, under a small set of biologically feasible assumptions, temporal difference learning 'looks like' a spike-time dependent, temporally asymmetric Hebbian learning rule (that is, roughly, STDP) where the temporal discount time horizon, $\tau$, is equal to the synaptic plasticity timescale, $O(20\ \text{ms})$. 2. In the second part, we will see that this limitation (that the temporal discount time horizon is restricted to the timescale of synaptic plasticity, i.e. very short) can be overcome by compressing the inputs. Phase precession, or more formally, theta sweeps, performs exactly the required compression. In sum, there is a deep connection between TD learning and STDP, and the role of phase precession is to compress the inputs such that a very short predictive time horizon amounts to a long predictive time horizon in decompressed time coordinates. We will finish by discussing where these learning rules diverge and the consequences of their differences for the learned representations. The goal here is not to derive a mathematically rigorous link between STDP and TD learning but to show that a connection exists between them and to point the reader to further resources if they wish to learn more. Reformulating TD learning to look like STDP First, recall that the temporal difference (TD) rule for learning the successor features $\psi_i(\mathbf{x})$ defined in Equation 19 takes the form: (39) $\frac{d\mathsf{M}_{ij}}{dt} = \eta\, \delta_i(t)\, \mathsf{e}_j(t)$ where $\mathsf{M}_{ij}$ are the weights of the linear function approximator, Equation 3. (Note, firstly, that it is a coincidence specific to this study that the basis features of the linear function approximator, Equation 3, happen to be the same features of which we are computing the successor features, Equation 19. In general, this needn't be the case. Secondly, this analysis applies to any value function, not just successor features, which are a specific example. If $f_i(\mathbf{x})$ in Equation 19 were a reward density then $\psi_i(\mathbf{x})$ would become a true value function, a discounted sum of future rewards, in the more conventional sense.) $\delta_i(t)$ is the continuous temporal difference error defined in Equation 24, and $\mathsf{e}_j(t)$ is the eligibility trace for feature $j$ defined according to (40) $\mathsf{e}_j(t) = \int_{-\infty}^{t} \frac{1}{\tau_{\mathsf{e}}} e^{-\frac{t - t'}{\tau_{\mathsf{e}}}} f_j(\mathbf{x}(t'))\, dt'$ or, equivalently, by its dynamics (which we will make use of) (41) $\mathsf{e}_j(t) = f_j(t) - \tau_{\mathsf{e}}\, \dot{\mathsf{e}}_j(t),$ where $\tau_{\mathsf{e}} \in [0, \tau]$ is a 'free' parameter, the eligibility trace timescale, analogous to $\lambda$ in discrete TD($\lambda$). When $\tau_{\mathsf{e}} = 0$ we recover the learning rule we use to learn successor features, 'TD(0)', in Equation 21. Substituting Equation 24 and Equation 41 into this update rule, Equation 39, and rearranging gives (42) $\frac{d\mathsf{M}_{ij}}{dt} = \eta\left(f_i \mathsf{e}_j - \psi_i f_j + \tau \dot{\psi}_i \mathsf{e}_j + \tau_{\mathsf{e}} \psi_i \dot{\mathsf{e}}_j\right)$ where we redefined $\eta \leftarrow \eta' = \eta/\tau$. Now let the predictive time horizon be equal to the eligibility trace timescale (this setting is also called TD(1) or Monte Carlo learning), (43) $\tau = \tau_{\mathsf{e}},$ (44) $\frac{d\mathsf{M}_{ij}}{dt} = \eta\left(f_i \mathsf{e}_j - \psi_i f_j + \tau_{\mathsf{e}} \frac{d}{dt}\left(\psi_i \mathsf{e}_j\right)\right).$ The final term in this update rule, the total derivative, can be ignored with respect to the stationary point of the learning process.
To see why, consider the simple case of a periodic environment which repeats over a time period $T$ (this is true for the 1D experiments studied here). Learning is at a stationary point when the integrated changes in the weights vanish over one whole period: (45) $0 = \int_t^{t+T} dt'\, \dot{\mathsf{M}}_{ij}(t') = \eta \int_t^{t+T} dt'\, \left(f_i \mathsf{e}_j - \psi_i f_j\right) + \eta \tau_{\mathsf{e}} \int_t^{t+T} dt'\, \frac{d}{dt'}\left(\psi_i(t')\, \mathsf{e}_j(t')\right)$ (46) $\quad = \eta \int_t^{t+T} dt'\, \left(f_i \mathsf{e}_j - \psi_i f_j\right) + \eta \tau_{\mathsf{e}}\left[\psi_i(t+T)\,\mathsf{e}_j(t+T) - \psi_i(t)\,\mathsf{e}_j(t)\right]$ (47) $\quad = \eta \int_t^{t+T} dt'\, \left(f_i \mathsf{e}_j - \psi_i f_j\right)$ where the last term vanishes due to the periodicity. This shows that the learning rule converges to the same fixed point (i.e. the successor feature) irrespective of whether this term is present, and it can therefore be removed. The dynamics of this updated learning rule won't strictly follow the same trajectory as TD learning, but they will converge to the same point. Although strictly we only showed this to be true in the artificially simple setting of a periodic environment, it is more generally true in a stochastic environment where the feature inputs depend on a stationary latent Markov chain (Brea et al., 2016). Thus, a valid learning rule which converges onto the successor feature can be written as (48) $\frac{d\mathsf{M}_{ij}}{dt} = \eta\left(f_i(t)\,\mathsf{e}_j(t) - \psi_i(t)\, f_j(t)\right).$ Claim: this looks like a continuous analogue of STDP acting on the weights between a set of input features, indexed $j$, and a set of downstream 'successor features', indexed $i$. Each term in the above learning rule can be non-rigorously identified as follows; a key change is that the successor feature neurons have two compartments, a somatic compartment and a dendritic compartment: (49) $\frac{d\mathsf{M}_{ij}}{dt} = \eta\left(\underbrace{\mathsf{V}_i^{\text{soma}}(t)\,\tilde{\mathsf{I}}_j(t)}_{\text{pre-before-post potentiation}} - \underbrace{\mathsf{V}_i^{\text{dend}}(t)\,\mathsf{I}_j(t)}_{\text{post-before-pre depression}}\right)$ • $f_i(t) := \mathsf{V}_i^{\text{soma}}(t)$ is the somatic membrane voltage, which is primarily set by a 'target signal'. In general, this target signal could be any reward density function; here it is the firing rate of the $i^{\text{th}}$ input feature. • $\psi_i(t) := \mathsf{V}_i^{\text{dend}}(t)$ is the voltage inside a dendritic compartment, which is a weighted linear sum of the input currents, Equation 3. This compartment is responsible for learning the successor feature by adjusting its input weights, $\mathsf{M}_{ij}$, according to Equation 48. • $f_j(t) := \mathsf{I}_j(t)$ are the synaptic currents into the dendritic compartment from the upstream features. • $\mathsf{e}_j(t) := \tilde{\mathsf{I}}_j(t)$ are the low-pass filtered eligibility traces of the synaptic input currents. This learning rule, mapped onto the synaptic inputs and voltages of a two-compartment neuron, is Hebbian.
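A minimal Euler-step sketch of this two-term rule (Equations 41 and 48), with all arguments treated as rate vectors; parameter values are illustrative:

    import numpy as np

    def hebbian_td_step(M, e, f_pre, psi_dend, f_post, dt, tau_e=0.03, eta=0.01):
        """One Euler step of Equations 41 and 48.
        f_pre    = f_j(t), presynaptic input rates
        f_post   = f_i(t), 'target' rates driving the somatic compartment
        psi_dend = current successor estimates (dendritic voltages)
        e        = low-pass eligibility traces of the presynaptic rates."""
        e += (dt / tau_e) * (f_pre - e)                  # trace dynamics, Equation 41
        M += eta * dt * (np.outer(f_post, e)             # pre-before-post potentiation
                         - np.outer(psi_dend, f_pre))    # post-before-pre depression
        return M, e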
The first term potentiates the synapse $\mathsf{M}_{ij}$ if there is a correlation between the low-pass filtered presynaptic current and the somatic voltage (which drives postsynaptic activity). More specifically, this potentiation is temporally asymmetric due to the second term, which sets a threshold. A postsynaptic spike (e.g. when $\mathsf{V}_i^{\text{soma}}(t)$ reaches threshold) will cause potentiation if (50) $\mathsf{V}_i^{\text{soma}}(t)\,\tilde{\mathsf{I}}_j(t) > \mathsf{V}_i^{\text{dend}}(t)\,\mathsf{I}_j(t)$ but since the eligibility trace decays steadily after a presynaptic input, this will only be true if the postsynaptic spike arrives very soon after it. This is pre-before-post potentiation. Conversely, an unpaired presynaptic input (e.g. when $\mathsf{I}_j(t)$ spikes) will likely cause depression, since this bolsters the second, depressive term of the learning rule but not the first (note this is true if its synaptic weight is positive, such that $\mathsf{V}^{\text{dend}}(t)$ will be high too). This is analogous to post-before-pre depression. Whilst not identical, it is clear this rule bears the key hallmarks of the STDP learning rule used in this study; specifically, pre-before-post synaptic activity potentiates a synapse if the postsynaptic activity arrives within a short time of the presynaptic activity and, secondly, post-before-pre synaptic activity will typically result in depression of the synapse. Intuitively, it now makes sense why asymmetric STDP learns successor features. If a postsynaptic spike from the $i^{\text{th}}$ neuron arrives just after a presynaptic spike from the $j^{\text{th}}$ feature it means, in all probability, that the presynaptic input feature is 'predictive' of whatever caused the postsynaptic spike, which in this case is the $i^{\text{th}}$ feature. Thus, if we want to learn a function which is predictive of the $i^{\text{th}}$ feature's future activity (its successor feature), we should increase the synaptic weight $\mathsf{M}_{ij}$. Finally, identifying that this learning rule looks similar to STDP fixes the timescale of the eligibility trace to be the timescale of STDP plasticity, i.e. $O(20\text{–}50\ \text{ms})$. And to derive this learning rule, we required that the temporal discount time horizon must equal the eligibility trace timescale; altogether: (51) $\tau = \tau_{\mathsf{e}} = \tau_{\text{STDP}} \approx 20\text{–}50\ \text{ms}.$ This limits the predictive time horizon of the learnt successor feature to a rather useless (but importantly non-zero) 20–50 ms. In the next section, we will show how phase precession presents a novel solution to this problem. Theta phase precession compresses the temporal structure of input features We showed in Figure 1 how phase precession leads to theta sweeps. These phenomena are two sides of the same coin. Here we will start by positing the existence of theta sweeps and show that this leads to a potentially large amount of compression of the feature basis set in time. First, consider two different definitions of position. $\mathbf{x}_T(t)$ is the 'True' position of the agent, representing where it is in the environment at time $t$. $\mathbf{x}_E(t)$ is the 'Encoded' position of the agent, which determines the firing rate of place cells with spatial receptive fields $f_i(\mathbf{x}_E(t))$. During a theta sweep, the encoded position $\mathbf{x}_E(t)$ moves with respect to the true position $\mathbf{x}_T(t)$ at a relative speed of $\mathbf{v}_S(t)$, where the subscript $S$ distinguishes the 'Sweep' speed from the absolute speed of the agent, $\dot{\mathbf{x}}_T(t) = \mathbf{v}_A(t)$.
In total, accounting for the motion of the agent: (52) $\dot{\mathbf{x}}_E(t) = \mathbf{v}_A(t) + \mathbf{v}_S(t).$ Now consider how the population activity vector changes in time, (53) $\frac{d}{dt} f_i(\mathbf{x}_E(t)) = \nabla_{\mathbf{x}} f_i(\mathbf{x}) \cdot \dot{\mathbf{x}}_E(t) = \nabla_{\mathbf{x}} f_i(\mathbf{x}) \cdot \left(\mathbf{v}_A(t) + \mathbf{v}_S(t)\right),$ and compare this to how it would vary in time if there were no theta sweep (i.e. $\mathbf{x}_E(t) = \mathbf{x}_T(t)$): (54) $\frac{d f_i(\mathbf{x}_T(t))}{dt} = \nabla_{\mathbf{x}} f_i(\mathbf{x}) \cdot \frac{d\mathbf{x}_T(t)}{dt} = \nabla_{\mathbf{x}} f_i(\mathbf{x}) \cdot \mathbf{v}_A(t).$ The two are proportional. Specifically in 1D, where the sweep is observed to move in the same direction as the agent (from behind it to in front of it), this amounts to compression of the temporal dynamics by a factor of (55) $k_\theta = \frac{v_A + v_S}{v_A}.$ This 'compression' also holds in 2D, where sweeps are also observed to move largely in the same direction as the agent. If this compression is large, it solves the timescale problem described above. This is because learning a successor feature with a very small time horizon, $\tau$, where the input trajectory is heavily compressed in time by a factor of $k_\theta$, amounts to the same thing as learning a successor feature with a long time horizon $\tau' = \tau k_\theta$ where the inputs are not compressed in time. What is $v_S$, and is it fast enough to provide enough compression to learn temporally extended SRs? We can make a very rough ballpark estimate. Data is hard to come by, but studies suggest the intrinsic speed of theta sweeps can be quite fast. Figures in Feng et al., 2015, Wang et al., 2020 and Bush et al., 2022 show sweeps moving at up to, respectively, 9.4 m s⁻¹, 8.5 m s⁻¹ and 2.3 m s⁻¹. A conservative range estimate of $v_S \approx 5 \pm 5$ m s⁻¹ accounts for very fast and very slow sweeps. The timescale of STDP is debated, but a reasonable conservative estimate would be around $\tau_{\text{STDP}} \approx (35 \pm 15) \times 10^{-3}$ s, which covers the range of STDP timescales we use here. The typical speed of a rat, though highly variable, is somewhere in the range $v_A \approx 0.15 \pm 0.15$ m s⁻¹. Combining these (with correct error analysis, assuming Gaussian uncertainties) gives an effective timescale increase of (56) $\tau' = \tau k_\theta = \tau_{\text{STDP}} \frac{v_A + v_S}{v_A} \approx 1.1 \pm 1.7\ \text{s}.$ Therefore, we conclude theta sweeps can provide enough compression to lift the timescale of the SR being learnt by STDP from short synaptic timescales to relevant behavioural timescales on the order of seconds. Note this ballpark estimate is not intended to be precise, and it does not account for many unknowns, for example the covariability of sweep speed with running speed[cite], the variability of sweep speed with track length[cite] or cell size[cite], which could potentially extend this range further.
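A minimal Monte Carlo version of the uncertainty propagation behind Equation 56. The truncation of $v_A$ away from zero is a pragmatic choice to avoid the unphysical negative-speed tail of the Gaussian, not a step taken from the paper:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    tau_stdp = rng.normal(35e-3, 15e-3, n)   # STDP timescale (s)
    v_A = rng.normal(0.15, 0.15, n)          # running speed (m/s)
    v_S = rng.normal(5.0, 5.0, n)            # theta sweep speed (m/s)

    ok = v_A > 0.01                          # drop the unphysical v_A <= 0 tail
    tau_eff = tau_stdp[ok] * (v_A[ok] + v_S[ok]) / v_A[ok]   # Equation 56
    print(np.median(tau_eff))                # effective horizon on the order of 1 s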
Differences between STDP and TD learning: where our model does not work We only drew a hand-waving connection between the TD-derived Hebbian learning rule in Equation 48 and STDP. There are numerous differences between STDP and TD learning; these include the fact that: 1. Depression in Equation 48 is dependent on the dendritic voltage, which is not true for our STDP rule. 2. Depression in Equation 48 is not explicitly dependent on the time between post- and presynaptic activity, unlike STDP. 3. Equation 48 is a continuous learning rule for continuous firing rates; STDP is a discrete learning rule applicable only to spike trains. Analytic comparison is difficult due to this final difference, which is why in this paper we instead opted for empirical comparison. Our goal was never to derive a spike-time dependent synaptic learning rule which replicates TD learning; other papers have done work in this direction (see Brea et al., 2016; Bono et al., 2023). Rather, we wanted to (i) see how unmodified learning rules, measured to be used by hippocampal neurons, perform, and (ii) study whether phase precession aids learning. Under the regimes tested here, STDP seems to hold up well. These differences aside, the learning rule does share other similarities with our model set-up. A special feature of this learning rule is that it postulates that the somatic voltage driving postsynaptic activity during learning isn't affected by the neuron's own dendritic voltage. Rather, dendritic voltages affect the plasticity by setting the potentiation threshold. Such learning rules have been studied under the collective name of 'voltage dependent' Hebbian learning rules[CITE]. This matches the learning setting we use here where, during learning, CA1 neurons are driven by one and only one CA3 feature (the 'target feature') whilst the weights being trained, $\mathsf{W}_{ij}$, do not immediately affect somatic activity during learning. The lack of online updating matches the electrophysiological observation that plasticity between CA3 and CA1 is highest during the phase of theta when CA1 is driven by entorhinal cortex and lowest at the phase when CA3 actually drives CA1 (Hasselmo et al., 2002). Finally, there is one clear failure mode for our STDP model: learning very long timescale successor features. Unlike TD learning, which can 'bootstrap' long timescale associations through intermediate connections, this is not possible with our STDP rule in its current form. Brea et al., 2016 and Bono et al., 2023 show how Equation 48 can be modified to allow long timescale SRs whilst still enforcing the timescale constraint we imposed in Equation 43, thus maintaining the biological plausibility of the learning rule; this requires allowing the dendritic voltage to modify the somatic voltage during learning in a manner highly similar to bootstrapping in RL. Specifically, in the former study this is done by a direct extension of the two-compartment model; in the latter it is recast in a one-compartment model, although the underlying mathematics shares many similarities. Ultimately both mechanisms could be at play; even in neurons endowed with the ability to bootstrap long timescale associations with short timescale plasticity kernels, phase precession would still increase learning speed significantly by reducing the amount of bootstrapping required by a factor of $k_\theta$, something we intend to study more in future work. Finally, it isn't clear what timescales predictive encoding in the hippocampus reaches; there is likely to be an upper limit on the utility of such predictive representations, beyond which the animal must use model-based methods to find the optimal solutions which guide behaviour. For convenience, panel a of Figure 2—figure supplement 1 duplicates the experiment shown in paper Figure 2a–e.
The only change is that the learning time was extended from 30 minutes to 1 hour. Movement speed variability Panel b shows an experiment where we reran the simulation shown in paper Figure 2a–e except that, instead of a constant motion speed, the agent moves with a variable speed drawn from a continuous stochastic process (an Ornstein-Uhlenbeck process). The parameters of the process were selected so the mean velocity remained the same (16 cm s⁻¹ left-to-right) but now with significant variability (standard deviation of 16 cm s⁻¹, thresholded so the speed cannot go negative). Essentially, the velocity takes a constrained random walk. This detail is important: the velocity is not drawn randomly on each time step, since those changes would rapidly average out with small $dt$; rather, the change in the velocity (the acceleration) is random. This drives slow stochasticity in the velocity, with extended periods of fast motion and extended periods of slow motion. After learning there is no substantial difference in the learned weight matrices. This is because both the TD and STDP learning rules are able to average over the stochasticity in the velocity and converge on representations reflecting the mean statistics of the motion. Smaller place cells and faster movement Nothing fundamental prevents learning from working in the case of smaller place fields or faster movement speeds. We explore this in Figure 2—figure supplement 1, panel c, as follows: the agent speed is doubled from 16 cm s⁻¹ to 32 cm s⁻¹ and the place field size is shrunk by a factor of 5 from 2 m diameter to 40 cm diameter. To facilitate learning we also increase the cell density along the track from 10 cells m⁻¹ to 50 cells m⁻¹. We also shrink the track size from 5 m to 2 m (any additional track is redundant due to the circular symmetry of the set-up and the small size of the place cells). We then train for 12 min. This time was chosen since 12 min moving at 32 cm s⁻¹ on a 2 m track gives the same number of laps as 60 min moving at 16 cm s⁻¹ on a 5 m track (96 laps in total). Despite these changes the weight matrix converged with high similarity to the successor matrix with a shorter time horizon (0.5 s). Convergence time measured in minutes was faster than in the original case, but this is mostly due to the shortened track length and increased speed. Measured in laps it now takes longer to converge, due to the decreased number of spikes (smaller place fields and faster movement through the place fields). This can be seen in the shallower convergence curve of panel c (right) relative to panel a. In Figure 2—figure supplement 2, panel a, we explore what happens if weights are initialised randomly. Rather than the identity, the weight matrix during learning is fixed ('anchored') to a sparse random matrix $\mathsf{W}^{\mathsf{A}}_{ij}$; this is defined such that each CA1 neuron receives positive connections from 3, 4, or 5 randomly chosen CA3 neurons, with weights summing to one. In all other respects learning remains unchanged. CA1 neurons now have multi-modal receptive fields, since they receive connections from multiple, potentially far apart, CA3 cells. This should not cause a problem, since each sub-field now acts as its own place field, phase precessing according to whichever place cell in CA3 is driving it. Indeed it does not: after learning with this fixed but random CA3-CA1 drive, the aggregate synaptic weight update compares favourably to the successor matrix (panel a, middle and right).
Specifically, this is the successor matrix which maps the unmixed uni-modal place cells in CA3 to the successor features of the new multi-modal 'mixed' features found in CA1 before learning. We note in passing that this is easy to calculate due to the linearity of the successor feature (SF): an SF of a linear sum of features is equal to a linear sum of SFs, so we can calculate the new successor matrix using the same algorithm as before (described in the Methods) and then rotating it by the sparse random matrix, $\mathsf{M}'_{ij} = \sum_k \mathsf{W}^{\mathsf{A}}_{ik}\mathsf{M}_{kj}$. In order that some structure is visible, matrix rows (which index the CA1 postsynaptic cells) have been ordered according to the location of the CA1 peak activity. This explains why the random sparse matrix (panel a, middle) looks ordered even though it is not. After learning, the STDP successor feature looks close in form to the TD successor feature, and both show a shift and skew backwards along the track (panel a, right; one example CA1 field shown). In Figure 2—figure supplement 2, panels b, c and d, we explore what happens if the weights are updated online during learning. It is not possible to build a stable fully online model (as we suspect the reviewer realised) and it is easy to understand why: if the weight matrix doing the learning is also the matrix driving the downstream features, then there is nothing to prevent instabilities where, for example, the downstream feature keeps shifting backwards (no convergence) or the weight matrix for some/all features disappears or blows up (incorrect convergence). However, it is possible to get most of the way there by splitting the driving weights into two components. The first and most significant component is the STDP weight matrix being learned online; this creates a 'closed loop' where changes to the weights affect the downstream features, which in turn affect learning on the weights. The second, smaller component is what we call the 'anchoring' weights, which we set to a fraction of the identity matrix (here $\frac{1}{2}$) and which are not learned. In summary, Equation 16 becomes (57) $\tilde{\psi}_i(\mathbf{x}, t) = \sum_j \left(\mathsf{W}_{ij}(t) + \mathsf{W}^{\mathsf{A}}_{ij}\right) f_j(\mathbf{x}, t)$ for $\mathsf{W}^{\mathsf{A}}_{ij} = \frac{1}{2}\delta_{ij}$. These anchoring weights provide structure, analogous to a target signal or 'scaffold' onto which the successor features will learn without risk of infinite backwards expansion or weight decay. After learning, when analysing the weights/successor features, the anchoring component is not considered. Every other model of TD learning implicitly or explicitly has a form of anchoring. For example, in classical TD learning each successor feature receives a fixed 'reward' signal from the feature it is learning to predict (this is the second term in Equation 23 of our methods). Even other 'synaptically plausible' models include a non-learnable constant drive [see the (Bono et al., 2023) CA3-CA1 model, more specifically the bias term in their Equation 12]. This is the approach we take here. We add the additional constraint that the sum of each row of the weight matrix must be smaller than or equal to 1, enforced by renormalisation on each time step. This constraint encodes the notion that there may be an energetic cost to large synaptic weight matrices, and it prevents infinite growth of the weight matrix: (58) $\mathsf{W}_{ij}(t) \leftarrow \frac{\mathsf{W}_{ij}(t)}{\max\left(1, \sum_j \mathsf{W}_{ij}\right)}.$
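A short sketch of the anchored drive (Equation 57) and the row renormalisation (Equation 58); names are illustrative:

    import numpy as np

    def anchored_drive(W, W_anchor, f):
        """Equation 57: downstream activity is driven by the learnable weights
        plus a fixed, non-learned anchoring component."""
        return (W + W_anchor) @ f

    def renormalise_rows(W):
        """Equation 58: rescale any row whose sum exceeds 1, preventing
        unbounded growth of the weight matrix."""
        row_sums = W.sum(axis=1, keepdims=True)
        return W / np.maximum(1.0, row_sums)

    N = 50
    W = np.eye(N)                 # learnable weights, initialised to the identity
    W_anchor = 0.5 * np.eye(N)    # anchoring weights, half the identity
    f = np.random.rand(N)
    psi = anchored_drive(renormalise_rows(W), W_anchor, f)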
The resulting evolution of the learnable weight component, $\mathsf{W}_{ij}(t)$, is shown in panel b (middle; row-aligned averages of $\mathsf{W}_{ij}(t)$ from $t = 0$ min to $t = 64$ min) and panel f (full matrix), starting from initialisation at the identity. The weight matrix evolves to look like a successor matrix (long skew left of the diagonal, negative right of the diagonal). One risk, when weights are updated online, is that the asymmetric expansion continues indefinitely. This does not happen: the matrix stabilises after 15 min (panel e, colour progression). It is important to note that the anchoring component is smaller than the online weight component, and we believe it could be made very small in the limit of less noisy learning (e.g. more cells or higher firing rates). In panel c, we explore the combination: random weight initialisation and online weight updating. As can be seen, even with rather strong random initial weights, learning eventually 'forgets' these and settles to the same successor matrix form as when identity initialisation was used. In panel d, we show that anchoring is essential. Without it ($\mathsf{W}^{\mathsf{A}}_{ij} = 0$) the weight matrix initially shows some structure, shifting and skewing to the left, but this quickly disintegrates and no observable structure remains at the end of learning. Many-to-few spiking model In Figure 2—figure supplement 2, panel e, we simulate the more biologically realistic scenario where each CA1 neuron integrates spikes (rather than rates) from a large (rather than equal) number of upstream CA3 neurons. This is done with two changes. Firstly, we increased the number of CA3 neurons from 50 to 500 while keeping the number of CA1 neurons fixed. Each CA1 neuron now receives a fixed anchoring drive from a Gaussian-weighted sum of the 10 (as opposed to 1) closest CA3 neurons. Secondly, since in our standard model spikes are used for learning but neurons communicate via their rates, we change this so that CA3 spikes directly drive CA1 spikes in the form of a reduced spiking model. Let $\mathsf{X}^{\mathsf{CA1}}_{i,t}$ be the spike count of the $i^{\text{th}}$ CA1 neuron at timestep $t$ and $\mathsf{X}^{\mathsf{CA3}}_{j,t}$ the equivalent for the $j^{\text{th}}$ CA3 neuron; then, under the reduced spiking model, (59) $\mathsf{Pr}\left(\mathsf{X}^{\mathsf{CA1}}_{i,t} = k\right) = \mathsf{Poisson}\left(k, \lambda_{i,t}\right)$ (60) $\lambda_{i,t} = \frac{1}{dt}\sum_j \mathsf{W}^{\mathsf{A}}_{ij}\,\mathsf{X}^{\mathsf{CA3}}_{j,t}.$ As can be expected, this model behaves very similarly to the original model, since CA3 spikes are noisy samples of their rates. This noise should average out over time, and the simulations indeed confirm this.
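A sketch of this reduced spiking model (Equations 59 and 60). The anchoring weights here are a random placeholder rather than the paper's Gaussian-weighted 10-nearest-neighbour scheme, and the spike count per timestep is assumed to be Poisson with mean $\lambda_{i,t}\, dt$:

    import numpy as np

    def ca1_spikes(W_anchor, ca3_spikes, dt, rng):
        """Reduced spiking model: each CA1 neuron's Poisson rate is a weighted
        sum of the CA3 spike counts on this timestep (Equations 59 and 60)."""
        lam = (W_anchor @ ca3_spikes) / dt   # lambda_{i,t}, Equation 60
        return rng.poisson(lam * dt)         # spike counts, Equation 59

    rng = np.random.default_rng(0)
    n_ca3, n_ca1, dt = 500, 50, 0.01
    W_anchor = np.abs(rng.normal(size=(n_ca1, n_ca3))) / n_ca3  # placeholder weights
    ca3 = rng.poisson(5.0 * dt, size=n_ca3)                     # CA3 spikes at ~5 Hz
    print(ca1_spikes(W_anchor, ca3, dt, rng))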
We perform a hyperparameter sweep over the STDP and phase precession parameters to see which are optimal for learning successor matrices. Remarkably, the optimal parameters (those giving the highest $R^2$ between the weight matrix and the successor matrix) are found to be those, or very close to those, used by biological neurons (Figure 2—figure supplements 2 and 3). Specifically, to avoid excess computational costs, two independent sweeps were run: the first over the four relevant STDP parameters (the two synaptic plasticity timescales, the ratio of potentiation to depression, and the firing rate) and the second over the phase precession parameters (the phase precession spread parameter and the phase precession fraction). In all cases, the optimal parameter sits close to the biological parameter we used in this paper (panels c, d). One exception is the firing rate, where higher firing rates always give better scores, likely due to the decreased effect of noise; however, it is reasonable to assume biology can't achieve arbitrarily high firing rates, for energetic reasons. The optimality of biological phase precession parameters In Figure 2—figure supplement 3, we ran a hyperparameter sweep over the two parameters associated with phase precession: $\kappa$, the von Mises parameter describing how noisy phase precession is, and $\beta$, the fraction of the full $2\pi$ theta cycle that phase precession crosses. The results show that for both of these parameters there is a clear 'goldilocks' zone around the biologically fitted parameters we chose originally. When there is too much (large $\kappa$, large $\beta$) or too little (small $\kappa$, small $\beta$) phase precession, performance is worse than at intermediate, biological amounts of phase precession. Whilst, according to the central hypothesis of the paper, it makes sense that weak or non-existent phase precession hinders learning, it is initially counterintuitive that strong phase precession also hinders learning. We speculate the reason is as follows. When $\beta$ is too big, phase precession spans the full range from 0 to $2\pi$; this means it is possible for a cell firing very late in its receptive field to fire just before a cell a long distance behind it on the track firing very early in the cycle, because $2\pi$ comes just before 0 on the unit circle. When $\kappa$ is too big, phase precession is too clean, and cells firing at opposite ends of the theta cycle will never be able to bind, since their spikes will never fall within a 20 ms window of each other. We illustrate these ideas in Figure 2—figure supplement 4 by first describing the phase precession model (panel a), then simulating spikes from 4 overlapping place cells (panel b) when phase precession is weak (panel c), intermediate/biological (panel d) and strong (panel e). We confirm these intuitions about why there exists a phase precession 'goldilocks' zone by comparing the weight matrix to the successor matrix (right hand side of panels c, d and e). Only in the intermediate case is there good similarity. In most results shown in this paper, the weights are anchored to the identity during learning. This means each CA1 cell inherits phase precession from the one and only one CA3 cell it is driven by. It is important to establish whether CA1 still shows phase precession after learning, when driven by multiple CA3 cells, or, equivalently, during learning when the weights aren't anchored and each cell is therefore driven by multiple CA3 neurons. Analysing the spiking data from CA1 cells after learning (phase precession turned on) shows that they do phase precess. This phase precession is noisier than the phase precession of a cell in CA3, but only slightly, and compares favourably to real phase precession data for CA1 neurons (panel f, right, adapted from Jeewajee et al., 2014).
The reason for this is that CA1 cells are still localised and therefore driven mostly by cells in CA3 which are close by and which peak in activity together, at a similar phase, each theta cycle. As the agent moves through the CA1 cell it also moves through all of the CA3 cells, and their peak firing phase precesses, driving an earlier peak in the CA1 firing. Phase precession in CA1 after learning is noisier/broader than in CA3, but far from non-existent, and looks similar to real phase precession data from cells in CA1. Phase shift between CA3 and CA1 In Figure 2—figure supplement 4g, we simulate the effect of a decreasing phase shift between CA3 and CA1. As observed by Mizuseki et al., 2012, there is a phase shift between CA3 and CA1 neurons, starting around 90 degrees at the end of each theta cycle (where cells fire as their receptive field is first entered) and decreasing to 0 at the start. We simulate this by adding a temporal delay to all downstream CA1 spikes equivalent to phase shifts of 0°, 45° and 90°. The average of the weight matrices learned over all three examples still displays clear SR-like structure. All code associated with this project can be found at https://github.com/TomGeorge1234/STDP-SR (George, 2023; copy archived at swh:1:rev:f126330b993d50cee021b1c356077bdab80299f4). There are no raw or external datasets associated with this project. Article and author information Author details: Tom M George, William de Cothi, Kimberly L Stachenfeld, Caswell Barry. Funding: Wellcome Trust (212281/Z/18/Z). The funders had no role in study design, data collection and interpretation, or the decision to submit the work for publication. For the purpose of Open Access, the authors have applied a CC BY public copyright license to any Author Accepted Manuscript version arising from this submission. We thank Wellcome for supporting this work through the Senior Research Fellowship awarded to C.B. [212281/Z/18/Z]. We also thank Samuel J Gershman and Talfan Evans for useful feedback on the manuscript. © 2023, George, de Cothi et al. This article is distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use and redistribution provided that the original author and source are credited. Citation: Tom M George, William de Cothi, Kimberly L Stachenfeld, Caswell Barry (2023) Rapid learning of predictive maps with STDP and theta phase precession. eLife 12:e80663.
{"url":"https://elifesciences.org/articles/80663","timestamp":"2024-11-05T19:53:05Z","content_type":"text/html","content_length":"628095","record_id":"<urn:uuid:b67235b9-f650-47e2-8411-891b3019b824>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00482.warc.gz"}
Quantization of Black Holes The quantization of black holes is one of the important issues in physics [1], and there has been no satisfactory solution yet. Black holes are characterized by the no-hair theorem, which states that once a black hole achieves a stable condition, regardless of how it is formed, it has only three independent physical properties: mass $M$, charge $Q$, and angular momentum $J$. This theorem is also sometimes called the three-hair theorem. Once the mass $M$ of a black hole with no charge and zero angular momentum is given, its other quantities are completely fixed, as there is only one independent parameter by the no-hair theorem. Therefore the Schwarzschild radius $R = 2GM/c^2$, the surface area $A = 4\pi R^2$, and the Compton wavelength $\lambda = \hbar/Mc$ of a black hole cannot be considered as parameters independent of each other, but only one quantity in essence. One elegant equality among possible relations is (1) $R\lambda = 2l_P^2,$ where $l_P = \sqrt{G\hbar/c^3} = 1.61624(8) \times 10^{-35}$ m is the Planck length, $\hbar$ is the reduced Planck constant, $c$ is the speed of light, and $G$ is the universal gravitational constant, called Newton's constant. The relation (1) is satisfied for any mass $M$, so we take it as a given fact without further inquiry into its rationality. As there is only one parameter for a black hole with no charge and angular momentum, it is natural to take the black hole as a sphere with the Schwarzschild radius $R$ as its boundary. Since nothing inside the sphere can be known to an outside observer, we should consider all of the information of the black hole as recorded on the surface, which is called the event horizon or, from the holographic principle, the holographic screen [2, 3]. There have been many studies of the physical properties of black holes. Bekenstein [4] conjectured that black hole entropy is proportional to the area $A$ of its event horizon divided by the Planck area $l_P^2$, and later the entropy formula of a black hole was found to be [5, 6] (2) $S = \frac{kA}{4l_P^2},$ where $k$ is the Boltzmann constant. Black holes create and emit particles as if they were black bodies with temperature (3) $T = \frac{\hbar a}{2\pi c k},$ where $a$ denotes the surface gravity of the black hole [6] or the acceleration of a test particle at the horizon [7]. Due to the particle-wave duality of quantum theory, any particle with energy $E_m = mc^2$ should also behave like a wave with the de Broglie wavelength $\lambda_m^d = h/mv$, which can also be expressed in reduced form as $\bar{\lambda}_m^d = \hbar/mv$, where $\hbar = h/2\pi$ and $v$ is the velocity of the particle. In Bohr's theory of the one-electron atom, the condition for the electron wave to be stable around its circular orbit with radius $r$ is given by (4) $2\pi r = n\lambda_m^d, \quad \text{or} \quad r = n\bar{\lambda}_m^d,$ which corresponds to the condition for the electron to be in a standing wave state with $n$ integer nodes. The electron of the atom is thus quantized in discrete energy states. The way to change the electron from one state to another is to emit or absorb a photon with energy equal to the energy difference between the two states. Similarly, we can naturally speculate that a black hole with mass $M$ also behaves like a wave with the Compton wavelength $\lambda = \hbar/Mc$. From a classical viewpoint, nothing, including the wave, can escape from the black hole horizon; therefore the wave of the black hole should also propagate within the sphere, on the surface.
One may first speculate that the condition for the black hole to be stable is also that its wave be in a standing wave state, i.e., (5) $2\pi R = \tilde{n}\,(2\pi\lambda), \quad \text{or} \quad R = \tilde{n}\lambda.$ We should notice that the wave is not moving on a circular orbit but on a sphere's surface, so the number $\tilde{n}$ of nodes is taken to be an even number; hence we take $\tilde{n} = 2n$ and get (6) $\pi R = n\,(2\pi\lambda), \quad \text{or} \quad R = 2n\lambda,$ where $n$ is now an integer, no matter whether it is even or odd. By applying the equality (1) in the above condition to eliminate $\lambda$, we get the Schwarzschild radius of the black hole (7) $R_n = 2\sqrt{n}\, l_P,$ which means that black holes are quantized in discrete states with radius $R_n$. Thus we easily obtain the other quantities of black holes, i.e., the energy (8) $E_n = M_n c^2 = \frac{R_n c^4}{2G} = \sqrt{n}\, M_P c^2,$ where the Planck mass $M_P = \sqrt{\hbar c/G} = 1.22089(6) \times 10^{19}\ \mathrm{GeV}/c^2 = 2.17644(11) \times 10^{-8}$ kg; the surface area (9) $A_n = 4\pi R_n^2 = 16\pi n\, l_P^2;$ and the Compton length (10) $\lambda_n = \frac{\hbar}{M_n c} = \frac{l_P}{\sqrt{n}}.$ The Planck scale relations, i.e., $M_P l_P = \hbar/c$ and $M_P/l_P = c^2/G$, are useful for deriving the above expressions. When the black hole is quantized, the surface gravity $a$ is also quantized: (11) $a_n = \frac{GM_n}{R_n^2} = \frac{c^2}{4\sqrt{n}\, l_P}.$ This also leads to a quantized temperature (12) $T_n = \frac{\hbar a_n}{2\pi c k} = \frac{M_P c^2}{8\pi\sqrt{n}\, k}.$ From the above we see that black holes have been quantized in a simple and elegant way, with interesting and surprising results: acceleration and temperature are also quantized. It is most interesting to notice that the entropy formula is now given by (13) $S = 4\pi k n,$ which is quantized as proposed in Ref. [8], where the entropy change $\Delta S$ for a 2-dimensional holographic screen is given by $4\pi k$, for the purpose of reconciling the black hole energy formula with the Verlinde conjecture of the entropic force idea [9]. Reversely, if we make the Ansatz that the entropy change of a black hole whose horizon radius changes by $\Delta R$ is, according to the rule suggested in Ref. [8], (14) $\Delta S = 2\pi k D \frac{\Delta l}{\lambda},$ where $\Delta l$ is a linear displacement causing the entropy change and $D$ is the dimensional degree of freedom of the objects under consideration, and apply this to a black hole, we obtain (15) $\Delta S = 4\pi k \frac{\Delta R}{\lambda} = 4\pi k \frac{R\Delta R}{2l_P^2} = \frac{k\Delta A}{4l_P^2}.$ Up to an integration constant, (16) $S = 4\pi k \frac{R^2}{4l_P^2} + S_0 = \frac{kA}{4l_P^2} + S_0.$ The constant can be chosen to be zero, consistent with the intuitive expectation that a black hole with no area has no entropy. We have then obtained the black hole entropy formula (2), consistent with that obtained from classical considerations [4–6]. If the entropy change on the black hole holographic screen is due to a test particle of Compton wavelength $\lambda_m$ moving $\Delta x$ towards the black hole, then, according to the Verlinde conjecture of the entropic force idea [9], one has $\Delta S = 2\pi k \Delta x/\lambda_m$. If $\Delta x/2\lambda_m = \Delta R/\lambda$ is quantized as suggested in Ref. [8], one can then obtain (13). From there, one can go backwards through our previous derivations to obtain the same quantization rules for the radius, the black hole wavelength, the mass spectrum and the temperature. This tells us that one can relate black hole quantization to the quantization of entropic change [8], which can also explain gravity as an entropic force [9–11]. The quantization of black holes found in this work, however, should not be interpreted only as a support of gravity as an entropic force; it also suggests a new way to unify gravity with quantum theory. Now we discuss some of the physical implications and predictions that can be obtained from the above results. First, the smallest stable black hole is obtained for $n = 1$.
The Schwarzschild radius $R_1 = 2 l_P$ is twice the Planck length, the energy $E_1$ is just the Planck energy $M_P c^2$, and the Compton length equals the Planck length, $\lambda_1 = l_P$. Thus the smallest black hole is of the Planck scale both in size and in energy, as expected [12]. This also supports the proposal for the existence of primordial black holes. These black holes would range from mini black holes at the Planck scale to very large ones with large n.
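To make the quantized spectrum concrete, the following is a minimal numerical sketch of Eqs. (7), (8) and (12). It is our illustration, not part of the paper; the constants are standard CODATA-style values.

```python
import math

# Physical constants (SI units, approximate CODATA values)
l_P = 1.616255e-35   # Planck length, m
M_P = 2.176434e-8    # Planck mass, kg
c   = 2.99792458e8   # speed of light, m/s
k_B = 1.380649e-23   # Boltzmann constant, J/K

def quantized_black_hole(n):
    """Return (R_n, E_n, T_n) for the n-th quantized state.

    R_n = 2 sqrt(n) l_P                      (Eq. 7)
    E_n = sqrt(n) M_P c^2                    (Eq. 8)
    T_n = M_P c^2 / (8 pi sqrt(n) k_B)       (Eq. 12)
    """
    root_n = math.sqrt(n)
    R_n = 2.0 * root_n * l_P
    E_n = root_n * M_P * c ** 2
    T_n = M_P * c ** 2 / (8.0 * math.pi * root_n * k_B)
    return R_n, E_n, T_n

for n in (1, 2, 100):
    R, E, T = quantized_black_hole(n)
    print(f"n={n:>3}: R={R:.3e} m  E={E:.3e} J  T={T:.3e} K")
```

For n = 1 this reproduces the Planck-scale values quoted above: R_1 is about 3.2 × 10^-35 m (twice the Planck length) and E_1 = M_P c^2 is about 2 × 10^9 J.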
{"url":"https://docslib.org/doc/554093/quantization-of-black-holes-is-one-of-the-important-issues-in-physics-1-and-there-has-been-no-satisfactory-solution-yet","timestamp":"2024-11-11T20:34:14Z","content_type":"text/html","content_length":"64805","record_id":"<urn:uuid:7591b4b4-bcc8-473f-beea-9ba4514e849b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00183.warc.gz"}
Mr Rinta-aho received his M.Sc. degree in Physics at the University of Helsinki in 2016. At the moment, he is working as a Research Scientist at VTT Technical Research Centre of Finland Ltd. His scientific interests include quantitative NDE, ultrasound physics and artificial intelligence.

Machine learning and multimethod NDE for estimating neutron-induced embrittlement

In this study, 157 irradiated and non-irradiated Charpy specimens [1] manufactured from six different steel alloys used in reactor pressure vessels (18MND5-W, 22NiMoCr37, A508-B, 15Kh2NMFA, HSST03 and A508-Cl2) were measured. The measurements included determining several non-destructively measurable electric, magnetic and elastic parameters. The applied non-destructive methods were Direct Current-Reversal Potential Drop (resistivity) [2], 3MA (eddy current impedance loop shape) [3], TEP (Seebeck coefficient) [4], MIRBE (Barkhausen noise) [5], MAT (magnetic hysteresis loop shape) [5] and sound velocity. After the non-destructive measurements, the ductile-brittle transition temperature (DBTT) was determined destructively using the ISO-standard method [1]. Several different regression algorithms, including neural network regression and support vector regression, were applied to the data. The algorithms were implemented with TensorFlow and scikit-learn using Python 3.7. With these algorithms, it was possible to estimate the DBTT with a mean absolute error smaller than 20 °C. Based on the results, the method can be seen as a potential candidate for estimating neutron-induced embrittlement non-destructively.

[1] ISO-148. [2] J. Rinta-aho et al., Baltica XI (2019). [3] G. Dobmann et al., Electromagnetic Nondestructive Evaluation (2008). [4] M. Niffenegger and H. J. Leber, J. Nuclear Mat. 389(1), 62-67 (2009). [5] I. Tomáš et al., Nuclear Engineering and Design 265, 201-209 (2013).

Ultrasound as a non-destructive tool to estimate polymer embrittlement

There is approximately 1500 km of electric cable in a single NPP. During operation, some of these cables are exposed to high-level gamma irradiation. Gamma radiation is known to embrittle polymers such as the polyethylene used as an insulator in these cables. Since the planned lifetime of a single NPP is 60 to 80 years, a low-cost method to estimate the embrittlement level non-destructively is needed.

While polyethylene ages, its Elongation at Break (EaB) decreases and its Young's modulus increases. Since the sound velocity of the longitudinal wave mode in a homogeneous and isotropic medium is a function of Young's modulus, Poisson's ratio and density (Eq. 1), it can be used as a non-destructive indicator of embrittlement.

The results clearly show that sound velocity increases and EaB decreases when polyethylene is exposed to gamma irradiation. The correlation between sound velocity and EaB is linear. Based on the results, it is possible to use sound velocity as a low-cost non-destructive indicator of gamma-irradiation-induced degradation of polyethylene.

Jari Rinta-aho
VTT Technical Research Centre of Finland Ltd.
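The abstract refers to "Eq. 1" without reproducing it. For a homogeneous, isotropic solid the standard longitudinal wave speed is v_L = sqrt(E(1 - ν) / (ρ(1 + ν)(1 - 2ν))); the sketch below evaluates it for illustrative polyethylene-like values. The material numbers are assumptions for demonstration, not data from the study.

```python
import math

def longitudinal_velocity(E, nu, rho):
    """Longitudinal sound speed (m/s) in a homogeneous, isotropic solid:
    v_L = sqrt( E * (1 - nu) / ( rho * (1 + nu) * (1 - 2*nu) ) )
    """
    return math.sqrt(E * (1.0 - nu) / (rho * (1.0 + nu) * (1.0 - 2.0 * nu)))

# Illustrative polyethylene-like values (assumed):
E   = 1.0e9    # Young's modulus, Pa
nu  = 0.40     # Poisson's ratio
rho = 950.0    # density, kg/m^3

print(f"v_L = {longitudinal_velocity(E, nu, rho):.0f} m/s")
# As Young's modulus rises with embrittlement (density and Poisson's ratio
# roughly fixed), v_L rises too, which is the monotone trend the abstract exploits.
```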
{"url":"https://nomad-horizon2020.eu/events/european-symposium-on-non-destructive-evaluation-for-npp/speakers/rinta-aho-jari","timestamp":"2024-11-12T22:28:15Z","content_type":"text/html","content_length":"7370","record_id":"<urn:uuid:53bad6ba-b64f-404a-a442-c09eba898d10>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00647.warc.gz"}
How to Determine the Size of a Side-by-Side or French Door Refrigerator

You're moving soon and you have a side-by-side or french door refrigerator (SBS for short), a pretty common appliance these days. Because side-by-side refrigerators can vary greatly in size, weight and bulkiness, we often need to know just how large your refrigerator really is. Understanding the size will allow us to accurately determine how many movers are required for it to be moved safely. To make this process as easy as possible, we have outlined several different methods to help you determine the size of your SBS fridge. A "French Door" refrigerator is similar in size and weight to an SBS fridge.

What Am I Looking For?

When trying to find the size of your refrigerator, it is important to understand what number you are actually looking for. Refrigerators are measured by their cubic feet (CuFt); this number will most commonly range from 10 to 35. For SBS and french door refrigerators, this number will likely be at least 18.

Where Do I Look?

The first place to look is inside the refrigerator itself; there should be a relatively accessible label that has specifications listed. Although the majority of newer refrigerators will clearly define the size on the label, not all will. If your refrigerator does not specify the size on this label, you should take the following alternative steps.

1. Look at the model number. More often than not, the refrigerator size will be in the model number itself. For example, model number XYZ18 tells us that this refrigerator is 18 CuFt.
2. If your model number does not clearly designate a size, you can probably find the specifications by searching online.
3. If you've exhausted all options and still cannot find the size, don't fret, a solution still exists. Grab a tape measure and measure the width, depth and height of the inside of your refrigerator. Multiply the numbers together (W x D x H) and divide the total by 1,728 (the number of cubic inches in a cubic foot) to get the cubic feet. For example, if you have a refrigerator that measures 30" wide by 30" deep by 48" high, you multiply 30 x 30 x 48. This gives you 43,200. You then divide 43,200 by 1,728 to get cubic feet. In this case we get 25, which would be a good-sized side-by-side refrigerator. (A code sketch of this calculation appears below.)

I've Found the Size, Now What?

Now that we know the size of your SBS or french door refrigerator, we can accurately determine how many movers will be required to move it safely. Our current SBS refrigerator policy is as follows:

• SBS Refrigerators Up To 22 CuFt: 2 movers can safely handle a refrigerator up to 22 CuFt.
• SBS Refrigerators Over 22 CuFt: 3 movers are required unless the customer provides an appliance dolly. If an appliance dolly is provided, 2 movers can safely move the item. See our equipment policy for more information about the types of tools and dollies we provide.
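Here is a quick sketch of the measure-and-compute method and the mover policy described above; the function and variable names are ours, not MovingLabor's.

```python
def cubic_feet(width_in, depth_in, height_in):
    """Interior volume in cubic feet; 1728 = 12**3 cubic inches per cubic foot."""
    return width_in * depth_in * height_in / 1728.0

def movers_required(cuft, has_appliance_dolly=False):
    """Policy above: 2 movers up to 22 CuFt; over 22 CuFt, 3 movers
    unless the customer provides an appliance dolly."""
    if cuft <= 22 or has_appliance_dolly:
        return 2
    return 3

vol = cubic_feet(30, 30, 48)         # the worked example from the article
print(vol)                           # 25.0
print(movers_required(vol))          # 3 (no dolly)
print(movers_required(vol, True))    # 2 (dolly provided)
```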
{"url":"https://www.movinglabor.com/blog/how-to-determine-the-size-of-a-side-by-side-or-french-door-refrigerator/?format=amp","timestamp":"2024-11-05T16:49:15Z","content_type":"text/html","content_length":"12664","record_id":"<urn:uuid:e79b8fb9-0af4-4801-b5f6-afad01c557f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00619.warc.gz"}
A Mathematician’s Unanticipated Journey Through the Physical World The outline of Lauren Williams’ mathematical career was present very early on in her life. “Ever since I was a kid, I’ve always loved patterns,” said Williams. “I enjoyed being given a sequence of numbers and having to find the pattern and predict the next number.” But while many kids are enchanted by patterns, few end up following them as far, or to such unexpected places, as Williams has. As a professor at Harvard University — where she became only the second tenured woman mathematician in the university’s history — she has uncovered correspondences far more bewildering than anything she learned in grade school. They all involve a single mathematical object that can be described in a number of different ways. But by looking at it from an entirely new perspective, Williams, 42, has proved that it’s the key to decoding the secrets behind a wide range of seemingly unrelated phenomena in math — and in nature. “She’s always been fearless,” said Federico Ardila of San Francisco State University, who was in graduate school with Williams. “She’s not afraid to build bridges where they didn’t seem to exist.” The geometric object that weaves through Williams’ work is called the positive Grassmannian. It’s a shape that performs a kind of record-keeping function: Every point on it represents a specific instance of some simpler geometric object. It’s a shape that keeps track of other shapes. Around the time Williams started graduate school at the Massachusetts Institute of Technology in 2001, mathematicians were developing a new way of thinking about the positive Grassmannian. Instead of thinking of it as a single geometric object, they were trying to understand it in terms of the pieces that make it up. That perspective captivated Williams, and over the last two decades she’s established many of its most far-reaching implications. In often dramatic fashion, Williams has also proved that the pieces of the positive Grassmannian can be reassembled in a form that explains everything from the movement of tsunami waves to particle collisions at the frontier of quantum physics. It’s a potpourri of insights that cohere around the positive Grassmannian, and around the unique mind that generated them. “Lauren is one of these people who thinks so clearly,” said Nima Arkani-Hamed, a theoretical physicist at the Institute for Advanced Study. “She is open-minded and adventurous.” Point of Origin Williams grew up in the Los Angeles suburbs, the oldest of four sisters. Her father was an engineer, and her mother, a third-generation Japanese American, taught English. As a child Williams enjoyed playing the violin and writing poetry, but she loved reading most of all. At night, she’d stay up past her bedtime with a lamp pulled under her comforter (she eventually burned a hole through her sheets). During summer days, “I spent hours sitting in the branches of an apricot tree in our backyard, reading a book and eating apricots,” she said. Math rose to her attention in fourth grade when she participated in an elementary school math competition. “I unexpectedly won the contest, and the two teachers who organized it sort of took me under their wing,” she said. When she was 16, she spent a summer at a math research program for high school students hosted by MIT. That was her first serious introduction to combinatorics — an area of math concerned with breaking complicated objects into pieces and then classifying and counting them. 
She learned about the discipline from an MIT graduate student named Satomi Okazaki — who herself happened to be the student of a mathematician named Richard Stanley. (Stanley would eventually become Williams’ doctoral adviser.) When she went to college, at Harvard, she was excited to join a wider intellectual world. “There was no end of classes and activities I wanted to try, and my peers were just as excited as me about learning and doing everything.”

She majored in mathematics at Harvard, where she graduated in 2000, and went on to graduate school at MIT. There she started studying a mathematical object with many guises: the Grassmannian. “If you understand the Grassmannian well, you can go in many different directions. It’s central in math,” said Bernd Sturmfels of the University of California, Berkeley, who is also the director of the Max Planck Institute for Mathematics in the Sciences in Leipzig, Germany.

The Grassmannian gets its name from Hermann Grassmann, who first formalized it in the mid-1800s. Only it’s not one exact geometric object, but rather a family of them. To get a sense for a single Grassmannian, let’s start with just two numbers, 1 and 3. The 3 indicates that we’re in three-dimensional space. The 1 means we’re going to think about one-dimensional lines within that space. In this three-dimensional space there are three axes — x, y and z — that all intersect at a crossroads, the origin. Now picture a line that runs through the origin. Go a step further and try to imagine all the lines that could go through the origin, each with its own unique trajectory.

Next, imagine positioning a sphere so that it’s centered around the origin. Most of those lines will intersect this sphere twice, in the northern and southern hemispheres (except the ones that pass through the equator). This makes the two hemispheres largely redundant — they carry the same information about the lines — so we can forget the southern one. The leftover northern hemisphere is the Grassmannian formed by one-dimensional lines in three-dimensional space. Or, as mathematicians write it, Gr(1,3). This means that if you know the coordinates of a point on the northern hemisphere, you know everything about the one-dimensional line through the origin that passes through that point.

The Grassmannian is an example of what mathematicians call a moduli space, meaning it’s a single geometric object that serves as a concise way of keeping track of infinitely many others. “As you move [from line to line] you’re moving from one point to another on the Grassmannian,” said Ardila. “It’s almost like a remote control.”

This is just one example of a Grassmannian. If we’d started with the numbers 4 and 10, we’d instead be thinking about four-dimensional planes passing through the origin in 10-dimensional space — and the Grassmannian, Gr(4,10), would be the shape in which each point represents one of those four-dimensional planes. You can construct infinitely many different Grassmannians by starting with distinct pairs of whole numbers.

Beginning in the early 1990s, many mathematicians started to focus on a particular part of the Grassmannian called the positive Grassmannian. In our example, Gr(1,3), it is one-quarter of the northern hemisphere. It’s referred to as the “positive” part of the Grassmannian because, roughly, all the lines that cut through it have a non-negative slope. But to really understand its place in mathematics, mathematicians first had to learn how to take the Grassmannian apart.
Out of Many, Few

In the 1970s and 1980s, Gian-Carlo Rota and his student Richard Stanley came up with a new way of thinking about sophisticated mathematical shapes. They’d take those objects — which may have been hard to study on their own — and break them into combinatorial pieces that were more tractable. “You have some very complicated object that is hard to understand,” said Melissa Sherman-Bennett, a graduate student at Berkeley who works with Williams. “But you can break it into pieces that give you more insight into this big complicated thing.”

When Williams arrived at MIT, she read foundational works from the 1990s by George Lusztig and his student Konstanze Rietsch that had introduced the positive Grassmannian, and more recent papers by Alexander Postnikov that applied a combinatorial perspective to the shape. Postnikov was at MIT, and Williams spent a lot of time talking with him. She was fascinated by the way his work connected this already canonical shape to even more widespread parts of mathematics. “I found it to be a very beautiful confluence of ideas,” Williams said.

To understand how the Grassmannian breaks into pieces, recall that each point on it encodes the properties of a line or multidimensional plane passing through the origin. Those planes are defined by vectors that can be written down as arrays of numbers called matrices. The size of a matrix depends on the Grassmannian. For Gr(1,3) — one-dimensional lines in three-dimensional space — each line through the origin is specified by a 1 × 3 matrix, such as: [1 2 3]. The numbers in the matrix serve as the coordinates for the point in the Grassmannian that encodes the line.

The Grassmannian itself contains infinitely many points, which can’t be counted in a discrete, finite way. But it’s possible to extract additional data from the matrices that can be counted. Many matrices have a measurement called a determinant, which is a single value calculated using the numbers in the matrix. They also have “subdeterminants,” which are calculated based on a subset of the values in the matrix; a 1 × 3 matrix has three subdeterminants. For Williams’ work, the significance of those subdeterminants lies in their signs, which can be positive, negative or neither (if the subdeterminant is zero). With the positive Grassmannian, the options are even more limited: subdeterminants can only take values that are either positive or zero.

This turns something infinite and uncountable into something discrete and possible to sort: While there are infinitely many different 1 × 3 matrices, their three subdeterminants can have only eight different sign patterns: (000), (00+), (0++), and so on. And for technical reasons, mathematicians don’t need to consider one of them, (000), which leaves just seven categories for those infinite points to be divided into.

Points sort into different buckets, or “cells,” based on their sign pattern. You can think of these seven cells as the seven puzzle pieces that make up the positive Grassmannian. The number and shapes of these pieces isn’t obvious when you first look at the overall shape. They become apparent when you sort points by their sign patterns — all the points with a given sign pattern fill out the shape of a single cell, or puzzle piece. This process of sorting points by sign patterns to reveal the shapes of puzzle pieces works especially well for the positive part of the Grassmannian. “The combinatorics is extremely rich,” said Rietsch.
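As a toy illustration of the sorting just described (our sketch, not the article's), the code below enumerates the possible sign patterns for Gr(1,3) and classifies sample matrices into cells. For a 1 × 3 matrix the three subdeterminants are simply its entries.

```python
from itertools import product

# On the positive Grassmannian each subdeterminant is 0 or positive,
# so a sign pattern is a triple over {'0', '+'}; (000) is excluded.
cells = [p for p in product("0+", repeat=3) if p != ("0", "0", "0")]
print(len(cells))   # 7 cells, as in the article
print(cells)

def sign_pattern(matrix_1x3):
    """Map a nonnegative 1x3 matrix to the cell (sign pattern) it belongs to."""
    assert all(x >= 0 for x in matrix_1x3) and any(x > 0 for x in matrix_1x3)
    return tuple("+" if x > 0 else "0" for x in matrix_1x3)

print(sign_pattern([1, 2, 3]))   # ('+', '+', '+')
print(sign_pattern([0, 5, 0]))   # ('0', '+', '0')
```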
Starting in graduate school, Williams proved a number of different features of the way points from the positive Grassmannian sort into cells. In 2003 she devised a formula for counting the number of different cells found in positive Grassmannians of any dimension. The result foreshadowed a lot of the innovative work later in her career. “I think she’s one of the masters of capturing the combinatorial nature of objects that don’t seem combinatorial,” Ardila said. After she received her doctorate from MIT in 2005, this combinatorial perspective on the positive Grassmannian started leading Williams into unlikely collaborations.

Making Waves

There are lots of ways to chart a career in math. One is to devote yourself to developing a new theory or attacking a prominent open problem. But that’s not what motivates Williams. “I’d rather not be working on what everybody else is working on,” she said. “I sort of don’t like feeling I’m competing against other people toward the same goal.” Her collaborators have also noticed this unusual trait. “Lauren is one of the smartest people I’ve ever worked with, but I’ve never felt she had this pretense of competition,” said Ardila. “She has this gentleness about her.”

Williams’ preference for less-trafficked problems found an outlet right after graduate school, when she wrote a series of papers with the mathematician Sylvie Corteel that explored an unexpected link between the combinatorics of the positive Grassmannian and statistical physics. In addition to their mathematical results, Williams gained something else by working with Corteel, who had a baby during their first collaboration. “When I was much younger, I worried about whether it was possible to be a successful academic and also have a family,” she said. “It was useful for me that fairly early in my career I had these collaborations with slightly older women who were making it work.”

Williams’ research took another surprising turn in 2009, soon after she joined the faculty at Berkeley. While looking for new results on the positive Grassmannian, she noticed that a physicist at Ohio State University had cited her work in his research on shallow water waves. “If someone writes a paper with the positive Grassmannian in it, she always looks at it,” said the physicist in question, Yuji Kodama. “Of course, she didn’t expect shallow water waves.”

Kodama’s work focused on a particular type of wave called a soliton, or solitary wave. The most famous example of the phenomenon is a tsunami. More often, though, soliton waves occur near the shore. The mathematics behind a single soliton propagating on its own is relatively simple, but it grows more complicated when solitons cross each other. Physicists model them using the Kadomtsev-Petviashvili, or KP, equation. Feed the equation the position of a wave, and it outputs the wave’s height at any time in the future. Kodama was trying to understand different types of solutions to the KP equation, representing different types of wave interactions. “If one soliton and another interact … lots of patterns appear, and we like to classify these,” said Kodama.

Williams tried reading Kodama’s work, eager to see how the Grassmannian might fit in, but it was too far from her own research for her to understand it. So she invited him to Berkeley to explain it to her in person. Even then, communication wasn’t easy. “He’s a physicist and an older Japanese man, and we had a lot of trouble understanding each other that first day.
It was like we were speaking different languages,” said Williams. As they talked, Kodama sketched simple schematics to illustrate patterns of wave interaction: Two lines representing two waves converge at a point, and then a single line representing a new wave emerges. The drawings looked familiar to Williams. She quickly recognized that they mirrored pictorial representations called planar bicolored graphs that mathematicians use to describe points on the positive Grassmannian.

“He’d explain stuff and draw pictures and I had a lot of trouble following his explanations, but I could draw the same picture in a completely different way,” said Williams. Previous work had established a one-to-one relationship between points on the positive Grassmannian and solutions to the KP equation: Start with a point on the positive Grassmannian, apply some complicated mathematics, and you’ll get a solution to the equations that represents a particular wave interaction.

Emboldened by the matching pictures, Kodama and Williams looked for deeper connections between the positive Grassmannian and shallow water waves. The pair ended up showing that when you associate a point on the positive Grassmannian with a solution to the KP equation, the cell that point belongs to dictates a lot about the wave pattern represented by the solution to the equations. “The large-scale behavior of a wave formation is entirely determined by which cell your point in the positive Grassmannian lies in,” Williams said.

One of their papers also included a haiku Kodama and Williams wrote, partly in recognition of their shared Japanese heritage:

Arrangements of stones
reveal patterns in the waves
as space-time expands

“Being a writer or a poet was one of my childhood dreams, and I thought, I’ve got tenure now, I can be a little crazy,” Williams said. It’s as if the Grassmannian, uncovered a century ago to formalize the mathematics of lines and planes through the origin, also happens to index phenomena in the physical world — a bizarre correspondence that Williams still can’t fully explain. “The Grassmannian seems to be linked to a whole bunch of things that describe ‘real life,’ and I don’t have a great answer why, except to say that the Grassmannian is a very fundamental object in mathematics,” she said.

Waves to Particles

In 2016 the Harvard math department reached out to Williams and asked her if she might be interested in joining them. The overture shocked Williams for two reasons: No one else on the Harvard faculty did her kind of math, and no one else looked quite like her. “There were no women and no combinatorialists,” she said. “It weighed on me very heavily when I was trying to make up my mind. I wasn’t sure what the atmosphere would be like.”

But Williams had loved her four years at Harvard as an undergraduate — and three subsequent years as a postdoctoral researcher — and that made her more inclined to consider the university’s offer. She traveled to Cambridge and had dinner with her prospective colleagues. The experience was reassuring, but Williams was firmly situated in California by that point. She worried about uprooting her husband and her young kids, and she also recognized that making such a high-profile move within the math world might increase public scrutiny of her and her work. Yet in the end, she felt something of an obligation to take the position, as a way of encouraging other women to pursue careers in mathematics.
“I recognized that coming to Harvard would give me a chance to make a positive impact on a department I cared a great deal about,” Williams said. “I understand that role models matter. It can be hard for people to imagine a career for themselves when they don’t see people like them with that career.” Williams started at Harvard in the fall of 2018 and became the second woman ever to hold a tenured position in the university’s math department. (The first, Sophie Morel, spent three years at Harvard before leaving in 2012; this fall, Harvard hired two more tenured women mathematicians: Laura DeMarco and Melanie Matchett Wood.) “There are so many hidden obstacles for female math professors at top research institutions. In some sense you have to be a warrior, but Lauren has handled things with such grace,” said Ardila.

At the same time Williams was moving across the country, she was deep in a new Grassmannian project. It involved a geometric object called the amplituhedron that had been proposed as the answer to one of the knottiest problems in physics. The amplituhedron was formally described in a 2013 paper by Nima Arkani-Hamed and Jaroslav Trnka. It was meant to help physicists predict what happens when fundamental particles collide. Due to the nature of quantum interactions, such collisions are not strictly deterministic. Instead, they’re described by an amplitude, which is like a probability that the collision plays out in a given way.

The somewhat clunky prevailing method for calculating amplitudes is something called a Feynman diagram, named for its inventor, Richard Feynman. These diagrams involve vast, tedious computations that are hard to carry out with precision as particle collisions grow more complex. The amplituhedron is a simpler way of calculating amplitudes. Given a set of particles on a collision course, you can use their properties to construct a geometric object — the amplituhedron. It embodies the particle interaction in a precise way: By calculating its volume, in effect you’re calculating the amplitude for the given collision. “We build a shape, and the volume of the shape gives me an amplitude,” said Arkani-Hamed.

So the question is how to calculate the volume. One approach is to break the amplituhedron into pieces. This process, called a triangulation, is easy to see with an example. Imagine you have a ball and want to find its volume. One indirect way to do it is to fill it with three-dimensional triangular tiles. The total volume is the sum of the individual volumes of all the tiles used in the triangulation. “To a first approximation, [physicists] are interested in the volume of the amplituhedron, and one way to compute volume is to break it into smaller pieces. That’s why they want to triangulate the amplituhedron,” said Williams.

Arkani-Hamed and his collaborators had defined the amplituhedron in relation to the positive Grassmannian. They demonstrated that it’s possible to change a positive Grassmannian into the amplituhedron by multiplying it by a type of matrix, effectively providing a mathematical recipe for moving points on the positive Grassmannian over to points on the amplituhedron. As a result, information about the relatively well-studied positive Grassmannian transfers to the relatively unexplored amplituhedron. Over the last three years, Williams has expanded this correspondence.
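As a toy version of the tile-and-sum idea (ordinary 3-D geometry, not the actual amplituhedron computation), the sketch below triangulates the convex hull of random points into tetrahedra and checks that the tile volumes add up to the total.

```python
import numpy as np
from scipy.spatial import ConvexHull, Delaunay

rng = np.random.default_rng(0)
points = rng.random((40, 3))      # random points in the unit cube

tri = Delaunay(points)            # triangulation into tetrahedral "tiles"

def tet_volume(a, b, c, d):
    """Volume of a tetrahedron: |det(b-a, c-a, d-a)| / 6."""
    return abs(np.linalg.det(np.column_stack((b - a, c - a, d - a)))) / 6.0

total = sum(tet_volume(*points[s]) for s in tri.simplices)
print(total, ConvexHull(points).volume)   # the two sums agree
```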
She demonstrated that in some cases the combinatorial properties of the positive Grassmannian — the way its points sort into cells — carry over to the amplituhedron via this transformation process. That means the cells of the positive Grassmannian can serve as the tiles needed to triangulate the amplituhedron. So far, Williams has proved that this relationship holds for simpler versions of the amplituhedron. She’s also laid out a precise conjecture that predicts how many tiles are needed to triangulate any amplituhedron. “We were groping around in the dark for quite a while, but [her work] on the positive Grassmannian was a shining light throughout this process,” said Arkani-Hamed.

In the fall of 2019 Williams and Arkani-Hamed were two of the co-organizers of a semester-long program at Harvard that brought together mathematicians and physicists to explore the link between the positive Grassmannian and the amplituhedron. During the event, Williams was talking with two physicists who mentioned a sequence of numbers related to triangulations of the amplituhedron. The numbers were immediately familiar to Williams: She’d encountered them as a graduate student working on a different (and unrelated) problem on a version of the positive Grassmannian 16 years earlier. But she didn’t understand why they’d show up in this new setting. “Whenever I had a free moment, I’d find myself wandering back to [those numbers] and wondering how to make that connection,” she said.

Eventually she did — following one more surprising pattern, with echoes of the numerical sequences that had fascinated her as a kid. “After some months we figured out exactly what it was,” Williams said. “It was just this delightful surprise.”

Correction: December 16, 2020
An earlier version of this article incorrectly stated that the Grassmannian Gr(4,10) would be a 10-dimensional shape. It is a 24-dimensional shape.
{"url":"https://www.quantamagazine.org/a-mathematicians-adventure-through-the-physical-world-20201216/","timestamp":"2024-11-14T12:24:47Z","content_type":"text/html","content_length":"237875","record_id":"<urn:uuid:08064e13-00b2-4658-af97-8c6597498ff4>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00844.warc.gz"}
How do you calculate change backwards?

How do you count back change? Rather than computing the difference, count up from the price to the amount the customer handed you, starting with small coins and moving to larger bills. (The linked video demonstrates the technique: starting from 30 cents, two dimes bring you to 50, a quarter to 75, and another quarter to the next dollar.)

How do you count back change on a worksheet? To count the change, first count up from the price of the item until you reach the next whole dollar, then use 1-dollar or 5-dollar bills. For example, count up from $0.84 to $1 to find the change.

How do you count back change to a customer? Say the change aloud as you count it out. For $8.30, for example: "thirty" (cents), then "one, two, three, and five is eight" (dollars). Hand it over along with the receipt, then put the remaining money back into the till.

How do you calculate change in money? Steps to count change:
1. Start with the pennies to reach a multiple of 5 or 10.
2. Next use a nickel or a dime as you get to a multiple of 25.
3. Use quarters until you reach a dollar.
4. Use one-dollar bills until you reach a multiple of 5 or 10.
5. Use five-dollar bills until you reach 10, or ten-dollar bills until you reach 20.

The arithmetic behind it: change = paid money − bill, and equivalently paid money = change + bill.

How do you count back change in Australia? (Video demonstration only: "Australian Money: Calculating Change".)

How do I teach my child to count back change? To make making change easier for your children, teach them that, rather than concentrating on the amount to give back, their job is simply to count up from the amount spent to the amount given. As they do this, they should use the largest denominations whenever possible.
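The count-up procedure lands on the same coins and bills that a largest-first (greedy) breakdown produces. A small sketch using US denominations, working in cents to avoid floating-point trouble:

```python
# US denominations in cents, largest first
DENOMS = [2000, 1000, 500, 100, 25, 10, 5, 1]

def make_change(bill_cents, paid_cents):
    """Greedy breakdown of the change owed, largest denominations first."""
    owed = paid_cents - bill_cents
    if owed < 0:
        raise ValueError("customer did not pay enough")
    pieces = []
    for d in DENOMS:
        while owed >= d:
            pieces.append(d)
            owed -= d
    return pieces

# The $8.30 example above: a $1.70 purchase paid with a $10 bill.
print(make_change(170, 1000))
# [500, 100, 100, 100, 25, 5] -> $5 + three $1 + a quarter + a nickel = $8.30
```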
{"url":"https://thisisbeep.com/how-do-you-calculate-a-backwards-change/","timestamp":"2024-11-13T15:56:29Z","content_type":"text/html","content_length":"49601","record_id":"<urn:uuid:ca677563-a6eb-4046-b698-cd94ca731649>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00230.warc.gz"}
Mathematical Sciences Research Institute Aspects of the mod p representation theory of p-adic reductive groups August 20, 2014 (11:30 AM PDT - 12:30 PM PDT) Speaker(s): Rachel Ollivier (University of British Columbia) Location: SLMath: Eisenbud Auditorium Primary Mathematics Subject Classification No Primary AMS MSC Secondary Mathematics Subject Classification No Secondary AMS MSC These lectures will focus on the mod p representation theory of a split p-adic reductive group G, using GL(2) as a running example. We hope to emphasize the differences between the mod p and complex representations of G while keeping in mind that the theory is partly motivated by the mod p and complex local Langlands programs. We will start with remarks regarding finite reductive groups. We will then compare the homological properties of certain categories of mod p and complex representations of G (and the associated pro-p-Iwahori Hecke algebra). In particular, in the complex setting, the theory of coefficient systems on the Bruhat-Tits building by Schneider and Stuhler gives a way to construct explicit projective resolutions. We will explore what remains from this theory in the mod p setting. This will help us describe the first step in the construction of Colmez' functor yielding the mod p local Langlands correspondence for GL(2,Q_p).
{"url":"https://legacy.slmath.org/workshops/710/schedules/18759","timestamp":"2024-11-09T22:33:51Z","content_type":"text/html","content_length":"38239","record_id":"<urn:uuid:d85c5e21-fe42-4a44-be20-1b640f52922b>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00250.warc.gz"}
ML Aggarwal Class 8 Solutions for ICSE Maths Model Question Paper 4

Choose the correct answer from the given four options (1-2):

Question 1. (103² − 97²) / 200 is equal to
(a) 3 (b) 4 (c) 5 (d) 6

Question 2. If the sum of three consecutive even integers is 36, then the largest integer is
(a) 10 (b) 12 (c) 14 (d) 16

Question 3. Find the area of a rectangle whose length and breadth are respectively (4x² − 3x + 7) and (3 − 2x + 3x²).

Question 4. Factorise: a² − c² − 2ab + b².

Question 5. The ages of A and B are in the ratio 3 : 4. Five years later the sum of their ages will be 31 years. What are their present ages?

Question 6. The sum of the digits of a two-digit number is 13. If the number obtained by reversing the digits is 45 more than the original number, find the original number.

Question 7. The ratio between an exterior angle and an interior angle of a regular polygon is 1 : 5. Find:
(i) the measure of each exterior angle,
(ii) the measure of each interior angle,
(iii) the number of sides of the polygon.

Question 8. Solve the inequality: 3 − x/2 > 2 − x/3, x ∈ W. Also represent its solution set on the number line.

Question 9. Factorise: x² + 1/x² − 7(x − 1/x) + 8.

Question 10. In the given figure, ABCD is a parallelogram. Find x, y, z and w.
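As a worked check of Question 1 (our solution, not part of the paper), the difference-of-squares identity gives:

```latex
\frac{103^{2}-97^{2}}{200}
  = \frac{(103-97)(103+97)}{200}
  = \frac{6 \times 200}{200}
  = 6,
```

so the correct option is (d).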
{"url":"https://www.cbsetuts.com/ml-aggarwal-class-8-solutions-for-icse-maths-model-question-paper-4/","timestamp":"2024-11-09T22:51:23Z","content_type":"text/html","content_length":"65819","record_id":"<urn:uuid:07c3cc4d-a7f5-4f93-8264-66f30b16ac0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00836.warc.gz"}
Calculation of Perpendicular Distance via Vi and Cosine

05 Oct 2024

Popularity: ⭐⭐⭐

p = √3 · vi · cos Calculation

This calculator provides the calculation of p using the formula p = √3 · vi · cos(π/6).

Calculation Example: The formula p = √3 · vi · cos(π/6) is used to calculate the perpendicular distance from a point to a line of motion. It is derived from the formula for the distance between a point and a line, p = |(r − r0) × u|, where r is the position vector of the point, r0 is the position vector of a point on the line, and u is a unit vector parallel to the line. In this case, the line of motion is the x-axis, so r0 = (0, 0, 0) and u = (1, 0, 0), and the position vector of the point is r = (x, y, z). Substituting these values into the formula for the distance between a point and a line, we get p = √(y² + z²). Since the point is located in the first quadrant with y = vi·cos(π/6) and z = vi·sin(π/6), this gives p = √((vi·cos(π/6))² + (vi·sin(π/6))²) = √(vi²·(cos²(π/6) + sin²(π/6))) = vi. The formula evaluated by the calculator, however, is p = √3 · vi · cos(π/6), which is the value shown in the table below.

Related Questions

Q: What is the physical significance of p?
A: The physical significance of p is that it represents the perpendicular distance from a point to a line of motion. This distance is important in many applications, such as calculating the trajectory of a projectile or the moment of inertia of a rotating object.

Q: How is p used in real-world applications?
A: p is used in a variety of real-world applications, such as calculating the trajectory of a projectile, the moment of inertia of a rotating object, and the distance between two points in space. It is also used in computer graphics to calculate the distance between a point and a line or plane.

Symbol | Name | Unit
vi | Initial Velocity | m/s
p | Perpendicular Distance | m

Calculation Expression

p Formula: p = √3 · vi · cos(π/6)

Calculated values: Considering vi = 10.0, the calculated value is p = 15.0.
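A one-line check of the calculator's headline number (vi = 10.0 giving p = 15.0), using the formula as stated:

```python
import math

def p_formula(vi):
    """p = sqrt(3) * vi * cos(pi/6); note sqrt(3) * cos(pi/6) = 3/2."""
    return math.sqrt(3) * vi * math.cos(math.pi / 6)

print(p_formula(10.0))   # 15.000000000000002, matching the table's 15.0
```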
{"url":"https://blog.truegeometry.com/calculators/p_sqrt_3_vi_cos_calculation.html","timestamp":"2024-11-13T10:53:53Z","content_type":"text/html","content_length":"18543","record_id":"<urn:uuid:d82044e1-2771-463d-ab5f-c1a5f9e583ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00818.warc.gz"}
GDX Elliott Wave Technical Analysis – 7th March, 2014
by Lara | Mar 8, 2014 | GDX | 1 comment

The data for this market has many gaps, and the structures are not entirely typical-looking Elliott wave structures. Although there are some Fibonacci ratios, they are not as prevalent as the ratios within Gold. I am only able to import daily data, and I cannot check subdivisions on lower time frames. While I expect my analysis of GDX will be useful to you, please note that it may not have as good an accuracy rate as what I can achieve for Gold. Overall the structures and the wave count will be quite different from Gold. Although it looks like these two markets have major turns together, the subdivisions within each are quite different.

Click on the charts below to enlarge.

The clearest piece of movement is the downwards movement from the high. This looks most like a first, second and third wave. This may be the start of a larger correction. Intermediate wave (3) is 10.59 points longer than 2.618 times the length of intermediate wave (1). Within intermediate wave (3) there are no Fibonacci ratios between minor waves 1, 3 and 5. Ratios within minor wave 1 of intermediate wave (3) are: minute wave iii has no Fibonacci ratio to minute wave i, and minute wave v is 21.90 points longer than 0.618 times the length of minute wave i. Ratios within minor wave 3 of intermediate wave (3) are: minute wave iii has no Fibonacci ratio to minute wave i, and minute wave v is 68.72 points longer than 0.382 times the length of minute wave iii.

Draw a parallel channel about this downwards movement. Draw the first trend line from the lows of intermediate waves (1) to (3), then place a parallel copy upon the high of intermediate wave (2). I would expect intermediate wave (4) to find resistance at the upper edge of the channel, and it may end there. Intermediate wave (4) should last one to a few months. It may end about the terminus of the fourth wave of one lesser degree at 31.35.

The target for upwards movement for minute wave c (within minor wave W) to end, given in the last analysis, was 24.76. It has ended at 25.76.

This structure for intermediate wave (4) is so far looking like an atypical double combination. It cannot be a double zigzag because the first structure subdivides 3-3-5, and so minor wave W is a flat. Intermediate wave (4) may be a double combination: flat – X – zigzag. Double combinations normally move price sideways. This one is trending upwards in a corrective direction. The three in the opposite direction labeled minor wave X is remarkably shallow for an X wave within a combination; it is more like an X wave within a double zigzag.

Within minor wave Y, minute wave b subdivides nicely as a running contracting triangle. The trend lines of the triangle fit perfectly.

Within minor wave Y, at 27.74 minute wave c would reach equality with minute wave a. If upwards movement does not stop there, then the next possible target is at 28.92, where minute wave c would reach 1.618 times the length of minute wave a.

The lower aqua blue trend line may show where downwards corrections find support.

Within minute wave c, no second wave correction may move beyond the start of its first wave. This wave count is invalidated with movement below 25.83 while minute wave c is unfolding.

1 Comment
Bob on March 10, 2014 at 8:15 am
Thank you for the GDX update. Really appreciate seeing GDX updates as your time permits.
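The two targets for minute wave c follow the standard Fibonacci projection: target = end of wave b + ratio × length of wave a. In the sketch below the wave endpoints are back-solved from the article's own targets and are approximate placeholders, not values read off the chart.

```python
def fib_target(b_end, a_length, ratio):
    """Project the end of wave c: start of c (= end of b) plus ratio * length of a."""
    return b_end + ratio * a_length

# Back-solved, approximate values (illustrative only):
a_length = 1.91    # length of minute wave a
b_end    = 25.83   # end of minute wave b = start of minute wave c

print(round(fib_target(b_end, a_length, 1.0), 2))    # 27.74 (equality with a)
print(round(fib_target(b_end, a_length, 1.618), 2))  # 28.92 (1.618 extension)
```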
{"url":"https://elliottwavegold.com/2014/03/gdx-elliott-wave-technical-analysis-7th-march-2014/","timestamp":"2024-11-11T03:34:10Z","content_type":"text/html","content_length":"40274","record_id":"<urn:uuid:28a1e84e-9cc4-4f26-b4bb-5587721d00ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00315.warc.gz"}
Dr Karl's Great Moments In Science No More Butterfly Effect A butterfly flapping its wings in the Amazon causing a tidal wave in Japan is about as likely to happen as a three day weather forecast being spot on. But if we could just tweak those climate models into shape, we'd finally be able to believe what the weatherfolk say about fine weekends. One of the great modern science stories is the so-called "Butterfly Effect". It suggests that the weather is so sensitive to tiny changes, that something as microscopic as a butterfly flapping its wings in Brazil could set off a tornado in Texas. It's a great bit of Pop Science that has entered the common consciousness - but it's probably wrong. Weather is that stuff that happens in the 5 million billion tonnes of air and water vapour that wraps around our planet in a thin layer. Weather is big business on our planet. According to the World Meteorological Organisation, accurate weather forecasts improve the global economy by about $80 billion each year. Every time an aeroplane flight cancellation is avoided, that saves around $80,000 - and every time a flight is not diverted, that saves $300,000. The modern science of weather predictions probably began in 1913, with the pacifist, physicist and mathematician, Lewis Fry Richardson. World War I broke out the next year, and he found a way to help without violating his personal beliefs - he enlisted as an ambulance driver with the French Army. In his spare time, he would sit down and work out tens of thousands of laborious pencil-and-paper weather calculations. A Norwegian meteorologist had already published very detailed weather data for an area in and around central Germany on May 20, 1910 - some four years earlier. Richardson knew what the weather turned out to be, and he was trying to develop a mathematical model that could successfully use this data to "predict" what actually turned out. But he never could get his model to Richardson thought it was because he didn't have enough data. He proposed to divide the surface of the Earth into tens of thousands of little cells, and gather all possible weather data from each cell. He wrote about this in 1922 in his book called Weather Prediction By Numerical Process. Unfortunately, it was impossible to do the calculations fast enough by pencil and paper. But then came the Second World War and "unbustable" German war codes and the Atom Bomb - and computers were invented to solve both those problems. In 1950, John von Neumann, one of the fathers of modern computing, realised that his computers were fast enough to solve Richardson's weather problem. By 1953, the ENIAC computer at Princeton University had run Richardson's equations to make moderately successful predictions of the weather. And so the modern age of weather prediction was born. Today we have a massive network of weather stations on land and buoys at sea, planes and balloons in the air, and satellites looking down from space. They all gather data to feed into these increasingly sophisticated mathematical models of the weather. But in 1972, Ed Lorenz, a meteorologist at the Massachusetts Institute of Technology said it might be impossible to be truly accurate. He was the first to point out the role of Chaos in weather forecasting, and he came up with that imaginative example of the butterfly wing in Brazil. In fact, he invented the term, "butterfly effect". Now here's a very important point. With Chaos Theory, the error starts small and then gets bigger with time and then gets huge. 
But this is not, repeat NOT, what happens with the weather. In weather forecasts, the error becomes very large very rapidly, and then begins to tail off - so most of the error in the weather forecasts is not related to Chaos Theory. This really bothered David Orrell, a mathematician at University College London. He and his fellow mathematicians started thinking about what would happen if the actual mathematical models that the meteorologists use to predict the weather were wrong. They proved a mathematical theorem that predicted exactly how, if a model really was wrong, its errors would grow as time progressed. In fact, these errors should follow a "Square Root Law" - growing very rapidly at first, and then slowing down after a few days. And believe it or not, this is how the errors in the weather forecasts actually behave. In other words, according to David Orrell, the main thing stopping us from getting accurate weather forecasts three days down the line is not the Butterfly Effect (which is real), but the errors in the models. His theory can't say where the errors are, only that there are errors. And once the mathematicians and the meteorologists get together and come up with better models of the weather, they should be able to make dead accurate forecasts up to three days down the line. The Chaos effects will then begin to kick in after about a week or so. Now David Orrell might be wrong, or he might be right. But if he is right, the meteorologists shouldn't feel too worried. After all, trying to mathematically model the 5 million billion tonnes of turbulent atmosphere and water vapour is probably one of the most difficult computing problems ever attempted in the history of the human race. The only thing that we can be sure of is that the weather will always give us something to talk about...

Published 18 March 2002

© 2024 Karl S. Kruszelnicki Pty Ltd
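To see the difference between the two regimes, here is a toy comparison (our illustration, not Orrell's actual result): square-root error growth E(t) = a√t against chaotic exponential growth E(t) = E0·exp(t/τ), with constants chosen only to make the shapes visible.

```python
import math

def sqrt_law_error(t_days, a=1.0):
    """Model-error growth: fast at first, then flattening out."""
    return a * math.sqrt(t_days)

def chaotic_error(t_days, e0=0.05, tau=2.0):
    """Butterfly-effect growth: tiny at first, then exploding."""
    return e0 * math.exp(t_days / tau)

for t in range(0, 11, 2):
    print(f"day {t:>2}: sqrt-law {sqrt_law_error(t):5.2f}   chaos {chaotic_error(t):7.2f}")

# With these constants the square-root term dominates for the first few days
# and the exponential term only takes over about a week out, which is the
# shape of the argument in the article.
```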
{"url":"https://www.abc.net.au/science/articles/2002/03/18/436385.htm?site=science/greatmomentsinscience","timestamp":"2024-11-08T08:42:06Z","content_type":"text/html","content_length":"39565","record_id":"<urn:uuid:16b02c6f-29d3-49ce-a999-2e974194b15e>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00588.warc.gz"}
How Many Photons Does the Sun Emit in a Day?

The Sun, a massive ball of hot plasma that lies at the center of our solar system, is a constant source of wonder and fascination. It provides us with light and warmth and is essential for the existence of life on Earth. But have you ever wondered just how many photons the Sun emits in a single day? How many particles of light are produced by this celestial body that holds such importance in our lives? In this article, we will delve into the intriguing world of solar radiation and attempt to answer these questions. We will explore the concept of photons, the fundamental particles of light, and how they are generated by the Sun. By examining the vast amount of energy the Sun releases each day, we can begin to comprehend the sheer magnitude of its influence on our planet and its intricate role in sustaining life as we know it. So, join us on this scientific quest as we unravel the mysteries behind the Sun's photon emissions and gain a deeper appreciation for the astronomical wonders that surround us.

What is a photon?

A. Definition of a photon

In order to understand the number of photons emitted by the Sun, it is essential to have a clear understanding of what a photon actually is. A photon is the fundamental particle of light and other forms of electromagnetic radiation. It can be thought of as a discrete "packet" of energy that behaves both as a particle and a wave. As the quantum of electromagnetic energy, photons carry energy and momentum, and they can interact with matter.

B. Importance of photons in electromagnetic radiation

Photons play a crucial role in electromagnetic radiation as they are the carriers of energy and allow for the transmission of light and other forms of electromagnetic waves. In the context of the Sun, photons are crucial in the process of energy production and the subsequent emission of light and heat. Understanding the number of photons emitted by the Sun can provide valuable insights into the energy output and overall behavior of our nearest star.

The Sun's energy source

The Sun, like other stars, produces its energy through the process of nuclear fusion. This fusion process involves the conversion of hydrogen atoms into helium atoms, releasing an enormous amount of energy in the form of light and heat, and it sustains the Sun's immense heat and radiation output. Photons play a crucial role in the Sun's energy production. A.
Overview of the Sun's fusion process

In the core of the Sun, where temperatures reach millions of degrees Celsius, hydrogen nuclei collide with each other at high speeds. Through a series of fusion reactions, four hydrogen nuclei combine to form one helium nucleus. This process releases a tremendous amount of energy in the form of gamma rays, which are then converted into photons.

B. Role of photons in the Sun's energy production

Once created, the photons undergo a process called radiative transfer, where they continuously interact with other particles in the Sun's core. This interaction causes the photons to change their direction and energy levels multiple times before eventually escaping the core. These high-energy photons travel through the Sun's outer layers, known as the radiative zone, where they are gradually absorbed and re-emitted by atoms. This process occurs as the photons make their way towards the Sun's surface, constantly being scattered and absorbed by various elements present in the Sun. As the photons reach the Sun's surface, also known as the photosphere, they are released into space as visible light and other forms of electromagnetic radiation. This continuous emission of photons from the Sun's surface is what provides the Earth with light and heat, making life on our planet possible.

Understanding the Sun's fusion process and the role of photons in energy production is of significant importance. It allows scientists to comprehend the mechanisms behind the Sun's stability and longevity as well as its impact on our planet. Furthermore, it provides insights into the processes occurring in other stars and helps scientists in their quest to understand the universe.

In conclusion, the Sun's energy production relies heavily on the generation and emission of photons. Through the process of nuclear fusion, the Sun converts hydrogen into helium, releasing photons that eventually reach the Earth as light and heat. Understanding this process is crucial for various scientific endeavors, from studying the Sun's behavior to exploring the possibilities of harnessing solar energy.

How many photons does the Sun emit per second?

A. Calculation of the Sun's photon emission rate

Photons, the fundamental particles of light, play a crucial role in understanding the Sun's energy production. To determine the number of photons emitted by the Sun per second, several factors must be considered. The Sun's total power output is estimated to be approximately 3.8 × 10^26 watts. To calculate the photon emission rate, we need to use the relation between power and the energy of individual photons. The energy of a photon can be determined using Planck's equation, E = hf, where E is the energy, h is Planck's constant (approximately 6.626 × 10^-34 joule-seconds), and f is the frequency of the light. Since the Sun emits a broad range of wavelengths, we would in principle integrate over the entire spectrum; as a simple estimate, by considering the average wavelength of visible light (around 550 nm), we can calculate the frequency f using the speed-of-light formula c = fλ, where c is the speed of light (approximately 3 × 10^8 meters per second) and λ is the average wavelength. Substituting the values into Planck's equation, we find the energy of an individual photon: E = hc/λ ≈ 3.6 × 10^-19 joules. Dividing the Sun's total power output by the energy of a single photon gives the number of photons emitted per second: 3.8 × 10^26 W ÷ 3.6 × 10^-19 J ≈ 1 × 10^45 photons per second.

B. Factors affecting the number of photons emitted

The number of photons emitted by the Sun per second can vary due to several factors. First, solar activity and sunspot cycles influence the Sun's overall energy output, thus affecting the number of photons emitted. During periods of increased activity, such as solar flares, more photons are emitted compared to calm periods. Additionally, the Sun's temperature affects photon emission. Higher temperatures result in a greater number of photons emitted compared to lower temperatures. As the Sun's core temperature changes over time, the photon emission rate also fluctuates. Furthermore, the composition of the Sun's atmosphere, particularly elements like helium and hydrogen, can affect photon emission. The abundance of these elements can alter the Sun's energy production, consequently impacting the number of photons emitted.

Understanding the factors influencing the number of photons emitted by the Sun is crucial for comprehending solar dynamics and its impact on the Earth. By studying these factors, scientists can gain valuable insights into solar phenomena and their implications on our planet. In the next section, we will explore how the Earth-Sun distance affects the distribution of photons and how it relates to the total number of photons received on Earth.
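A direct transcription of that estimate (the 550 nm average wavelength is the article's simplifying assumption):

```python
h = 6.626e-34        # Planck constant, J*s
c = 3.0e8            # speed of light, m/s
wavelength = 550e-9  # assumed average wavelength, m
power = 3.8e26       # solar power output, W

photon_energy = h * c / wavelength    # ~3.6e-19 J per photon
per_second = power / photon_energy    # ~1.05e45 photons per second

print(f"{photon_energy:.2e} J/photon")
print(f"{per_second:.2e} photons/s")
print(f"{per_second * 60:.2e} photons/minute")
print(f"{per_second * 3600:.2e} photons/hour")
print(f"{per_second * 86400:.2e} photons/day")
```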
First, solar activity and sunspot cycles influence the Sun’s overall energy output, thus affecting the number of photons emitted. During periods of increased activity, such as solar flares, more photons are emitted compared to calm periods. Additionally, the Sun’s temperature affects photon emission. Higher temperatures result in a greater number of photons emitted compared to lower temperatures. As the Sun’s core temperature changes over time, the photon emission rate also fluctuates. Furthermore, the composition of the Sun’s atmosphere, particularly elements like helium and hydrogen, can affect photon emission. The abundance of these elements can alter the Sun’s energy production, consequently impacting the number of photons emitted. Understanding the factors influencing the number of photons emitted by the Sun is crucial for comprehending solar dynamics and its impact on the Earth. By studying these factors, scientists can gain valuable insights into solar phenomena and their implications on our planet. In the next section, we will explore how the Earth-Sun distance affects the distribution of photons and how it relates to the total number of photons received on Earth. The Earth-Sun distance and photon distribution Explanation of the inverse square law The Earth’s distance from the Sun plays a significant role in determining the distribution of photons emitted by the Sun. This distribution can be understood through the inverse square law, which states that the intensity of radiation decreases as the distance from the source increases. In simpler terms, this law means that the further away an object is from a source of light, the less light it receives. The inverse square law can be mathematically represented as follows: If an object is twice as far away from a light source, it will receive only a quarter of the light intensity compared to when it was half the distance away. This relationship holds true for any distance from a light source, including the distance between the Earth and the Sun. Effects of the Earth-Sun distance on photon distribution Considering the Earth-Sun distance in relation to photon distribution, it is crucial to note that the Earth’s elliptical orbit causes variations in this distance throughout the year. When the Earth is closest to the Sun (perihelion), the distance is about 147 million kilometers (91 million miles). Conversely, when the Earth is farthest from the Sun (aphelion), the distance is approximately 152 million kilometers (94.5 million miles). The variation in Earth-Sun distance affects the number of photons reaching the Earth’s surface. During perihelion, the Earth receives about 7% more solar energy compared to aphelion due to the reduced distance traveled by the photons. This results in a slightly higher intensity of photons reaching the Earth. Conversely, during aphelion, the Earth receives a slightly lower intensity of photons due to the increased distance they have traveled. Despite these fluctuations, the total number of photons emitted by the Sun in a day remains constant. It is the distribution of these photons that changes based on the Earth-Sun distance. In conclusion, understanding the effects of the Earth-Sun distance on photon distribution is crucial for studying solar radiation, climate patterns, and the Earth’s overall energy balance. It also provides insights into the variations in photon intensity and their influence on various aspects of life on Earth, such as plant growth, temperature patterns, and climate changes. 
Furthermore, these considerations are essential for optimizing the efficiency of solar panels and other applications that rely on harnessing solar energy.

How many photons does the Sun emit in a minute?

Calculation based on the Sun’s photon emission rate

To determine how many photons the Sun emits in a minute, we first need to establish the Sun’s photon emission rate. Carrying out the calculation outlined in the previous section, dividing the Sun’s power output of about 3.8 x 10^26 watts by the energy of a typical 550 nm photon (about 3.6 x 10^-19 joules), gives an emission rate of roughly 1.05 x 10^45 photons per second.

To calculate the number of photons emitted in a minute, we multiply the emission rate by the number of seconds in a minute. There are 60 seconds in a minute, so the calculation is:

1.05 x 10^45 photons/second x 60 seconds/minute ≈ 6.3 x 10^46 photons/minute

Therefore, the Sun emits roughly 6.3 x 10^46 photons in a single minute. It is important to note that this is an estimate based on the Sun’s average photon emission rate and a single representative wavelength; the actual number of photons emitted in a minute varies slightly with solar activity and fluctuations in the fusion process within the Sun.

Understanding the number of photons emitted by the Sun in a minute allows us to comprehend the immense energy output of our star. These photons travel through space at the speed of light, providing the Earth with the radiant energy necessary to sustain life and drive various atmospheric and geological processes.

Furthermore, this knowledge is essential for researchers and scientists working in various fields, including astronomy and solar energy. By quantifying the number of photons emitted by the Sun, astronomers can better understand the overall energy balance within the solar system and study phenomena such as solar flares and coronal mass ejections. In the field of solar energy, knowing the photon emission rate of the Sun helps in designing and optimizing solar panels for maximum energy conversion: by studying the relationship between photon flux and solar panel efficiency, researchers can improve the performance of solar energy systems, leading to more sustainable and cost-effective renewable energy solutions.

In conclusion, the Sun emits an astonishing number of photons in just one minute, providing a tremendous amount of energy to sustain life on Earth. Studying the photon emission rate of the Sun is crucial for various scientific and technological applications, allowing us to better understand our star and harness its energy for the benefit of humanity.

How many photons does the Sun emit in an hour?

Calculation based on the Sun’s photon emission rate

In the previous section, we discussed how many photons the Sun emits in a minute. Now let’s extend this to an hourly rate, using the same emission rate of roughly 1.05 x 10^45 photons per second.

There are 60 minutes in an hour, with each minute consisting of 60 seconds, for a total of 3,600 seconds in an hour. Using this conversion factor, the calculation is:

1.05 x 10^45 photons/second x 3,600 seconds/hour ≈ 3.8 x 10^48 photons/hour

Therefore, the Sun emits roughly 3.8 x 10^48 photons in just one hour.
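A quick numeric check of these figures, using the same 550 nm single-wavelength approximation as above (the true rate depends on the full solar spectrum, so treat the results as order-of-magnitude estimates):

```python
# Rough estimate of the Sun's photon emission rate, treating every photon
# as an "average" 550 nm visible photon. This is an approximation: a
# precise answer requires integrating over the full solar spectrum.
h = 6.626e-34        # Planck's constant, J*s
c = 3.0e8            # speed of light, m/s
P_sun = 3.8e26       # solar power output, W
wavelength = 550e-9  # assumed average wavelength, m

E_photon = h * c / wavelength  # ~3.6e-19 J per photon
rate = P_sun / E_photon        # photons per second, ~1.05e45

print(f"per second: {rate:.2e}")
print(f"per minute: {rate * 60:.2e}")
print(f"per hour:   {rate * 3600:.2e}")
print(f"per day:    {rate * 86400:.2e}")
```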
Implications and importance of studying solar photon emission

Understanding the number of photons emitted by the Sun in an hour is crucial for various reasons. Firstly, it helps us comprehend the immense energy output of the Sun, which plays a fundamental role in sustaining life on Earth. By quantifying the rate at which photons are emitted, scientists can better understand the Sun’s overall energy production and its impact on our planet.

Moreover, studying solar photon emission is also relevant in the field of solar energy. Solar panels convert sunlight into electricity by capturing photons and exciting electrons within the panel’s materials. By investigating the number of photons emitted by the Sun, researchers can optimize solar panel design and improve their efficiency in harnessing solar energy.

Furthermore, astronomers rely on studying the intensity and distribution of photons from various celestial bodies, including the Sun. By analyzing the photons emitted by the Sun, scientists can gain insights into the Sun’s composition, evolution, and behavior. This information is crucial for understanding other stars and galaxies, as the Sun serves as a nearby and accessible example of a star.

In conclusion, the Sun emits a staggering number of photons in just one hour. This emission rate has profound implications for our understanding of solar energy production, as well as its relevance in the field of astronomy. By delving deeper into solar photon emission, scientists can gain a better understanding of the Universe and potentially unlock new advancements in solar energy technology.

How many photons does the Sun emit in a day?

Calculation based on the Sun’s photon emission rate

To determine the number of photons emitted by the Sun in a day, we can start from the Sun’s photon emission rate of roughly 1.05 x 10^45 photons per second, as established earlier. Multiplying this value by 60 (the number of seconds in a minute) gives about 6.3 x 10^46 photons per minute; similarly, an hour yields about 3.8 x 10^48 photons. Multiplying the hourly figure by 24 (the number of hours in a day) yields approximately 9.1 x 10^49 photons per day.

Factors affecting the number of photons emitted

It is important to note that the actual number of photons emitted by the Sun can vary due to several factors. One such factor is solar activity, which affects the Sun’s energy production and, consequently, the number of photons emitted. Solar flares and sunspots can influence the Sun’s emission rate.

Additionally, the Sun’s distance from the Earth, as explained earlier, plays a role in how many of those photons reach us. The inverse square law states that the intensity of radiation decreases as the square of the distance increases, so if the Earth-Sun distance changes, the number of photons reaching the Earth’s surface changes with it (though the Sun’s total output does not).

Other factors include variations in the Sun’s temperature and composition, which can impact the Sun’s fusion process and, subsequently, its photon emission rate.

In conclusion, the Sun emits an astounding number of photons in a day. Based on its photon emission rate, the Sun releases roughly 9.1 x 10^49 photons every 24 hours. However, it is crucial to recognize that this value is an estimate and can be influenced by solar activity and other factors, while the Earth-Sun distance affects only the share of those photons that reaches us.
Studying the number of photons emitted by the Sun helps us better understand its energy production and its impact on various fields of study, from solar panels to astronomical research.

Photon density on Earth’s surface

A. Calculation of photon density at the Earth’s surface

Understanding the density of photons at Earth’s surface is crucial in various fields of study, including solar energy and environmental science. By calculating the photon density at the Earth’s surface, scientists can gain insight into the distribution and intensity of sunlight, which has significant implications in multiple applications.

To calculate the photon density at the Earth’s distance from the Sun, we can start with the number of photons emitted by the Sun in a day, roughly 9.1 x 10^49 as determined in the previous section. These photons spread over a sphere whose radius equals the distance between the Earth and the Sun, approximately 149.6 million kilometers.

Using the formula for the surface area of a sphere, A = 4πr², and converting the radius to meters, r = 149.6 x 10^9 meters, we get:

A = 4π(149.6 x 10^9 m)² ≈ 2.81 x 10^23 square meters

Next, we divide the total number of photons emitted by the Sun in a day by the surface area of the sphere to obtain the photon flux at the Earth’s distance:

Photon density ≈ (9.1 x 10^49 photons) / (2.81 x 10^23 m²) ≈ 3.2 x 10^26 photons per square meter per day

This is the flux arriving at the top of the atmosphere; the number reaching the ground is lower, since the atmosphere absorbs and scatters part of the incoming light. (A short numeric check follows at the end of this section.)

B. Importance of understanding photon density

Understanding the photon density at Earth’s surface is vital in various applications. In the field of solar energy, knowing the photon density allows scientists and engineers to optimize the design and efficiency of solar panels. By determining the number of photons incident on a solar panel, engineers can estimate the panel’s potential electricity generation capacity. Additionally, knowledge of photon density aids in predicting and modeling the performance of solar energy systems.

In environmental science, photon density plays a crucial role in understanding and studying ecosystems. By quantifying the amount of sunlight reaching different areas, researchers can assess the availability of energy for photosynthesis and its impact on plant growth. Photon density is also a key parameter in studying the effects of sunlight on climate, weather patterns, and atmospheric processes.

Furthermore, photon density is essential in astronomy research. By analyzing the intensity and distribution of photons from celestial objects, astronomers can gain insight into the composition, temperature, and motion of stars, galaxies, and other astronomical bodies. Understanding photon density helps astronomers make accurate measurements and interpretations, leading to significant advancements in our understanding of the universe.

In conclusion, the calculation of photon density at the Earth’s surface provides valuable information for various disciplines. From solar energy to environmental science and astronomy, knowing the number of photons per square meter per day enables scientists to optimize technologies, study ecosystems, and unravel the mysteries of the cosmos.
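Here is the same spreading-over-a-sphere calculation as a minimal sketch, continuing the 550 nm estimate from earlier and ignoring atmospheric losses:

```python
# Photon flux at the Earth's distance from the Sun: total daily photon
# output spread over a sphere of radius 1 AU. Uses the ~1.05e45
# photons/second estimate from before; atmospheric absorption is ignored.
import math

rate = 1.05e45                     # photons per second (550 nm approximation)
photons_per_day = rate * 86400     # ~9.1e49
r = 149.6e9                        # Earth-Sun distance in meters (1 AU)
area = 4 * math.pi * r**2          # ~2.81e23 m^2

density = photons_per_day / area   # photons per m^2 per day
print(f"{density:.2e} photons per square meter per day")  # ~3.2e26
```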
Applications of photon counting

A. Photon counting in solar panels

Solar panels are devices that convert sunlight into electricity through a process known as the photovoltaic effect. Understanding the number of photons emitted by the Sun plays a crucial role in optimizing the efficiency of solar panels.

Photons are the fundamental particles of light, and when they strike the surface of a solar panel, they are absorbed by the semiconductor material within the panel. This absorption frees electrons, creating an electric current that can be harnessed for various purposes.

Photon counting is utilized in the design and development of solar panels to determine the number of photons that can be captured and converted into electrical energy. By accurately measuring the incoming photon flux, engineers can optimize the panel’s construction and performance to maximize energy generation. Furthermore, photon counting helps researchers analyze the performance of solar panels over time: by continuously monitoring the number of photons reaching the panel and comparing it with the electricity produced, any degradation or inefficiencies can be identified and remedied.

B. Photon counting in astronomy research

Astronomers rely on photons to study and understand the universe. By counting and analyzing the photons that reach Earth from distant celestial objects, scientists can gather valuable information about their composition, temperature, distance, and motion.

In astronomical observations, telescopes capture photons emitted or reflected by celestial objects. The captured light is then analyzed using various techniques, including photon counting. By quantifying the number of photons detected, astronomers can determine the brightness, or flux, of the source. This information provides insight into the object’s characteristics and allows for comparisons with theoretical models.

Photon counting is particularly important in studying transient events such as supernovae or gamma-ray bursts, where the number of photons emitted can vary dramatically over short periods. By accurately measuring and counting the photons during these events, scientists can gain a better understanding of the underlying physical processes and phenomena. Additionally, photon counting is used in spectroscopy, where the distribution of photons across wavelengths is analyzed: spectroscopic studies help astronomers determine the chemical composition, temperature, and motion of celestial objects from the unique patterns in their emitted or absorbed light.

In conclusion, photon counting plays a crucial role in both solar panel optimization and astronomical research. By accurately quantifying the number of photons emitted by the Sun and detected from celestial sources, scientists can advance our understanding of the universe and harness solar energy more effectively.

In conclusion, the Sun emits a staggering number of photons in a single day, highlighting the immense energy output of our star. Through calculations based on the Sun’s photon emission rate, it can be estimated that the Sun emits roughly 9.1 x 10^49 photons per day.

The understanding of photon emission by the Sun is crucial for several reasons. Firstly, photons are the fundamental particles of electromagnetic radiation, and studying their emission from the Sun helps us understand the nature of light and radiation in astronomy. By comprehending the number of photons emitted, scientists can gain insights into the Sun’s energy production and its impact on the solar system.
The Sun’s energy source is a process called fusion, in which hydrogen nuclei combine to form helium. During this process, photons play a vital role in carrying the energy released by fusion reactions. Estimating the number of photons emitted by the Sun allows scientists to better understand the mechanisms behind energy generation in stars.

The Earth-Sun distance also plays a significant role in photon distribution. According to the inverse square law, the intensity of radiation decreases as the square of the distance from the source increases. Therefore, the flux of photons from the Sun at Earth’s surface is affected by the distance between the two bodies. Understanding this relationship helps us comprehend variations in the energy received from the Sun at different times and locations on Earth.

Furthermore, calculating photon density at the Earth’s surface provides valuable information for various applications. One such application is the design and efficiency of solar panels: by understanding the density of photons reaching the Earth, scientists and engineers can develop more efficient solar panels to harness solar energy.

Photon counting also plays a crucial role in astronomy research. By counting and analyzing the number of photons received from celestial objects, scientists can gather information about an object’s properties, such as temperature, composition, and distance. This data is invaluable for expanding our knowledge of the universe.

In summary, the Sun emits an astounding number of photons in a single day, contributing to our understanding of electromagnetic radiation and the Sun’s energy production. The estimation of photon emission, along with considerations of the Earth-Sun distance and photon density, has important implications for various scientific and technological fields. By further studying solar photon emission, scientists can continue to unlock the mysteries of our Sun and its impact on our planet and beyond.
{"url":"https://thetechy.life/how-many-photons-does-the-sun-emit/","timestamp":"2024-11-08T04:22:44Z","content_type":"text/html","content_length":"103590","record_id":"<urn:uuid:5fa8c892-4cc3-4c4b-8db0-05e6e1f7a3c4>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00135.warc.gz"}
Probability – Types of Events

Event: An event is a subset of a sample space.

1. Simple event: An event containing only a single sample point is called an elementary or simple event.
2. Compound events: Events obtained by combining together two or more elementary events are known as the compound events or decomposable events.
3. Equally likely events: Events are equally likely if there is no reason for an event to occur in preference to any other event.
4. Mutually exclusive or disjoint events: Events are said to be mutually exclusive or disjoint or incompatible if the occurrence of any one of them prevents the occurrence of all the others.
5. Mutually non-exclusive events: The events which are not mutually exclusive are known as compatible events or mutually non-exclusive events.
6. Independent events: Events are said to be independent if the happening (or non-happening) of one event is not affected by the happening (or non-happening) of others.
7. Dependent events: Two or more events are said to be dependent if the happening of one event affects (partially or totally) the other event.

Mutually exclusive and exhaustive system of events: Let S be the sample space associated with a random experiment. Let A_1, A_2, ..., A_n be subsets of S such that
(i) A_i ∩ A_j = ∅ for i ≠ j, and
(ii) A_1 ∪ A_2 ∪ ... ∪ A_n = S.
Then the collection of events A_1, ..., A_n is said to form a mutually exclusive and exhaustive system of events.

If E_1, E_2, ..., E_n are elementary events associated with a random experiment, then
(i) E_i ∩ E_j = ∅ for i ≠ j, and
(ii) E_1 ∪ E_2 ∪ ... ∪ E_n = S.
So, the collection of elementary events associated with a random experiment always forms a mutually exclusive and exhaustive system of events. In this system,

P(A_1 ∪ A_2 ∪ ... ∪ A_n) = P(A_1) + P(A_2) + ... + P(A_n) = 1
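To make the last identity concrete, here is a small sketch using a fair six-sided die (a hypothetical choice of experiment) to check that the elementary events are pairwise disjoint, exhaust the sample space, and have probabilities summing to 1:

```python
# The elementary events of one fair die roll form a mutually exclusive
# and exhaustive system: pairwise disjoint, union equal to S, total
# probability 1.
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
elementary_events = [{k} for k in sample_space]

# (i) pairwise disjoint
assert all(a.isdisjoint(b)
           for a in elementary_events
           for b in elementary_events if a is not b)
# (ii) exhaustive: their union is the whole sample space
assert set().union(*elementary_events) == sample_space

# probabilities of elementary events sum to 1
p = {k: Fraction(1, 6) for k in sample_space}
assert sum(p[k] for k in sample_space) == 1
```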
{"url":"https://cbselibrary.com/probability-types-events/","timestamp":"2024-11-10T01:54:39Z","content_type":"text/html","content_length":"50793","record_id":"<urn:uuid:65b3c796-b175-493e-bb10-8b9416b597ea>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00764.warc.gz"}
By Kenji Fukaya

Recently there has been a discussion among mathematicians, as well as in the press and several blogs, covering the developments in symplectic geometry. Professor Fukaya expressed interest in giving his opinion and we are happy to present it here:

The set of solutions of the equation x^2 + y^2 – z^2 = 0 has a ‘singularity’ at the point (x,y,z) = (0,0,0). On the other hand, x^2 + y^2 – z^2 = -1 has no such singularity. (See the figure.) Singular objects, such as x^2 + y^2 – z^2 = 0, are very popular in algebraic geometry, the branch of mathematics that studies the spaces defined by algebraic equations. On the contrary, it is always difficult to include ‘singular spaces’ in differential geometry or topology. A ‘manifold’ is the main target of research in those fields. The notion of a manifold goes back to Riemann, who introduced this concept as a space which locally looks like Euclidean space everywhere. In other words, a ‘manifold’ is a space that has no singularity. It seems to me that there is still no reason to change this situation and shift the main focus of the field to the study of singular spaces. However, for the purpose of researching a manifold in differential geometry or topology, it also becomes important to study certain ‘singular spaces’ as a tool of the study.

In the 1970s, studying nonlinear differential equations on a manifold became an important part of differential geometry. In the 1980s it became important to research not only an individual solution of a nonlinear differential equation, but also the set of all solutions, as a whole. Such a set is called a ‘moduli space’. Using moduli spaces in differential geometry or topology was especially successful in two areas. One is the mathematical study of gauge theory and its application to the topology of low-dimensional manifolds (such a study was initiated by S. Donaldson), and the other is the theory of pseudo-holomorphic curves in symplectic geometry (which was initiated by M. Gromov). Symplectic geometry is an area that started as a geometric study of the equations of classical mechanics.

Moduli spaces appeared in algebraic geometry much earlier than in differential geometry. When the foundation of modern algebraic geometry was built by the works of A. Weil, O. Zariski, A. Grothendieck, etc., the study of moduli spaces was one of the important reasons why extremely singular objects were included among the spaces to be researched.

In the differential-geometric study of moduli spaces, people need to extract certain algebraic information from moduli spaces. The simplest information to be extracted is the number of points of such a space. Later on people started to extract more sophisticated information from these spaces, especially A. Floer, who found several versions of ‘Floer homology’, where he obtained groups rather than numbers from moduli spaces. With time, adding more structures to Floer homology became an important area of research and produced many applications.

To ‘count’ the number of points of a moduli space is actually a tricky problem. The number of solutions of the equation x^2 = 0 is naively one, since 0 is the only solution. However, it is more natural to regard it as two, since for any ϵ ≠ 0 the equation x^2 = ϵ has exactly two solutions. In the world of algebraic geometry such a way to ‘count’ the number of points is known as ‘intersection theory’, and is closely related to the foundations of algebraic geometry. In the realm of differential geometry, singular objects are harder to study.
In the 1980s people used a rather ad-hoc method to find the correct ‘count’ of the number of points of a moduli space. When the study of moduli spaces in symplectic geometry made much progress, it became necessary to find a more systematic way of such ‘counting’. In 1996 several groups of mathematicians found it. We (K.F. and K. Ono) were one of them. Other groups included G. Tian, J. Li, G. Liu, Y. Ruan, and B. Siebert. This method is now called the ‘virtual technique’.

By this method, two of the important problems of the field were solved. One is Arnold’s conjecture about the number of periodic solutions of Hamilton’s equation, an ordinary differential equation appearing in classical mechanics. The other is the construction of the Gromov-Witten invariant, which is a basic invariant in the ‘topological version’ of string theory. When certain Floer homology is nonzero, it implies existence of a periodic solution of Hamilton’s equation. The Gromov-Witten invariant is a ‘count’ of the number of solutions of a differential equation, the non-linear Cauchy-Riemann equation. These two problems had previously been solved under certain additional assumptions. With the new way to ‘count’ the number of points, it became possible to solve them in complete generality, in 1996.

When we found it, I believed this method would become a basic tool of the field. During the years 2000-2010, several important works used our version of the virtual technique, which we called Kuranishi structure. J. Solomon’s and Melissa Liu’s important PhD theses both studied ‘open string’ analogues of the Gromov-Witten invariant in two different situations and used Kuranishi structures. Ono used it to solve the Flux conjecture, a famous open problem in symplectic geometry. Fan-Jarvis-Ruan built an important new theory, which they called ‘quantum singularity theory’. In the technical part of their theory, they used Kuranishi structures.

However, using Kuranishi structures did not become the standard of the field. Y. Eliashberg, H. Hofer, and A. Givental proposed a theory which they called Symplectic Field Theory. It uses the same kind of moduli spaces for its foundation. Hofer, together with K. Wysocki and E. Zehnder, was building a version of the virtual technique. They called it Polyfold. In those days, various people working in symplectic geometry mentioned on various occasions that Polyfold theory would soon be complete, becoming the standard by which all the previous approaches would be replaced.

I thought there could be many different approaches, each of which had its own advantages, to establish ‘the foundations of symplectic geometry’. On the other hand, for us (myself and my collaborators Y.-G. Oh, H. Ohta and Ono) the only way to persuade people of the importance of our approach was to continue working and to produce more applications.

As I mentioned, putting more structure on Floer homology was an important direction of our research. For this purpose, we needed to improve our method so that it could be safely used in more difficult situations. We were working on ‘Lagrangian Floer theory’, which is a version of Floer homology related to ‘open strings’ and ‘D-branes’. Our study was completed in 2009 and we wrote a two-volume research monograph. Soon after that, this theory was generalized by M. Akaho and D. Joyce. Joyce was not satisfied with our version of the virtual technique and started a project to rewrite it. His way had various advantages compared to ours (and ours also had various advantages compared to his approach).
We continued working on Lagrangian Floer theory. Our book in 2009 provided a foundation for the theory, but it did not contain many concrete calculations of the Floer homology we produced. For example, the notion called ‘bounding cochain’ is a major player in our theory. However, in 2009 the only example of a useful bounding cochain we knew was 0. Later on, we found the first example of a bounding cochain which is far from 0 and is useful in the study of symplectic geometry. It was a solution of the equation x^3 – x – T^α = 0 and is a complicated power series in T. A bounding cochain is a parameter used to deform Floer homology. We found that only at that particular value does the Lagrangian Floer homology of a certain space (the connected sum CP^2 # \overline{CP^2}) become nonzero. I was very happy when we found that an abstract notion, ‘bounding cochain’, which is difficult to define and hard to calculate, actually has highly nontrivial examples and is useful in symplectic geometry.

We thought that those generalizations and applications clarified the importance of our version of the virtual technique in symplectic geometry. Around that time, some people who had been ignoring our results started asking us mathematical questions directly and suggested that there was a gap in our work. I was very happy to hear that, since serious mathematical communication with those people became possible at last, 16 years after we found it.

A Google group called ‘Kuranishi’ was started in 2012, whose moderator was Hofer. There D. McDuff and K. Wehrheim posed several questions concerning the details of our approach to the virtual technique. We stopped our research on applications and concentrated on answering their questions in as much detail as possible. We replied to all of their questions. After 6 months no more questions were asked and the Google group was terminated.

I moved from Kyoto to the Simons Center for Geometry and Physics (SCGP) in that year. In 2013-2014, together with McDuff and J. Morgan, I organized a full-year program at the SCGP on ‘foundations of symplectic geometry’. My motivation for organizing this program was to provide people an occasion to present objections or questions to the various approaches to the virtual technique. We had two conferences, two lecture series, and many seminar talks. Many people visited the SCGP during the conferences, or during various other periods of the program. During the program, Solomon presented an example which was related to a certain issue in our approach. We wrote a paper to clarify this point,[1] which was recently published. Other than that we did not hear objections to our approach. Hofer and Joyce gave series of interesting talks presenting their approaches.

There were various other approaches appearing around that time. Tian, together with B. Chen, wrote a paper continuing his way of studying the virtual technique around 2005. Chen, together with B. Wang, also gave a talk on their approach at the SCGP during our program. One difference between their approach and ours is that we reduce problems to finite-dimensional geometry while they work directly in an infinite-dimensional situation. D. Yang studied the relation between our approach and Polyfold theory. J. Pardon wrote a paper that put more emphasis on the algebraic side of the story. I think all of these different research methods contain various new and significant ideas.

The whole construction of the ‘virtual technique’ consists of 3 steps. We start with a nonlinear differential equation: Analysis. We then obtain some ‘singular space’ and study it: Geometry.
Finally, we produce some algebraic structure: Algebra. If one works harder in one of those three parts, then in the other two parts the required amount of work is smaller. In the Polyfold approach, people work harder in analysis, and so less in geometry and algebra. In Pardon’s approach he works harder in algebra, and less in analysis and geometry. In our approach and Joyce’s, we work harder in geometry, and so less in analysis and algebra. The difference between our approach and Joyce’s is that we study ‘singular spaces’ in a way closer to ‘manifolds’, while Joyce studies them in a way closer to ‘schemes or stacks’, the notions appearing in the foundations of algebraic geometry. I think that, depending on mathematical taste and background, various researchers have different opinions on which version of the virtual technique is easier to understand and use. This is one reason why I think it is useful that various approaches be worked out in detail, so that each researcher can choose their favorite one.

I think at this stage, in 2017, it is becoming a consensus of the majority of the researchers of the field that, for the purpose of proving Arnold’s conjecture and constructing the Gromov-Witten invariant, all of those approaches will work. (The disagreement is mainly on when, where, and by whom this was completed. This is not related to mathematics and further discussion would be coarse and vulgar.)

I am afraid to say, however, that for more advanced parts of the virtual technique, such as those we have been developing from 2000 to the present, the consensus on its rigor, soundness, or cleanness is still missing. For example, McDuff and Wehrheim, in a paper (arXiv:1508.01560v2, page 10), said that their version of the ‘Kuranishi method’ is applicable only to the Gromov-Witten invariant. In particular, they denied its applicability to Floer homology.

The purpose of much of my research since 2000 has been to improve our version of the virtual technique and widen the scope of its applications. Various people are now working on research in symplectic geometry and related areas, such as mirror symmetry, using various versions of the virtual technique. I firmly believe that most of that research is based on sound, rigorous and clean foundations.[2] Unfortunately, this is not a consensus of the majority of the researchers of the field. Together with my collaborators, I am trying to do my best to change this situation. I believe this effort contributes to the sound development of the field.

In this article I compared ‘the foundations of symplectic geometry’ to ‘the foundations of algebraic geometry’ several times. While I do think that symplectic geometry is as important as algebraic geometry, as regards the foundations of the subjects, at this time those of algebraic geometry have existed longer, and are broader. The foundations of symplectic geometry are parallel to the part of the foundations of algebraic geometry dealing with moduli spaces. One very important aspect of ‘the foundations of algebraic geometry’ is its application to number theory. There is nothing comparable to it in ‘the foundations of symplectic geometry.’ My dream is that in the future, virtual techniques will make some serious contribution to establishing the mathematical foundations of quantum field theory. If this dream comes true it could be comparable to the application of ‘the foundations of algebraic geometry’ to number theory. One could then say that ‘the foundations of symplectic geometry’ are comparable to ‘the foundations of algebraic geometry’.
I believe there is a significant possibility that in the future this will actually happen.

[1] Shrinking good coordinate systems associated to Kuranishi structures, J. Sympl. Geom. 14 (2016).

[2] When we wrote the book on Lagrangian Floer theory in 2009, we made a few corrections to the definition of Kuranishi structure from our 1996 paper. However, none of its applications is affected by this correction. The definition we use now (2017) is equivalent to the one in our book of 2009.
{"url":"https://scgp.stonybrook.edu/archives/22091","timestamp":"2024-11-03T13:01:29Z","content_type":"text/html","content_length":"77515","record_id":"<urn:uuid:97497646-4a58-4299-abec-1ee659d821fa>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00846.warc.gz"}
@adlrocha - A gentle introduction to VDFs

Verifiable Delay Functions

It has been a while since I explored a new concept in the field of cryptography. And more so since the last time I wrote about cryptography in this newsletter (my last publications related to cryptography were my tutorial on Zokrates and the ZKP Game). Today I will share my notes from the brief research I did on a concept I’ve been hearing about for a while now, and which is being widely explored by projects in the crypto space: Verifiable Delay Functions.

Disclaimer: This publication may end up being pretty unstructured, as it is the result of transcribing the notes I took in my “non-digital” notebook while researching VDFs. I’ll do my best.

The issues with randomness

“Verifiable Delay Functions are a way to slow things down verifiably”. This is the most illustrative definition of VDFs I’ve seen (it’s from this awesome talk, by the way). But why would we want to slow things down? What is the purpose of these brand new primitives? The answer is randomness.

Finding randomness on the blockchain is hard. Smart contract executions must be deterministic, but for certain applications a bit of randomness comes in handy. To achieve this, developers try to acquire sources of randomness from information in the network such as future block hashes, block difficulty, or timestamps. Unfortunately, all these schemes have a key limitation: everyone can observe how the choices they make affect the randomness generated on-chain. In this article you have a few examples of vulnerable implementations of randomness using block information as sources of entropy.

Along with smart contracts, randomness is also important for other critical parts of the operation of blockchain systems, such as the election of leaders in proof-of-stake protocols:

“Another related problem is electing leaders and validators in proof of stake protocols. In this case it turns out that being able to influence or predict randomness allows a miner to affect when they will be chosen to mine a block. There are a wide variety of techniques for overcoming this issue, such as Ouroboros’s verifiable secret-sharing scheme. However, they all suffer from the same pitfall: a non-colluding honest majority must be present.”

With these problems in mind, Boneh, et al. introduced the concept of verifiable delay functions (VDFs) in 2018.

“VDFs are functions that require a moderate amount of sequential computation to evaluate, but once a solution is found, it is easy for anyone to verify that it is correct.”

VDFs introduce a forced time delay to an output so that malicious actors can’t influence it by predicting future values. A valid VDF must have the following properties:

• Sequential: Anyone can compute f(x) in t sequential steps, but no adversary with a large number of processors can compute it in significantly fewer steps, or distinguish the output of f(x) from random without doing the work. This is really important if we want to tackle the problems mentioned above: a malicious actor shouldn’t be able to predict the output from an intermediate state in the computation of the VDF, which would give him an advantage over “what comes next”.

• Efficiently verifiable: Given the output y, any observer can verify that y = f(x) in a short amount of time (specifically, on the order of log(t)).

And an interesting note from this article about VDFs: why can’t a hash chain be considered a VDF?

“Before jumping into VDF constructions, let’s examine why an ‘obvious’ but incorrect approach to this problem fails. One such approach would be repeated hashing.
If the computation of some hash function h takes t steps to compute, then using f = h(h(...h(x))) as a VDF would certainly satisfy the sequential requirement above. Indeed, it would be impossible to speed this computation up with parallelism since each application of the hash depends entirely on the output of the previous one. However, this does not satisfy the efficiently verifiable requirement of a VDF. Anyone trying to verify that f(x) = y would have to recompute the entire hash chain. We need the evaluation of our VDF to take exponentially more time to compute than to verify.”

A simple construction

So what would be a good candidate for a VDF? Let’s follow the example from here. The specific VDF that we are going to build is an operation over a finite group of unknown order (i.e. over a set of numbers with a clear upper limit and a specific number of elements which is unknown). A good way of building such a finite group is to define all operations over a set of numbers modulo N (where N can be something as fancy as the product of two safe primes p and q). With that in mind, our VDF can be represented as the following function:

y = [x^(2^T)] mod N

where x is the input value, hashed into the group; T is the publicly known delay parameter, which determines the duration of the delay; and y is the output. We are operating over the finite group, so we will drop the modulo N from now on (consider it implied).

With this construction, this simple function already fulfills one of the basic properties required of a valid VDF candidate: it is sequential. To compute x^(2^T) we have to square repeatedly, computing x^(2^i) for i from zero to T, and every squaring depends on the result of the previous one, reduced inside the finite group. Since we don’t know the order of the group, there is no known shortcut (such as first reducing the exponent modulo the group order), which is what forces the computation to be serial. This is also why we call T the delay parameter: we need T sequential steps to get the final output.

“The repeated squaring computation is not parallelizable and reveals nothing about the end result until the last squaring. These properties are both due to the fact that we do not know the order of G. That knowledge would allow attackers to use group theory based attacks to speed up the computation.”

OK, so we have a sequential function. Now we need to make it efficiently verifiable, so that a verifier doesn’t need to repeat all T steps. This can be done by building a proof. In this example, the prover and the verifier run an interactive protocol to validate the VDF.

To prove the VDF, the verifier picks a random number L (a prime, in Wesolowski’s construction) and sends it to the prover. The prover divides 2^T by L to get the quotient q and the remainder r, computes pi = x^q, and sends it along with y to the verifier. The verifier now computes r = (2^T) mod L and checks that y equals pi^L * x^r. The check works because pi^L * x^r = x^(q*L) * x^r = x^(q*L + r) = x^(2^T) = y. If it passes, the verifier knows the prover really computed the output of the VDF, and therefore “suffered” through the T sequential steps.

Let’s summarize the interactive protocol here for clarity:

Verifier: generates a random L and sends it to the prover.
Prover: computes (q, r) such that 2^T = q*L + r, then pi = x^q, and sends (y, pi) to the verifier.
Verifier: computes r = (2^T) mod L and checks that y = pi^L * x^r.

And voilà! VDF execution verified. This proof could also be built and verified using a non-interactive proof, but an external source of randomness would be required.
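Here is a minimal Python sketch of this construction with toy parameters. The modulus, delay, and challenge below are hypothetical stand-ins (this is not the vdf-cli implementation mentioned later): a real deployment would use a modulus whose factorization nobody knows, a much larger T, and a random prime L chosen by the verifier.

```python
# Toy repeated-squaring VDF with Wesolowski-style verification.
# The identity pi^L * x^r == x^(q*L + r) == x^(2^T) holds for any N, so
# the demo verifies correctly; *security* (no shortcut for the prover)
# additionally requires the order of the group to be unknown.

N = 1000003 * 1000000007  # stand-in modulus; real VDFs keep the factors secret
T = 10_000                # delay parameter: number of sequential squarings

def eval_vdf(x: int, T: int, N: int) -> int:
    """y = x^(2^T) mod N via T sequential squarings (not parallelizable)."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N  # each squaring needs the previous result
    return y

def prove(x: int, T: int, N: int, L: int) -> int:
    """Prover's proof: pi = x^q mod N, where q = floor(2^T / L)."""
    q = (1 << T) // L
    return pow(x, q, N)

def verify(x: int, y: int, pi: int, T: int, N: int, L: int) -> bool:
    """Verifier: cheap check using r = 2^T mod L; no T-step loop needed."""
    r = pow(2, T, L)
    return y == (pow(pi, L, N) * pow(x, r, N)) % N

x = 0xAA               # challenge input (hashed into the group in practice)
y = eval_vdf(x, T, N)  # the slow, sequential part
L = 65537              # verifier's random prime challenge (fixed here for the demo)
pi = prove(x, T, N, L)
assert verify(x, y, pi, T, N, L)
```

Note that prove() still performs on the order of T squarings (the exponent q is almost as large as 2^T), while verify() needs only a handful of modular exponentiations. That asymmetry, cheap verification of an expensive sequential computation, is exactly what the hash chain above lacks.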
Where to learn more

There have been several VDF candidates since Boneh’s proposal:

“There are currently three candidate constructions that satisfy the VDF requirements. Each one has its own potential downsides. The first was outlined in the original VDF paper by Boneh, et al. and uses injective rational maps. However, evaluating this VDF requires a somewhat large amount of parallel processing, leading the authors to refer to it as a ‘weak VDF.’ Later, Pietrzak and Wesolowski independently arrived at extremely similar constructions based on repeated squaring in groups of unknown order.”

But the best resource without a doubt to learn about everything that is happening around VDFs is VDF research. I wish I had the time to read and watch all the impressive content included in this site (I would become an expert in VDFs :) ). Here you will find the latest papers, talks, and a lot of wonderful knowledge about this field. Below you can see a figure of many subfields of VDFs. Actually, one of the fields that has ended up interesting me the most (after a glance at the content of VDF research) is the hardware-software co-design of VDFs. A good example of it can be found in this talk:

Let’s get practical

So you want to see VDFs in action, right? The easiest way I’ve found is to use this repo, which includes a Rust implementation of this kind of VDF. First you need to install the tool as follows (be sure to have Rust installed on your machine before attempting these steps):

$ sudo apt-get install -y libgmp-dev
$ git clone https://github.com/poanetwork/vdf.git
$ cargo install --path=vdf-cli

With the tool installed, you are now able to run the VDF:

$ vdf-cli aa 100

And verify its execution by pasting the output into the command as follows:

$ vdf-cli aa 100 005271e8f9ab2eb8a2906e851dfcb5542e4173f016b85e29d481a108dc82ed3b3f97937b7aa824801138d1771dea8dae2f6397e76a80613afda30f2c30a34b040baaafe76d5707d68689193e5d211833b372a6a4591abb88e2e7f2f5a5ec818b5707b86b8b2c495ca1581c179168509e3593f9a16879620a4dc4e907df452e8dd0ffc4f199825f54ec70472cc061f22eb54c48d6aa5af3ea375a392ac77294e2d955dde1d102ae2ace494293492d31cff21944a8bcb4608993065c9a00292e8d3f4604e7465b4eeefb494f5bea102db343bb61c5a15c7bdf288206885c130fa1f2d86bf5e4634fdc4216bc16ef7dac970b0ee46d69416f9a9acee651d158ac64915b

A cool thing to try (if you have the time) is to increase the delay parameter (which we set to 100 in the example above) to something high, such as 10000, and see how computing the VDF takes a significant amount of time, while verifying the result is performed almost instantly. But this is what VDFs are all about, right?

I still have a lot to learn about these cryptographic primitives and their potential uses, but this first contact with them has been pretty fun. See you next week!

Some additional visual aid

Finally, I leave you two interesting videos that will definitely explain better than me what VDFs are all about. Enjoy!
{"url":"https://adlrocha.substack.com/p/adlrocha-a-gentle-introduction-to","timestamp":"2024-11-03T21:59:31Z","content_type":"text/html","content_length":"206021","record_id":"<urn:uuid:7810b7bb-9ef4-4684-9d4d-259b8b17d9f5>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00626.warc.gz"}
What is Put Ratio Spread? An Authentic Guide To Master the Market

• By Caryl V • May 20, 2024

In the realm of options trading, savvy investors continually seek strategies to manage risk and maximize returns. One advanced tactic stands out for its nuanced approach to market movements: the put ratio spread. This article demystifies this complex strategy, revealing why and how traders use it to their advantage.

Introduction to Put Ratio Spreads

At its core, a put ratio spread involves buying and selling differing numbers of put options at various strike prices. Traders often employ this strategy when they hold a moderately bearish outlook on the underlying asset, hoping to capitalize on slight downward movements while keeping the cost of the position low.

Understanding Put Ratio Spreads

Understanding the put ratio spread begins with recognizing its two parts: the long put and the short puts. In a basic put ratio spread, you might purchase one put at a higher strike (at or in the money) while selling two out-of-the-money (OTM) puts at a lower strike. The key is that the puts sold outnumber the puts bought, creating the “ratio” that defines this strategy.

The Strategic Appeal of Put Ratio Spreads

Put ratio spreads attract traders for two primary reasons. First, they offer a lower initial cost, since the premium received from the sold puts helps fund the purchase of the long put. Second, they profit from a moderate decline while losing little (often only the net debit) if the market drifts slightly upward. However, it is important to acknowledge the risk of substantial losses if the underlying security’s price plummets: below the lower strike, the extra short put behaves like a naked put.

Setting Up a Put Ratio Spread

The setup of a put ratio spread is delicate. Selecting the right strike prices and expiration dates requires meticulous analysis. The balance of the ratio should align with the trader’s market sentiment and risk tolerance. Opting for a wider gap between strike prices can yield a larger zone of profit, but it also introduces greater risk.

Practical Examples of Put Ratio Spreads

Consider a stock currently trading at $50. A trader suspects a slight dip but not a drastic drop. The trader might buy a put with a $50 strike price while selling two $45-strike puts. If the stock declines modestly, the long put gains value while the sold puts remain out of the money. Should the stock unexpectedly rise, the loss is limited to the net cost of the spread at entry.

Adjustments and Exit Strategies

As market conditions shift, so must the trader’s positioning. It is vital to monitor the spread’s performance against price movements. Adjustments, such as rolling out to a later expiration date or repositioning the strikes, may be necessary. Similarly, developing clear rules for exiting the position, whether taking profits or cutting losses, can prevent emotional decision-making.

Pro Tips for Successful Put Ratio Spread Trading

Market conditions play a crucial role when engaging in put ratio spread trading. Periods of low volatility are often favorable, as the sold puts have a higher chance of expiring worthless, maximizing the strategy’s benefits. It is also not uncommon for seasoned traders to employ put ratio spreads in conjunction with other strategies as part of a diversified portfolio.

The Math Behind Put Ratio Spreads

Put ratio spreads, while sophisticated, follow clear mathematical guidelines for ascertaining break-even points, maximum profit, and maximum loss.
Here’s a simplified breakdown for the common one-by-two spread (buy one put at a higher strike, sell two puts at a lower strike):

Break-even Points: Determined by the strikes and the net premium. For a spread opened for a net debit, there is an upper break-even just below the long strike (long strike minus net debit) and a lower break-even at roughly twice the short strike minus the long strike, plus the net debit. A spread opened for a net credit has no upper break-even: the position simply keeps the credit if the stock closes above the long strike.

Maximum Profit: Achieved when the underlying stock’s price exactly matches the strike price of the short puts at expiry. The maximum profit equals the difference between the long put’s strike and the short puts’ strike, minus the net premium paid (or plus the net credit received).

Maximum Loss: Occurs if the stock price falls far below the strike prices of all the puts involved. The loss is large but capped, because a stock cannot fall below zero: with the stock at zero at expiry, the loss per share is twice the short strike minus the long strike, adjusted for the net premium. (A worked numeric sketch follows at the end of this article.)

Understanding these calculations helps traders make informed decisions on the entry and exit points for their put ratio spread positions, balancing potential gains against risks.

Frequently Asked Questions

What Is the Ideal Market Condition for a Put Ratio Spread?

The best market condition is one that aligns with the trader’s expectation of a slight downturn, or even a flat market that leans bearish.

How Does a Put Ratio Spread Compare to Other Options Strategies?

It is more complex and riskier than a simple put purchase, but offers greater flexibility and cost efficiency than some other multi-leg option strategies.

Can Put Ratio Spreads Be Used by Beginners?

Although these are advanced strategies, beginners with a comprehensive understanding of options and risk management can practice with small, controlled trades.

The put ratio spread is an intricate instrument in a trader’s toolkit, designed for moments of moderate market bearishness. By grasping its structure and tactical application, traders can better navigate the undulating seas of the options market. As with any advanced technique, the key is in the details: careful selection of strike prices, vigilant management, and informed exit strategies pave the way to success.
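As promised above, here is a small sketch working through the article’s $50/$45 example at expiry. The premiums are hypothetical placeholders; real values come from the option chain at the time of the trade.

```python
# Expiry P/L of a hypothetical 1x2 put ratio spread: long one 50-strike
# put, short two 45-strike puts. Premiums are made up for illustration.
def put_payoff(strike: float, spot: float) -> float:
    return max(strike - spot, 0.0)

def ratio_spread_pnl(spot: float, long_k: float = 50.0, short_k: float = 45.0,
                     long_prem: float = 2.50, short_prem: float = 1.20) -> float:
    net_debit = long_prem - 2 * short_prem  # negative would mean a net credit
    return put_payoff(long_k, spot) - 2 * put_payoff(short_k, spot) - net_debit

for spot in (0, 35, 40.10, 45, 49.90, 55):
    print(f"spot {spot:6.2f}: P/L per share = {ratio_spread_pnl(spot):+.2f}")

# Max profit (long_k - short_k - net_debit = 4.90) lands at spot = 45;
# break-evens sit near 40.10 and 49.90; the worst case, at spot = 0,
# loses 2*45 - 50 + 0.10 = 40.10 per share (large, but capped).
```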
{"url":"https://www.thestockdork.com/put-ratio-spread/","timestamp":"2024-11-12T20:33:18Z","content_type":"text/html","content_length":"346318","record_id":"<urn:uuid:41910acb-1edf-4d29-844e-880643c06141>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00232.warc.gz"}
Research | Department of Mathematics (2024)

Research in Mathematics at Ohio State

The Mathematics Department offers a vast array of research opportunities in both theoretical and applied mathematics, which you can explore in the list below. The organization by subject area in the list is often somewhat arbitrary, as various research areas have increasingly become cross-disciplinary. Interested students should feel free to contact faculty directly with questions about their research.

David Anderson: Algebraic Geometry, Combinatorics, Representation Theory, Schubert Varieties and Toric Varieties, Equivariant Cohomology and its Applications
Angelica Cueto: Algebraic Geometry, Combinatorics, Non-Archimedean Geometry, Tropical Geometry
Roy Joshua: Algebraic and Arithmetic Geometry, K-Theory, Singular Varieties, Computational Aspects of Geometry, Quantum Computation
Eric Katz: Tropical Geometry, Combinatorial Algebraic Geometry, Arithmetic & Enumerative Geometry
Hsian-Hua Tseng: Algebraic Geometry, Symplectic Topology & Geometry, Mirror Symmetry, Gromov-Witten Theory
Mirel Caibar: Algebraic Geometry, Singularity Theory, Hodge Theory
James Cogdell: Number Theory, Analytic Number Theory, L-functions - Converse Theorems
Ghaith Hiary: Computational Number Theory, Analytic Number Theory, Random Matrix Models for L-functions, Asymptotic Analysis
Roman Holowinsky: Number Theory: Analytic Methods, Automorphic Forms, L-functions, Sieve Methods, Quantum Unique Ergodicity
Michael Lipnowski: Number Theory, Automorphic Forms, Representation Theory, and Low-dim Topology
Wenzhi Luo: Number Theory, Analytic and Arithmetic Theory of Automorphic Forms and Automorphic L-Functions
Jennifer Park: Number Theory and Algebraic Geometry, Algebraic Curves and Arithmetic Properties, Number and Function Fields
Stefan Patrikis: Number Theory, Automorphic Forms, Arithmetic Geometry, Galois Representations, and Motives
Ivo Herzog: Ring Theory, Module and Representation Theory, Category Theory, Model Theory
Cosmin Roman: Ring Theory, Module Theory, Injectivity-Like Properties, Relations Between Modules and Their Endomorphism Rings, Theory of Rings and Modules
Mohamed Yousif: Rings and Modules, Injective and Continuous Rings and Modules, Pseudo- and Quasi-Frobenius Rings
Nathan Broaddus: Geometric Group Theory, Topology, Low-dim Topology
James Fowler: Topology, Geometric Topology of Manifolds, Geometric Group Theory, Surgery Theory, K-Theory, Mathematics Education
Jingyin Huang: Geometric Group Theory, Metric Geometry
Thomas Kerler: Low-dimensional Topology, Quantum Algebras and their Representations, Invariants of 3-dim Manifolds and Knots, Topological Quantum Field Theories
Sanjeevi Krishnan: Directed Algebraic Topology and Applications to Optimization, Data Analysis, and Dynamics
Jean-François Lafont: Topology, Differential Geometry, Geometric Group Theory, K-Theory
Beibei Liu: Hyperbolic Geometry, Kleinian Groups, Geometric Group Theory, Topology and Geometry of 3-, 4-manifolds, Heegaard Floer Homology
Facundo Mémoli: Shape Comparison, Computational Topology, Topological Data Analysis, Machine Learning
Crichton Ogle: Topology - K-Theory
Sergei Chmutov: Topology, Knot Theory, Quantum Invariants
Micah Chrisman: Knot Theory, Low-Dimensional Topology, Virtual Knots, Finite-type Invariants, Knot Concordance, Generalized Cohomology Theories
John Harper: Topology, Homotopy Theory, Modules over Operads, K-Theory & TQ-Homology
Niles Johnson: Topology, Categorical and Computational Aspects of Algebraic Topology, Picard/Brauer Theory
Vidhyanath Rao: Topology - Homotopy Theory - K-Theory
Donald Yau: Topology, Algebra, Hom-Lie Algebras, Deformations
Andrzej Derdzinski: Differential Geometry - Einstein Manifolds
Andrey Gogolyev: Differential Geometry, Topology, Dynamical Systems, Hyperbolic Dynamics
Bo Guan: Differential Geometry, Partial Differential Equations - Geometric Analysis
Matthew Stenzel: Differential Geometry, Several Complex Variables
Cesar Cuenca: Random Matrices, Random Partitions, Asymptotic Representation Theory, and Algebraic Combinatorics
Neil Falkner: Probability Theory, Brownian Motion
Matthew Kahle: Combinatorics, Probability Theory, Geometric Group Theory, Mathematical Physics, Topology, Topological Data Analysis
Hoi Nguyen: Combinatorics - Probability Theory, Random Matrices - Number Theory
Grzegorz Rempala: Complex Stochastic Systems Theory, Molecular Biosystems Modeling, Mathematical and Statistical Methods in Epidemiology and in Genomics
David Sivakoff: Stochastic Processes on Large Finite Graphs, Probability Theory, Applications to Percolation Models, Particle Systems, Cellular Automata, Epidemiology, Sociology, and Genetics
John Maharry: Graph Theory, Combinatorics
Aurel Stan: Stochastic Analysis, Harmonic Analysis, Quantum Probability, Wick Products
Gabriel Conant: Model Theory of Groups, Graphs, and Homogeneous Structures; Additive Combinatorics
Chris Miller: Logic, Model Theory, Applications to Analytic Geometry & Geometric Measure Theory
Caroline Terry: Model Theory, Extremal Combinatorics, Graph Theory, Additive Combinatorics
Ovidiu Costin: Analysis, Asymptotics, Borel Summability, Analyzable Functions, Applications to PDEs and Difference Equations, Time-dependent Schrödinger Equation, Surreal Numbers
Kenneth Koenig: Several Complex Variables, Szegő & Bergman Projections, ∂-Neumann Problem
Dusty Grundmeier: Mathematics Education, Several Complex Variables, and CR Manifolds
Liz Vivas: Holomorphic Dynamical Systems, Several Complex Variables, Complex Geometry & Affine Algebraic Geometry, Monge-Ampère Equations and CR Manifolds
Jan Lang: Analysis, Differential Equations, Harmonic Analysis, Function Spaces, Integral Inequalities, PDE - Function Theory
Rodica Costin: Partial Differential Equations, Difference Equations, Orthogonal Polynomials, Asymptotic Analysis
John Holmes: Partial Differential Equations, Non-Linear PDE, Stochastic Differential Equations, Mathematical Finance, Mathematical Physics
Adrian Lam: Partial Differential Equations, Mathematical Biology, Evolutionary Game Theory, Free-boundary Problems
Saleh Tanveer: Applied Mathematics, Asymptotics, Nonlinear Free Boundary Problems in Fluid Mechanics and Crystal Growth, PDEs in Fluid Mechanics & Mathematical Physics, Singularity & Regularity Questions in PDEs
Fei-Ran Tian: Dispersion & Semi-Classical Limits, Whitham Equations, Modulation of Dispersive Oscillations, Free Boundary Problems
Feride Tiglay: Partial Differential Equations, Mathematical Physics, Dynamical Systems, Wave Equations & Fluid Dynamics
Janet Best: Applied Mathematics, Mathematical Biology, Dynamical Systems, Circadian Rhythms, Probability Theory, Stochastic Processes on Random Graphs
Adriana Dawes: Mathematical Biology, Mathematical Modeling of Cell Polarization & Chemotaxis, Differential Equations
Ian Hamilton: Behavioral Ecology, Coerced Cooperation, Evolution of Cooperative Behavior, Mathematical Modeling
Maria Han Veiga: Numerical Analysis for Hyperbolic PDEs, Probabilistic Machine Learning, Constraint & Privacy Aware Machine Learning
Grzegorz Rempala*: Complex Stochastic Systems Theory, Molecular Biosystems Modeling, Mathematical and Statistical Methods in Epidemiology and in Genomics
Joseph Tien: Mathematical Biology, Models of Infectious Disease Dynamics, Differential Equations, Parameter Estimation, Neuroscience
Yulong Xing: Numerical Analysis, Scientific Computing, Wave Propagation, Computational Fluid Dynamics
Dongbin Xiu: Scientific Computing, Numerical Mathematics, Stochastic Computation, Uncertainty Quantification, Multivariate Approximation, Data Assimilation, High-order Numerical Methods
Bishun Pandey: Applied Mathematics
Vitaly Bergelson: Ergodic Theory, Combinatorics, Ergodic Ramsey Theory, Polynomial Szemerédi Theorems, Number Theory
John Johnson: Algebra in the Stone-Čech Compactification, Dynamics, Combinatorics related to Ramsey Theory
Alexander Leibman: Ergodic Theory, Dynamics on Nil-Manifolds, Polynomial Szemerédi & van der Waerden Theorems
Nimish Shah: Ergodic Theory, Ergodic Theory on Homogeneous Spaces of Lie Groups, Applications to Number Theory
Dan Thompson: Ergodic Theory, Dynamical Systems, Symbolic Dynamics, Thermodynamic Formalism, Dimension Theory & Geometry
Luis Casian: Representation Theory, Representation Theory of Real Semisimple Lie Groups, Integrable Systems
Sachin Gautam: Representation Theory of Infinite-Dimensional Quantum Groups, Classical and Quantum Integrable Systems
Dustin Mixon: Applied Harmonic Analysis, Mathematical Signal Processing, Compressed Sensing
David Penneys: Operator Algebras, von Neumann Subfactors, Fusion and Tensor Categories, Mathematical Physics, Non-commutative Geometry
Krystal Taylor: Harmonic Analysis, Geometric Measure Theory, Harmonic Analysis on Fractals, Applications to Analytic Number Theory
Scott Zimmerman: Analysis on Metric Spaces, Geometric Measure Theory, Harmonic Analysis, PDEs
{"url":"https://dennisport.org/article/research-department-of-mathematics","timestamp":"2024-11-07T16:35:20Z","content_type":"text/html","content_length":"122616","record_id":"<urn:uuid:4865aeba-04d4-40c9-87d3-9a9bbcc7e100>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00167.warc.gz"}