Here we’ll extend what we learned in the last two chapters to regression models with multiple predictors and to ANCOVA (Analysis of Covariance). We’ll also briefly touch on more complex designs and on using contrasts to test specific between-group hypotheses. Upon completion of this lesson, you should be able to: 1. Fit and interpret regression models with multiple predictors 2. Fit and interpret ANCOVA models 3. Test specific contrasts from lm() objects 13.1 Multiple Regression I In this video we’ll extend the use of the formula interface in lm() to specify multiple continuous predictor variables. Video - STAT 485 Lesson: 13.1 13.2 Multiple Regression II Here we’ll continue our discussion of multiple regression. Video - STAT 485 Lesson: 13.2 13.3 ANCOVA I By now we’ve fit models with one or more continuous predictors and one or more categorical predictors. In ANCOVA we combine continuous and categorical predictors. Once again lm() is the tool of choice. Here we’ll focus on interpreting the coefficients in the output from summary(). Video - STAT 485 Lesson: 13.3 13.4 ANCOVA II In this second video on ANCOVA we’ll conclude with interpretation of the results and we’ll compare the output to a graphical representation of the model. Video - STAT 485 Lesson: 13.4 13.5 A Note About Sums of Squares in R Functions like anova() and aov() in R return Type I sums of squares, while some other statistical programs return Type III sums of squares. Here we’ll give an overview of what that means and how it affects interpretation. Video - STAT 485 Lesson: 13.5 13.6 Resistant Regression Since all linear models are subject to the same regression assumptions, there are occasions where a resistant regression method might be appropriate. Here we’ll look at a resistant regression method in R. Video - STAT 485 Lesson: 13.6 13.7 Specifying Contrasts We previously learned about using TukeyHSD() for making all pairwise comparisons between groups in a model. 
In some cases we only want to make specific comparisons, and don’t want to lose statistical power by correcting for the larger number of comparisons. Here we show how to specify contrasts from a linear model. Video - STAT 485 Lesson: 13.7 13.8 More Complex Designs There are many possible experimental designs, and the details of the design can have important implications for how the data is analyzed. Here we’ll look at the example of a split-plot design, and consider how to analyze such an experiment in R. Video - STAT 485 Lesson: 13.8
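The lessons above are built around R’s lm(); as a language-agnostic sketch of what a multiple-regression fit actually computes, the ordinary least-squares coefficients can be recovered from the normal equations. The data below is invented for illustration (y is constructed to equal 1 + 2·x1 + 3·x2 exactly, so the fit should recover those coefficients):

```python
# Minimal OLS fit with two predictors, solving the normal equations
# (X^T X) beta = (X^T y) by Gaussian elimination. Illustration only --
# the data is made up so that y = 1 + 2*x1 + 3*x2 exactly.

def solve(A, rhs):
    """Solve A x = rhs by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

x1 = [0, 1, 0, 1, 2]
x2 = [0, 0, 1, 1, 1]
y = [1, 3, 4, 6, 8]                         # exactly 1 + 2*x1 + 3*x2
X = [[1, a, b] for a, b in zip(x1, x2)]     # design matrix with intercept column

XtX = [[sum(X[i][r] * X[i][c] for i in range(len(X))) for c in range(3)] for r in range(3)]
Xty = [sum(X[i][r] * y[i] for i in range(len(X))) for r in range(3)]
intercept, b1, b2 = solve(XtX, Xty)
print(round(intercept, 6), round(b1, 6), round(b2, 6))   # 1.0 2.0 3.0
```

In R the equivalent one-liner would be `lm(y ~ x1 + x2)`; the point of the sketch is only that each coefficient is the partial effect of its predictor with the others held fixed.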
{"url":"https://online.stat.psu.edu/stat484-485/485_Lesson13","timestamp":"2024-11-04T09:07:53Z","content_type":"application/xhtml+xml","content_length":"57173","record_id":"<urn:uuid:4e1f485e-4890-4b00-b077-3874b61e632d>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00791.warc.gz"}
Formula in Smartsheet that will tell me if the data in 4 different columns is the same I am looking for a formula in Smartsheet that will tell me if the data in 4 different columns is the same. In Excel I am able to use =COUNTIF(A3:D3,A3)=4 In Smartsheet I am getting "INVALID COLUMN VALUE" Best Answer • Hi @Tanya B If your columns A-D are adjacent to one another for a range, then something like this should work: =IF(COUNTIF(A@row:D@row, A@row) = 4, "True", "False") Hope this helps, but if you've any problems/questions then let us know! 🙂
{"url":"https://community.smartsheet.com/discussion/131716/formula-in-smartsheet-that-will-tell-me-if-the-data-in-4-different-columns-is-the-same","timestamp":"2024-11-02T20:58:32Z","content_type":"text/html","content_length":"435127","record_id":"<urn:uuid:6ffce93a-5898-4be8-8516-172b6321f2b7>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00487.warc.gz"}
AP Calculus Archives - University Tutoring - Seattle Posts Tagged ‘AP Calculus’ You hung on to the last AP Calculus blog post. You’re on your way to a 5, but first…the Free-Response Section. For most students, this is probably the most intimidating part of the test, but if you’re solid with basic derivative and integral topics, you can survive this section and secure a solid score. Before we dive into specific topics, let’s discuss the format of the Free-Response Section. The section consists of six questions that… Read More Now that you know what to expect from the test, let’s tackle the multiple-choice questions (MCQs) first. In this post we’ll talk about some useful test-taking strategies and the key concepts you should study before test day. Practice, Practice, Practice The first strategy I’ll recommend has more to do with what you do before the test, rather than what you do while you’re taking it. As with any standardized test, you want to get as… Read More Are you ready to learn more about the AP Calculus test? I didn’t scare you away in Part 1 of this series? Great! Let’s talk about the actual structure of the exam and what resources we need to start studying. To be clear, all of the information here applies to both the AB and BC versions of the test. Let’s start by breaking it down into the two main parts of the test: Multiple Choice… Read More So you’ve decided to take a shot at the AP Calculus Exam. Whether it’s the AB or the BC, I have some tips for you to both maximize your score and study in an efficient manner. The AP Calc test is difficult, but if you know what to worry about and how to approach prep for it, you can maximize your chances of success. One of the first things I like to point out is… Read More
{"url":"https://universitytutoring.com/tag/ap-calculus/","timestamp":"2024-11-12T20:30:15Z","content_type":"text/html","content_length":"85057","record_id":"<urn:uuid:a47f7725-2585-4aea-a447-9c0344d40ebd>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00090.warc.gz"}
For the experiment of tossing a six sided die twice, what is the probability that the sum is 4? | Socratic For the experiment of tossing a six sided die twice, what is the probability that the sum is 4? 1 Answer Look at the possibilities mentioned in the image. There are 36 total possibilities, out of which only 3 satisfy the condition that the sum is 4. I have marked those 3 in the image, so: $\text{Probability} = \frac{\text{No. of outcomes that satisfy the condition}}{\text{Total no. of outcomes}}$ $= \frac{3}{36} = \frac{1}{12}$ Impact of this question 12849 views around the world
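The count of 3 favourable outcomes out of 36 can be checked by brute-force enumeration, a quick sketch:

```python
from fractions import Fraction

# Enumerate all 36 ordered outcomes of two dice and keep sums equal to 4.
favourable = [(a, b) for a in range(1, 7) for b in range(1, 7) if a + b == 4]
print(favourable)                            # [(1, 3), (2, 2), (3, 1)]

probability = Fraction(len(favourable), 36)  # reduces automatically
print(probability)                           # 1/12
```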
{"url":"https://socratic.org/questions/for-the-experiment-of-tossing-a-six-sided-die-twice-what-is-the-probability-that#460338","timestamp":"2024-11-01T23:33:59Z","content_type":"text/html","content_length":"33447","record_id":"<urn:uuid:7fbed74a-000b-4303-b658-c5b60bf031b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00008.warc.gz"}
Recent Question/Assignment Question 1 Suppose that you work for a firm that operates stores across multiple markets, and you want to understand the revenues earned by specific store locations. To that end, you collect different pieces of information about each store, and run the following regression:

Revenue = β0 + β1(Size of Store) + β2(Distance to Nearest Major Road) + β3(Market Size)

In this case: (i) “Revenue” is the revenue (per employee) per week for each store in thousands of dollars (ii) “Size of Store” is the area of the store’s floorplan (in square metres). (iii) “Distance to Nearest Major Road” is the distance (in hundreds of metres) to the nearest 4-lane (or larger) road leading to the store. (iv) “Market Size” is the number of people, in thousands, who reside in the market served by each store. The regression results from the data you’ve acquired are displayed below:

                                  Coefficient   Standard Error of Coefficient
Intercept                              4               1
Size of Store                          0.1             0.02
Distance to Nearest Major Road        –1               0.25
Market Size                            0.5             0.1

(a) What is the proper, technical interpretation of the coefficient on “Size of Store” in the multiple regression? (5 marks) (b) In this multiple regression setting, how would you test the hypothesis that the effect of “Size of Store” on revenues was zero at the 5% level of significance for all markets in which your company operates? (5 marks) (c) Again, in this multiple regression setting, test (at the 5% level of significance) the hypothesis that the variable “Distance to Nearest Major Road” has no effect on a store’s revenues. (5 marks) (d) Now suppose that you acquire one other piece of information: the weekly advertising budget (in thousands of dollars) for each particular store. You represent this information with the variable “Advertising Budget”, and your regression is now specified as:

Revenue = β0 + β1(Size of Store) + β2(Distance to Nearest Major Road) + β3(Market Size) + β4(Advertising Budget)

Using this new regression, you obtain the following results:

                                  Coefficient   Standard Error of Coefficient
Intercept                              4               1
Size of Store                          0.01            0.02
Distance to Nearest Major Road        –1               0.25
Market Size                            0.5             0.1
Advertising Budget                     2               0.4

In this case, test the hypothesis that the effect of “Size of Store” on revenues was zero at the 5% level of significance for all markets in which your company operates. (5 marks) (e) Explain why the coefficient value on “Size of Store” and the hypothesis test on this coefficient is different in part (d) than in parts (a) and (b). (10 marks) Question 2 You’ve been contracted by a computer manufacturer to study the price of laptop computers in the market. You are asked to analyze the determinants of these prices, and so you collect data on the characteristics of various laptops sold by different companies, and use a regression to explore this issue:

Price = β0 + β1(Size of Screen) + β2(Processor Speed) + β3(RAM)

In this case: (v) “Price” is the price of the laptop (in dollars) (vi) “Size of Screen” is the area, in square centimetres, of the laptop’s screen. (vii) “Processor Speed” is the computer’s processing speed in GHz. (viii) “RAM” is the computer’s memory in GB. The regression results from the data analyzed by the internal research team are displayed below:

                      Coefficient   Standard Error of the Coefficient
Intercept                 500           100
Size of screen            –5            2
Processor Speed           200           50
RAM                       100           10

Standard Error of the Regression = 150

(a) Provide a definition of the coefficient on “Processor Speed”, and using the 5% level of significance, test the hypothesis that this variable has an effect on the price of the laptop. 
(10 marks) (b) Suppose that a new laptop has been made by this company that has: (i) a screen whose area is 120 square centimetres, (ii) a processor speed of 4 GHz, and (iii) 8 GB of RAM. If the company believed that a laptop like this would have a price of $1000, respond to this assertion by using the information from the regression as well as formal hypothesis testing techniques. (10 marks) (c) Now suppose that you do some more analysis of the price of laptops by adding some information to the regression used in parts (a) and (b). In particular, you estimate:

Price = β0 + β1(Size of Screen) + β2(Processor Speed) + β3(RAM) + β4(Weight)

In this case, “Weight” is the weight of the laptop (in grams). The results from this regression are displayed below:

                      Coefficient   Standard Error of the Coefficient
Intercept                 500           100
Size of screen            –1            2
Processor Speed           200           50
RAM                       100           10
Weight                    –2            0.5

Standard Error of the Regression = 100

In this case, the coefficient on “Size of Screen” changed in this regression compared to the regression in part (a). How can you explain this coefficient’s change, while incorporating the correlation between “Size of Screen” and “Weight” into your answer? (10 marks) (d) Suppose that the company intends to manufacture the same laptop specified in part (b), but has a specific weight in mind. Specifically, the laptop it will make has: (i) a screen whose area is 120 square centimetres, (ii) a processor speed of 4 GHz, (iii) 8 GB of RAM, and (iv) a weight of 400g. If, again, the company believed that a laptop like this would have a price of $1000, respond to this assertion by using the information from the regression as well as formal hypothesis testing techniques. (10 marks) Question 3 (a) You work for a large company that wants to compare the performance of two sales teams (we’ll call them “Team A” and “Team B”) working at the firm. 
To formalize this comparison, you gather a sample of data on the weekly revenue created by each team. You then use this data to estimate the following regression:

Revenue = β0 + β1(Team A)

In this case, “Revenue” represents the weekly revenue (in dollars) generated by the sales team, and “Team A” is a dummy variable equal to one if the revenue is created by “Team A”, and zero if it’s created by “Team B”. Your regression results are listed below:

              Coefficient   Standard Error of Coefficient
Intercept        4000           1000
Team A            800            200

(i) Interpret the meaning of the intercept in the above regression. (5 marks) (ii) Interpret the meaning of the coefficient on the variable “Team A” in the above regression. (5 marks) (iii) Use these regression results to determine the average weekly revenue generated by Team A. (5 marks) (iv) Use your statistical training to rigorously test whether or not the two teams generate similar or different levels of weekly revenue. (5 marks) (b) Suppose that instead of running the regression listed above, you instead estimate the following regression:

Revenue = β0 + β1(Team B)

In this regression, “Revenue” is still defined in the same way as before, but “Team B” is a dummy variable equal to one if the revenue is created by Team B, and zero if it was created by Team A. In this case: (i) Use the estimated coefficients from part (a) to determine the value of the intercept term in this regression (here in part (b)). Interpret the meaning of the intercept in this case. (5 marks) (ii) Use the estimated coefficients from part (a) to determine the value of the coefficient on the dummy variable “Team B” in this regression (here in part (b)). Interpret the meaning of the coefficient on “Team B” in this case. (5 marks)
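As a sketch of the kind of coefficient test these questions ask for (not a model answer): the t-ratio is the coefficient divided by its standard error, compared against the 5% two-sided critical value, taken here as 1.96 on the assumption of a large sample. Using the figures quoted in the question:

```python
def t_ratio(coef, se):
    return coef / se

def significant(coef, se, critical=1.96):
    # Two-sided test at the 5% level: reject H0 (no effect) when |t| > 1.96.
    return abs(t_ratio(coef, se)) > critical

# "Size of Store" in the first regression: t = 0.1 / 0.02 = 5 -> reject H0.
print(significant(0.1, 0.02))     # True

# After adding "Advertising Budget": t = 0.01 / 0.02 = 0.5 -> fail to reject.
print(significant(0.01, 0.02))    # False

# "Distance to Nearest Major Road": t = -1 / 0.25 = -4 -> reject H0.
print(significant(-1.0, 0.25))    # True
```

The reversal for “Size of Store” between the two regressions is the point of part (e): once “Advertising Budget” is included, the store-size coefficient is no longer distinguishable from zero.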
{"url":"https://www.australianbesttutor.com/recent_question/74718/question-1suppose-that-you-work-for-a-firm-that-operates","timestamp":"2024-11-13T05:10:45Z","content_type":"text/html","content_length":"40339","record_id":"<urn:uuid:4486b1fa-b31b-4764-8085-da0ae97e096c>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00216.warc.gz"}
The Rate of Diffusion of a Gas The rate at which a gas spreads out depends on the speed of its molecules. We can't measure directly the speed of the molecules of a gas, or the kinetic energy. Fortunately, if the gas behaves as an ideal gas, which most gases do at normal temperatures and pressures, the average kinetic energy of the gas molecules is directly proportional to the temperature, so we can write

(1/2) m v² ∝ T, hence v ∝ √(T/m)

where v² denotes the mean square molecular speed. When a gas diffuses, the gas molecules spread out to occupy volume, and the rate at which this happens is proportional to the speed of the molecules, hence to the square root of the temperature. At a given temperature, all the molecules of an ideal gas have the same average kinetic energy. The kinetic energy is independent of the molecule – its mass, shape or size. The kinetic energy depends only on the temperature, so that gas molecules of different mass have the same average kinetic energy. If we have two species of molecule in a gas, of types 1 and 2, then the average kinetic energy of species 1 is (1/2) m₁ v₁² and the average kinetic energy of species 2 is (1/2) m₂ v₂². These are equal at the same temperature, so

(1/2) m₁ v₁² = (1/2) m₂ v₂²

We can simplify this expression and rearrange to give

v₁ / v₂ = √(m₂ / m₁)

so the lighter gas diffuses faster, in proportion to the square root of the ratio of the molecular masses.
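The final relation, rate₁/rate₂ = √(m₂/m₁) (Graham's law), can be checked with a quick numerical sketch. The molar masses below are assumed values for illustration: H₂ ≈ 2 g/mol, O₂ ≈ 32 g/mol.

```python
import math

# Graham's law: ratio of diffusion rates = sqrt(m2 / m1).
def rate_ratio(m1, m2):
    return math.sqrt(m2 / m1)

# Hydrogen (2 g/mol) diffuses 4 times as fast as oxygen (32 g/mol).
print(rate_ratio(2.0, 32.0))   # 4.0
```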
{"url":"https://mail.astarmathsandphysics.com/o-level-physics-notes/273-the-rate-of-diffusion-of-a-gas.html","timestamp":"2024-11-12T07:20:48Z","content_type":"text/html","content_length":"29421","record_id":"<urn:uuid:201356ee-3a23-4ad3-a8b2-99f7947d921b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00198.warc.gz"}
HCF and LCM (QQI BINGO) HCF and LCM The QQI BINGO © activity below gives your class a selection of answers to fill in their bingo grids. Once they have filled their grids, you reveal one question at a time, and students cross off the answers if they have them. The first to get a line or a full house calls "BINGO" and wins. First choose if you want questions on the Highest Common Factor or Lowest Common Multiple (or choose Random to get a mixture of both). Next choose if you want 2 or 3 numbers to appear; again, choosing Random will give a mixture. Now choose the size of the numbers you want to deal with. After the students have answered the question, you can reveal the answer. Ideas for Teachers This is a classic bingo activity, where students choose the answers to fill in their grid (either 3 by 3 or 4 by 4). Then questions are shown one at a time and if a student has the answer in their grid they cross it off. The winner is the first to cross off all their answers and call BINGO. Students love this game, and it can be used to start or end a lesson.
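For checking answers, both quantities are easy to compute; a short sketch (note that Python's standard library calls the HCF `gcd`):

```python
import math

def hcf(a, b):
    # Highest Common Factor, called gcd in Python's math module.
    return math.gcd(a, b)

def lcm(a, b):
    # Lowest Common Multiple via the identity hcf(a, b) * lcm(a, b) = a * b.
    return a * b // math.gcd(a, b)

print(hcf(12, 18))   # 6
print(lcm(12, 18))   # 36
```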
{"url":"https://www.interactive-maths.com/hcf-and-lcm-qqi-bingo.html","timestamp":"2024-11-10T18:52:15Z","content_type":"text/html","content_length":"212477","record_id":"<urn:uuid:5da3443c-edfe-4575-a654-b1964563a534>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00720.warc.gz"}
Matrix calculator online - calculating the product of two matrices - Solumaths This matrix calculator allows to calculate online the product of two matrices with calculation steps. The calculator can calculate online the product of two matrices. The matrix calculator may calculate the product of two matrices whose coefficients have letters or numbers; it is a formal matrix calculation calculator. Calculating the product of matrices The calculator can calculate the product of two matrices with the results in exact form: to calculate the product of matrices `((3,3,4),(1,2,0),(-5,1,1))*((3,3,4),(1,4,0),(2,1,1))`, enter matrix_product(`[[3;1;-5];[3;2;1];[4;0;1]];[[3;1;2];[3;4;1];[4;0;1]]`), after calculation, the result is returned. The calculator allows symbolic calculations, so it is possible to use letters as well to calculate the product of two matrices like this: `((a,3),(a/2,4))*((a,1),(a/2,2))`, enter matrix_product(`[[a;a/2];[3;4]];[[a;a/2];[1;2]]`), after calculation, the result is returned. Syntax : Examples : matrix_product(`[[3;1;-5];[3;2;1];[4;0;1]];[[3;1;2];[3;4;1];[4;0;1]]`) returns `[[20;5;-12];[25;11;-10];[16;4;-19]]`
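The worked example can be verified with a few lines of Python. Note that the calculator's syntax above lists the matrices column by column; the sketch below writes them row by row, so its result is the transpose of the calculator's column-wise output:

```python
# Row-by-row product of the two 3x3 matrices from the worked example.
A = [[3, 3, 4],
     [1, 2, 0],
     [-5, 1, 1]]
B = [[3, 3, 4],
     [1, 4, 0],
     [2, 1, 1]]

product = [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
           for i in range(3)]
print(product)   # [[20, 25, 16], [5, 11, 4], [-12, -10, -19]]
```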
{"url":"https://www.solumaths.com/en/calculator/calculate/matrix_product","timestamp":"2024-11-02T09:09:45Z","content_type":"text/html","content_length":"59298","record_id":"<urn:uuid:0fa0a5ae-fbe0-4310-9c15-18afa68736aa>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00398.warc.gz"}
How do you find five numbers which have a mean of 14 and a median of 15? | Socratic How do you find five numbers which have a mean of 14 and a median of 15? 1 Answer If the median has to be 15 then the third number (in order of size) has to be 15. So you now have to find two numbers smaller than (or equal to) 15, and two that are larger than (or equal to) 15, so that the mean of all five numbers is 14. This means that the sum of the five numbers must be $5 \cdot 14 = 70$ We already have the $15$ so you may take any combination of four additional numbers that add up to $70 - 15 = 55$, as long as two of them are $\le 15$ and the other two are $\ge 15$. Impact of this question 3193 views around the world
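One concrete answer, checked in Python (these five numbers are just one of many valid choices):

```python
from statistics import median

# Two values <= 15, then 15 itself, then two values >= 15; sum is 70.
numbers = [10, 12, 15, 16, 17]

print(sum(numbers) / len(numbers))   # 14.0  (the mean)
print(median(numbers))               # 15
```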
{"url":"https://socratic.org/questions/how-do-you-find-five-numbers-which-have-a-mean-of-14-and-a-median-of-15","timestamp":"2024-11-13T11:58:08Z","content_type":"text/html","content_length":"33499","record_id":"<urn:uuid:657bd41e-bde8-466d-86c7-201375b4ce25>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00846.warc.gz"}
Algebra 1 honors worksheets As a teacher, much of my time was taken up by creating effective lesson plans. Algebra Professor allows me to create each lesson in about half the time. My kids love it because I can spend more time with them! Once they are old enough, I hope they will find this program useful as well. B.C., Florida As a user of both Algebra Professor 2.0 and 3.0 I have to say that the difference is incredible. I found the old Algebra Professor useful, but it was really difficult to enter more complex expressions. With your new WYSIWYG interface that problem has been completely eliminated! It's like Word Equation Editor, just simpler. Also, thank you for not using the new software as an excuse to jack up the price. C. Jose, CA It is amazing to know that this software covers so many algebra topics. It does not limit you to preset auto-generated problems; you simply enter your own. It is a must for all students. I highly recommend this package to all. Nobert, TX A solid software and we need more like it. Good job. Britany Burton, CA So far it's great! J.V., Maryland Students struggling with all kinds of algebra problems find out that our software is a life-saver. Here are the search phrases that today's searchers used to find our site. 
Can you find yours among Search phrases used on 2012-10-28: • algebra 2 software • subtracting square roots worksheets • dependant system of equations • free e-book for aptitude • factorising highest common factor worksheet • Nonlinear Equation Examples • how to do algebra problems • variable in exponent • cost sheet solved examples • history of rational exponents • free aptitude test papers download • aptitude test paper with key answer • adding and subtracting integers worksheet • New York state math test 6th grade • cheat exam answers thinkwell • how is adding radical expression similar to adding polynomial expressions • non-linear inequalities worksheet • converting number to percentage formula • Free online calculator for radical and rational • difference between and equation and an expression • needed math tutors in new orleans • summer school worksheet 6th grade math • permutation or combination in nature • excel equations • DECIMAL TO DEGREE FORMULA • difference quotient solver • 6th grade algebra worksheet • convert decimal to fraction worksheet • Sample NUmber Problems and Trivia • algebra worksheets for year 9 level • conceptual physics lesson plans • the difference between dividing fractions and radical expressions • algebra graphing chart • algerbra online programs for 7th grade • sample of mathematical trivias • algebrator • numbers in scientific form word questions worksheet • 1st grade statistics • planimetric coordinates degrees convert • arithmetic sequence calculator online • chemistry powerpoints • algibra for kids • absolute value equations practice worksheets • least greatest common factors worksheets • middle school slopes worksheet • 6th grade word math problems worksheets • learning probability made easy • problems and solutions of I. N. 
Herstein, Topics in Algebra • free combining like terms practice worksheets • "system of linear inequalities" + "multiple choice" • at home free study guide for sixth graders • rational expressions calculator • simplifying radicals kids • math word problem solver • +trivias about math • 6th grade star test practice • converting mixed numbers to a percent • what is the difference between linear equation and linear inequality are similar but different • free online graphing calculator • algebra problem sove guild • 8th grade math worksheets 8th graders for free • running the simplex methond on a TI 83 plus • algebra 2 answers • high school algebra study sheets • sample crossword puzzle using exponents • free fifth grade sample question • explain college level algebra • lesson plan for 1st gr. • SOLVE MATH PROBLEMS OUT ONLINE • factoring cubes calculator • c language aptitude questions • Yr 7 Maths tests free • english worked exam papers • intermediate algebra free help • maths worksheet for ks3 • yr8 games • fluid mechanic 6th solution manual • 7th grade algebra worksheets • math book tutor for junior high • Simplifying calculator • permutation and combination worksheet • math trivia question • algebra set factor to zero • Algebrator • High School Algebra Worksheets Free • beginers english grammer.pdf • multiply and simplify rational exponent • square root used in algebra
{"url":"https://algebra-net.com/algebra-net/relations/algebra-1-honors-worksheets.html","timestamp":"2024-11-06T07:28:13Z","content_type":"text/html","content_length":"87422","record_id":"<urn:uuid:84c3aa17-bf12-43e8-8378-0f36573ff8fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00356.warc.gz"}
189. Abel’s and Dirichlet’s Tests of Convergence - Avidemia A more general test, which includes the test of § 188 as a particular case, is the following. Dirichlet’s Test. If \(\phi_{n}\) satisfies the same conditions as in § 188, and \(\sum a_{n}\) is any series which converges or oscillates finitely, then the series \[a_{0}\phi_{0} + a_{1}\phi_{1} + a_{2}\phi_{2} + \dots\] is convergent. The reader will easily verify the identity \[a_{0}\phi_{0} + a_{1}\phi_{1} + \dots + a_{n}\phi_{n} = s_{0}(\phi_{0} – \phi_{1}) + s_{1}(\phi_{1} – \phi_{2}) + \dots + s_{n-1}(\phi_{n-1} – \phi_{n}) + s_{n}\phi_{n},\] where \(s_{n} = a_{0} + a_{1} + \dots + a_{n}\). Now the series \((\phi_{0} – \phi_{1}) + (\phi_{1} – \phi_{2}) + \dots\) is convergent, since the sum to \(n\) terms is \(\phi_{0} – \phi_{n}\) and \(\lim \phi_{n} = 0\); and all its terms are positive. Also since \(\sum a_{n}\), if not actually convergent, at any rate oscillates finitely, we can determine a constant \(K\) so that \(|s_{\nu}| < K\) for all values of \(\nu\). Hence the series \[\sum s_{\nu}(\phi_{\nu} – \phi_{\nu+1})\] is absolutely convergent, and so \[s_{0}(\phi_{0} – \phi_{1}) + s_{1}(\phi_{1} – \phi_{2}) + \dots + s_{n-1}(\phi_{n-1} – \phi_{n})\] tends to a limit as \(n \to \infty\). Also \(\phi_{n}\), and therefore \(s_{n}\phi_{n}\), tends to zero. Therefore \[a_{0}\phi_{0} + a_{1}\phi_{1} + \dots + a_{n}\phi_{n}\] tends to a limit; that is, the series \(\sum a_{\nu}\phi_{\nu}\) is convergent. Abel’s Test. There is another test, due to Abel, which, though of less frequent application than Dirichlet’s, is sometimes useful. Suppose that \(\phi_{n}\), as in Dirichlet’s Test, is a positive and decreasing function of \(n\), but that its limit as \(n \to \infty\) is not necessarily zero. Thus we postulate less about \(\phi_{n}\), but to make up for this we postulate more about \(\sum a_{n}\), viz. that it is convergent. 
Then we have the theorem: if \(\phi_{n}\) is a positive and decreasing function of \(n\), and \(\sum a_{n}\) is convergent, then \(\sum a_{n}\phi_{n}\) is convergent. For \(\phi_{n}\) has a limit as \(n \to \infty\), say \(l\): and \(\lim (\phi_{n} – l) = 0\). Hence, by Dirichlet’s Test, \(\sum a_{n}(\phi_{n} – l)\) is convergent; and as \(\sum{a_{n}}\) is convergent it follows that \(\sum a_{n}\phi_{n}\) is convergent. This theorem may be stated as follows: a convergent series remains convergent if we multiply its terms by any sequence of positive and decreasing factors.
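A numerical illustration of Dirichlet's Test (not part of the original text): take \(a_n = (-1)^n\), whose partial sums oscillate finitely between 0 and 1, and \(\phi_n = 1/(n+1)\), positive and decreasing to zero. The test then guarantees that \(\sum (-1)^n/(n+1)\) converges; its sum happens to be \(\log 2\).

```python
import math

# a_n = (-1)^n has bounded partial sums; phi_n = 1/(n+1) decreases to 0.
# Dirichlet's Test says sum a_n * phi_n converges; the limit here is log 2.
partial = 0.0
for n in range(200000):
    partial += (-1) ** n / (n + 1)

# Alternating-series error bound: |partial - log 2| <= 1/200001 < 1e-5.
print(abs(partial - math.log(2)) < 1e-5)   # True
```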
{"url":"https://avidemia.com/pure-mathematics/abels-and-dirichlets-tests-of-convergence/","timestamp":"2024-11-10T04:55:16Z","content_type":"text/html","content_length":"88593","record_id":"<urn:uuid:6e183f65-b50e-4b3b-bc78-12ee076a3a8c>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00174.warc.gz"}
ThmDex – An index of mathematical definitions, results, and conjectures. Let $M = (\mathbb{R}^D, \mathcal{B}(\mathbb{R}^D))$ be a D2763: Euclidean real Borel measurable space such that (i) $\mu, \nu : \mathcal{B}(\mathbb{R}^D) \to [0, 1]$ are each a D198: Probability measure on $M$ (ii) $\mathfrak{F}_{\mu}$ and $\mathfrak{F}_{\nu}$ are each the D4131: Finite unsigned euclidean real Borel measure Fourier transform of $\mu$ and $\nu$, respectively Then $$\mathfrak{F}_{\mu} = \mathfrak{F}_{\nu} \quad \iff \quad \mu = \nu$$
{"url":"https://thmdex.com/r/4390","timestamp":"2024-11-13T22:55:53Z","content_type":"text/html","content_length":"7205","record_id":"<urn:uuid:919a67c1-45a0-4861-baee-e7b3c7fe4614>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00717.warc.gz"}
Dynamics of Elevated Vortices 1. Introduction The behavior of axisymmetric hydrodynamic vortices with axially varying rotation rates is investigated. We consider two classes of vortex flows: (i) radially unbounded solid body–type vortices and (ii) vortex cores of finite radius embedded within a radially decaying vortex profile. Both classes include the case of a vortex overlying nonrotating fluid. For the first type of flows, the von Kármán–Bödewadt similarity principle is applicable and leads to new exact solutions of the nonlinear Euler equations. These similarity solutions describe the induction of secondary (meridional) circulations in radially unbounded solid body vortices, and the subsequent feedback of these circulations on the vortex circulations. These solutions provide a description of nonlinear processes that may occur in the interior regions of broad geophysical vortices. Decaying, amplifying, and oscillatory solutions are found for different vertical boundary conditions and axial distributions of vorticity. The oscillatory solutions are characterized by pulsations of vortex strength in lower and upper levels associated with periodic reversals in the sense of the secondary circulation. The oscillatory behavior appears to be a manifestation of the “vortex valve effect,” a dynamical mechanism used to explain the “choking” of updrafts in vortex chambers and the morphological changes observed in some tornadic storms (Lemon et al. 1975; Davies-Jones 1986). Our study of the second class of vortex flows is motivated by a basic question concerning low-level mesocyclogenesis and associated tornadogenesis: in the absence of precipitation and thermodynamic effects, how should an isolated elevated vortex behave? 
To gain insight into this problem we perform a linear analysis of the Euler equations for an elevated vortex of finite core radius (in the exact solutions described above the solid body rotation extended to infinity—now we consider an inner core in solid body rotation embedded within a radially decaying outer vortex). The analysis, valid for small times, indicates the formation of an annular downdraft on the periphery of the vortex core. The downdraft is stronger and narrower for broader vortex cores and for more rapid radial decay rates in the outer vortex. On the basis of these results we speculate that “internal” storm dynamics associated with vertical gradients of vorticity may induce or facilitate downdraft formation on the periphery of elevated mesocyclones (though precipitation and asymmetric effects are obviously important as well). We speculate further that such a dynamically induced downdraft can transport vorticity to lower levels. To see why elevated vortices cannot exist in a steady state, consider the simplest example of a radially unbounded vortex overlying nonrotating fluid. Continuity of pressure across the horizontal interface separating the upper vortex from the lower nonrotating fluid demands that an inward-directed pressure gradient force be impressed in the nonrotating flow as well as in the rotating flow. As a consequence of this pressure gradient force, a secondary circulation develops such that the nonrotating fluid moves inward and upward (assuming the flow is bounded by a lower impermeable boundary), while the vortex is displaced upward and outward (assuming the flow is bounded either by an upper impermeable boundary or a region of high static stability). Vortex strength subsequently decreases due to the“squashing” of vortex lines in the thinning upper layer. 
If we modify the scenario so that the low-level vorticity is not zero but is smaller than in the upper layer, a secondary circulation should develop as before, but now the low-level vorticity can be stretched and amplified. When the strength of the low-level vortex exceeds that of the upper-level vortex, the secondary circulation should begin to weaken. The evolution of the secondary circulation and its feedback on the vortex circulation through nonlinear processes are the subjects of this investigation. The difficulty in solving the equations of fluid motion analytically stems from the presence of nonlinear terms associated with fluid inertia. Exact solutions have been obtained only in the special cases for which the nonlinearity could be cast in a tractable form, typically for flows characterized by a few degrees of freedom. The explicit dependence of an exact solution on a few key parameters collapses an infinite number of equivalent numerical simulations into one solution, thereby facilitating the analysis of flow structure and behavior. These rare solutions are prized for their insights into fundamental fluid flows and their utility as test solutions for the validation of numerical flow models (Shapiro 1993). We now briefly review some of the exact vortex solutions of the Navier–Stokes, Euler, and shallow water equations. Vortex solutions and vortex dynamics in general are surveyed in Greenspan (1968), Lugt (1983), and Saffman (1992). Exact vortex solutions of the Navier–Stokes equations for viscous incompressible flow include the decaying line vortex (Batchelor 1967), Taylor’s decaying vortex grid (Rosenhead 1963), the interaction of a potential vortex with a solid boundary (Serrin 1972; Yih et al. 
1982; Paull and Pillow 1985), a vortex in a converging stagnation point flow (Burgers 1948; Rott 1958, 1959), axial flow reversal in a two-celled vortex in stagnation point flow (Sullivan 1959), unsteady multicellular vortices in stagnation point flow (Bellamy-Knights 1970, 1971; Hatton 1975), decaying viscous vortices satisfying the Beltrami condition for alignment of the velocity and vorticity vectors (Shapiro 1993), flow due to an infinite rotating disk (von Kármán 1921), and the closely related solution for the interaction of a solid body vortex with a stationary plate (Bödewadt 1940; Zandbergen and Dijkstra 1987). We also mention Long’s well-known viscous vortex (1958, 1961), which complemented his inviscid theory for rotating flow drawn into a sink at the base of a cylinder (Long 1956). However, Long’s viscous vortex is not quite a bona fide exact solution of the Navier–Stokes equations since an internal boundary-layer approximation was made. Similarity solutions for steady and unsteady convective atmospheric vortices have been described by Gutman (1957), Kuo (1966, 1967), and Bellamy-Knights and Saci (1983). These and other exact solutions of the Navier–Stokes equations are sometimes used as proxies for tornadolike vortices (Davies-Jones 1986; Lewellen 1993). For instance, the paradigm of “vorticity diffusion balancing advection” embodied by the viscous stagnation point flow vortices of Burgers, Rott, Sullivan, Bellamy-Knights, and others may be relevant to the inner core of tornadoes and other intense vortices. On the other hand, the secondary circulations in these stagnation flow–type vortices are dynamically decoupled from the azimuthal velocity component. This decoupling readily permits an analytic solution to be obtained, but is probably not realistic for most geophysical vortices (including the tornado). 
We also note that the secondary circulations in these stagnation flow–type vortices have a singularity at infinity (a similar singularity also being present in the von Kármán–Bödewadt-type solutions and in the solutions described in our present study). If we restrict attention to the core region, the existence of this singularity may not be troublesome. In contrast, the axial singularity in Serrin’s vortex (which acts as a spinning wire) actually drives the flow, and is more offensive than the stagnation flow–type singularity for the study of the core region. On the other hand, Serrin’s vortex provides one of the few exact solutions of the Navier–Stokes equations in which both the impermeability and no-slip conditions are enforced on a rigid horizontal boundary. The near-surface flow of Serrin’s vortex beyond the core region may be a useful analog for the frictional boundary layer in the region of tornadoes beyond the radius of maximum wind. Indeed, the inverse distance velocity scaling characterizing Serrin’s vortex (and other, simpler vortices such as the Rankine vortex) has been observed in real tornadoes (Wurman et al. 1996). Exact vortex solutions of the Euler equations for inviscid flow include the Rankine vortex (circular patch of constant vorticity), Kirchhoff’s rotating elliptical vortex patch (Lamb 1945), elliptical vortex patch in a uniform straining field (Moore and Saffman 1971), Hill’s (1894) propagating spherical vortex, simple configurations of mutually advecting line vortices (Lamb 1945; Saffman 1992; Aref et al. 1992), inviscid Beltrami flows (Lilly 1983, 1986; Davies-Jones 1985), inviscid sink vortex (Long 1956), and vortex flows through turbomachinery (Bragg and Hawthorne 1950). A class of exact polynomial solutions of the shallow water equations, which includes some vortex solutions, has been studied by Ball (1964), Thacker (1981), Cushman-Roisin (1984, 1987), and Cushman-Roisin et al.
(1985), with Cushman-Roisin applying his elliptical vortex solution to oceanic warm-core rings. The exact solutions described in our present study are very closely related to these shallow water solutions. The organization of this paper is as follows. In section 2 we show how the Euler equations for radially unbounded solid body–vortex-type flows reduce to a simpler form under the von Kármán–Bödewadt similarity principle. In section 3 we set up the problem of an N-layer vortex flow satisfying this similarity principle and characterized by radial and azimuthal velocity fields that are piecewise constant functions of height. The N-layer flow is bounded at the bottom by an impermeable plate, and is either bounded at the top by an impermeable plate or is unbounded vertically. Exact analytic solutions are derived in section 4 for two-layer (N = 2) flows. We consider (i) a vortex overlying nonrotating fluid and (ii) a vortex overlying a vortex of different strength. Numerical results for some particular three-layer vortices (N = 3) are presented in section 5. In section 6 we turn attention to an elevated vortex core in solid body rotation embedded within a radially decaying outer vortex, a specification that includes the classical (but elevated) Rankine vortex. The presence of the radial length scale greatly complicates the analysis, and we abandon a pursuit of nonlinear solutions in favor of a linear analysis valid for small times. A summary and discussion follow in section 7.

2. Similarity hypothesis and its consequences

The governing equations for inviscid, axisymmetric vortex flows are the Euler equations, expressed in cylindrical polar coordinates (r, ϕ, z) as

∂u/∂t + u ∂u/∂r + w ∂u/∂z − υ²/r = −(1/ρ) ∂p′/∂r,
∂υ/∂t + u ∂υ/∂r + w ∂υ/∂z + uυ/r = 0,
∂w/∂t + u ∂w/∂r + w ∂w/∂z = −(1/ρ) ∂p′/∂z.

We also consider the flow to be incompressible,

(1/r) ∂(ru)/∂r + ∂w/∂z = 0.

Here u, υ, w are the radial, azimuthal (swirling), and vertical velocity components, respectively, ρ is the (constant) density, and p′ is the perturbation pressure (deviation of the total pressure from a hydrostatic reference state).
The initial azimuthal velocity profile consists of solid body rotation, υ(r, z, 0) = Ω(z, 0) r, with the angular velocity Ω(z, 0) varying in a prescribed manner along the axis of symmetry. The flow is unbounded in the radial direction and there is a singularity at radial infinity. A secondary circulation is anticipated to develop in the flow, with the azimuthal velocity profile remaining of solid body type, υ(r, z, t) = Ω(z, t) r, with an evolving angular velocity profile. Inspection of the governing equations suggests that if such a flow is mathematically feasible, the velocity field must satisfy the von Kármán–Bödewadt similarity principle:

u = r F(z, t),  υ = r Ω(z, t),  w = H(z, t).  (5)

According to this scaling, the vertical velocity field is independent of radius. Thus, all fluid comprising a horizontal surface is displaced vertically at the same rate, and initially horizontal material surfaces remain horizontal for all time. For the flows considered herein, the initially horizontal interface between vortices rotating with different angular velocities (or between a vortex overlying nonrotating air) remains horizontal. We note that the angular velocity Ω, the vertical vorticity ζ [≡ (1/r) ∂(rυ)/∂r − (1/r) ∂u/∂ϕ = 2Ω], and the horizontal divergence δ [≡ (1/r) ∂(ru)/∂r = 2F] are also independent of radius. These similarity relations (without the time dependence) were used to describe steady-state flows induced by an infinite rotating disk (von Kármán 1921), the flow of a rotating fluid over a stationary disk (Bödewadt 1940), and flows between infinite rotating coaxial disks (Batchelor 1951; Stewartson 1953). The time-dependent relations were applied to the development of the von Kármán–Bödewadt flows by Pearson (1965), Bodonyi and Stewartson (1977), and Bodonyi (1978). Recent investigations into von Kármán–Bödewadt-type flows indicate the nonexistence of solutions as well as the existence of multiple solutions for certain parameter values (Zandbergen and Dijkstra 1987, and references therein).
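The radial independence of ζ and δ under the similarity scaling can be illustrated with a short numerical check (our own sketch, not part of the original analysis; the amplitude values F and Ω below are arbitrary):

```python
# Check that u = r*F and v = r*Omega give r-independent vorticity and divergence.
# F and Omega are arbitrary illustrative similarity amplitudes at a fixed (z, t).

F, Omega = -0.3, 1.7

u = lambda r: r * F           # radial velocity
v = lambda r: r * Omega       # azimuthal (swirling) velocity

def zeta(r, h=1e-6):          # vertical vorticity, (1/r) d(r v)/dr, axisymmetric flow
    return ((r + h) * v(r + h) - (r - h) * v(r - h)) / (2 * h * r)

def delta(r, h=1e-6):         # horizontal divergence, (1/r) d(r u)/dr
    return ((r + h) * u(r + h) - (r - h) * u(r - h)) / (2 * h * r)

for r in (0.5, 1.0, 5.0, 50.0):
    assert abs(zeta(r) - 2 * Omega) < 1e-6   # zeta = 2*Omega at every radius
    assert abs(delta(r) - 2 * F) < 1e-6      # delta = 2*F at every radius
```

The central differences recover ζ = 2Ω and δ = 2F exactly (to rounding) because rυ and ru are quadratic in r.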
In view of (5) and incompressibility, ∂H/∂z = −2F, and the Stokes streamfunction ψ [defined by w = (1/r) ∂ψ/∂r, u = (−1/r) ∂ψ/∂z] is related to the similarity functions by ψ = r²H(z, t)/2. Since the angular velocity Ω is half the vertical vorticity, the azimuthal equation of motion,

∂Ω/∂t + H ∂Ω/∂z = −2FΩ,  (9)

can also be interpreted as the vertical vorticity equation. Equation (9) relates local changes in vertical vorticity to vertical advection and stretching of vertical vorticity. An equation for the azimuthal vorticity component, η [≡ ∂u/∂z − ∂w/∂r = r ∂F/∂z = (−r/2) ∂²H/∂z²], is obtained from the azimuthal component of the curl of the equations of motion. Taking ∂/∂r of the vertical equation of motion yields ∂²p′/∂r∂z = 0; that is, the radial pressure gradient force is independent of height, a feature characteristic of boundary layer flows. In the context of geophysical vortices, however, the radial independence of the axial pressure gradient is an inadequacy of our similarity model (one shared by the stagnation-type Burgers, Rott, Sullivan, and Bellamy-Knights vortices and the viscous von Kármán–Bödewadt vortex flows). In view of this independence, the vertical derivative of the radial equation of motion is free of the pressure. The azimuthal vorticity equation for axisymmetric, incompressible, inviscid flows is

d(η/r)/dt = (1/r⁴) ∂(rυ)²/∂z,  (12)

which under the similarity hypothesis reduces to d(η/r)/dt = ∂Ω²/∂z. For a flow satisfying the similarity hypothesis, two of the terms in (12) are of equal magnitude and opposite sign, and (12) splits into two equations, (13a) and (13b), that hold simultaneously. According to (13a), local changes in azimuthal vorticity are forced by vertical advection of azimuthal vorticity and by differential centrifugal forcing. Equation (13b) describes a balance between the radial advection of azimuthal vorticity and the stretching of azimuthal vorticity associated with radially displaced toroidal vortex lines. It can readily be shown that (13a) is equivalent to the vertical derivative of the radial equation of motion; integrating with respect to z gives back the radial equation of motion with a function of integration accounting for the radial pressure gradient force,

∂F/∂t + H ∂F/∂z + F² − Ω² = −C(t)/2.  (14)

We can therefore think of (14) as either the vertically integrated azimuthal vorticity equation or as the radial equation of motion. Alternatively, we note that under the similarity hypothesis, the horizontal divergence equation (Brandes et al.
1988) reduces, with ζ = 2Ω and δ = 2F, to

∂δ/∂t + H ∂δ/∂z + δ²/2 − ζ²/2 = −C(t),  (15)

so that (14) can also be interpreted as the divergence equation. To determine the pressure, integrate (14) with respect to r, obtaining

p′/ρ = P(z, t) + C(t) r²/4.  (16)

The function of integration P(z, t) is evaluated by applying (16) to the vertical equation of motion and integrating with respect to z. In this manner we find P(z, t) = P(0, t) − ∫₀ᶻ (∂H/∂t + H ∂H/∂z) dz′. We will take z = 0 to be an impermeable boundary, in which case ρP(0, t) is the stagnation pressure. Here C(t), the height-independent forcing term in (14), is proportional to the radial pressure gradient force (and is equal to ∇²_H p′/ρ). It should be noted that our two-dimensional (z, t) partial differential equations (9) and (14), and pressure formula (16), follow from the three-dimensional (r, z, t) Euler equations without approximation. The fact that radius does not appear in (9) or (14) confirms that exact solutions of the Euler equations in the similarity form (5) are at least mathematically feasible. In sections 3–5, we seek exact solutions of these equations for flows with piecewise constant vertical profiles of radial and angular velocity.

3. Governing equations for the N-layer vortex

We consider the special class of flows in which the azimuthal velocity is a piecewise constant function of height. In this case the azimuthal vorticity equation (12) can be expressed in Lagrangian form within each vortex layer as d(η/r)/dt = ∂Ω²/∂z = 0, showing that η/r (= ∂F/∂z, and hence ∂u/∂z) is conserved. If we consider the initial radial velocity field to be a piecewise constant function of height (so that ∂u/∂z = 0 initially in a vortex layer), then η/r is zero initially, and the conservation principle indicates that η/r (and hence ∂u/∂z) must be zero within each layer for all time (nonzero azimuthal vorticity is associated with infinite shear on the interfaces between the vortex layers). Thus, the radial velocity, angular velocity, and horizontal divergence are constant within each layer, and the vertical velocity varies linearly with height within each layer.
In general, we consider N fluid layers in solid body rotation with different thicknesses and rotation rates. We speculate that a vortex with a continuous profile of angular velocity should be well approximated by the discrete N-layer model for large N. With piecewise constant height dependencies for the azimuthal and radial velocity functions, and a piecewise linear height dependence for the vertical velocity function, the partial differential equations (9) and (14) [or equivalently (9) and (15)] reduce, without approximation, to a system of ordinary differential equations for the time-dependent amplitudes of the velocity functions. Low-order polynomials in the spatial coordinates (linear for velocity, quadratic for free-surface displacement) have been used previously to obtain exact solutions of the nonlinear shallow water equations corresponding to elliptic paraboloidal vortices and to free oscillations in rotating elliptic paraboloidal basins (Ball 1963; Miles and Ball 1963; Ball 1964 and 1965; Thacker 1981; Cushman-Roisin 1984 and 1987; Cushman-Roisin et al. 1985; Shapiro 1996). Numerical solutions of the shallow water equations with this polynomial model were employed by Tsonis et al. (1994) in a study of nonlinear time series analysis. The vortex layers are labeled in order of increasing height, from the lowest layer (n = 1) to the highest layer (n = N). The nth layer thickness, angular velocity, and horizontal divergence functions are denoted by T_n(t), Ω_n(t), and δ_n(t), respectively. The height of the (n − 1)th interface [the interface between the (n − 1)th and nth layers] is given by h_{n−1}(t) = Σ_{m=1}^{n−1} T_m(t). The nth vertical velocity function w_n(z, t) is related to the horizontal divergence by ∂w_n/∂z = −δ_n(t). Integrating this latter equation with respect to height within each layer yields w_n(z, t) = −δ_n(t) z + c_n(t), where c_n(t) is the nth layer function of integration. Imposing the impermeability condition on the lower boundary (z = 0) yields c_1 = 0.
The requirement that the vertical velocity be continuous across the layer interfaces then yields a recursion relation for the rest of the functions of integration, c_n(t) = c_{n−1}(t) + [δ_n(t) − δ_{n−1}(t)] h_{n−1}(t). Thus, the vertical velocity field can be expressed completely in terms of the layer thicknesses and divergences. With ∂w_n/∂z = −δ_n(t), the vertical vorticity (azimuthal velocity) equation (9) and the divergence equation (15) reduce, within each layer, to

dΩ_n/dt = −δ_n Ω_n,  (18)
dδ_n/dt + δ_n²/2 − 2Ω_n² = −C(t).  (19)

An equation for the evolution of the nth layer thickness T_n(t) is obtained by evaluating w_n at the bottom and top of the nth layer (z = h_{n−1}, h_n, respectively) and subtracting the expression at the bottom from the expression at the top,

dT_n/dt = −δ_n T_n.  (20)

We suppose the flow is bounded from below by an impermeable boundary at z = 0 and consider two possible upper boundary conditions: (i) the flow is bounded at z = h by an impermeable boundary or (ii) the flow is unbounded vertically, though with finite vertical velocity at vertical infinity [necessitating zero divergence in the top layer, δ_N(t) = 0]. In the latter case, the top layer is displaced vertically, as a solid body, with no change in thickness (dT_N/dt = 0), and with a vertical velocity equal to the vertical velocity at the top of the underlying (N − 1)th layer. According to the azimuthal equation of motion (18), the angular velocity in the top layer would then be unchanged [Ω_N(t) = Ω_N(0)], and the divergence equation (19) would yield

C(t) = 2Ω_N²(0).  (21)

In the case of the vertically bounded vortex, the total thickness of the vortex is constant, Σ T_n = h, and therefore Σ dT_n/dt = 0, or, in view of (20),

Σ_{n=1}^{N} δ_n T_n = 0.  (22)

Equations (18)–(20) compose 3N ordinary differential equations in 3N + 1 unknowns: T_n(t), Ω_n(t), δ_n(t), and C(t) (= ∇²_H p′/ρ). Closure is provided by boundary data in the form of (21) [or, equivalently, δ_N(t) = 0] for the vertically unbounded vortex, or (22) for the bounded vortex. A first integral of the motion is obtained by eliminating δ_n between (18) and (20), resulting in the nth-layer potential vorticity conservation equation, d(Ω_n/T_n)/dt = 0, or

Ω_n(t)/T_n(t) = Ω_n(0)/T_n(0).  (23)

Since T_n(t) > 0, the rotation rate Ω_n(t) is always of the same sense as the initial rotation rate Ω_n(0). Eliminating C(t) from Eq.
(19) as applied to the nth and (n − 1)th layers yields N − 1 equations of the form

d(δ_n − δ_{n−1})/dt + (δ_n² − δ_{n−1}²)/2 − 2(Ω_n² − Ω_{n−1}²) = 0.

Using (20) and (23) to eliminate δ_n, δ_{n−1}, Ω_n, and Ω_{n−1} in favor of the layer thicknesses, we get N − 1 coupled second-order equations in T_n. Closure is provided by the boundary data, as described above.

4. Two-layer vortices

a. Vortex overlying nonrotating fluid—Rigid lower boundary

First consider the special case of a solid body vortex of infinite radial and vertical extent overlying nonrotating fluid bounded from below by a rigid horizontal plate. The initial radial and vertical velocity components in both layers are taken to be zero. After the initial time, the vortex pressure gradient (which is impressed on the nonrotating fluid) induces a radial inflow in the nonrotating fluid. Associated with this converging low-level flow is a horizontally uniform vertical velocity field that increases in magnitude with height from the lower boundary up to the vortex/nonrotating fluid interface. Since there is no upper boundary, the vertical motion in the vortex should not be impeded, and the vortex can be displaced upward, as a solid body, with zero radial velocity and with a vertical velocity equal to the vertical velocity at the interface. According to the azimuthal equation of motion (18), the angular velocity in the vortex Ω_2 would be unchanged by the upward displacement, and the azimuthal velocity in the lower fluid would be zero since it was zero initially. Thus, our special case corresponds to N = 2, Ω_1 ≡ 0, and δ_2 ≡ 0.
From (21), C(t) is equal to 2Ω_2², a positive constant [Ω_2 ≡ Ω_2(0) is the angular velocity of the upper layer], and the divergence equation (19) for the nonrotating layer (n = 1) becomes

dδ_1/dt = −δ_1²/2 − 2Ω_2²,

which has the general solution δ_1(t) = −2Ω_2 tan(Ω_2 t + c). The initial condition δ_1(0) = 0 implies that c = 0, and the solution reduces to

δ_1(t) = −2Ω_2 tan(Ω_2 t).

The thickness of the nonrotating layer (interface height) is obtained from (20) as

T_1(t) = T_1(0) sec²(Ω_2 t).

Thus, the divergence (and the radial and vertical velocity components) and the interface height increase rapidly and become infinite at a finite time, t_s = π/(2Ω_2), equal to a quarter of the orbital period τ* (= 2π/Ω_2) of upper-layer parcels about the axis of symmetry. This intriguing result can be compared with the singular behavior of some unsteady viscous von Kármán–Bödewadt-type flows. Bodonyi (1978) and Bodonyi and Stewartson (1977) report a breakdown of the numerical solution (verified by an asymptotic analysis) of rotating flow in which a lower disk is abruptly forced to counterrotate. In these studies the velocity field and boundary layer depth on the lower disk grew explosively and became singular within half an orbital period. We speculate that the breakdown mechanism for the viscous counterrotating flow is similar to that in our inviscid elevated vortex flow: in the absence of an upper boundary, the maintained impressing of a vortex pressure gradient force on nonrotating fluid leads to explosive vertical accelerations. In the context of our inviscid elevated vortex, a nonrotating fluid layer is specified in the initial condition. In the viscous counterrotating flows, a level of nonrotating fluid (the exact location of which is influenced by diffusion) is always present.

b. Two-layer vortices—Rigid lower boundary

If the lower fluid has some rotation, no matter how small, the singular behavior deduced above disappears. Instead, an oscillatory secondary circulation is set up in the flow in which the angular velocity and thickness of the lower-layer vortex alternately increase and decrease.
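The finite-time blowup can be reproduced directly from the layer equations. The sketch below (our own check, not from the original paper; the value of Ω_2 is arbitrary) integrates dδ_1/dt = −δ_1²/2 − 2Ω_2² with a Runge–Kutta step and compares the result against the analytic solution δ_1 = −2Ω_2 tan(Ω_2 t) as the singular time t_s = π/(2Ω_2) is approached:

```python
import math

Om2 = 1.0                        # upper-layer angular velocity (illustrative value)
t_s = math.pi / (2 * Om2)        # singular time: a quarter of the orbital period

def rhs(d):
    # divergence equation (19) for the nonrotating layer: Omega_1 = 0, C = 2*Om2**2
    return -0.5 * d * d - 2 * Om2 * Om2

def rk4(t_end, n_steps=200000):
    """Integrate d(delta)/dt = rhs(delta) from delta(0) = 0 with classical RK4."""
    d, dt = 0.0, t_end / n_steps
    for _ in range(n_steps):
        k1 = rhs(d)
        k2 = rhs(d + 0.5 * dt * k1)
        k3 = rhs(d + 0.5 * dt * k2)
        k4 = rhs(d + dt * k3)
        d += (dt / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
    return d

t = 0.75 * t_s                   # approach the singularity but stop short of it
analytic = -2 * Om2 * math.tan(Om2 * t)
assert abs(rk4(t) - analytic) < 1e-3 * abs(analytic)
```

The numerical divergence tracks −2Ω_2 tan(Ω_2 t) closely and grows without bound as t → t_s, consistent with the exact solution.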
In the absence of an upper rigid lid, the elevated vortex oscillates vertically as a solid body (Ω_2 = const, δ_2 ≡ 0). For the lower-layer flow (n = 1), combining (19)–(21) with (23) results in a second-order nonlinear ordinary differential equation for the lower-layer thickness (interface height) T_1. Since this equation does not explicitly involve the independent variable t, its order may be reduced by changing the dependent variable to G ≡ dT_1/dt and regarding T_1 as the new independent variable. The result is a first-order linear equation for G², with solution (subject to the condition of no initial vertical motion)

G² = [4Ω_2² T_1²/T_1²(0)] [T_1 − T_1(0)] [T_1(0) − γT_1],

where γ ≡ Ω_1²(0)/Ω_2² is the ratio of the (squared) lower-layer to upper-layer angular velocities. After taking the square root,

dT_1/dt = ± [2Ω_2 T_1/T_1(0)] {[T_1 − T_1(0)] [T_1(0) − γT_1]}^{1/2}.

The sign is determined by the requirement that the solution be real, that is, that [T_1 − T_1(0)][T_1(0) − γT_1] be nonnegative. If upper-vortex rotation is initially greater in magnitude than lower-vortex rotation [Ω_2² > Ω_1²(0), i.e., γ < 1], the solution must initially proceed such that dT_1/dt > 0, that is, the positive branch must be chosen and the interface rises. Conversely, if upper-vortex rotation is initially smaller in magnitude than lower-vortex rotation (γ > 1), the solution initially proceeds on the negative branch and the interface falls. Therefore, the ± symbol may be replaced with sgn(1 − γ) on the first half-cycle, the sign reversing each time T_1 reaches a turning point. Separating variables, integrating, and applying the initial condition yields an implicit solution for T_1(t); the solutions for Ω_1 and δ_1 then follow from (23) and (20). In contrast to the singular nature of a vertically unbounded elevated vortex overlying nonrotating fluid, the two-layer vortex is well behaved for all time (see Fig. 1). Here the converging “in-up” flow induced in the lower layer by the upper-layer vortex spins up the weak vertical vorticity in the lower layer. Eventually the lower-layer vortex becomes stronger than the upper-layer vortex and the vertical pressure gradient force reverses. In response, the secondary circulation reverses and the lower-layer vortex spins down, eventually becoming weaker than the upper-layer vortex.
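This first integral can be checked numerically (a sketch of our own, with hypothetical parameter values): integrate the lower-layer equations dT_1/dt = −δ_1T_1 and dδ_1/dt = −δ_1²/2 + 2Ω_1² − 2Ω_2², with Ω_1 slaved to T_1 by potential vorticity conservation, and confirm that (dT_1/dt)² matches the closed form G²(T_1) along the trajectory and that the interface peaks at T_1(0)/γ:

```python
import math

Om2, Om1_0, T1_0 = 2.0, 1.0, 0.2       # illustrative values; gamma = 1/4
gamma = (Om1_0 / Om2) ** 2

def derivs(T1, d1):
    Om1 = Om1_0 * T1 / T1_0            # layer potential vorticity conservation (23)
    return -d1 * T1, -0.5 * d1 * d1 + 2 * Om1 * Om1 - 2 * Om2 * Om2

def G2(T1):                            # closed-form (dT1/dt)^2 from the first integral
    return 4 * Om2**2 * T1**2 / T1_0**2 * (T1 - T1_0) * (T1_0 - gamma * T1)

T1, d1, dt = T1_0, 0.0, 1e-5
T1_max = T1
for _ in range(int(2 * math.pi / Om2 / dt)):   # cover at least one oscillation
    k1 = derivs(T1, d1)
    k2 = derivs(T1 + 0.5*dt*k1[0], d1 + 0.5*dt*k1[1])
    k3 = derivs(T1 + 0.5*dt*k2[0], d1 + 0.5*dt*k2[1])
    k4 = derivs(T1 + dt*k3[0], d1 + dt*k3[1])
    T1 += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    d1 += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    T1_max = max(T1_max, T1)
    assert abs((d1 * T1)**2 - G2(T1)) < 1e-5   # (dT1/dt)^2 = G^2(T1) along the orbit

assert abs(T1_max - T1_0 / gamma) < 1e-3       # turning point at T1(0)/gamma = 0.8
```

With γ = 1/4 the interface rises from 0.2 to 0.8 and returns, as the turning-point analysis predicts.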
The process repeats itself and we obtain an oscillation of the interface height and lower-layer vortex strength. The period of these oscillations, π/|Ω_2|, is half the orbital period of parcels in the upper vortex. It can be inferred from the turning points of the solution that the interface height oscillates vertically between the levels T_1(0) and T_1(0)/γ. Thus, the amplitude of the interface oscillation increases sharply with decreasing initial rotation in the lower level. For the case of strong initial lower-level rotation, the interface height rapidly drops to a small value and maintains small values throughout much of the oscillation period.

c. Two-layer vortices—Rigid upper and lower boundaries

Now suppose that impermeable horizontal boundaries confine the flow on the bottom (z = 0) and at the top (z = h). Both layers of this two-layer vortex are initially in solid body rotation with different (nonzero) angular velocities and no initial secondary circulation. The special case of a vortex overlying nonrotating fluid is examined in section 4d. Combining (19), (20), (22), and (23) yields a second-order nonlinear ordinary differential equation for the interface height T_1. Again, changing the dependent variable to G ≡ dT_1/dt and regarding T_1 as the new independent variable yields a first-order linear equation for G²; subject to the initial condition G(0) = 0, we obtain a first integral of the motion. The qualitative behavior of T_1(t) can be deduced with analogy to one-dimensional particle motion in a potential field, that is, by regarding T_1 as a particle displacement and the first integral as an energy equation for a conservative system. We regard G²/2 as the kinetic energy and the right-hand side of the first integral as (the negative of) a nonlinear potential energy function. Since G² is nonnegative, the right-hand side must be nonnegative on any domain of physical interest. The behavior of the solution depends on the nature of this nonlinear potential energy, especially on the points for which the potential energy vanishes. These points are local extrema of T_1(t), and represent the turning points of the differential equation.
These extrema can be identified as T_1(0) and T_e, where

T_e = Ω_2²(0) T_1(0) h / [Ω_1²(0) T_2(0) + Ω_2²(0) T_1(0)].

It is straightforward to show that if |Ω_1(0)| < |Ω_2(0)|, then T_1(0) < T_e < h, whereas if |Ω_1(0)| > |Ω_2(0)|, then 0 < T_e < T_1(0). If the lower layer is not rotating [Ω_1(0) = 0], then T_e = h, whereas if the upper layer is not rotating [Ω_2(0) = 0], then T_e = 0. We consider the case where neither Ω_1(0) nor Ω_2(0) is zero. Taking the square root of the first integral gives

dT_1/dt = ± [2T_1T_2 / (T_1(0) T_2(0) h^{1/2})] {[T_1 − T_1(0)] [Ω_2²(0) T_1(0) T_2 − Ω_1²(0) T_2(0) T_1]}^{1/2},

where the choice of sign is determined by the requirement that the solution be real, that is, that the quantity under the square root be nonnegative. If upper-layer rotation is initially stronger than lower-layer rotation, the solution must proceed initially on the positive branch and the interface rises, whereas if upper-layer rotation is initially weaker than lower-layer rotation, the solution must proceed initially on the negative branch and the interface falls. Thereafter, in either case, the sign changes each time T_1 reaches a turning point. The solution is such that T_1 oscillates between T_1(0) and T_e. The solution first reaches the turning point T_e at t = T/2 and completes one period of oscillation at t = T, when T_1 returns to T_1(0). The solution is obtained by separating variables and integrating, where sgn is the unit sign function: sgn(x) = 1 for x > 0, = −1 for x < 0. A partial fractions decomposition puts the left-hand side in the form of tabulated integrals. The constant of integration is determined piecewise (constant for each half-period) by considering the initial condition and the continuity of T_1(t) at the turning points. Thus, we obtain the implicit solution for T_1(t) over a period of this oscillation as T_1 travels from T_1(0) to T_e (0 < t < T/2), and travels from T_e back to T_1(0) (T/2 < t < T). The oscillation period is

T = (π/2) [1/|Ω_1(0)| + 1/|Ω_2(0)|].

Thus T is equal to π times the mean reciprocal magnitude of the initial angular velocities, or half the mean initial orbital period. Solutions for Ω_1 and Ω_2 in terms of the interface height follow immediately from (23).
Solutions for δ_1 and δ_2 are obtained from (20) and (22), the signs being inferred piecewise (on half-period intervals) from the above considerations. Two examples are presented in Fig. 2. In both cases the initial interface height T_1(0) is set at 0.2h. In Fig. 2a the lower-layer rotation is initially weaker than the upper-layer rotation and the interface quickly rises (thus strengthening the lower-layer angular velocity and sowing the seeds for the eventual reversal of the secondary circulation). The interface rapidly displaces much of the upper-layer fluid and maintains a high altitude throughout much of the period. In the case of strong low-layer rotation (Fig. 2b), the interface initially falls. Despite the gentleness of the interface descent, a jet of strong radial velocity appears in the lower layer, a consequence of mass conservation and the relative shallowness of the lower layer.

d. Vortex overlying nonrotating fluid—Rigid upper and lower boundaries

If there is no rotation in the lower layer initially, Ω_1(0) = 0, and therefore Ω_1(t) = 0 for all time and the turning point is T_e = h. In this case, we need only consider the positive branch of the root equation. Separating variables and integrating, we obtain an implicit solution for T_1(t). The solution for Ω_2 follows from (42), and the solutions for δ_1 and δ_2 are given by the negative and positive branches, respectively, with T_1(t) determined implicitly. The solution, depicted in Fig. 3 for T_1(0) = 0.2h, shows a period of rapid initial interface ascent followed by a long period of gentle ascent. The slowness of the radial velocity decay in the top layer is again a consequence of mass conservation and the thinness of that layer. Since there is no vertical vorticity in the lower layer, the stretching mechanism does not operate and there is no mechanism to reverse the sense of the secondary circulation.
e. Invariance—Three-layer and multiple-layer planar-symmetric vortices

Equations (8)–(10) are invariant to the transformation z → −z, H → −H. Therefore, from symmetry considerations, the two-layer solutions described above can be reflected about the lower impermeable boundary, z = 0, to produce analytic three-layer solutions. In the case of the semi-infinite two-layer vortices confined by a lower boundary, a reflection of the solution results in an unconfined three-layer vortex solution defined piecewise on the vertical intervals z ∈ (−∞, −T_1), z ∈ (−T_1, T_1), and z ∈ (T_1, ∞). In the case of two-layer vortices confined between horizontal boundaries at z = 0 and z = h, a reflection of the solution results in a confined three-layer vortex solution defined piecewise on the vertical intervals z ∈ (−h, −T_1), z ∈ (−T_1, T_1), and z ∈ (T_1, h). Further reflections of these confined vortices about the new boundaries are possible and lead to new planar-symmetric multiple-layer vortex solutions. Apart from their own intrinsic interest, these analytic solutions can also be used to validate the numerical algorithms for the more general (asymmetric) multiple-layer vortices described in the next section.

5. Numerical solutions for three-layer vortices

The behavior of three-layer vortices follows from the solution of the layer equations with N = 3. It is convenient to nondimensionalize variables (with the layer thicknesses scaled by the total depth, so that T̃_3 = 1 − T̃_1 − T̃_2) and rewrite the governing equations as a system of four coupled first-order equations. By definition, the reciprocal of the timescale parameter is proportional to the sum of the reciprocal potential vorticities in the three layers, while α, β, γ are proportional to the squared potential vorticities in the lower, middle, and upper layers, respectively. These definitions imply a constraint among α, β, γ, so that only two of α, β, γ are independent. However, rather than specifying α, β, γ directly (and then deriving the initial angular velocity functions), we found it convenient to calculate α, β, γ from specified values of the initial thicknesses and Ω_1(0), Ω_2(0), and Ω_3(0).
It can be noted that if Ω_1(0), Ω_2(0), and Ω_3(0) undergo proportional changes [so that the ratios Ω_2(0)/Ω_1(0) and Ω_3(0)/Ω_1(0) are preserved], the timescale changes while α, β, γ remain the same. Thus, the shape of the solution curve is affected by the relative magnitudes of the rotation rates while the timescale is affected by the mean rotation rate. Equations (60)–(63) were integrated numerically for a variety of initial conditions with the fourth-order Runge–Kutta formula (Press et al. 1992). In each case the integrations were performed from τ = 0 to τ = 15 with a nondimensional time step size Δτ = 0.01. A variety of interesting waveforms were obtained by varying the parameter settings, but in all cases the solutions were periodic. We speculate that the three-layer vortex is nonchaotic but that chaos might be possible in vortices with more layers. Results from selected three-layer calculations are depicted in Figs. 4–7. In these cases the three layers are initially of equal thickness: T̃_1(0) = T̃_2(0) = 1 − T̃_1(0) − T̃_2(0) = ⅓. The parameter settings are given in Table 1. In vortex A1:2:4 (Fig. 4) the initial angular velocity functions are specified to increase upward in a ratio of 1:2:4. There is no initial vertical motion [P(0) = Q(0) = 0]. As in the two-layer case, the initially weakest vortex layer (low layer in this case) rapidly thickens during the first half of the oscillation period and increases its angular velocity, while the initially strongest vortex layer (upper layer) rapidly thins and weakens its angular velocity. The layer of middle-strength rotation (middle layer) thickens slightly at first but then thins. The motion of the interfaces during the first half of the oscillation period is associated with an in-up-out secondary circulation in which the lowest layer participates in the inflow and the upper layer participates in the outflow. The middle layer first participates in the inflow and then participates in the outflow.
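The qualitative behavior described for vortex A1:2:4 can be reproduced with a short integration (a sketch of our own, not the paper's solver; the dimensional scale of the angular velocities is arbitrary). The bounded three-layer system is integrated as dT_n/dt = −δ_nT_n, dΩ_n/dt = −δ_nΩ_n, dδ_n/dt = −δ_n²/2 + 2Ω_n² − C, with C diagnosed from the rigid-lid constraint; the potential vorticity invariant Ω_n/T_n, depth conservation, and the early-time thickening of the weakest (lowest) layer serve as checks:

```python
N, h = 3, 1.0
T = [1/3, 1/3, 1/3]              # equal initial thicknesses, as in vortex A1:2:4
Om = [1.0, 2.0, 4.0]             # angular velocities in ratio 1:2:4
d = [0.0] * N                    # no initial secondary circulation
pv0 = [Om[n] / T[n] for n in range(N)]

def derivs(state):
    T, Om, d = state[:N], state[N:2*N], state[2*N:]
    # C(t) chosen so that the rigid-lid constraint sum(delta_n*T_n) = 0 is preserved
    C = sum(T[n] * (2*Om[n]**2 - 1.5*d[n]**2) for n in range(N)) / h
    return ([-d[n] * T[n] for n in range(N)] +
            [-d[n] * Om[n] for n in range(N)] +
            [-0.5*d[n]**2 + 2*Om[n]**2 - C for n in range(N)])

state, dt = T + Om + d, 1e-4
for step in range(int(2.0 / dt)):
    k1 = derivs(state)           # classical RK4 step
    k2 = derivs([s + 0.5*dt*k for s, k in zip(state, k1)])
    k3 = derivs([s + 0.5*dt*k for s, k in zip(state, k2)])
    k4 = derivs([s + dt*k for s, k in zip(state, k3)])
    state = [s + dt/6*(a + 2*b + 2*c + e)
             for s, a, b, c, e in zip(state, k1, k2, k3, k4)]
    for n in range(N):           # layer potential vorticity is an exact invariant (23)
        assert abs(state[N+n] / state[n] - pv0[n]) < 1e-6 * pv0[n]
    assert abs(sum(state[:N]) - h) < 1e-6        # total depth is conserved
    if step == int(0.05 / dt):   # early times: weakest layer thickens, strongest thins
        assert state[0] > 1/3 and state[2] < 1/3
```

Initially C = 14 while 2Ω_1² = 2, so δ_1 is driven negative (inflow below) and the lowest layer thickens, just as in the two-layer case.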
The sense of the circulation reverses for all layers halfway through the oscillation period. Vortex B1:2:10 (Fig. 5) is similar to vortex A1:2:4 except the initial angular velocities now increase upward in a ratio of 1:2:10. The increased initial discrepancy between the rotation rates in the lowest two layers and the upper layer amplifies the subsequent secondary circulation in all layers. Next consider vortex C1:2:4 (Fig. 6), which has an initially descending upper interface with a vertical velocity of Q(0) = −0.5 (all other initial conditions being the same as in A1:2:4). Introduction of the nonzero interface velocity creates an asymmetry in the waveforms of all the variables. Compared to A1:2:4, we see that the initial descent of the upper-layer interface in C1:2:4 results in a short-term increase in the upper-layer thickness and a decrease in the middle-layer thickness. However, the lower layer, responding to the vortex pressure gradient, thickens as in A1:2:4. Vortex D1:2:10, the counterpart of B1:2:10 with nonzero initial vertical velocity Q(0) = −0.5, is depicted in Fig. 7.

6. Initial behavior of an elevated vortex with radial power-law decay for r > R

The similarity vortices in sections 2–5 are noteworthy in that they provide rare exact descriptions of nonlinear interactions between vortex circulations and vortex-induced secondary circulations. However, the restrictive nature of the similarity scalings for the velocity and pressure fields suggests that the relevance of the similarity solutions to geophysical flows may be limited to the interior portions of broad vortices. Although the general validity of the similarity approach to solid-body vortices of finite area lies beyond the scope of this investigation, a specific comparison at a small time is provided later in this section.
Mid- and lower-level mesocyclones and other mesoscale geophysical vortices typically consist of an isolated region of large vorticity embedded within a more-or-less nonrotating larger-scale environment. It is therefore of interest to study a solid body-type vortex of finite area embedded within a radially decaying vortex. As before, we consider our vortex to be elevated; that is, both inner and outer parts of the vortex overlie a layer of nonrotating fluid. There is no initial secondary circulation in either the vortex or the lower fluid. The length scale associated with a finite vortex core greatly complicates the analysis and we abandon our search for exact solutions. Instead, we present a linear analysis of the flow appropriate for small times. The long-term behavior will be investigated in future numerical simulations. Consider a vortex with an initial azimuthal velocity distribution [υ(r, z, 0)] consisting of a solid-body core matched to a power-law outer region, υ = Ωr for r ≤ R and υ = ΩR(R/r)^n for r > R, in the rotating fluid above z = T[1], with υ = 0 below. This specification includes the previous example of unbounded solid-body rotation (n = −1) and the classical Rankine vortex (n = 1). For n > 1, the vertical vorticity in the outer region is negative. In the following, we restrict attention to n > 0 to ensure that the azimuthal velocity decays with radius. In the absence of an initial secondary circulation, the azimuthal equation of motion shows that the initial azimuthal velocity tendency is zero, while the azimuthal vorticity equation (12) yields an equation for the initial streamfunction tendency ψ^1(r, z). The right-hand side of this equation vanishes everywhere except along the interfacial singularity at z = T[1]. Integrating an infinitesimal distance across this discontinuity, we obtain a jump condition for ∂ψ^1/∂z (indicating a jump in the radial velocity tendency and thus a jump in the radial velocity itself), where (∂ψ^1/∂z)^+ ≡ lim ∂ψ^1/∂z(r, T[1] + |ε|) and (∂ψ^1/∂z)^− ≡ lim ∂ψ^1/∂z(r, T[1] − |ε|) as ε → 0. We take ψ^1 itself as being continuous across the interface so that the normal velocity component is continuous. Thus, we seek solutions of the streamfunction-tendency equation satisfying the jump condition.
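A piecewise profile consistent with this specification (solid-body core for r ≤ R, power-law decay for r > R) can be written down directly. The exact published formula is not reproduced in this excerpt, so the form below is an assumption matched to the stated limiting cases n = −1 and n = 1.

```python
import numpy as np

def azimuthal_velocity(r, R=1.0, Omega=1.0, n=1.0):
    """Initial azimuthal velocity in the rotating region (assumed form):

        v = Omega * r              for r <= R   (solid-body core)
        v = Omega * R * (R/r)**n   for r >  R   (power-law outer region)

    n = 1 recovers the classical Rankine vortex v = Omega*R**2/r, and
    n = -1 extends solid-body rotation to all radii."""
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, Omega * r, Omega * R * (R / r) ** n)
```

At r = R both branches give ΩR, so the profile is continuous; for n > 1 the vertical vorticity (1/r)∂(rυ)/∂r in the outer region is negative, as stated in the text.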
We impose the impermeability condition on the top and bottom boundaries, assume there is no source or sink of mass along the axis of symmetry, and let the mass flux vanish far from the axis. Solving the resulting boundary value problem for ψ^1 (see the appendix), we obtain (73). This solution was evaluated for a range of aspect ratios R/h, decay exponents n, and interface heights T[1]. The modified Bessel functions I[1] and K[1] were evaluated with the IMSL MATH/LIBRARY special functions FORTRAN subroutines BSI1 and BSK1, respectively, except for large arguments (>10), where asymptotic formulas were used (Abramowitz and Stegun 1964). The integrals were evaluated with the trapezoidal rule. Although analytic forms for the initial meridional velocity tendencies u^1 [≡(1/r)∂ψ^1/∂z] and w^1 [≡(−1/r)∂ψ^1/∂r] are available, it is convenient to obtain these components from ψ^1 via finite-difference discretizations. Results are presented for vortices overlying nonrotating fluid of depth T[1] = 0.2h. We consider a columnar vortex (R/h = 0.5) with weak (n = 1) and strong (n = 4) outer decay (Figs. 8 and 9, respectively), and a broad vortex (R/h = 2) with weak (n = 1) and strong (n = 4) outer decay (Figs. 10 and 11, respectively). In all cases an updraft extends across the vortex core as well as in the nonrotating flow beneath the core. The peak updraft speed at a fixed radius occurs at the horizontal interface between the rotating and nonrotating fluid. The updraft strength and pattern (i.e., flatness of vertical velocity isolines) are remarkably similar for vortices of the same aspect ratio, with greater flatness for the broader vortices. The near constancy of w with respect to radius in the core region and the relative insensitivity of the updraft to the decay exponent suggest that the similarity solutions presented in the previous sections may be applicable within the core region of the finite radius vortices, at least for a short time. This can be seen for the examples considered herein by comparing Figs.
8–11 with Fig. 12. This latter figure depicts the azimuthal velocity and the time tendency of the meridional velocity fields for a radially unbounded (similarity) vortex overlying nonrotating fluid (initial depth of 0.2h). Direct comparison with the linear finite radius solutions is facilitated by evaluating the similarity solution (55) at a small nondimensional time Ωt = 0.24748 (when the interface has risen slightly to 0.21h), expressing the meridional velocity fields as tendencies (u/t and w/t, at small time t) and contouring the scaled fields to match Figs. 8–11. As can be seen, the similarity solution is in good quantitative agreement with the broad linear vortices (Figs. 10 and 11) for radii extending to ∼¾ of the core radius. In contrast, the similarity solution exhibits only good qualitative agreement with the two columnar vortices, and then only out to ∼½ the core radius. For radii near and beyond the core radius, the similarity solution departs significantly from both the broad and columnar vortices. Since the similarity solution is independent of a radial length scale, it is not surprising that it is in better agreement with the broader vortices than the columnar vortices. Indeed, it can be shown that for an infinite aspect ratio R/h, the vertical velocity tendency associated with the linear solution (73) reduces to the vertical velocity tendency associated with the nonlinear similarity solution (55) in the limit of vanishing time. Perhaps the most intriguing feature in Figs. 8–11 is the annular downdraft ringing the updraft just beyond the radius of maximum tangential wind. This downdraft is very sensitive to the aspect ratio and decay exponent, being stronger and narrower for larger R/h and larger n. Indeed, for the broad vortex with strong radial decay depicted in Fig. 11, the peak downdraft speed actually exceeds the peak updraft speed. 
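The numerical evaluation described in this section, modified Bessel functions I[1] and K[1] plus trapezoidal-rule quadrature, can be reproduced without the IMSL library. In modern code one would simply call scipy.special.i1 and k1; the self-contained sketch below instead evaluates the standard integral representations with the same trapezoidal rule the paper uses (grid sizes here are arbitrary choices, not from the paper).

```python
import numpy as np

def _trapz(y, x):
    """Composite trapezoidal rule on an arbitrary grid."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def bessel_i1(x, m=2000):
    """I1(x) from the integral representation
    I1(x) = (1/pi) * int_0^pi exp(x cos t) cos t dt."""
    t = np.linspace(0.0, np.pi, m)
    return _trapz(np.exp(x * np.cos(t)) * np.cos(t), t) / np.pi

def bessel_k1(x, tmax=30.0, m=4000):
    """K1(x) from the integral representation
    K1(x) = int_0^inf exp(-x cosh t) cosh t dt, truncated at tmax
    (the tail is negligible for x > 0)."""
    t = np.linspace(0.0, tmax, m)
    return _trapz(np.exp(-x * np.cosh(t)) * np.cosh(t), t)
```

For large arguments (x > 10) the standard asymptotic forms I1(x) ≈ e^x/√(2πx) and K1(x) ≈ √(π/(2x)) e^(−x) can replace the quadrature, mirroring the switch-over described in the text.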
Although our linear analysis should not be used to quantify the angular momentum transport in this downdraft, qualitatively we see that the downdraft is at least “poised” to transport vortex angular momentum downward and radially inward. A complete picture of angular momentum transport in the downdraft and its subsequent feedback on the secondary circulation must await a nonlinear numerical simulation. Of course, whether these symmetric vortex-induced downdrafts are physically realizable depends, in part, on the stability of the solutions. Of particular interest is the stability of these vortices with anticyclonic vorticity in the outer region, that is, when Rayleigh’s stability criterion is violated. Determining the stability bounds of our unsteady vortices, while important, is a formidable task and must be deferred to a future study.

7. Summary and discussion

This investigation is concerned with the inviscid dynamics of vortices with axially varying rotation rates, including the case of a vortex overlying nonrotating fluid. We consider radially unbounded vortices in solid body rotation and elevated Rankine-type vortices. For the former class of vortices, the von Kármán–Bödewadt similarity principle is applicable and leads to exact unsteady solutions of the nonlinear Euler equations. These similarity solutions are noteworthy in that the meridional circulation is not prescribed but is generated by the vortex circulation. The solutions describe decaying, amplifying, and oscillatory behavior for both the primary vortex and the vortex-induced secondary circulation. The behavior of the oscillatory similarity solutions is typified by the case of a strong vortex overlying a weaker vortex. In this case the radial pressure gradient force induces a converging low-level flow and an associated updraft that spins up the initially weak lower-layer vorticity to values exceeding that in the upper-layer vortex.
The reversed axial distribution of vorticity (and the associated reversal of the axial pressure gradient force) causes a reversal of the secondary circulation with a subsequent spindown of the low-level vorticity. For our inviscid, unforced hydrodynamical model, the oscillation proceeds ad infinitum. The absence of a radial length scale and the unbounded nature of the similarity scalings (u and υ increase with radius, while w and ∂p/∂z are independent of radius) suggest that the similarity solutions may be most relevant to the dynamics of the interior portions of broad mesoscale vortices. The dynamics embodied by the similarity solutions might also be important as a modulating factor for columnar vortices embedded within a broader parent vortex. A comparison (at a small time) between a similarity solution and its finite radius counterpart in section 6 indicated that the similarity solution was in better agreement with the broader vortices than the columnar vortices. However, long time numerical simulations of a variety of finite radius vortices will be required to establish the areal and temporal bounds of validity of the similarity solutions. The oscillatory behavior of the similarity solutions appears to be similar to the vortex valve effect sometimes used to explain the cyclic appearance, demise, and reappearance of supercell characteristics in tornadic storms and the apparent paradox of tornado formation in association with storm top collapse (Lemon et al. 1975; Davies-Jones 1986). The vortex valve effect can be demonstrated in a vortex chamber by feeding low-level rotating air into an updraft that vents through a hole at the top of the chamber. As the rotating air approaches the axis of symmetry at low levels, the azimuthal velocity increases in accordance with angular momentum conservation. 
The increased low-level azimuthal velocity is associated with a decrease in low-level pressure, a reduction of the upward axial pressure gradient force, and a “choking” of the updraft. Most columnar geophysical vortices with axially varying rotation rates will not have the radial pressure gradient independent of the axial direction. However, we believe the essential dynamical mechanism for real vortex–updraft oscillations is provided in its most basic form by our similarity model. We also examine the short-term behavior of elevated vortices with cores in solid body rotation embedded within radially decaying angular momentum profiles, a class that includes the elevated Rankine vortex. A linear analysis of this case shows that an annular downdraft should form on the periphery of the vortex core. The peak vortex-induced downdraft speed is greatest for broad vortex cores and for large outer vortex radial-decay rates, and can exceed the peak vortex-induced updraft speed. Although our linear analysis should not be used to quantify angular momentum transport, based on the form of the downdraft we hypothesize that the vortex angular momentum can be advected downward and radially inward. The details of this transport and its subsequent feedback on the meridional circulation are of particular interest. For what set of parameters does the angular momentum remain suspended or reach the ground? For what set of parameters does the radial convergence cause the angular momentum to spin up at midlevels and build downward via the dynamic pipe effect (Smith and Leslie 1978; Trapp and Davies-Jones 1997) or first descend to the ground, spin up in the converging low-level flow and then build upward? Is an oscillation set up, as in the similarity solutions? Clearly a longer-term nonlinear simulation is required to answer these questions. 
It should be borne in mind that our hydrodynamical model of elevated rotation of finite radius with no initial meridional circulation is highly specialized. This model was chosen because it provided one of the simplest possible “thought experiments” for studying the behavior of elevated vortices. Evidence for a dynamically induced downdraft in real mesocyclones or other geophysical vortices will require careful analysis of high resolution four-dimensional data from observed or numerically modeled phenomena. We note that in Fig. 9d of Ray et al. (1981) an “unexplained” elongated downdraft appears in the multiple-Doppler analysis of the Del City storm at z = 2 km “... in inflow air characterized by weak reflectivities....” This narrow north–south-oriented downdraft straddles a line extending from approximately x = 5, y = 8 to x = 5, y = 15. It appears along the edge of the mesocyclone at the approximate location of the maximum wind, and may be a manifestation of the vortex-induced downdraft discussed herein [Figs. 9e and 9f of Ray et al. (1981) also show a downdraft ringing much of the mesocyclone at the 2-km level, though much of this is likely associated with precipitation loading]. Brooks et al. (1993) found a downdraft in a similar position, but attributed it to the presence of an inflow low due to high wind speeds at low levels (Bernoulli relationship). We also note the fortuitous measurements of a dust devil that struck an instrumented tower while data acquisition was in progress (Kaimal and Businger 1970). Measurements of the horizontal and vertical velocity components were taken at heights of 5.66 and 22.6 m. The trace of the lower-level data revealed a narrow downdraft on either side of the dust devil updraft in the zone of radially decreasing tangential velocity (Fig. 2 of Kaimal and Businger 1970). Downdrafts, though not necessarily annular downdrafts, have been implicated in mesocyclonic tornadogenesis. 
Davies-Jones (1982a,b) argued that tilting and stretching of environmental horizontal vorticity by an updraft alone would fail to produce appreciable rotation at low levels (i.e., only a midlevel mesocyclone would be produced), since vertical vorticity is generated only as parcels move upward, away from the ground, in an updraft. This was verified in the numerical simulations of Rotunno and Klemp (1985). Davies-Jones hypothesized that a downdraft was necessary for the genesis of near-ground rotation. Barnes (1978) and Lemon and Doswell (1979) hypothesized that the transition to tornadic phase in a supercell is initiated by the rear-flank downdraft (RFD). They suggested that the RFD formed at midlevels and intensified the low-level rotation by creating strong shear (Barnes) or thermal gradients (Lemon and Doswell) between the updraft and the RFD. Browning and Donaldson (1963) may have provided the first documentation of the RFD in an early supercell study. Three-dimensional numerical simulations (Klemp and Rotunno 1983) have, however, implied the opposite cause and effect relationship; the low-level rotation intensifies, followed by formation of an “occlusion downdraft,” a smaller-scale downdraft within the RFD. In this scenario, Klemp and Rotunno hypothesized that the RFD is dynamically driven by a local, low-level pressure drop due to the intensifying rotation, which generates a downward-directed pressure gradient. Brandes (1984a,b) also presented this hypothesis based on Doppler radar analysis. Our proposed mechanism of downdraft formation and downward transport of angular momentum relies on the presence of strong elevated rotation and thus differs from Klemp and Rotunno’s (1983) occlusion downdraft, which is driven by strong low-level rotation. The reader is referred to Klemp (1987) and Davies-Jones and Brooks (1993) for more detailed surveys of past modeling and theoretical studies.
We hypothesize that the hydrodynamic vortex-induced process described herein may play a role in lower-level mesocyclogenesis or tornadogenesis, either through the formation of an annular (or semiannular) downdraft or by facilitating the development of an RFD. This basic process may be important in vortex-dominated flows in other geophysical and engineering contexts as well. The longer-term behavior of our idealized vortex including the downward transport of the vortex circulation by the annular downdraft will be examined in future numerical simulations. Detailed and insightful comments by the anonymous referees led to a substantially improved manuscript. Discussions with Doug Lilly and Kathy Kanak are gratefully acknowledged. Tom Condo, Tim Kwiatkowski, and Courtney Garrison provided computer assistance. This research was supported by the Center for Analysis and Prediction of Storms (CAPS) under Grant ATM91-20009 from the National Science Foundation. One of us (P.M.) was supported by an AMS fellowship sponsored by GTE Federal Systems Division, Chantilly, Virginia. • Abramowitz, M., and I. A. Stegun, 1964: Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. National Bureau of Standards, 1046 pp. • Aref, H., N. Rott, and H. Thomann, 1992: Grobli’s solution of the three-vortex problem. Annu. Rev. Fluid Mech.,24, 1–20. • Ball, F. K., 1963: Some general theorems concerning the finite motion of a shallow liquid lying on a paraboloid. J. Fluid Mech.,17, 240–256. • ——, 1964: An exact theory of simple finite shallow water oscillations on a rotating earth. Hydraulics and Fluid Mechanics, R. Silvester, Ed., Macmillan, 293–305. • ——, 1965: The effect of rotation on the simpler modes of motion of a liquid in an elliptic paraboloid. J. Fluid Mech.,22, 529–545. • Barnes, S. L., 1978: Oklahoma thunderstorms on 29–30 April 1970. Part I: Morphology of a tornadic storm. Mon. Wea. Rev.,106, 673–684. • Batchelor, G. 
K., 1951: Note on a class of solutions of the Navier–Stokes equations representing steady rotationally-symmetric flow. Quart. J. Mech. Appl. Math.,4, 29–41. • ——, 1967: An Introduction to Fluid Dynamics. Cambridge University Press, 615 pp. • Bellamy-Knights, P. G., 1970: An unsteady two-cell vortex solution of the Navier–Stokes equations. J. Fluid Mech.,41, 673–687. • ——, 1971: Unsteady multicellular viscous vortices. J. Fluid Mech.,50, 1–16. • ——, and R. Saci, 1983: Unsteady convective atmospheric vortices. Bound.-Layer Meteor.,27, 371–386. • Bödewadt, U. T., 1940: Die Drehströmung über festem Grunde. Z. angew. Math. Mech.,20, 241–253. • Bodonyi, R. J., 1978: On the unsteady similarity equations for the flow above a rotating disc in a rotating fluid. Quart. J. Mech. Appl. Math.,31, 461–472. • ——, and K. Stewartson, 1977: The unsteady laminar boundary layer on a rotating disk in a counter-rotating fluid. J. Fluid Mech.,79, 669–688. • Bragg, S. L., and W. R. Hawthorne, 1950: Some exact solutions of the flow through annular cascade actuator discs. J. Aeronaut. Sci.,17, 243–249. • Brandes, E. A., 1984a: Relationships between radar-derived thermodynamic variables and tornadogenesis. Mon. Wea. Rev.,112, 1033–1052. • ——, 1984b: Vertical vorticity generation and mesocyclone sustenance in tornadic thunderstorms: The observational evidence. Mon. Wea. Rev.,112, 2253–2269. • ——, R. P. Davies-Jones, and B. C. Johnson, 1988: Streamwise vorticity effects on supercell morphology and persistence. J. Atmos. Sci.,45, 947–963. • Brooks, H. E., C. A. Doswell III, and R. P. Davies-Jones, 1993: Environmental helicity and the maintenance and evolution of low-level mesocyclones. The Tornado: Its Structure, Dynamics, Prediction, and Hazards, Geophys. Monogr., No. 79, Amer. Geophys. Union, 97–104. • Browning, K. A., and R. J. Donaldson Jr., 1963: Airflow and structure of a tornadic storm. J. Atmos. Sci.,20, 533–545. • Burgers, J. 
M., 1948: A mathematical model illustrating the theory of turbulence. Adv. Appl. Mech.,1, 171–199. • Cushman-Roisin, B., 1984: An exact analytical solution for a time-dependent, elliptical warm-core ring with outcropping interface. Ocean Modelling, 59, 5–6. • ——, 1987: Exact analytical solutions for elliptical vortices of the shallow-water equations. Tellus,39A, 235–244. • ——, W. H. Heil, and D. Nof, 1985: Oscillations and rotations of elliptical warm-core rings. J. Geophys. Res.,90, 11756–11764. • Davies-Jones, R. P., 1982a: Observational and theoretical aspects of tornadogenesis. Topics in Atmospheric and Oceanic Sciences: Intense Atmospheric Vortices, L. Bengtsson and J. Lighthill, Eds., Springer-Verlag, 175–189. • ——, 1982b: A new look at the vorticity equation with application to tornadogenesis. Preprints, 12th Conf. on Severe Local Storms, Boston, MA, Amer. Meteor. Soc., 249–252. • ——, 1985: Dynamical interaction between an isolated convective cell and a veering environmental wind. Preprints, 14th Conf. on Severe Local Storms, Indianapolis, IN, Amer. Meteor. Soc., 216–219. • ——, 1986: Tornado dynamics. Thunderstorm Morphology and Dynamics, 2d ed., University of Oklahoma Press, 197–236. • ——, and H. Brooks, 1993: Mesocyclogenesis from a theoretical perspective. The Tornado: Its Structure, Dynamics, Prediction, and Hazards, Geophys. Monogr., No. 79, Amer. Geophys. Union, 105–114. • Greenspan, H. P., 1968: The Theory of Rotating Fluids. Cambridge University Press, 327 pp. • Gutman, L. N., 1957: Theoretical model of a waterspout. Bulletin of the Academy of Science USSR. Geophysics Series, Vol. 1, Pergamon Press translation, 87–103. • Hatton, L., 1975: Stagnation point flow in a vortex core. Tellus,27, 269–280. • Hill, M. J. M., 1894: On a spherical vortex. Philos. Trans. Roy. Soc. London, Ser. A,185, 213–245. • Kaimal, J. C., and J. A. Businger, 1970: Case studies of a convective plume and a dust devil. J. Appl. Meteor.,9, 612–620. • Klemp, J.
B., 1987: Dynamics of tornadic thunderstorms. Ann. Rev. Fluid Mech.,19, 369–402. • ——, and R. Rotunno, 1983: A study of the tornadic region within a supercell thunderstorm. J. Atmos. Sci.,40, 359–377. • Kuo, H. L., 1966: On the dynamics of convective atmospheric vortices. J. Atmos. Sci.,23, 25–42. • ——, 1967: Note on the similarity solutions of the vortex equations in an unstably stratified atmosphere. J. Atmos. Sci.,24, 95–97. • Lamb, H., 1945: Hydrodynamics. Dover Publications, 738 pp. • Lemon, L. R., and C. A. Doswell III, 1979: Severe thunderstorm evolution and mesocyclone structure as related to tornadogenesis. Mon. Wea. Rev.,107, 1184–1197. • ——, D. W. Burgess, and R. A. Brown, 1975: Tornado production and storm sustenance. Preprints, Ninth Conf. on Severe Local Storms, Norman, OK, Amer. Meteor. Soc., 100–104. • Lewellen, W. S., 1993: Tornado vortex theory. The Tornado: Its Structure, Dynamics, Prediction and Hazards, Geophys. Monogr., No. 79, Amer. Geophys. Union, 19–39. • Lilly, D. K., 1983: Dynamics of rotating thunderstorms. Mesoscale Meteorology—Theories, Observations, and Models, D. K. Lilly and T. Gal-Chen, Eds., D. Reidel, 531–544. • ——, 1986: The structure, energetics and propagation of rotating convective storms. Part II: Helicity and storm stabilization. J. Atmos. Sci.,43, 126–140. • Long, R. R., 1956: Sources and sinks at the axis of a rotating liquid. Quart. J. Mech. Appl. Math.,9, 385–393. • ——, 1958: Vortex motion in a viscous fluid. J. Meteor.,15, 108–112. • ——, 1961: A vortex in an infinite viscous fluid. J. Fluid Mech.,11, 611–624. • Lugt, H. J., 1983: Vortex Flow in Nature and Technology. John Wiley and Sons, 297 pp. • Miles, J. W., and F. K. Ball, 1963: On free-surface oscillations in a rotating paraboloid. J. Fluid Mech.,17, 257–266. • Moore, D. W., and P. G. Saffman, 1971: Structure of a line vortex in an imposed strain. Aircraft Wake Turbulence and Its Detection, J. H. Olsen, A. Goldburg, and M. Rogers, Eds., Plenum, 339–354. 
• Paull, R., and A. F. Pillow, 1985: Conically similar viscous flows. Part III. Characterization of axial causes in swirling flow and the one-parameter flow generated by a uniform half-line source of kinematic swirl angular momentum. J. Fluid Mech.,155, 359–379. • Pearson, C. E., 1965: Numerical solutions for the time-dependent viscous flow between two rotating coaxial disks. J. Fluid Mech.,21, 623–633. • Press, W. H., S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery, 1992: Numerical Recipes in FORTRAN: The Art of Scientific Computing. 2d ed. Cambridge University Press, 963 pp. • Ray, P. S., B. C. Johnson, K. W. Johnson, J. S. Bradberry, J. J. Stephens, K. K. Wagner, R. B. Wilhelmson, and J. B. Klemp, 1981: The morphology of several tornadic storms on 20 May 1977. J. Atmos. Sci.,38, 1643–1663. • Rosenhead, L., Ed., 1963: Laminar Boundary Layers. Oxford University Press, 687 pp. • Rott, N., 1958: On the viscous core of a line vortex. Z. Angew. Math. Phys.,9, 543–552. • ——, 1959: On the viscous core of a line vortex II. Z. Angew. Math. Phys.,10, 73–81. • Rotunno, R., and J. B. Klemp, 1985: On the rotation and propagation of simulated supercell thunderstorms. J. Atmos. Sci.,42, 271–292. • Saffman, P. G., 1992: Vortex Dynamics. Cambridge University Press, 311 pp. • Serrin, J., 1972: The swirling vortex. Philos. Trans. Roy. Soc. London, Ser. A,271, 325–360. • Shapiro, A., 1993: The use of an exact solution of the Navier–Stokes equations in a validation test of a three-dimensional non-hydrostatic numerical model. Mon. Wea. Rev.,121, 2420–2425. • ——, 1996: Nonlinear shallow-water oscillations in a parabolic channel: Exact solutions and trajectory analyses. J. Fluid Mech.,318, 49–76. • Smith, R. K., and L. M. Leslie, 1978: Tornadogenesis. Quart. J. Roy. Meteor. Soc.,104, 189–199. • Stewartson, K., 1953: On the flow between two rotating coaxial disks. Proc. Cambridge Philos. Soc.,49, 333–341. • Sullivan, R. 
D., 1959: A two-cell vortex solution of the Navier–Stokes equations. J. Aeronaut. Sci.,26, 767–768. • Thacker, W. C., 1981: Some exact solutions to the nonlinear shallow-water wave equations. J. Fluid Mech.,107, 499–508. • Trapp, R. J., and R. Davies-Jones, 1997: Tornadogenesis with and without a dynamic pipe effect. J. Atmos. Sci.,54, 113–133. • Tsonis, A. A., G. N. Triantafyllou, J. B. Elsner, J. J. Holdzkom II, and A. D. Kirwan Jr., 1994: An investigation of the ability of nonlinear methods to infer dynamics from observables. Bull. Amer. Meteor. Soc.,75, 1623–1633. • von Kármán, T., 1921: Über laminare und turbulente Reibung. Z. Angew. Math. Mech.,1, 233–252. • Watson, G. N., 1944: A Treatise on the Theory of Bessel Functions. 2d ed. Cambridge University Press, 804 pp. • Wurman, J., J. M. Straka, and E. N. Rasmussen, 1996: Fine-scale Doppler radar observations of tornadoes. Science,272, 1774–1777. • Yih, C.-S., F. Wu, A. K. Garg, and S. Leibovich, 1982: Conical vortices: A class of exact solutions of the Navier–Stokes equations. Phys. Fluids,25, 2147–2158. • Zandbergen, P. J., and D. Dijkstra, 1987: Von Kármán swirling flows. Annu. Rev. Fluid Mech.,19, 465–491.

Derivation of the Initial Streamfunction Tendency ψ^1

We seek the initial streamfunction tendency ψ^1 satisfying the partial differential equation (71), the jump condition, and the boundary conditions. Toward that end, we introduce a particular function Ψ that accounts for the interfacial forcing. For later use we note that Ψ can be extended as an odd periodic function of period 2h. Although Ψ does not satisfy all of the required conditions by itself, the sum ψ^1 (= Ψ + Φ) does, provided that Φ satisfies the corresponding homogeneous problem. In terms of Φ, the boundary conditions apply at z = 0, at z = h, along the axis r = 0, and far from the axis. We also want to ensure that the full solution Ψ + Φ and its normal derivative ∂Ψ/∂r + ∂Φ/∂r are continuous at r = R. Since ∂Ψ/∂r, obtained from the particular solution, is discontinuous at r = R, there must be an equal and opposite discontinuity in ∂Φ/∂r, where (∂Φ/∂r)^+ ≡ lim ∂Φ/∂r(R + |ε|, z) and (∂Φ/∂r)^− ≡ lim ∂Φ/∂r(R − |ε|, z) as ε → 0.
Expanding Φ in a sine series in z, we obtain an ordinary differential equation in r for each sine coefficient, together with a radial jump condition at r = R. The solution in the core region (r < R) is proportional to I[1], a modified Bessel function of the first kind of order one, multiplied by a constant. The second linearly independent solution, the modified Bessel function of the third kind of order one, K[1], was rejected because of its singular behavior at the origin. Recurrence relations, derivative formulas, and other results pertaining to modified Bessel functions are described in Abramowitz and Stegun (1964) and Watson (1944). The solution in the outer region (r > R) can be expressed in terms of Lommel’s functions [section 10.7 of Watson (1944)], or left in a form obtained by the method of variation of parameters. Here one of the two constants in the general solution was specified to make the solution vanish at infinity. To verify this behavior, write the solution as a sum of indeterminate forms, apply L’Hôpital’s rule, and use asymptotic and derivative formulas for modified Bessel functions: the solution vanishes at infinity for the cases of interest, n > 0. Moreover, since Φ decays faster than Ψ, the full solution ψ^1 = Ψ + Φ is dominated by Ψ far from the axis of symmetry. Continuity of the full solution and the radial jump condition yield two equations for the remaining constants; applying a formula for the Wronskian of Bessel functions, we obtain their values. This completes the specification of the initial streamfunction tendency. Collecting results, we write the solution in its final form.

Fig. 1. Evolution of nondimensional interface height T[1]/T[1](0) (solid line) and lower layer convergence −δ[1]/2|Ω[2]| (line with circles) for the vertically unconfined two-layer vortex with (a) α = 0.1, (b) α = 2.0, and (c) α = 10.0. The lower-layer radial velocity function is given by F[1] = δ[1]/2. Here α ≡ Ω^2[1](0)/Ω^2[2] is the initial ratio of (squared) lower-layer to upper-layer angular velocity, and τ ≡ 2|Ω[2]|t is nondimensional time. Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2

Fig. 2.
Evolution of a vertically confined two-layer vortex. Nondimensional interface height T[1]/h (solid line), lower-layer convergence −δ[1]/2|Ω[2]| (line with circles), and upper-layer divergence δ[2]/2|Ω[2]| (line with squares) are shown for an initial interface height T[1](0) of 0.2h. Initial ratio of lower-layer to upper-layer angular velocities Ω[1](0)/Ω[2](0) is (a) 0.1 and (b) 3.0. The radial velocity function is given by F[1] = δ[1]/2 in the lower layer and F[2] = δ[2]/2 in the upper layer. The angular velocity functions are proportional to the respective layer thicknesses. Here τ ≡ [2h|Ω[2](0)|]/[h − T[1](0)]t is nondimensional time.

Fig. 3. Evolution of a vertically confined vortex overlying nonrotating fluid. Nondimensional interface height T[1]/h (solid line), lower-layer convergence −δ[1]/2|Ω[2]| (line with circles), and upper-layer divergence δ[2]/2|Ω[2]| (line with squares) are shown for an initial interface height T[1](0) of 0.2h. The radial velocity function is given by F[1] = δ[1]/2 in the lower layer and F[2] = δ[2]/2 in the upper layer. The angular velocity function in the upper layer is proportional to the upper-layer thickness. Here τ ≡ [2h|Ω[2](0)|]/[h − T[1](0)]t is nondimensional time.

Fig. 4. Evolution of vortex A1:2:4, a vertically confined three-layer vortex with initially equal layer thicknesses, and initial layer angular velocities increasing upward in a ratio of 1:2:4. The two interfaces have no initial vertical motion. (a) Nondimensional lower-layer thickness T̃[1] (solid line), middle-layer thickness T̃[2] (line with plus symbols), and upper-layer thickness T̃[3] = 1 − T̃[1] − T̃[2] (line with circles).
(b) Nondimensional convergence values −δ[1]/Ω, −δ[2]/Ω, and −δ[3]/Ω in the lower (solid line), middle (line with plus symbols), and upper layers (line with circles), respectively. The nth layer radial velocity function is given by F[n] = δ[n]/2 for n = 1, 2, 3. The angular velocity functions are proportional to the respective layer thicknesses. Here τ ≡ Ωt is nondimensional time. Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 5. As in Fig. 4 but for vortex B1:2:10, a three-layer vortex with initial layer angular velocities increasing upward in a ratio of 1:2:10. Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 6. As in Fig. 4 but for vortex C1:2:4, a three-layer vortex with an upper interface initially descending with Q(0) = −0.5. Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 7. As in Fig. 4 but for vortex D1:2:10, a three-layer vortex with initial layer angular velocities increasing upward in a ratio of 1:2:10, and with an upper interface initially descending with Q(0) = Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 8. Initial azimuthal velocity and initial tendency fields for a columnar vortex with weak radial decay (R/h = 0.5, n = 1). Low-level nonrotating fluid depth is T[1] = 0.2h. (a) Azimuthal velocity υ^0/(Ω R), (b) radial velocity tendency u^1 × 10/(RΩ^2), (c) streamfunction tendency ψ^1 × 10/(hΩ^2R^2), and (d) vertical velocity tendency w^1 × 10/(hΩ^2). Negative contours in (b) and (d) are dashed. Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 9. As in Fig. 8 but for a columnar vortex with strong radial decay (R/h = 0.5, n = 4). Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 10. As in Fig. 
8 but for a broad vortex with weak radial decay (R/h = 2, n = 1). Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 11. As in Fig. 8 but for a broad vortex with strong radial decay (R/h = 2, n = 4). Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Fig. 12. Similarity solution for a radially unbounded vortex overlying nonrotating fluid with initial depth of 0.2h. Fields are obtained from (55) at nondimensional time Ωt = 0.24748, that is, when the interface has risen slightly to 0.21h. (a) Azimuthal velocity υ/(ΩR), (b) radial velocity tendency u/t × 10/(RΩ^2), and (c) vertical velocity tendency w/t × 10/(hΩ^2). Negative contours in (b) are dashed. Fields are scaled in the same manner and plotted with the same contour interval as in Figs. (8)–(11). Citation: Journal of the Atmospheric Sciences 56, 9; 10.1175/1520-0469(1999)056<1101:DOEV>2.0.CO;2 Table 1. Parameter settings for selected three-layer vortices with T̃[1] (0) = T̃[2] (0) = 1/3. Vortex names consist of a letter followed by the ratio φ[1] (0)/φ[2] (0)/φ[3] (0) of initial layer angular velocities. All quantities are nondimensional. It can also be shown that the flow becomes singular within a finite time for (i) Ω[2] ≠ 0 and any choice of initial value δ[1](0), and for (ii) Ω[2] = 0 and any negative initial value δ[1](0). In either case, the singularity is associated with the nonlinear term δ^2[1] in (26), or, equivalently, the (∂H/∂z)^2 term in (8), which accounts for radial advection of radial momentum.
{"url":"https://journals.ametsoc.org/view/journals/atsc/56/9/1520-0469_1999_056_1101_doev_2.0.co_2.xml","timestamp":"2024-11-11T11:34:20Z","content_type":"text/html","content_length":"922338","record_id":"<urn:uuid:c301025f-c2e9-4d4b-bdf8-4a5e97c2ad0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00375.warc.gz"}
Simplifying Polynomial Expressions

In mathematics, simplifying expressions is a fundamental skill. This involves combining like terms and performing operations to make the expression more concise. Let's explore how to simplify the expression (5b - 6b^3 + 2b^4) - (9b^3 + 4b^4 - 7).

Understanding the Expression

Before we start simplifying, let's break down the expression:
• Parentheses: The expression has two sets of parentheses, indicating that we need to consider the order of operations.
• Terms: The expression is made up of several terms, each consisting of a coefficient and a variable raised to a power. For example, 5b is a term with a coefficient of 5 and a variable 'b' raised to the power of 1.
• Like Terms: Terms with the same variable raised to the same power are considered like terms. For example, -6b^3 and 9b^3 are like terms.

Simplifying the Expression

1. Distribute the negative sign: Since we have a minus sign in front of the second set of parentheses, we need to distribute it to each term inside the parentheses. This changes the signs of all the terms within the parentheses:
(5b - 6b^3 + 2b^4) - (9b^3 + 4b^4 - 7) = 5b - 6b^3 + 2b^4 - 9b^3 - 4b^4 + 7
2. Combine like terms: Now we can combine the terms with the same variable and exponent.
□ b^4 terms: 2b^4 - 4b^4 = -2b^4
□ b^3 terms: -6b^3 - 9b^3 = -15b^3
□ b terms: 5b
□ Constant terms: + 7
3. Rewrite the simplified expression: -2b^4 - 15b^3 + 5b + 7

Final Result

Therefore, the simplified form of the given expression (5b - 6b^3 + 2b^4) - (9b^3 + 4b^4 - 7) is -2b^4 - 15b^3 + 5b + 7.
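As a quick sanity check of the simplification above, note that two polynomials of degree 4 that agree at more than five points must be identical, so comparing the two forms at a handful of values verifies the algebra. This check is my own addition, not part of the original page:

```python
def original(b):
    """The expression as given: (5b - 6b^3 + 2b^4) - (9b^3 + 4b^4 - 7)."""
    return (5*b - 6*b**3 + 2*b**4) - (9*b**3 + 4*b**4 - 7)

def simplified(b):
    """The simplified form: -2b^4 - 15b^3 + 5b + 7."""
    return -2*b**4 - 15*b**3 + 5*b + 7

# Two degree-4 polynomials agreeing at more than 5 points are identical.
print(all(original(b) == simplified(b) for b in range(-5, 6)))  # True
```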
{"url":"https://jasonbradley.me/page/(5b-6b%255E3%252B2b%255E4)-(9b%255E3%252B4b%255E4-7)","timestamp":"2024-11-11T10:21:05Z","content_type":"text/html","content_length":"60608","record_id":"<urn:uuid:220b9b7d-3f3f-4908-a36d-4f13979c7f71>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00477.warc.gz"}
3.1.14: Orientate Element | Karamba3D v1.3.3

“Orientate Element” is a multi-component where the drop-down list under “Element Type” lets one select between beams and shells.

Orientate Beam

In Karamba3D the default orientation of the local coordinate system of a beam or truss follows these conventions:
The local X-axis (of red color) is the beam axis and points from starting-node to end-node.
The local Y-axis (green) is at a right angle to the local X-axis and parallel to the global XY-plane. This specifies the local Y-axis uniquely unless the local X-axis is perpendicular to the XY-plane. If this is the case, the local Y-axis is chosen parallel to the global Y-axis. The default criterion for verticality is that the z-component of the unit vector in axial direction is greater than or equal to $0.999 999 995$. This value can be changed in the "karamba.ini" file via the “limit_parallel” property.
The local Z-axis (blue) follows from the local X- and Y-axes so that the three of them form a right-handed coordinate system.

The local coordinate system affects the direction of locally defined loads and the orientation of the element’s cross section. Use the “Orientate Beam”-component to set the local coordinate system (see fig. 3.1.14.1):
The input plug “X-axis” accepts a vector. The local X-axis will be oriented in such a way that its angle with the given vector is less than 90 degrees. This allows one to give a consistent orientation to a group of beams.
The local Y-axis lies in the plane which is defined by the local X-axis and the vector plugged into the “Y-axis”-input. If the given Y-axis is parallel to the beam axis, it is not applicable.
If no vector is supplied at the “Y-axis”-input or the given Y-axis is not applicable, then the local Z-axis of the beam lies in the plane which is defined by the local X-axis and the vector plugged into the “Z-axis”-input.
“Alpha” represents an additional rotation angle (in degrees) of the local Z-axis about the local X-axis. 
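The default-orientation convention above can be sketched in a few lines of code. The following is an illustration of the stated rules only, not Karamba3D's actual implementation; the function name and NumPy usage are my own choices:

```python
import numpy as np

def default_beam_axes(start, end, limit_parallel=0.999999995):
    """Sketch of the default local coordinate system of a beam.

    X: along the beam axis; Y: perpendicular to X and parallel to the
    global XY-plane (global Y if the beam is vertical); Z: completes a
    right-handed system.
    """
    x = np.asarray(end, float) - np.asarray(start, float)
    x /= np.linalg.norm(x)
    if abs(x[2]) >= limit_parallel:        # beam counts as vertical
        y = np.array([0.0, 1.0, 0.0])      # fall back to the global Y-axis
    else:
        y = np.cross([0.0, 0.0, 1.0], x)   # zero z-component: parallel to XY-plane
        y /= np.linalg.norm(y)
    z = np.cross(x, y)                     # right-handed: Z = X x Y
    return x, y, z
```

For a horizontal beam along global X this returns the global axes themselves; for a vertical beam the Y-axis falls back to global Y, as the convention prescribes.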
In order to control the orientation of a beam, the “Orientate Beam”-component can be applied in two ways:
“Flow-through”: Plug it in between a “LineToBeam”- and an “Assemble”-component. The changes will be applied to all beam elements which pass through it. In case of shell elements the output is
“Agent”: Specify beams by identifier via input “BeamId” and plug the resulting beam-agent directly into the “Elem”-input of the “Assemble”-component. This method allows one to harness the power of regular expressions for selecting elements (see section 3.1.15).

Orientate Shell

For shells the default orientation of their local coordinate systems can be seen in fig. 3.1.14.2. The following conventions apply:
The local x-axis is parallel to the global x-direction unless the element normal is parallel to the global x-direction. In that case the local x-axis points in the global y-direction.
The local z-axis is always perpendicular to the shell element, and its orientation depends on the order of the vertices of the underlying mesh face: if the z-axis points towards one's nose, the order of the face vertices is counter-clockwise (see fig. 6.17 in [12] for an unforgettable way of remembering the right-hand rule of rotation).

The “Orientate Shell”-component lets one control the local x- and z-directions of the faces which make up a shell: the “X-Oris” and “Z-Oris” inputs expect lists of direction vectors, one for each mesh face. In case the number of vectors does not match the number of faces, the longest-list principle applies. Infeasible directions (e.g. a prescribed z-vector which lies in the plane of an element) get
Regarding the application of the “Orientate Shell”-component, the same two options (“flow-through” or “agent”) exist as for the “Orientate Beam”-component.
{"url":"https://manual-1-3.karamba3d.com/3-in-depth-component-reference/3.1-model/3.1.14-orientate-element","timestamp":"2024-11-13T07:44:18Z","content_type":"text/html","content_length":"359363","record_id":"<urn:uuid:10f17c5b-759b-40af-b2d2-917a4335bc48>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00333.warc.gz"}
Statistical analysis and simulation techniques in wall-bounded turbulence

The present work is devoted to the assessment of the physics of energy fluxes in the space of scales and in the physical space of wall-turbulent flows. The generalized Kolmogorov equation will be applied to DNS data of a turbulent channel flow in order to describe the energy flux paths from production to dissipation in the augmented space of wall-turbulent flows. This multidimensional description will be shown to be crucial to understanding the formation and sustainment of the turbulent fluctuations fed by the energy fluxes coming from the near-wall production region. An unexpected behavior of the energy fluxes emerges from this analysis, consisting of spiral-like paths in the combined physical/scale space where the controversial reverse energy cascade plays a central role. The observed behavior conflicts with the classical notion of the Richardson/Kolmogorov energy cascade and may have strong repercussions on both theoretical and modeling approaches to wall-turbulence. To this aim a new relation stating the leading physical processes governing the energy transfer in wall-turbulence is suggested and shown to be able to capture most of the rich dynamics of the shear-dominated region of the flow. Two dynamical processes are identified as driving mechanisms for the fluxes, one in the near-wall region and a second one further away from the wall. The former, stronger one is related to the dynamics involved in the near-wall turbulence regeneration cycle. The second suggests an outer self-sustaining mechanism which is asymptotically expected to take place in the log-layer and could explain the debated mixed inner/outer scaling of the near-wall statistics. The same approach is applied for the first time to a filtered velocity field. 
A generalized Kolmogorov equation specialized for a filtered velocity field is derived and discussed. The results will show what effects the subgrid scales have on the resolved motion in both physical and scale space, singling out the prominent role of the filter length compared to the cross-over scale between production-dominated scales and the inertial range, lc, and the reverse energy cascade region lb. The systematic characterization of the resolved and subgrid physics as a function of the filter scale and of the wall-distance will be shown to be instrumental for a correct use of LES models in the simulation of wall-turbulent flows. Taking inspiration from the new relation for the energy transfer in wall turbulence, a new class of LES models will also be proposed. Finally, the generalized Kolmogorov equation specialized for filtered velocity fields will be shown to be a helpful statistical tool for the assessment of LES models and for the development of new ones. As an example, some classical purely dissipative eddy viscosity models are analyzed via an a priori procedure.
{"url":"http://amsdottorato.unibo.it/3821/","timestamp":"2024-11-02T06:11:37Z","content_type":"application/xhtml+xml","content_length":"45253","record_id":"<urn:uuid:2ec7142d-badb-47f0-8966-2f3724c24416>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00142.warc.gz"}
Development Meters: Understanding Bicycle Gearing Through Distance

Development meters, or meters of development, is a measurement that specifies how many meters a bicycle will travel with a single revolution of the crank. It's commonly used to assess bicycle gearing, especially in countries that use the metric system. Development meters illustrate how specific gear ratios translate to real-world performance by expressing bicycle gearing as a distance traveled. This article will explain how to calculate meters of development and its practical use.

Calculating wheel circumference

The first step to understanding meters of development is knowing how to calculate the drive wheel's circumference. This is the distance the wheel rolls with one full rotation. To calculate a wheel's circumference, multiply the wheel's diameter in meters by π (approximately 3.14).

wheel circumference = π * wheel diameter

For example, a 700c wheel with a gravel tire will commonly have a diameter of around 0.7 meters. Multiplying this by π tells us the wheel will travel about 2.2 meters per rotation.

Understanding gear ratios

The second component of understanding meters of development is gear ratios. They show how many times the rear wheel rotates with each pedal rotation. Different gear size combos rotate the rear wheel by different amounts with each pedal rotation. Gear ratios are calculated by dividing the number of teeth on the front chainring by the number of teeth on the rear sprocket.

gear ratio = front teeth / rear teeth

This gives you a ratio that relates pedal revolutions to drive wheel revolutions. For example, with a 42-tooth front chainring and a 15-tooth sprocket, your gear ratio would be 42 divided by 15, or 2.8. This means the rear wheel rotates 2.8 times for every rotation of the pedals.

Calculating meters of development

We can calculate meters of development by combining what we know about wheel circumference and gear ratios. 
The gear ratio is the number of times the rear wheel rotates for each full rotation of the pedals. The wheel circumference is the distance the wheel will roll each time it rotates. Therefore, multiplying the circumference by the gear ratio tells us how far the bike will travel with each rotation of the pedals, or meters of development. meters of development = gear ratio * wheel circumference We get the full calculation by breaking it down further with our previous formulas for gear ratio and wheel circumference. meters of development = front teeth / rear teeth * π * wheel diameter Now we can calculate the distance any bike will travel using any combination of gears. Meters of development vs gear inches You may have heard of a similar measurement, gear inches. It's a simpler calculation that provides an easy way to compare gears. The formula for gear inches is the front teeth divided by the rear teeth, multiplied by the wheel's diameter. gear inches = front teeth / rear teeth * wheel diameter Meters of development is more cumbersome to use than gear inches because it requires calculating with the irrational constant, π, which gear inches doesn't need. You can see how the two formulas differ when you compare them. The inclusion of π in the meters of development formula is where the distance traveled comes into play. Calculating gear inches is easier, but relating it to real-world use is harder. Meters of development rationalizes bicycle gearing by relating it to actual distances. Comparing gears You can calculate development meters for different gear combinations and then compare them. This will give you an idea of how the bike will perform in one versus the other. For example, if you had a 1x gravel bike with a 40T chainring, 10-44T rear cassette, and 700c wheels (~0.7 meter tire diameter), you can compare the two highest gears. You can compare 40:10 and 40:11, which result in 8.79 development meters and 7.99 development meters respectively. 
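The comparison above is easy to script. A small helper (the function name is my own) reproduces the article's numbers:

```python
import math

def development_meters(front_teeth, rear_teeth, wheel_diameter_m):
    """Distance traveled per crank revolution = gear ratio * wheel circumference."""
    return front_teeth / rear_teeth * math.pi * wheel_diameter_m

# 40T chainring with a 700c wheel (~0.7 m tire diameter), two highest sprockets
for rear in (10, 11):
    print(f"40:{rear} -> {development_meters(40, rear, 0.7):.4f} m")
```

This prints 8.7965 m and 7.9968 m, which the article truncates to 8.79 and 7.99 development meters.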
With each pedal rotation in the highest gear, you'll go 0.8 meters further vs the next one lower. Of course, you will have to pedal harder with each pedal stroke for that increased movement, but you can start to see how the two relate to each other. It's useful to compare sequential gears to understand the relative steps between them. As sprockets get larger, bigger jumps in their number of teeth are needed to make the effort between each feel consistent.

Meters of development is more common in countries that use the metric system, and it is foundational for understanding bicycle gearing. It rationalizes different gear combinations in terms of distance traveled, which is the entire point of these machines, isn't it?
{"url":"https://www.cyclops.cc/what-are-meters-of-development/","timestamp":"2024-11-04T08:52:43Z","content_type":"text/html","content_length":"35530","record_id":"<urn:uuid:4b4d8573-7b7a-4e02-a319-1b91a1bcb479>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00876.warc.gz"}
Neyman-Pearson lemma

In statistics, the Neyman-Pearson lemma states that when performing a hypothesis test between two point hypotheses H[0]: θ=θ[0] and H[1]: θ=θ[1], the likelihood-ratio test which rejects H[0] in favour of H[1] when

${\displaystyle \Lambda(x)=\frac{L(\theta_{0}\mid x)}{L(\theta_{1}\mid x)}\leq \eta \quad \mbox{where} \quad P(\Lambda(X)\leq \eta \mid H_{0})=\alpha}$

is the most powerful test of size α for a threshold η. If the test is most powerful for all ${\displaystyle \theta_1 \in \Theta_1}$, it is said to be uniformly most powerful (UMP).

In practice, the likelihood ratio itself is not actually used in the test. Instead one computes the ratio to see how the key statistic in it is related to the size of the ratio (i.e. whether a large statistic corresponds to a small ratio or to a large one).

References
• Jerzy Neyman, Egon Pearson (1933). On the Problem of the Most Efficient Tests of Statistical Hypotheses. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 231: 289–337.
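To make the remark about the key statistic concrete: for a single observation from N(μ, 1) with H0: μ = 0 against H1: μ = 1, the ratio works out to Λ(x) = exp(1/2 − x), which is decreasing in x, so rejecting for small Λ is the same as rejecting for large x. An illustrative sketch (all names are my own):

```python
import math

def normal_pdf(x, mu, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(x, mu0=0.0, mu1=1.0):
    """Lambda(x) = L(theta0 | x) / L(theta1 | x) for one observation."""
    return normal_pdf(x, mu0) / normal_pdf(x, mu1)

# Lambda is monotonically decreasing in x, so the likelihood-ratio test
# reduces to a simple threshold on the observation itself.
print(likelihood_ratio(0.0) > likelihood_ratio(1.0) > likelihood_ratio(2.0))  # True
```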
{"url":"https://psychology.fandom.com/wiki/Neyman-Pearson_lemma","timestamp":"2024-11-07T17:08:19Z","content_type":"text/html","content_length":"164919","record_id":"<urn:uuid:66713eb8-66e5-4355-a516-5fb5add958f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00063.warc.gz"}
Research Guides: Mathematics: Websites Consider the following five criteria: 1. Accuracy - Is the information correct? 2. Authority - What are the author's credentials? 3. Coverage - Is the content covered comprehensively? 4. Currency - Is the information current? 5. Purpose - Is the content biased? If you're not certain, ask a librarian!
{"url":"https://libguides.pima.edu/c.php?g=198471&p=1303799","timestamp":"2024-11-11T06:35:24Z","content_type":"text/html","content_length":"40039","record_id":"<urn:uuid:17bede8b-6506-44a9-a7f2-bf9ccf0f3e2a>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00673.warc.gz"}
How do I standardize variables in SAS? | SAS FAQ

To standardize variables in SAS, you can use proc standard. The example shown below creates a data file cars and then uses proc standard to standardize weight and price.

DATA cars;
  INPUT mpg weight price;
RUN;

PROC STANDARD DATA=cars MEAN=0 STD=1 OUT=zcars;
  VAR weight price;
RUN;

PROC MEANS DATA=zcars;
RUN;

The mean=0 and std=1 options are used to tell SAS what you want the mean and standard deviation to be for the variables named on the var statement. Of course, a mean of 0 and standard deviation of 1 indicate that you want to standardize the variables. The out=zcars option states that the output file with the standardized variables will be called zcars. The proc means on zcars is used to verify that the standardization was performed properly. The output below confirms that the variables have been properly standardized.

Variable    N    Mean           Std Dev      Minimum       Maximum
MPG         5    19.2000000     3.1144823    15.0000000    22.0000000
WEIGHT      5    -4.44089E-17   1.0000000    -1.1262551    1.5324455
PRICE       5    -4.44089E-17   1.0000000    -0.7835850    1.7233892

Often times you would like to have both the standardized variables and the unstandardized variables in the same data file. The example below shows how you can do that. By making extra copies of the variables as zweight and zprice, we can standardize those copies and keep weight and price as the unchanged values.

DATA cars2;
  SET cars;
  zweight = weight;
  zprice = price;
RUN;

PROC STANDARD DATA=cars2 MEAN=0 STD=1 OUT=zcars;
  VAR zweight zprice;
RUN;

PROC MEANS DATA=zcars;
RUN;

As before, we use proc means to confirm that the variables are properly standardized. 
Variable    N    Mean           Std Dev        Minimum       Maximum
MPG         5    19.2000000     3.1144823      15.0000000    22.0000000
WEIGHT      5    3250.00        541.6179465    2640.00       4080.00
PRICE       5    5058.00        1606.72        3799.00       7827.00
ZWEIGHT     5    -4.44089E-17   1.0000000      -1.1262551    1.5324455
ZPRICE      5    -4.44089E-17   1.0000000      -0.7835850    1.7233892

As we see in the output above, zweight and zprice have been standardized, and weight and price remain unchanged.
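The transformation PROC STANDARD performs here is the ordinary z-score, z = (x − mean) / std. A quick illustration in Python with made-up numbers (the FAQ's actual data lines are not shown above, so the values below are hypothetical):

```python
import statistics

def standardize(values):
    """z-scores: subtract the mean, divide by the sample standard deviation."""
    m = statistics.mean(values)
    s = statistics.stdev(values)
    return [(v - m) / s for v in values]

# Hypothetical car weights, standing in for the FAQ's unshown data lines
z = standardize([2640, 2830, 3250, 3450, 4080])
print(abs(statistics.mean(z)) < 1e-9, abs(statistics.stdev(z) - 1) < 1e-9)
```

Both checks come back True: the transformed values have mean 0 and standard deviation 1, which is exactly what the PROC MEANS step above verifies.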
{"url":"https://stats.oarc.ucla.edu/sas/faq/how-do-i-standardize-variables-in-sas/","timestamp":"2024-11-03T02:47:41Z","content_type":"text/html","content_length":"34315","record_id":"<urn:uuid:ca33768e-00b9-413d-964b-6f821e664f2e>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00000.warc.gz"}
CIELab | Virtual Colour Atlas

The CIELab system is derived from Adams's (1942) system, the colour difference formula being a cube-root version of the Adams-Nickerson formula. The colour difference formula developed by Adams is usually referred to as the Adams-Nickerson colour difference formula due to the work of Nickerson (1947, 1950), which led to the successful introduction of the formula in industrial applications.

The rectangular coordinates L*, a*, b* of this system are calculated from the CIE X, Y, Z tristimulus values by equations. Lightness forms the central axis of the system, with the a and b axes perpendicular to the lightness axis and to each other. Planes of constant lightness form ab chromaticity planes. The a axis approximates to the red-green axis and varies in extent based on Lightness. The b axis approximates to the blue-yellow axis and varies in extent based on Lightness.
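The equations referred to above are the standard CIE 1976 cube-root formulas. A sketch of the XYZ-to-L*a*b* conversion follows; the function name is my own, and a D65/2° white point is assumed:

```python
def xyz_to_lab(X, Y, Z, Xn=95.047, Yn=100.0, Zn=108.883):
    """CIE 1976 L*a*b* from XYZ tristimulus values (D65/2-degree white assumed)."""
    d = 6.0 / 29.0
    def f(t):
        # cube root above the cutoff, linear segment below it
        return t ** (1.0 / 3.0) if t > d ** 3 else t / (3 * d ** 2) + 4.0 / 29.0
    fx, fy, fz = f(X / Xn), f(Y / Yn), f(Z / Zn)
    L = 116.0 * fy - 16.0      # lightness: the central axis
    a = 500.0 * (fx - fy)      # approximately the red-green axis
    b = 200.0 * (fy - fz)      # approximately the blue-yellow axis
    return L, a, b

print(xyz_to_lab(95.047, 100.0, 108.883))  # the white point maps to (100.0, 0.0, 0.0)
```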
{"url":"https://www.vcsconsulting.uk/cielab","timestamp":"2024-11-12T17:22:56Z","content_type":"text/html","content_length":"673047","record_id":"<urn:uuid:b150e926-31c9-46ed-9786-b5f4c95db179>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00330.warc.gz"}
Real-Time Simulation of Physical Systems Using Simscape

By Steve Miller, MathWorks and Jeff Wendlandt, MathWorks

Replacing a physical system like a vehicle, plane, or robot with a real-time simulation of a virtual system drastically reduces the cost of testing control software and hardware. In real-time simulation, the inputs and outputs in the virtual world of simulation are read or updated synchronously with the real world. When the simulation time reaches 5, 50, or 500 seconds, exactly the same amount of time has passed in the real world. Testing can take place 24 hours a day, 7 days a week, under conditions that would damage equipment or injure personnel, and it can begin well before physical prototypes are available.

Real-time simulation of physical systems requires finding a combination of model complexity, solver type, solver settings, and simulation hardware that permits execution in real time and delivers results sufficiently close to the results obtained from desktop simulation. Changing these items will often speed up the simulation but reduce the accuracy, or vice versa. Simscape™ provides several capabilities that make it easier to configure your models for real-time simulation. This article shows how to configure a Simscape model of a pneumatic actuation system for real-time simulation (Figure 1). The steps described apply regardless of the real-time hardware used.

Moving from Desktop to Real-Time Simulation

Configuring the Simscape model for real-time simulation involves five steps:
1. Obtain reference results with a variable-step solver.
2. Examine the step sizes during simulation.
3. Choose a fixed-step solver and configure a fixed-cost simulation.
4. Find the right balance between accuracy and simulation speed.
5. Simulate the model on the real-time platform.

1. Obtaining Reference Results with a Variable-Step Solver

A variable-step solver shrinks the step size to accurately capture events and dynamics in the system. 
Since this reduces the amount of time that the real-time target has to calculate the results for that time step, a variable-step solver cannot be used for real-time simulation. Instead, an implicit or explicit fixed-step solver must be used. To ensure that the results obtained with the fixed-step solver are accurate, we compare them with reference results obtained by simulating the system with a variable-step solver and tightening the error tolerances until the simulation results stop changing (Figure 2). The recommended variable-step solvers for Simscape models are ode15s and ode23t.

2. Examining the Step Sizes During Simulation

A variable-step solver varies the step size to stay within the error tolerances and to react to zero-crossing events. If the solver abruptly reduces the step size to a small value, such as 1e-15 s, this indicates that the solver is trying to accurately identify a zero-crossing event such as a reverse in flow or closing of a switch. A fixed-step solver may have trouble capturing these events at a step size large enough to permit real-time simulation. We use the following MATLAB® commands to generate a plot showing how the time step varies during the simulation:

semilogy(tout(1:end-1), diff(tout));
title('Step Size vs. Simulation Time','FontSize',12);
xlabel('Simulation Time (s)','FontSize',12);
ylabel('Step Size (s)','FontSize',12);

The plot indicates that we should adjust the friction model (see Figure 1, Friction Load) to make the system model real-time capable (Figure 3).

3. Choosing a Fixed-Step Solver and Configuring a Fixed-Cost Simulation

We must use a fixed-step solver that provides robust performance and delivers accurate results at a step size large enough to permit real-time simulation. We compare the simulation results generated by an implicit fixed-step solver and an explicit fixed-step solver for the same model at different step sizes (Figure 4). For our example model, the implicit solver provides more accurate results. 
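The stability gap between explicit and implicit fixed-step solvers is easy to see on the classic stiff test equation dy/dt = λy. The sketch below illustrates the general point in Python; it is my own illustration, not Simulink or Simscape code:

```python
def forward_euler(lam, y0, h, steps):
    """Explicit update: y(n+1) = y(n) + h*lam*y(n)."""
    y = y0
    for _ in range(steps):
        y = y * (1 + h * lam)
    return y

def backward_euler(lam, y0, h, steps):
    """Implicit update y(n+1) = y(n) + h*lam*y(n+1), solved in closed form."""
    y = y0
    for _ in range(steps):
        y = y / (1 - h * lam)
    return y

# Stiff test equation dy/dt = -50*y with a step size well above the
# explicit stability limit (h < 2/50 here): forward Euler blows up,
# while backward Euler decays toward the true solution's limit of 0.
print(abs(forward_euler(-50.0, 1.0, 0.1, 40)))   # huge (~1e24)
print(abs(backward_euler(-50.0, 1.0, 0.1, 40)))  # tiny (~1e-31)
```

This is why implicit solvers such as Backward Euler are the safer choice for the stiff portions of a physical model at the large step sizes real-time simulation demands.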
For real-time simulation, overruns, which occur when the execution time is longer than the sample time, must be prevented (Figure 5). To do this, we run a fixed-cost simulation, limiting the number of iterations per time step. As Figure 5 shows, iterations are often necessary for each Simscape physical network for both explicit and implicit solvers. The iterations in Simscape are limited by setting the checkbox “Use fixed-cost runtime consistency iterations” and entering the number of nonlinear iterations in the Solver Configuration block (Figure 6). For the best balance between accuracy and cost, we recommend initially setting the number of nonlinear iterations to two or three. To indicate the relative cost of the available fixed-step solvers, we compare the normalized execution time for a nonlinear model containing a single Simscape physical network with each fixed-step solver (Figure 7). In our example, the two local Simscape solvers, Backward Euler and Trapezoidal Rule, require the least computational effort. By using the local solver option in Simscape, we can use an implicit fixed-step solver on the stiff portions of the model and an explicit fixed-step solver on the remainder of the model (Figure 8). This minimizes the number of computations performed per time step, making it more likely that the model will run in real time.

4. Balancing Accuracy and Simulation Speed

We can now run the simulation using the C code that will run on the actual processor. During each time step, the real-time system reads the inputs, calculates the simulation results for the next time step, and writes the outputs. If this task takes less time than the specified time step, the processor remains idle during the remainder of the step. The challenge is to find settings that provide accurate results while permitting real-time simulation (Figure 9). In each case, there is a tradeoff of accuracy versus speed. 
Choosing a computationally intensive solver, increasing the number of nonlinear iterations, or reducing the step size increases accuracy and reduces idle time, raising the risk that the simulation will not run in real time. On the other hand, adjusting these settings in the opposite direction increases the amount of idle time but reduces accuracy. By using the Simscape Backward Euler local solver and limiting the number of iterations to two, we obtain accurate simulation results for our Simscape model at an acceptable simulation speed.

5. Simulating the Model on the Real-Time Platform

Our model ran in real time on a 700 MHz processor and used only 62% of the available step size to perform the necessary calculations. The results obtained from the real-time simulation were identical to those obtained using the same solver settings during the desktop simulation. These results were also very close to the reference results obtained using a variable-step solver (Figure 10).

The approach described in this article is not limited to one type of model. We applied this approach to more than 30 models spanning a range of applications and physical domains. These models contain hydraulic, electrical, mechanical, pneumatic, and thermal elements, and include applications such as hydromechanical servovalves, brushless DC motors, hydraulic pipelines with water hammer effects, and pneumatic actuation systems with stick-slip friction. All models were able to run in real time on an Intel® Core 2 Duo E6700 (2.66 GHz) that was running xPC Target™. The maximum percentage of a step spent in simulation execution was less than 18%, leaving a wide safety margin for processing I/O and other tasks. The average percentage spent in simulation execution was 3.9%, and the minimum was 6e-4%.

Published 2011 - 91881v00
Comment On The Use Of Selection Rate Versus Relative Fitness

In most of our research with the long-term lines, we report changes in performance in terms of relative fitness, W. Relative fitness is a dimensionless quantity, which is calculated as the ratio of the growth rate of one strain relative to that of another during their direct competition. In some cases, however, it is preferable to express performance in terms of selection rates, r, which have units of inverse time. This formulation is useful for (i) certain theoretical purposes (see Lenski et al. 1991); (ii) when one competitor is much less fit than the other (Travisano and Lenski 1996); or (iii) when one or both competitors are declining in abundance, such as during competition assays under starvation conditions or in the presence of an antibiotic (see below). The following text (extracted from a letter of mine to a colleague) explains the issues in this last context.

Illustrative calculations of W and r in the context of two growing populations

We grow up two strains, A and B, separately to a density of ~4 x 10^9 cells/ml. We take 0.05 ml of each and add them to 9.9 ml of fresh medium. So the density of each strain at time 0 in the competition experiment is ~2 x 10^7 cells/ml. Given an appropriate dilution (two times 100-fold) and then plating 0.1 ml, let us say that we find that our sample yields 192 colonies of A and 204 colonies of B. Now we let the mixed populations grow and compete for one day. At the end of this period, the total density has grown back to ~4 x 10^9 cells/ml. We make an appropriate dilution (now three times 100-fold) and we plate 0.1 ml. We find that our sample yields 319 colonies of A and 107 colonies of B. Here are the relevant calculations.
A(0) = Estimated density of A at time 0 = 192 x 10 x 100 x 100 = 1.92 x 10^7 cells/ml
B(0) = Estimated density of B at time 0 = 204 x 10 x 100 x 100 = 2.04 x 10^7 cells/ml
A(1) = Estimated density of A at time 1 day = 319 x 10 x 100 x 100 x 100 = 3.19 x 10^9 cells/ml
B(1) = Estimated density of B at time 1 day = 107 x 10 x 100 x 100 x 100 = 1.07 x 10^9 cells/ml

r = {ln[A(1)/A(0)] - ln[B(1)/B(0)]}/day = [ln(166.15) - ln(52.45)]/day = (5.113 - 3.960)/day = 1.153 per day

That is, over the course of one day of competition, the density of A increased by about 1.1 natural-logs more than did the density of B. By the way, we can define mA = realized Malthusian parameter = ln[A(1)/A(0)]/day for strain A, with mB defined similarly. Hence, the selection rate constant, r, is equal to the difference in the two strains' Malthusian parameters during direct competition.

To convert this selection rate constant into a relative (Darwinian) fitness, W, we would use the ratio of their Malthusian parameters instead of the difference.

W = mA/mB = (5.113/day)/(3.960/day) = 1.29

That is, during the competition, A increased at a rate about 29% faster than did B. Notice that W is a dimensionless quantity, because the inverse time units cancel in the numerator and in the denominator.

Approximate conversion of r into W

We can also calculate W from r, as follows, but this is only an approximation.

m = average Malthusian parameter = ln[N(1)/N(0)]/day, where N(1) = A(1) + B(1) and N(0) = A(0) + B(0)
  = ln[(4.26 x 10^9)/(3.96 x 10^7)]/day = 4.678/day

That is, the total population density increased by about 4.7 natural-logs during the one-day competition experiment. The following conversion is only an approximation.

W = 1 + r/m = 1 + (1.153/day)/(4.678/day) = 1.246

If the total population density increases by the same factor as the dilution, d, used to start the competition experiment, then m = ln d. (This might happen if the experiment begins and ends with populations in stationary phase in the same medium.)
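These calculations for the growing-populations example translate directly into Python. The sketch below is a minimal rendering of the formulas in the text (not code from the original page):

```python
import math

def malthusian(n0, n1, days=1.0):
    # Realized Malthusian parameter: ln[N(1)/N(0)] per unit time.
    return math.log(n1 / n0) / days

# Densities estimated from the plate counts above
mA = malthusian(1.92e7, 3.19e9)  # strain A: ln(166.15) = 5.113 per day
mB = malthusian(2.04e7, 1.07e9)  # strain B: ln(52.45) = 3.960 per day

r = mA - mB  # selection rate constant: 1.153 per day
W = mA / mB  # relative (Darwinian) fitness: 1.29, dimensionless

# Approximate conversion via the average Malthusian parameter m
m = math.log((3.19e9 + 1.07e9) / (1.92e7 + 2.04e7))
W_approx = 1 + r / m  # about 1.246
```

Note that r uses the difference of the Malthusian parameters while W uses their ratio, which is why only W is dimensionless.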
In our hypothetical example, each population was diluted 200-fold to begin the experiment. So d = 1/(1/200 + 1/200) = 100 and ln d = 4.605/day. Note that this is very close to m, but not usually identical: d is a theoretical expectation, whereas m takes into account the actual data. I usually prefer to calculate W directly (as above) rather than to use these approximations.

So, we have two measures, r and W, that are clearly related. The former, r, is a difference in Malthusian parameters and has units of inverse time; the latter, W, is a ratio of Malthusian parameters and is a dimensionless quantity. I usually prefer to use W because I find it easier to compare numbers when I don't have to think about the different time units that might be used in different experiments (hours, days, or whatever). However, W does not make any sense if one or both competitors decline in density during the competition experiment. We don't have to worry about this in most of our experiments, but in your work with antibiotics it is a very common situation.

Illustrative calculation of r when both populations are declining

Let's imagine now another hypothetical case in which both competitors declined, but not to the same extent. Let's say that the plate counts gave us the following estimates of the initial and final densities for competing strains A and B.

A(0) = 1.54 x 10^7 cells/ml
A(1) = 7.73 x 10^5 cells/ml
B(0) = 2.08 x 10^7 cells/ml
B(1) = 9.55 x 10^2 cells/ml

So, calculating the Malthusian parameters, we obtain

mA = ln[(7.73 x 10^5)/(1.54 x 10^7)]/day = -2.992/day
mB = ln[(9.55 x 10^2)/(2.08 x 10^7)]/day = -9.989/day

In other words, A declined by about 3 natural-logs, whereas B declined by almost 10 natural-logs. Expressing this as a selection rate constant,

r = (-2.992/day) - (-9.989/day) = 6.997/day

Thus, A had an advantage of about 7 natural-logs over B. However, if we express this as a relative fitness, it makes no sense.
W = (-2.992/day)/(-9.989/day) = 0.300

This W would seem to imply that A was less fit than B, but that is clearly wrong. And if one strain declines while the other increases, you will obtain negative values for W, which also do not make sense. So never use W if either or both populations decrease in density; instead, use r.
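The same Python sketch extends to the declining-populations case, and shows concretely why r stays meaningful while W does not (again a hypothetical illustration using the numbers above, not code from the original page):

```python
import math

# Plate-count estimates from the declining example
A0, A1 = 1.54e7, 7.73e5
B0, B1 = 2.08e7, 9.55e2

mA = math.log(A1 / A0)  # about -2.992 per day
mB = math.log(B1 / B0)  # about -9.989 per day

r = mA - mB  # about 6.997 per day: A's large advantage is clear
W = mA / mB  # about 0.300: both m's are negative, so the ratio
             # wrongly suggests that A is less fit than B
```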
Abusua Pa

Abusua Pa means "Good Family". It is the symbol for the family unit. We will use the 5 pixel grid to trace out this image. The image of this is shown below:

The plan to draw this shape is given below:
1. Lift up the pen
2. Increase the pensize to 40
3. Move it to the lower left hand corner of the outer square (-100, -100)
4. Place the pen down
5. Move forward to the position (100, -100)
6. Turn left by 90 degrees
7. Repeat steps 5 and 6 three times
8. Draw the center lines
9. Draw the outer circles
10. Reduce the pensize to 5
11. Draw the inner squares

Using Python Turtle

We will use the template.py file and rename it to abusuapa.py. The code for steps 1 and 2 is given below:

turtle.penup()
turtle.pensize(40)

To move the pen to the lower left hand corner, we have to use the setposition function. The position we want to move it to is (-100, -100). The code to do this is shown below:

turtle.setposition(-100, -100)

To find the distance between two points, we use the coordinateDistance function, which is shown below:

def coordinateDistance(x1, y1, x2, y2):
    dx = x1 - x2
    dy = y1 - y2
    D = math.sqrt((dx * dx) + (dy * dy))
    return D

We calculate the length between the two points using the code shown below:

length = coordinateDistance(-100, -100, 100, -100)

Rather than repeat steps 5, 6 and 7, we shall use the drawSquare function. The code to do this is shown below:

For this to work, we need to comment out the turtle.reset command in the drawSquare function. The generated image is now shown below:

I realize that since we are using the drawSquare function, we no longer need the setposition code. We can comment it out.

To draw the center lines, we have to move the turtle to the left hand side and move forward by the length of the side. Next we move the turtle to the bottom, set its heading to 90 degrees and move up by the length of the side.
The code to do this is shown below:

turtle.setposition(-100, 0)
turtle.setposition(0, -100)

The generated image is shown below:

To draw the outer circles we will start with the top and move clockwise. To draw the upper circle, we need to move the turtle to the position (50, 120). Then we draw the semi-circle. The code to do this is shown below:

turtle.setposition(50, 120)
turtle.circle(50, 180)

The generated image is shown below:

To draw the remaining semi-circles, we move clockwise and also change the heading of our turtle accordingly. The code to do this is shown below:

turtle.setposition(120, -50)
turtle.circle(50, 180)
turtle.setposition(-50, -120)
turtle.circle(50, 180)
turtle.setposition(-120, 50)
turtle.circle(50, 180)

The generated image is shown below:

Completing this shape is easy. All we have to do is draw the lines that are within the squares. To do this we must reduce the pensize to 5 and set the orientation of the turtle appropriately to draw the lines. The code to do this is shown below:

turtle.setposition(-60, -100)
turtle.setposition(-40, -100)
turtle.setposition(40, -100)
turtle.setposition(60, -100)

The generated image is shown below:

To draw the remaining horizontal lines, I shall start from the bottom of the symbol and work my way up. The code to do this is shown below:

turtle.setposition(-100, -60)
turtle.setposition(-100, -40)
turtle.setposition(-100, 40)
turtle.setposition(-100, 60)

The generated image is shown below:

We have successfully drawn the Abusua Pa symbol using the Python programming language. I would comment that it is a truly beautiful symbol.
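As a quick sanity check, the coordinateDistance helper can be run on its own, outside the turtle drawing:

```python
import math

def coordinateDistance(x1, y1, x2, y2):
    # Euclidean distance between (x1, y1) and (x2, y2)
    dx = x1 - x2
    dy = y1 - y2
    return math.sqrt((dx * dx) + (dy * dy))

# The bottom edge of the outer square runs from (-100, -100) to (100, -100)
length = coordinateDistance(-100, -100, 100, -100)
print(length)  # 200.0
```

The standard library's math.hypot(dx, dy) computes the same quantity and could replace the sqrt expression.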
National Curriculum Primary Keystage 2 Year 6 Mathematics

Important note: National Curriculum content shared on this website is under the terms of the Open Government Licence. To view this licence, visit http://www.nationalarchives.gov.uk/doc/open-government-licence/. You can download the full document at http://www.gov.uk/dfe/nationalcurriculum

Number – number and place value

Statutory requirements

Pupils should be taught to:
• read, write, order and compare numbers up to 10 000 000 and determine the value of each digit
• round any whole number to a required degree of accuracy
• use negative numbers in context, and calculate intervals across zero
• solve number and practical problems that involve all of the above.

Notes and guidance (non-statutory)

Pupils use the whole number system, including saying, reading and writing numbers accurately.

Number – addition, subtraction, multiplication and division

Statutory requirements

Pupils should be taught to:
• multiply multi-digit numbers up to 4 digits by a two-digit whole number using the formal written method of long multiplication
• divide numbers up to 4 digits by a two-digit whole number using the formal written method of long division, and interpret remainders as whole number remainders, fractions, or by rounding, as appropriate for the context
• divide numbers up to 4 digits by a two-digit number using the formal written method of short division where appropriate, interpreting remainders according to the context
• perform mental calculations, including with mixed operations and large numbers
• identify common factors, common multiples and prime numbers
• use their knowledge of the order of operations to carry out calculations involving the four operations
• solve addition and subtraction multi-step problems in contexts, deciding which operations and methods to use and why
• solve problems involving addition, subtraction, multiplication and division
• use estimation to check answers to calculations and determine, in the context of a problem, an appropriate degree of accuracy.

Notes and guidance (non-statutory)

Pupils practise addition, subtraction, multiplication and division for larger numbers, using the formal written methods of columnar addition and subtraction, short and long multiplication, and short and long division (see Mathematics Appendix 1). They undertake mental calculations with increasingly large numbers and more complex calculations. Pupils continue to use all the multiplication tables to calculate mathematical statements in order to maintain their fluency. Pupils round answers to a specified degree of accuracy, for example, to the nearest 10, 20, 50 etc., but not to a specified number of significant figures. Pupils explore the order of operations using brackets; for example, 2 + 1 x 3 = 5 and (2 + 1) x 3 = 9. Common factors can be related to finding equivalent fractions.

Number – fractions (including decimals and percentages)

Statutory requirements

Pupils should be taught to:
• use common factors to simplify fractions; use common multiples to express fractions in the same denomination
• compare and order fractions, including fractions > 1
• add and subtract fractions with different denominators and mixed numbers, using the concept of equivalent fractions
• multiply simple pairs of proper fractions, writing the answer in its simplest form [for example, × =]
• divide proper fractions by whole numbers [for example, ÷ 2 =]
• associate a fraction with division and calculate decimal fraction equivalents [for example, 0.375] for a simple fraction [for example,]
• identify the value of each digit in numbers given to three decimal places and multiply and divide numbers by 10, 100 and 1000 giving answers up to three decimal places
• multiply one-digit numbers with up to two decimal places by whole numbers
• use written division methods in cases where the answer has up to two decimal places
• solve problems which require answers to be rounded to specified degrees of accuracy
• recall and use equivalences between simple fractions, decimals and percentages, including in different contexts.

Notes and guidance (non-statutory)

Pupils should practise, use and understand the addition and subtraction of fractions with different denominators by identifying equivalent fractions with the same denominator. They should start with fractions where the denominator of one fraction is a multiple of the other (for example, + =) and progress to varied and increasingly complex problems. Pupils should use a variety of images to support their understanding of multiplication with fractions. This follows earlier work about fractions as operators (fractions of), as numbers, and as equal parts of objects, for example as parts of a rectangle. Pupils use their understanding of the relationship between unit fractions and division to work backwards by multiplying a quantity that represents a unit fraction to find the whole quantity (for example, if of a length is 36cm, then the whole length is 36 × 4 = 144cm). They practise calculations with simple fractions and decimal fraction equivalents to aid fluency, including listing equivalent fractions to identify fractions with common denominators. Pupils can explore and make conjectures about converting a simple fraction to a decimal fraction (for example, 3 ÷ 8 = 0.375). For simple fractions with recurring decimal equivalents, pupils learn about rounding the decimal to three decimal places, or other appropriate approximations depending on the context. Pupils multiply and divide numbers with up to two decimal places by one-digit and two-digit whole numbers. Pupils multiply decimals by whole numbers, starting with the simplest cases, such as 0.4 × 2 = 0.8, and in practical contexts, such as measures and money. Pupils are introduced to the division of decimal numbers by one-digit whole numbers, initially in practical contexts involving measures and money.
They recognise division calculations as the inverse of multiplication. Pupils also develop their skills of rounding and estimating as a means of predicting and checking the order of magnitude of their answers to decimal calculations. This includes rounding answers to a specified degree of accuracy and checking the reasonableness of their answers.

Ratio and proportion

Statutory requirements

Pupils should be taught to:
• solve problems involving the relative sizes of two quantities where missing values can be found by using integer multiplication and division facts
• solve problems involving the calculation of percentages [for example, of measures, and such as 15% of 360] and the use of percentages for comparison
• solve problems involving similar shapes where the scale factor is known or can be found
• solve problems involving unequal sharing and grouping using knowledge of fractions and multiples.

Notes and guidance (non-statutory)

Pupils recognise proportionality in contexts when the relations between quantities are in the same ratio (for example, similar shapes and recipes). Pupils link percentages or 360° to calculating angles of pie charts. Pupils should consolidate their understanding of ratio when comparing quantities, sizes and scale drawings by solving a variety of problems. They might use the notation a:b to record their work. Pupils solve problems involving unequal quantities, for example, ‘for every egg you need three spoonfuls of flour’, ‘ of the class are boys’. These problems are the foundation for later formal approaches to ratio and proportion.

Algebra

Statutory requirements

Pupils should be taught to:
• use simple formulae
• generate and describe linear number sequences
• express missing number problems algebraically
• find pairs of numbers that satisfy an equation with two unknowns
• enumerate possibilities of combinations of two variables.
Notes and guidance (non-statutory)

Pupils should be introduced to the use of symbols and letters to represent variables and unknowns in mathematical situations that they already understand, such as:
• missing numbers, lengths, coordinates and angles
• formulae in mathematics and science
• equivalent expressions (for example, a + b = b + a)
• generalisations of number patterns
• number puzzles (for example, what two numbers can add up to).

Measurement

Statutory requirements

Pupils should be taught to:
• solve problems involving the calculation and conversion of units of measure, using decimal notation up to three decimal places where appropriate
• use, read, write and convert between standard units, converting measurements of length, mass, volume and time from a smaller unit of measure to a larger unit, and vice versa, using decimal notation to up to three decimal places
• convert between miles and kilometres
• recognise that shapes with the same areas can have different perimeters and vice versa
• recognise when it is possible to use formulae for area and volume of shapes
• calculate the area of parallelograms and triangles
• calculate, estimate and compare volume of cubes and cuboids using standard units, including cubic centimetres (cm³) and cubic metres (m³), and extending to other units [for example, mm³ and km³].

Notes and guidance (non-statutory)

Pupils connect conversion (for example, from kilometres to miles) to a graphical representation as preparation for understanding linear/proportional graphs. They know approximate conversions and are able to tell if an answer is sensible. Using the number line, pupils use, add and subtract positive and negative integers for measures such as temperature. They relate the area of rectangles to parallelograms and triangles, for example, by dissection, and calculate their areas, understanding and using the formulae (in words or symbols) to do this.
Pupils could be introduced to compound units for speed, such as miles per hour, and apply their knowledge in science or other subjects as appropriate.

Geometry – properties of shapes

Statutory requirements

Pupils should be taught to:
• draw 2-D shapes using given dimensions and angles
• recognise, describe and build simple 3-D shapes, including making nets
• compare and classify geometric shapes based on their properties and sizes and find unknown angles in any triangles, quadrilaterals, and regular polygons
• illustrate and name parts of circles, including radius, diameter and circumference and know that the diameter is twice the radius
• recognise angles where they meet at a point, are on a straight line, or are vertically opposite, and find missing angles.

Notes and guidance (non-statutory)

Pupils draw shapes and nets accurately, using measuring tools and conventional markings and labels for lines and angles. Pupils describe the properties of shapes and explain how unknown angles and lengths can be derived from known measurements. These relationships might be expressed algebraically, for example, d = 2 × r; a = 180 – (b + c).

Geometry – position and direction

Statutory requirements

Pupils should be taught to:
• describe positions on the full coordinate grid (all four quadrants)
• draw and translate simple shapes on the coordinate plane, and reflect them in the axes.

Notes and guidance (non-statutory)

Pupils draw and label a pair of axes in all four quadrants with equal scaling. This extends their knowledge of one quadrant to all four quadrants, including the use of negative numbers. Pupils draw and label rectangles (including squares), parallelograms and rhombuses, specified by coordinates in the four quadrants, predicting missing coordinates using the properties of shapes. These might be expressed algebraically, for example, translating vertex (a, b) to (a – 2, b + 3); (a, b) and (a + d, b + d) being opposite vertices of a square of side d.
Statistics

Statutory requirements

Pupils should be taught to:
• interpret and construct pie charts and line graphs and use these to solve problems
• calculate and interpret the mean as an average.

Notes and guidance (non-statutory)

Pupils connect their work on angles, fractions and percentages to the interpretation of pie charts. Pupils both encounter and draw graphs relating two variables, arising from their own enquiry and in other subjects. They should connect conversion from kilometres to miles in measurement to its graphical representation. Pupils know when it is appropriate to find the mean of a data set.
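As a small illustration of the last point (a sketch for illustration only, not part of the statutory text), calculating the mean as an average:

```python
def mean(values):
    # The mean as an average: the total divided by how many values there are.
    return sum(values) / len(values)

test_scores = [4, 8, 15, 16, 23, 24]
print(mean(test_scores))  # 15.0
```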
Beyond the Second Law of Thermodynamics | Quanta Magazine

Maggie Chiang for Quanta Magazine

Since the steam engine began modernizing the world, the second law of thermodynamics has reigned over physics, chemistry, engineering and biology. Now, an upgrade is underway.

Thermodynamics — the study of energy — originated during the 1800s, as steam engines drove the Industrial Revolution. To understand its second law, imagine a sponge cake, fresh from the oven, cooling on a countertop. Scent molecules carrying heat drift away from the cake. A physicist might wonder: In how many ways can these molecules be arranged throughout the volume of space they currently occupy? We call this number of arrangements the molecules’ entropy. If the volume just encloses the cake (as it does when the cake is freshest), the entropy is relatively small. If the volume encompasses the whole kitchen (after the molecules have had time to travel farther), the entropy is exponentially larger.

The second law of thermodynamics decrees that the entropy of every closed, isolated system (such as our kitchen, assuming the windows and doors are shut) grows or remains constant. Accordingly, the scent of sponge cake wafts across the entire kitchen and never recedes. We sum up this behavior in an inequality: $latex S_f \ge S_i$, where $latex S_i$ is the molecules’ initial entropy and $latex S_f$ their final entropy.

The inequality is useful but vague, because it doesn’t tell us how much the entropy will grow, except in a special case: when the molecules are at equilibrium. That happens when large-scale properties — such as temperature and volume — remain constant, and no net flows of anything — such as energy or particles — enter or leave the system. (For example, our cake’s scent molecules reach equilibrium after they’ve fully filled the kitchen.) At equilibrium, the second law strengthens to an equality: $latex S_f = S_i$.
This simple, general equality provides precise information about many different types of thermodynamic systems at equilibrium.

But you and I and most of the world are far from equilibrium. And “far from equilibrium” is the wild west to theoretical physicists and chemists: unpredictable and untidy. Imposing laws on the wild west — meaning, for us, proving equalities about physics far from equilibrium — is quite difficult. But it’s not impossible.

For decades, physicists have worked with equalities that strengthen the second law. These equalities are known as fluctuation relations. They connect properties of systems far from equilibrium (which are difficult to reason about theoretically) with equilibrium properties (which are easy to reason about).

To see fluctuation relations in action, imagine a microscopic strand of DNA floating in water. Floating quietly, the DNA is at equilibrium, sharing the water’s temperature. Using lasers, we can hold one end of the strand steady and pull the other end. Stretching the strand jolts it out of equilibrium and requires work in the physics sense of the word: structured energy harnessed to accomplish a useful task. The amount of work required fluctuates from one pulling of the strand to the next, since a water molecule sometimes kicks the strand here, sometimes there. That means every possible amount of work has some probability of being needed during the next pull.

It turns out that these probabilities — which describe the DNA when it’s far from equilibrium — are directly related to properties that the DNA has at equilibrium. And that relation can be captured by an equality. This is the core of fluctuation relations: Properties of a system far from equilibrium participate in an equality with equilibrium properties. My colleague Chris Jarzynski at the University of Maryland discovered this in 1997. (He’s so modest, he calls the equality the nonequilibrium fluctuation relation, while the rest of us call it Jarzynski’s equality.)
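In its standard textbook form, Jarzynski’s equality reads

$latex \langle e^{-W/k_B T} \rangle = e^{-\Delta F/k_B T}$

where $latex W$ is the fluctuating work, $latex T$ is the temperature, $latex k_B$ is Boltzmann’s constant, $latex \Delta F$ is the equilibrium free-energy difference between the initial and final states, and the angle brackets denote an average over many repetitions of the far-from-equilibrium pull.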
Although the DNA experiment provided one of the most famous tests of this principle, the equation governs loads of systems, including those involving electrons, beads the size of bacteria and brass oscillators that resemble centimeter-long tire swings.

Fluctuation relations have implications fundamental and practical. For starters, from these equalities we can derive an expression of the second law of thermodynamics. So fluctuation relations not only extend our knowledge far from equilibrium, as we saw with the DNA strand, but also recapitulate information we know about equilibrium.

But the true power of fluctuation relations lies in an ironic fact: While equilibrium properties are easier to reason about theoretically, they are harder to measure experimentally than far-from-equilibrium properties. For instance, to measure the work needed to stretch the DNA far out of equilibrium, we can simply pull the strand quickly — for a short time. In contrast, to measure the work needed to stretch it while it remains at equilibrium, we’d have to stretch so slowly that the DNA would always remain practically at rest — so our experiment would take an infinitely long time.

Chemists, biologists and pharmacologists are interested in the equilibrium properties of proteins and other molecules, so using fluctuation relations gives them an experimental foothold. They can perform many short nonequilibrium trials and measure the work required in each. From this data, they can infer the probability of needing any given amount of work in the next nonequilibrium trial. Then they can plug those probabilities into the far-from-equilibrium side of the fluctuation relation to determine the equilibrium side. This method still requires oodles of trials, but researchers have leveraged mathematical tools to mitigate the difficulty. In this way, fluctuation relations have revolutionized thermodynamics, galvanizing experiments and providing detailed predictions about the world far from equilibrium.
But their usefulness doesn’t stop there. During the 2000s, quantum thermodynamicists — those of us who study how quantum physics changes classical concepts like work, heat and efficiency — wanted in on the fun, even though our discipline introduces extra puzzles. How to define and measure quantum work is unclear thanks to quantum uncertainty; for instance, measuring a quantum system’s energy changes that energy. As a result, different researchers have proposed different definitions for quantum work.

I imagine the various definitions as species in a Victorian menagerie. The “hummingbird” definition requires us to measure the quantum system gently, to disturb the energy only a little — as the fluttering of a hummingbird’s wings by your ear for an instant would disturb you. A “wildebeest” definition keeps to the middle of the pack, focusing our attention on average energy exchanges. Other definitions flutter, twitter and trumpet across the quantum-thermodynamics literature.

As you might expect, different definitions lead to different quantum fluctuation relations. The same is true for similar definitions adapted to different physical settings. Some relations are easier to test experimentally, while some are abstract and mathematical. Some describe high-energy particles, like those smashed together at CERN; one describes chaos in black holes; and one describes the universe’s expansion. Experimentalists have tested some quantum fluctuation relations — with trapped ions, quantum dots and more.

Will one equality rise to the top of the pile, like a monarch who’s bested all their relatives for the throne? I expect not. In my opinion, which definitions and equations are useful depends on which system you’re interested in, how you poke it and how you can measure it. The plurality of quantum fluctuation relations contrasts with the unity stereotypically prized by physicists, such as the long-sought Theory of Everything expected to unify all the fundamental forces.
Perhaps some principle will unify the quantum fluctuation relations, revealing them to be different sides of a multidimensional coin. Or perhaps quantum thermodynamics is simply richer than other fields of physics.
Black Box Statistics

Excited about starting to learn Python on April 1st, so thought I will try to look into something I should have looked into during this whole month of March… Last day huh. Like always. Hm.. anyway, some black boxing of some statistics terms.

MEAN - the average of the numbers. Add up all the numbers, divide by how many numbers there are.
MEDIAN - middle of a sorted list of numbers. If you can't find the middle number, add the 2 close ones and divide them by two.
MODE - numbers in order, count how many of each number; whichever appears most often is the mode. Two modes - bimodal. Grouping is useful also.
VARIANCE + (standard deviation) - the average of the squared differences from the Mean. Dogs.
IQR - interquartile range - numbers in order, find the median. Take the medians from both sides again and then subtract one from the other, you get the IQR.
CORRELATION and CAUSALITY - explanation Khan.
MAXIMUM LIKELIHOOD - looks cool. Finding the best fit first stat quest video.

2021-04-08 10:24

Was learning more statistics, feeling bad that I left it behind. Have some time before the Python course that I registered for, so will catch up with stats. Was looking at the Khan Academy statistics course and found some things that I struggle with. The first paragraph presented the two-way table and it was so confusing for me. The second day I cracked it. Okay, back to the business. Since I don't have a proper notebook that I dream about, I will just write my notes here and improve my vim and HTML skills at the same time.

A point means one or more values is there, then the other side the same, then the meat of the distribution is where the box is. The line is the median - the middle number. Distribution is skewed to the left - more values on the left. Outlier - a data point that is way off from all the other data points. Cluster - a group of data. 0-2 days is a cluster, for example.

2021-04-09 15:08

Standard deviation - how far on average we are from the mean.
Cool, makes sense in the last sentence of this video. A few notes on variance and how to find it, from the video above as well. Just wanna have those here.
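Since the goal is to learn Python anyway, the terms above can be un-black-boxed with the standard library (the sample data is made up):

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up sample, already sorted

mean = statistics.mean(data)           # add up all numbers, divide by count -> 5
median = statistics.median(data)       # middle of the sorted list -> 4.5
mode = statistics.mode(data)           # the value that appears most often -> 4
variance = statistics.pvariance(data)  # average squared difference from the mean -> 4
stdev = statistics.pstdev(data)        # square root of the variance -> 2.0

# IQR the way described above: medians of the lower and upper halves
lower, upper = data[: len(data) // 2], data[len(data) // 2 :]
iqr = statistics.median(upper) - statistics.median(lower)  # 6 - 4 -> 2
```

The `p` in `pvariance`/`pstdev` means "population"; `statistics.variance`/`statistics.stdev` are the sample versions, which divide by n − 1 instead of n.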
Writing Six-Digit Numbers Using a Place Value Table
Question Video • Mathematics • Third Year of Primary School

Write down the number given in the figure in digits.

Video Transcript

Write down the number given in the figure in digits. In the figure or the picture that we can see in front of us, there’s an abacus. And we know that an abacus is a way of representing numbers. Each set of beads, and in this case they’re all different-colored beads, represents a different place value column in the number. We can see that there are six different sets of beads here, so we know that it’s a six-digit number. And on this abacus, each place value is labeled, which is really helpful for us. In the hundred thousands place, there’s one bead. So, that’s 100,000. Then, we’ve got six beads in the ten thousands place, these have a value of 60,000, and then another six beads in the thousands place, or as it’s labeled here the one thousands place. So altogether, the number of thousands in our number is 166,000. And if we’re representing this part of our number using digits, we’d write a one, a six, and a six, followed by a comma, which is to separate the thousands from the next part of our number, the hundreds, the tens, and the ones. There are two beads in the hundreds place, these have a value of 200, three beads in the tens place, so they have a value of 30, and we’ve got five ones. So, the last part of our number reads 235. So, we just need to make sure that we write our two, our three, and our five after that comma. We’ve used what we know about place value to read this abacus and write the number that it shows in digits. The number is one hundred sixty-six thousand two hundred thirty-five. And we write it in digits as one-six-six-two-three-five. And we’ve used a comma to separate the thousands from the hundreds, tens, and ones.
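The transcript's place-value reasoning amounts to a weighted sum of bead counts; a quick sketch (the bead counts are the ones read off the abacus in the transcript):

```python
# Bead counts from the transcript, paired with their place values
beads = [
    (100_000, 1),  # hundred thousands: one bead
    (10_000, 6),   # ten thousands: six beads
    (1_000, 6),    # thousands: six beads
    (100, 2),      # hundreds: two beads
    (10, 3),       # tens: three beads
    (1, 5),        # ones: five beads
]

number = sum(place * count for place, count in beads)
written = f"{number:,}"  # the comma separates the thousands from the hundreds, tens and ones
```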
Multiplication Worksheet For 2s

Math, specifically multiplication, forms the cornerstone of numerous academic disciplines and real-world applications. Yet, for many students, mastering multiplication can pose a challenge. To address this obstacle, teachers and parents have embraced an effective tool: Multiplication Worksheet For 2s.

Introduction to Multiplication Worksheet For 2s

Welcome to The Multiplying 1 to 12 by 2 (100 Questions) Math Worksheet from the Multiplication Worksheets page at Math Drills. This math worksheet was created or last revised on 2021-02-19 and has been viewed 1,278 times this week and 1,598 times this month. It may be printed, downloaded or saved and used in your classroom, home school or other educational environment. Timed Quiz 0-2: this 50-question timed assessment has a mixture of multiplication facts with 0s, 1s and 2s. We have basic multiplication worksheets with factors up to 10 or 12, and multi-digit multiplication pages as well. This page has links to filled, partly filled and blank multiplication tables.

Value of Multiplication Practice

Understanding multiplication is critical, laying a solid foundation for advanced mathematical concepts. Multiplication Worksheet For 2s offer structured and targeted practice, fostering a deeper comprehension of this essential arithmetic operation.
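As an illustration of the kind of drill described above, here is a tiny generator for 2s facts (the function name and layout are my own sketch, not taken from any of the linked worksheets):

```python
import random

def make_2s_worksheet(n_questions=10, seed=None):
    """Return (question, answer) pairs drilling the 2 times table, factors 1-12."""
    rng = random.Random(seed)
    problems = []
    for _ in range(n_questions):
        k = rng.randint(1, 12)
        problems.append((f"2 x {k} = ____", 2 * k))
    return problems

for question, answer in make_2s_worksheet(5, seed=1):
    print(question)  # the answer key stays in `answer`, printed separately
```

Swapping the literal 2 for a parameter would cover the 3s, 4s and 5s pages the article mentions.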
Evolution of Multiplication Worksheet For 2s

A Minute Of Multiplication With 2s

Multiplication by 2s: this page is filled with worksheets on multiplying by 2s; there are quizzes, puzzles, skip counting and more. Multiplication by 3s: jump to this page if you're working on multiplying numbers by 3 only. Multiplication by 4s: here are some practice worksheets and activities for teaching only the 4s times tables.

This skip counting by 2s worksheet with a multiplication activity can be used as a center for early finishers or for guided or independent practice. It builds number sense and helps students learn their 2 times table. The skip-counting-by-2 song for this worksheet is very catchy and will definitely get stuck in students' heads.

From typical pen-and-paper exercises to digital interactive layouts, Multiplication Worksheet For 2s have evolved, accommodating diverse learning styles and preferences.

Kinds of Multiplication Worksheet For 2s

Standard Multiplication Sheets: straightforward exercises concentrating on multiplication tables, helping learners build a strong math base.
Word Problem Worksheets: real-life situations integrated into problems, boosting critical thinking and application skills.
Timed Multiplication Drills: tests designed to boost speed and accuracy, helping with quick mental math.
Advantages of Using Multiplication Worksheet For 2s

Printable Multiplication Facts 2s (PrintableMultiplication)

Here's another timed multiplication test with 0s, 1s, 2s and 3s (2nd through 4th grades, view PDF). Multiplication Timed Quiz 0-4: we have thousands of multiplication worksheets. This page will link you to facts up to 12s and fact families. We also have sets of worksheets for multiplying by 3s only, 4s only, 5s only, etc. Use these free multiplication worksheets to help your second grader practice and build fluency in multiplication. Start with some of the easier 1-digit number worksheets. Be sure to introduce multiplication by explaining its relationship to addition, then start with multiplying by one or even zero.

Boosted Mathematical Abilities: regular practice sharpens multiplication proficiency, enhancing overall math capabilities.
Improved Problem-Solving Abilities: word problems in worksheets develop logical reasoning and strategy application.
Self-Paced Learning Benefits: worksheets fit individual learning paces, promoting a comfortable and flexible learning environment.

How to Create Engaging Multiplication Worksheet For 2s

Including Visuals and Colors: lively visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Situations: connecting multiplication to everyday circumstances adds relevance and practicality to exercises.
Tailoring Worksheets to Different Skill Levels: customizing worksheets based on varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Apps: online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Personalizing Worksheets for Various Learning Styles

Visual Learners: visual aids and diagrams aid comprehension for learners inclined toward visual learning.
Auditory Learners: spoken multiplication problems or mnemonics cater to students who grasp concepts through auditory methods.
Kinesthetic Learners: hands-on tasks and manipulatives support kinesthetic learners in understanding multiplication.

Tips for Effective Implementation in Learning

Consistency in Practice: regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety: a mix of repetitive exercises and varied problem layouts maintains interest and understanding.
Providing Constructive Feedback: feedback helps in recognizing areas for improvement, encouraging continued growth.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement Obstacles: monotonous drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Math: negative assumptions around mathematics can hinder progress; creating a positive learning environment is essential.

Impact of Multiplication Worksheet For 2s on Academic Performance

Studies and Research Findings: research indicates a favorable relationship between consistent worksheet use and improved math performance.

Multiplication Worksheet For 2s are versatile tools, fostering mathematical proficiency in learners while accommodating diverse learning styles. From standard drills to interactive online resources, these worksheets not only reinforce multiplication skills but also promote critical thinking and problem-solving abilities.
More Multiplication Worksheet For 2s resources:
• Multiplication Worksheets 2S (PrintableMultiplication)
• Worksheet On 2 Times Table (Printable Multiplication Table, 2 Times Table)
• 2nd Grade Math Worksheets Multiplication Worksheet (Resume Examples)
• Free Multiplication Worksheet 2s (Worksheets4Free)
• Kids Page: 2 Times Multiplication Table Worksheet
• Free Multiplication Worksheet 1s, 2s And 3s (Free4Classrooms)
• Free Multiplication Worksheet 1s And 2s (Free4Classrooms)
• Multiplication Worksheets: Multiply by 2s (Super Teacher Worksheets)
• Multiplying by 2 worksheets (K5 Learning)
• 4th Grade Multiplication Worksheets (Free 4th Grade Multiplication Worksheets, Best Coloring)
• 14 Best Images Of Hard Multiplication Worksheets, 100 Problems (Math Fact Worksheets)
• 2 Multiplication Facts Worksheets

Timed Quiz 0-2: this 50-question timed assessment has a mixture of multiplication facts with 0s, 1s and 2s. We have basic multiplication worksheets with factors up to 10 or 12, and multi-digit multiplication pages as well. This page has links to filled, partly filled and blank multiplication tables.

Multiplying by 2 worksheets (K5 Learning): the first worksheet is a table of all multiplication facts 1-12 with two as a factor. 2 times table: Worksheet 1 (49 questions), Worksheets 2-3 (100 questions), Worksheets 4-5, and 3 more.

Frequently Asked Questions (FAQs)

Are Multiplication Worksheet For 2s suitable for all age groups? Yes, worksheets can be tailored to different ages and ability levels, making them adaptable for many students.

How often should pupils practice using Multiplication Worksheet For 2s? Consistent practice is vital. Regular sessions, ideally a couple of times a week, can produce considerable improvement.

Can worksheets alone improve math skills? Worksheets are a valuable tool but should be supplemented with varied learning approaches for comprehensive skill development.

Are there online platforms offering free Multiplication Worksheet For 2s? Yes, many educational websites provide free access to a wide range of Multiplication Worksheet For 2s.

How can parents support their children's multiplication practice at home? Encouraging consistent practice, providing help, and creating a positive learning atmosphere are valuable steps.
Plasma kinetic theory without the Markovian approximation: Numerical results

A numerical computation is carried out for the evolution of the amplitude and the frequency shift of a monochromatic electron wave in an unmagnetized plasma by using a recently generalized quasilinear theory. The numerical results show that the theory gives a good representation of trapping phenomena. It is found that trapping originates from retaining fast scale terms, both in the orbits and in the propagation of the perturbed distribution f′. Such terms in the latter represent space and time correlations of the orbits, or, equivalently, the "memory" of the average distribution f̄. Both theory and computation were done in the regime f′ ≪ f̄, or eΦ/T < 1. Mode-coupling terms are not needed in such a regime.

Physics of Fluids B. Pub Date: March 1989.

Keywords: Kinetic Theory; Nonlinear Equations; Plasma Waves; Coupled Modes; Numerical Stability; Perturbation; Plasma Oscillations; Trapping; Plasma Physics
SICP Solutions
Chapter 2, Building Abstractions with Data
Section - 2.1 - Introduction to Data Abstraction
Exercise 2.16

We know from the previous exercise that this problem arises because we expect that an interval variable should take the same value in all of its occurrences in an expression. Thus to fix this we have to make sure that an interval variable takes the same value in all of its occurrences in an expression while doing the interval arithmetic. This problem can be approached in two ways:
• Convert the expression so that each interval variable occurs only once. This is however not always possible. For eg: $I^n + I^{n-1}$ can not be converted to an expression containing $I$ only once.
• Do the interval arithmetic in such a way that when a variable is repeated, all occurrences of the interval variable take the same value.

Thus we are left with the second approach, as the first approach will not work in general. I will try to outline my approach to how we may do it:

Note: I think there is a high chance that it is an incorrect approach. I wrote it to remember, so that if I ever visit this again I may get some direction from this approach.

We will do lazy evaluation, i.e. we will not perform computation till we get the whole expression, and after analyzing the whole expression we will be able to come up with formulae for the lower bound and the upper bound. Instead of computing the lower bound and upper bound while doing interval arithmetic, we will remember the interval variables used for computing the lower bounds and upper bounds and the corresponding operation. For eg: in the case of the addition $I_1 + I_2$, remember that $\text{ lower-bound } = \text{ lower-bound}( I_1 ) + \text{ lower-bound}(I_2)$. Note that we are not computing the sum, but we remember that the lower bound of $I_1 + I_2$ is the sum of the lower bounds of $I_1$ and $I_2$. Similarly we store the rule for the upper bounds.
The same process can be repeated for multiplication: for $I_1 \times I_2$, we follow the same process to remember which values of the intervals are used to compute the lower or upper bounds. Thus we can extend the same process of remembering for finding the upper bound of $I_1 + I_2 + ... + I_n$ or $I_1 \times I_2 \times ... \times I_n$. Extending further, we recursively keep remembering the overall formula for computing, without actually computing the lower bound and upper bound, till we are done reading the complete expression. For eg: for $I_1 \times I_2 + I_2 \times I_3$, maybe we have lower-bound = $lb(I_1) \times ub(I_2) + lb(I_1) \times lb(I_3)$, where $lb =$ lower-bound and $ub =$ upper-bound.

Once we are done, we have the formulas for computing the lower bound and the upper bound (one for each). Now we can compute the final values of the lower bound and upper bound using these formulas as follows: if a variable is repeated multiple times in the formula, then we only need to worry when some repetitions use the lower bound of the repeated variable and others use the upper bound. Otherwise, if all repetitions use the same bound, then we can use that value while computing the overall value of the bound (lower or upper). For the former case, we compute the bound (lower or upper) two times, once by using the lower bound and once by using the upper bound. And depending on which bound, lower or upper, we are computing, we pick the bound value (lower or upper) of the repeated variable which results in the smallest or largest value accordingly. We can use arbitrary values for the other variables (obviously from the corresponding interval range only).
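The dependency problem this exercise wrestles with is easy to demonstrate with a few lines of naive interval arithmetic (a sketch in Python rather than the book's Scheme):

```python
class Interval:
    """Naive interval arithmetic: every occurrence of a variable is
    treated as an independent interval, which is exactly the problem
    described above."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __sub__(self, other):
        # worst-case bounds over all pairs of values from the two intervals
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

i = Interval(1, 2)
d = i - i  # x - x is exactly 0 for every x in [1, 2], yet naively we get [-1, 1]
```

The second approach in the write-up amounts to deferring evaluation so that both occurrences of `i` are forced to take the same value, which would shrink `d` back to [0, 0].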
The Stacks project

Lemma 4.27.14. Let $\mathcal{C}$ be a category and let $S$ be a right multiplicative system of morphisms of $\mathcal{C}$. Let $A, B : X \to Y$ be morphisms of $S^{-1}\mathcal{C}$ which are the equivalence classes of $(f : X' \to Y, s : X' \to X)$ and $(g : X' \to Y, s : X' \to X)$. The following are equivalent:
1. $A = B$,
2. there exists a morphism $t : X'' \to X'$ in $S$ with $f \circ t = g \circ t$, and
3. there exists a morphism $a : X'' \to X'$ with $f \circ a = g \circ a$ and $s \circ a \in S$.
Experiment #1: Hydrostatic Pressure 1. Introduction Hydrostatic forces are the resultant force caused by the pressure loading of a liquid acting on submerged surfaces. Calculation of the hydrostatic force and the location of the center of pressure are fundamental subjects in fluid mechanics. The center of pressure is a point on the immersed surface at which the resultant hydrostatic pressure force acts. 2. Practical Application The location and magnitude of water pressure force acting on water-control structures, such as dams, levees, and gates, are very important to their structural design. Hydrostatic force and its line of action is also required for the design of many parts of hydraulic equipment. 3. Objective The objectives of this experiment are twofold: • To determine the hydrostatic force due to water acting on a partially or fully submerged surface; • To determine, both experimentally and theoretically, the center of pressure. 4. Method In this experiment, the hydrostatic force and center of pressure acting on a vertical surface will be determined by increasing the water depth in the apparatus water tank and by reaching an equilibrium condition between the moments acting on the balance arm of the test apparatus. The forces which create these moments are the weight applied to the balance arm and the hydrostatic force on the vertical surface. 5. Equipment Equipment required to carry out this experiment is the following: • Armfield F1-12 Hydrostatic Pressure Apparatus, • A jug, and • Calipers or rulers, for measuring the actual dimensions of the quadrant. 6. Equipment Description The equipment is comprised of a rectangular transparent water tank, a fabricated quadrant, a balance arm, an adjustable counter-balance weight, and a water-level measuring device (Figure 1.1). The water tank has a drain valve at one end and three adjustable screwed-in feet on its base for leveling the apparatus. The quadrant is mounted on a balance arm that pivots on knife edges. 
The knife edges coincide with the center of the arc of the quadrant; therefore, the only hydrostatic force acting on the vertical surface of the quadrant creates moment about the pivot point. This moment can be counterbalanced by adding weight to the weight hanger, which is located at the left end of the balance arm, at a fixed distance from the pivot. Since the line of actions of hydrostatic forces applied on the curved surfaces passes through the pivot point, the forces have no effect on the moment. The hydrostatic force and its line of action (center of pressure) can be determined for different water depths, with the quadrant’s vertical face either partially or fully submerged. A level indicator attached to the side of the tank shows when the balance arm is horizontal. Water is admitted to the top of the tank by a flexible tube and may be drained through a cock in the side of the tank. The water level is indicated on a scale on the side of the quadrant [1]. Figure 1.1: Armfield F1-12 Hydrostatic Pressure Apparatus 7. Theory In this experiment, when the quadrant is immersed by adding water to the tank, the hydrostatic force applied to the vertical surface of the quadrant can be determined by considering the following • The hydrostatic force at any point on the curved surfaces is normal to the surface and resolves through the pivot point because it is located at the origin of the radii. Hydrostatic forces on the upper and lower curved surfaces, therefore, have no net effect – no torque to affect the equilibrium of the assembly because the forces pass through the pivot. • The forces on the sides of the quadrant are horizontal and cancel each other out (equal and opposite). • The hydrostatic force on the vertical submerged face is counteracted by the balance weight. The resultant hydrostatic force on the face can, therefore, be calculated from the value of the balance weight and the depth of the water. 
• The system is in equilibrium if the moments generated about the pivot point by the hydrostatic force and added weight (=mg) are equal, i.e.:

$m g L = F y \qquad (1)$

where:
m : mass on the weight hanger,
L : length of the balance arm (Figure 1.2),
F : hydrostatic force, and
y : distance between the pivot and the center of pressure (Figure 1.2).

Then the calculated hydrostatic force and center of pressure on the vertical face of the quadrant can be compared with the experimental results.

7.1 Hydrostatic Force

The magnitude of the resultant hydrostatic force (F) applied to an immersed surface is given by:

$F = P_c A = \rho g y_c A \qquad (2)$

where:
P_c : pressure at the centroid of the immersed surface,
A : area of the immersed surface,
y_c : depth of the centroid of the immersed surface measured from the water surface,
ρ : density of the fluid, and
g : acceleration due to gravity.

The hydrostatic force acting on the vertical face of the quadrant can be calculated as:

• Partially immersed vertical plane (Figure 1.2a):

$F = \tfrac{1}{2} \rho g B d^2 \qquad (3a)$

• Fully immersed vertical plane (Figure 1.2b):

$F = \rho g B D \left( d - \tfrac{D}{2} \right) \qquad (3b)$

where:
B : width of the quadrant face,
d : depth of water from the base of the quadrant, and
D : height of the quadrant face.

7.2 Theoretical Determination of Center of Pressure

The center of pressure is calculated as:

$y_p = \frac{I_x}{A y_c} \qquad (4)$

where $I_x$ is the 2nd moment of area of the immersed body about an axis in the free surface. By use of the parallel axes theorem:

$I_x = I_c + A y_c^2 \qquad (5)$

where $I_c$ is the 2nd moment of area of the immersed body about the centroidal axis. This gives:

• Partially immersed vertical plane:

$y_p = \frac{2d}{3} \qquad (6a)$

• Fully immersed vertical plane:

$y_p = \left( d - \frac{D}{2} \right) + \frac{D^2}{12 \left( d - \frac{D}{2} \right)} \qquad (6b)$

The depth of the center of pressure below the pivot point is given by:

$y = y_p + H - d \qquad (7)$

in which H is the vertical distance between the pivot and the base of the quadrant.
Substitution of Equations (6a) and (6b) into (7) yields the theoretical results, as follows:

• Partially immersed vertical plane (Figure 1.2a):

$y = H - \frac{d}{3} \qquad (8a)$

• Fully immersed vertical rectangular plane (Figure 1.2b):

$y = H - d + \left( d - \frac{D}{2} \right) + \frac{D^2}{12 \left( d - \frac{D}{2} \right)} \qquad (8b)$

Figure 1.2a: Partially submerged quadrant (c: centroid, p: center of pressure)
Figure 1.2b: Fully submerged quadrant (c: centroid, p: center of pressure)

7.3 Experimental Determination of Center of Pressure

For equilibrium of the experimental apparatus, moments about the pivot are given by Equation (1). By substitution of the derived hydrostatic force, F, from Equations (3a) and (3b), we have:

• Partially immersed vertical plane (Figure 1.2a):

$y = \frac{m g L}{F} = \frac{2 m L}{\rho B d^2} \qquad (9a)$

• Fully immersed vertical rectangular plane (Figure 1.2b):

$y = \frac{m L}{\rho B D \left( d - \frac{D}{2} \right)} \qquad (9b)$

8. Experimental Procedure

Begin the experiment by measuring the dimensions of the quadrant vertical endface (B and D) and the distances (H and L), and then perform the experiment by taking the following steps:
• Wipe the quadrant with a wet rag to remove surface tension and prevent air bubbles from forming.
• Place the apparatus on a level surface, and adjust the screwed-in feet until the built-in circular spirit level indicates that the base is horizontal. (The bubble should appear in the center of the spirit level.)
• Position the balance arm on the knife edges and check that the arm swings freely.
• Place the weight hanger on the end of the balance arm and level the arm, using the counterweight, so that the balance arm is horizontal.
• Add 50 grams to the weight hanger.
• Add water to the tank and allow time for the water to settle.
• Close the drain valve at the end of the tank, then slowly add water until the hydrostatic force on the end surface of the quadrant is balanced. This can be judged by aligning the base of the balance arm with the top or bottom of the central marking on the balance rest.
• Record the water height, which is displayed on the side of the quadrant in mm.
If the quadrant is partially submerged, record the reading in the partially submerged portion of the Raw Data Table.
• Repeat the steps, adding 50 g weight each time, until the final weight of 500 g is reached. When the quadrant is fully submerged, record the readings in the fully submerged part of the Raw Data Table.
• Repeat the procedure in reverse by progressively removing the weights.
• Release the water valve, remove the weights, and clean up any spilled water.

9. Results and Calculations

Please visit this link for accessing the excel workbook for this experiment.

9.1 Results

Record the following dimensions:
• Height of quadrant endface, D (m) =
• Width of submerged face, B (m) =
• Length of balance arm, L (m) =
• Distance from base of quadrant to pivot, H (m) =

All mass and water depth readings should be recorded in the Raw Data Table:

Raw Data Table
Test No. | Mass, m (kg) | Depth of Immersion, d (m)
Partially submerged … 3
Fully submerged … 8

9.2 Calculations

Calculate the following for the partially and fully submerged quadrants, and record them in the Result Table:
• Hydrostatic force (F)
• Theoretical depth of center of pressure below the pivot (y)
• Experimental depth of center of pressure below the pivot (y)

Result Table
Test No. | Mass m (kg) | Depth of immersion d (m) | Hydrostatic force F (N) | Theoretical depth of center of pressure (m) | Experimental depth of center of pressure (m)

10. Report

Use the template provided to prepare your lab report for this experiment. Your report should include the following:
• Table(s) of raw data
• Table(s) of results
• Plots of the following graphs:
□ Hydrostatic force (y-axis) vs depth of immersion (x-axis),
□ Theoretical depth of center of pressure (y-axis) vs depth of immersion (x-axis),
□ Experimental depth of center of pressure (y-axis) vs depth of immersion (x-axis),
□ Theoretical depth of center of pressure (y-axis) vs experimental depth of center of pressure (x-axis).
Calculate and present the value for this graph, and
□ Mass (y-axis) vs depth of immersion (x-axis) on a log-log scale graph.
• Comment on the variations of hydrostatic force with depth of immersion.
• Comment on the relationship between the depth of the center of pressure and the depth of immersion.
• For both hydrostatic force and theoretical depth of center of pressure plotted vs depth of immersion, comment on what happens when the vertical endface of the quadrant becomes fully submerged.
• Comment on and explain the discrepancies between the experimental and theoretical results for the center of pressure.
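The Section 9.2 calculations can be sketched in a few lines, using the partially and fully submerged force and center-of-pressure expressions described in Section 7 (the quadrant dimensions below are hypothetical placeholders, not measured values):

```python
RHO = 1000.0  # density of water, kg/m^3
G = 9.81      # acceleration due to gravity, m/s^2

def hydrostatic_force(d, B, D):
    """Resultant force on the vertical quadrant face for water depth d."""
    if d <= D:  # partially submerged face
        return 0.5 * RHO * G * B * d ** 2
    return RHO * G * B * D * (d - D / 2)  # fully submerged face

def center_of_pressure_depth(d, B, D, H):
    """Theoretical depth of the center of pressure below the pivot."""
    if d <= D:  # partially submerged: pressure center sits 2d/3 below the surface
        return H - d / 3
    yc = d - D / 2  # centroid depth of the fully submerged face
    return H - d + yc + D ** 2 / (12 * yc)

def experimental_depth(m, L, F):
    """Experimental depth of center of pressure from the moment balance m*g*L = F*y."""
    return m * G * L / F

# Hypothetical apparatus dimensions (metres), for illustration only
B, D, H, L = 0.075, 0.100, 0.200, 0.275
F = hydrostatic_force(0.05, B, D)               # a partially submerged trial
y_theory = center_of_pressure_depth(0.05, B, D, H)
```

Running each recorded (m, d) pair through these three functions fills in the Result Table columns directly.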
Autism is a complex neurodevelopmental condition characterized by several behavioral peculiarities, involving avoidance of social interactions, reduced communication, and restricted interests [see the American Psychiatric Association [APA] (2022)]. The biological origin of this condition is a subject of active research, in an effort to understand its fundamental neural mechanisms. In this regard, a current perspective is that autistic traits could be explained by modifications in brain network characteristics, especially in the connectivity among brain areas underlying perception, social cognition, language, and executive functions (Kana et al., 2014). Indeed, many recent studies have reported that individuals within the autism spectrum disorder (ASD) exhibit altered brain connectivity compared to typically developing individuals. However, literature reports are often inconsistent [see review papers by Maximo et al. (2014); Mohammad-Rezazadeh et al. (2016), Carroll et al. (2021)].
The traditional point of view, predominantly supported by studies using structural and functional MRI, hypothesizes that autism is characterized by long-range underconnectivity, potentially combined with local overconnectivity (Just et al., 2012; Abrams et al., 2013; Delbruck et al., 2019). Conversely, there have been several studies, using EEG and MEG, in which the hypoconnectivity hypothesis could not be confirmed in ASD. Rather, several studies pointed to hyperconnectivity among specific brain areas, especially between thalamic and sensory regions (Nair et al., 2013) or between the extrastriatal cortex, frontal and temporal regions (Murphy et al., 2012; Uddin et al., 2013; Fu et al., 2019). Finally, a third line of evidence points towards the existence of a more subtle mixture of hypo- and hyper-connectivity, suggesting the presence of multiple mechanisms (Di Martino et al., 2011; Lynch et al., 2013; Kana et al., 2014; Abbott et al., 2018). Some of these differences, of course, can derive from methodological issues. Connectivity is an elusive concept that can be dramatically affected by the measurement technique adopted (for instance, fMRI vs. EEG/MEG), by the particular task involved (vs. resting state analysis), and perhaps more importantly, by the specific measure employed to estimate the connection strength (e.g., functional, effective or anatomical connectivity, directed or undirected measures, bivariate or multivariate). Indeed, most connectivity measures in the literature are non-directional and hence are inadequate to discover differences in lateralization or in top-down vs. bottom-up information processing (O’Reilly et al., 2017). In particular, it is well-known that cognitive functions are characterized by a complex balance between integration, involving the coordination among several brain areas, and segregation, involving specialized computations in local areas.
According to the predictive coding theory (Clark, 2013), the brain continually generates models of the world by integrating data coming from sensory input with information from memory. Sensory perception is thus the result of a combination between present data from the external world (usually carried by feedforward bottom-up connectivity) and past or prior knowledge (mainly conveyed through feedback, top-down connections); hence, an equilibrium between these directional connectivity patterns is necessary to adaptively integrate stimuli-driven and internally-driven representations, preventing their segregation or excessive bias towards one or the other. Recent hypotheses (Pellicano and Burr, 2012; Van de Cruys et al., 2014) assume that ASD individuals exhibit an impaired predictive coding, characterized by an imbalance between these two processing streams, i.e., dominant bottom-up processing and relatively weaker top-down influences compared with control individuals. This signifies that people in the autistic spectrum would place much more emphasis on present sensory stimuli and somewhat less weight on contextual information. This imbalance, in turn, may result in poor social adaptation and insufficient appropriateness to social requirements (Sinha et al., 2014). Results that support this point of view include a reduced susceptibility to illusions and top-down expectations (Skewes et al., 2015; Crespi and Dinsdale, 2019) and increased local (vs. global) processing in individuals within the autism spectrum (Mottron et al., 2006; Cribb et al., 2016) leading to a more stimulus- and detail-driven perceptual style. The aforementioned alterations in predictive coding may be caused by altered brain connectivity, especially concerning top-down vs. bottom-up circuitry (Tarasi et al., 2022).
Additionally, alterations in connectivity patterns may involve a different transmission of brain rhythms and an impaired wave synchronization, which plays a pivotal role in several cognitive tasks, including attention, information selection, working memory, and emotion (Basar-Eroglu et al., 2007; Clayton et al., 2015). Finally, increasing evidence both at the genetic and behavioral levels demonstrates that autism does not represent a dichotomous condition (i.e., an ON/OFF type) but is best described as a spectrum of manifestations ranging from clinical forms to trait-like expressions within the general population (Baron-Cohen et al., 2001; Cribb et al., 2016; Bralten et al., 2018) that share a peculiar cognitive style that distinguishes them from the rest of the clinical and nonclinical population (Tarasi et al., 2022). Following these ideas, in a recent paper (Tarasi et al., 2021), we investigated whether the patterns of brain connectivity, estimated with Granger causality from EEG source reconstruction, exhibit differences in two nonclinical groups classified as low or high on autistic traits. Preliminary results suggested that connectivity along the fronto-posterior axis is sensitive to the magnitude of the autistic features and that a prevalence of ascending connections characterized participants with higher autistic traits. The present study aims to further extend the previous work on a larger cohort allowing for an improved connectivity analysis by implementing measures taken from the graph theory. In particular, new aspects of the present study concern: (i) the use of a larger data set; (ii) a preliminary analysis at the lobe level; (iii) the use of more sophisticated indices taken from the graph theory, such as hubness and authority; (iv) the use of a more sophisticated statistical analysis (i.e., the use of sparse connectivity matrices) to better point out differences in connectivity between the two groups.
Particularly, graph theory represents a powerful tool able to summarize complex networks consisting of hundreds of edges, using a few parameters with a clear geometrical meaning. Recently, this theory has been applied with increasing success as an integrative approach, able to evaluate the complex networks that mediate brain cognitive processes (van Wijk et al., 2010; Wang, 2010; Minati et al., 2013; Farahani et al., 2019). In particular, since our attention here is primarily devoted to the presence of differences in the direction of connections (ascending vs. descending, lateralization, etc.), we focused our analysis on the in degree and out degree, defined as the sum of connection strengths entering or leaving a given node. Furthermore, we also tested whether two analogous but more specialized measures of centrality, hubness and authority, can provide additional information to better characterize directionality. The hub’s index of a node is the weighted sum of the authority’s indices of all its successors; hence, this measure summarizes the capacity of a node to send information to other critical, authoritative nodes. The authority’s index of a node is the weighted sum of the hub’s indices of all its predecessors and summarizes the capacity of a node to receive essential information from hubs. Here, we investigate whether differences in these measures, and the pattern of out and in connections from the dominant nodes, can reveal a difference in the network’s topology, and alterations in information processing, as a function of the autistic trait. Forty participants (23 female; age range 21–30, mean age = 24.1, SD = 2.4), with no neurocognitive or psychiatric disorders, took part in the study. All participants signed a written informed consent before taking part in the study, conducted according to the Declaration of Helsinki and approved by the Bioethics Committee of the University of Bologna. 
All participants completed the Autism-Spectrum Quotient test (AQ) (Baron-Cohen et al., 2001). The mean AQ score was 16.1 ± 6.6. The AQ is a self-report widely used to measure autistic traits in the general population. It provides a global score, with higher values indicating higher levels of autistic traits. We used the original scoring methods converting each item into a dichotomous response (agree/disagree) and assigning the response a binary code (0/1). In the present study, the total score of the AQ was considered, and the Italian version of the AQ was adopted (Ruta et al., 2012). The participants were divided into two groups, depending on their AQ score being below or above a given cutoff, with the cutoff set to 17, since this value corresponds to the average AQ score in the non-clinical population (Ruzich et al., 2015). In the following, we will refer to the two groups of participants as Low AQ score Group (N = 21) and High AQ score Group (N = 19). Participants comfortably sat in a room with dimmed lights. Electroencephalographic activity (EEG) was recorded at rest for 2 min while participants kept their eyes closed. A set of 64 electrodes was mounted according to the international 10–10 system. EEG was measured with respect to a vertex reference (Cz), and all impedances were kept below 10 kΩ. EEG signals were acquired at a rate of 1000 Hz. EEG was processed offline with custom MATLAB scripts (version R2020b) and the EEGLAB toolbox (Delorme and Makeig, 2004). The EEG recording was filtered offline in the 0.5–70 Hz band. The signals were visually inspected, and noisy channels were spherically interpolated. An average of 0.05 ± 0.15 channels were interpolated. The recording was then re-referenced to the average of all electrodes. Subsequently, we applied the Independent Component Analysis (ICA), an effective method largely employed to remove EEG artifacts. 
In particular, we removed the EEG recording segments corrupted by noise through visual inspection and then we removed all the independent components containing artifacts clearly distinguishable by means of visual inspection from brain-related components. An average of 3 ± 3.7 independent components were removed for each participant. Since we were interested in connectivity analysis, cortical source activity was reconstructed from pre-processed EEG signals. To this aim, intracortical current densities were estimated using the Matlab toolbox Brainstorm (Tadel et al., 2011). Firstly, to solve the forward problem, a template head model based on realistic anatomical information (ICBM 152 MNI template) was used. The model consists of three layers representing the scalp, the outer skull surface, and the inner skull surface, and includes the cortical source space discretized into 15,002 vertices. The forward problem was solved in OpenMEEG software (Gramfort et al., 2010) via the Boundary Element Method. sLORETA (standardized Low-Resolution Electromagnetic Tomography) algorithm was used for cortical sources estimation. sLORETA is a functional imaging technique belonging to the family of linear inverse solutions for 3D EEG distributed source modeling (Pascual-Marqui, 2002). Specifically, this method computes a weighted minimum norm solution, where localization inference is based on standardized values of the current density estimates. The solution provided is instantaneous, distributed, discrete, linear with the property of zero dipole-localization error under ideal (noise-free) conditions. Constrained dipole orientations were chosen for sources estimation, modeling each dipole as oriented perpendicularly to the cortical surface. Hence, for each participant, we reconstructed the resting-state time series of standardized current densities at all 15,002 cortical vertices. 
Then, the cortical vertices were grouped into cortical regions according to the Desikan–Killiany atlas (Desikan et al., 2006) provided in Brainstorm, which defines 68 regions of interest (ROIs). The activities of all vertices belonging to a particular ROI were averaged at each time point, obtaining a single time series representative of the activity of that cortical ROI. It is worth noticing that, by considering the average behavior at the ROIs level, it was possible to mitigate some possible inaccuracies in source reconstruction at single vertex level, due to the use of a template head model for all participants (instead of subject-specific head models). Table 1 lists the 68 Desikan-Killiany ROIs and provides the mapping of individual ROIs to each lobe.

The approximate mapping of the “Desikan-Killiany” ROIs to the lobes (ROI | Label | Lobe):

Banks of Sup. Temp. Sulcus | BK | Temporal
Caudal anterior cingulate | cAC | Frontal
Caudal middle frontal | cMF | Frontal
Cuneus | CU | Occipital
Entorhinal | EN | Temporal
Frontal pole | FP | Frontal
Fusiform | FU | Temporal
Inferior parietal | IP | Parietal
Inferior temporal | IT | Temporal
Insula | IN | Parietal
Isthmus cingulate | IST | Parietal
Lateral occipital | LO | Occipital
Lateral orbitofrontal | lOF | Frontal
Lingual | LG | Occipital
Medial orbitofrontal | mOF | Frontal
Middle temporal | MT | Temporal
Paracentral | PAC | Frontal
Parahippocampal | PH | Temporal
Pars opercularis | pOP | Frontal
Pars orbitalis | pOR | Frontal
Pars triangularis | pTR | Frontal
Pericalcarine | PCL | Occipital
Postcentral | POC | Parietal
Posterior cingulate | PCG | Parietal
Precentral | PRC | Frontal
Precuneus | PCU | Parietal
Rostral anterior cingulate | rAC | Frontal
Rostral middle frontal | rMF | Frontal
Superior frontal | SF | Frontal
Superior parietal | SP | Parietal
Superior temporal | ST | Temporal
Supramarginal | SMG | Parietal
Temporal pole | TP | Temporal
Transverse temporal | TT | Temporal

The Desikan–Killiany atlas comprises 34 ROIs in each hemisphere.
The mapping proposed by FreeSurfer (https://surfer.nmr.mgh.harvard.edu/fswiki/CorticalParcellation) was used as a reference. The only difference between our mapping and the reference resides in the mapping of the insula, which was not ascribed to any lobe in FreeSurfer. We assigned the insula to the parietal lobe. Once the time waveform in each cortical ROI was estimated (as described above), for each participant k (k = 1,…,40) we evaluated the connectivity among the ROIs. To this aim, we adopted Granger Causality (GC) (Granger, 1969; Geweke, 1982; Ding et al., 2006; Bressler and Seth, 2011; Stokes and Purdon, 2017) which provides directional metrics of connectivity, and is based on the autoregressive (AR) modeling framework as described in the following. Let’s indicate with x[k,i][n] and x[k,j][n] two temporal series representing the activity of two distinct cortical ROIs (ROI[i] and ROI[j]) for participant k, where n is the discrete time index. The Granger Causality quantifies the causal interaction from ROI[i] to ROI[j] as the improvement in predictability of x[k,j][n] at time sample n when using a bivariate AR representation, including both past values of x[k,j] and past values of x[k,i], compared to a univariate AR representation, including only past values of x[k,j]. Mathematically, the following two equations hold for the univariate and bivariate AR model, respectively.

x_{k,j}[n] = \sum_{m=1}^{p} a_{k,j}[m]\, x_{k,j}[n-m] + \eta_{k,j}[n]    (1)

x_{k,j}[n] = \sum_{m=1}^{p} b_{k,j}[m]\, x_{k,j}[n-m] + \sum_{m=1}^{p} c_{k,ji}[m]\, x_{k,i}[n-m] + \varepsilon_{k,j}[n]    (2)

Index m represents the time lag (in time samples), and p (model order) defines the maximum time lag, i.e., the maximum number of lagged observations included in the models. Thus, in Eq. 1, the current value of x[k,j] (at time sample n) is predicted in terms of its own p past values (at time samples n−1,n−2,…,n−p), while in Eq. 2 prediction is made also in terms of the p past values of x[k,i].
a, b, c are the model’s coefficients (dependent on time lag), and the time series η[k,j][n] and ε[k,j][n] represent the prediction error of the univariate and bivariate AR model, respectively. The prediction error variance quantifies the model’s prediction capability based on past samples: the lower the variance, the better the model’s prediction. The GC from x[k,i] to x[k,j] is defined as the logarithm of the ratio between the variances of the two prediction errors, i.e.,

GC_{k,\,ROI_i \to ROI_j} = \ln\left( \frac{\mathrm{var}(\eta_{k,j})}{\mathrm{var}(\varepsilon_{k,j})} \right)    (3)

The measure in Eq. 3 is always positive: the larger its value, the larger the improvement in x[k,j][n] prediction when using information from the past of x[k,i] together with the past of x[k,j], and this is interpreted as a stronger causal influence from ROI[i] to ROI[j]. Similarly, Granger Causality from x[k,j] to x[k,i], GC[k,ROI[j]→ROI[i]], is computed via the same procedure, building the AR models for the time series x[k,i]. For each participant k, we computed the two directed measures of GC for each pair of ROIs, overall obtaining 68×68 connectivity values (with all auto-loops equal to zero). In all cases, the order p of the AR models was set equal to 20, corresponding to a 20 ms time span at 1000 Hz sampling rate (as in our data); thus, in this study, the functional interactions between nodes were evaluated within a 20 ms time delay. This value for parameter p was determined based on a preliminary analysis where we tested different values for the order of the model, obtaining that GC results did not change substantially for p≥20. As previously reported by other authors (Deshpande et al., 2009; Sporns, 2018) the connectivity between the ROIs of a brain network can be described as a weighted graph, where the magnitude of the connectivity between two ROIs is represented as the weight of an edge, whilst the ROIs connected by the edge are the nodes of the graph.
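Before moving to the graph representation, the pairwise GC computation of Eqs. 1–3 can be made concrete with a minimal NumPy sketch. This is our illustration, not the authors' code: it fits the restricted (univariate) and full (bivariate) AR(p) models by ordinary least squares, an estimator the paper does not specify, and returns the log ratio of the residual variances.

```python
import numpy as np

def granger_causality(x_i, x_j, p=20):
    """Granger causality from x_i to x_j (Eq. 3): log ratio of the residual
    variances of the univariate AR(p) model (Eq. 1) and the bivariate AR(p)
    model (Eq. 2), both fit by ordinary least squares."""
    x_i, x_j = np.asarray(x_i, float), np.asarray(x_j, float)
    n = len(x_j)
    y = x_j[p:]                                       # samples to predict
    own_past = [x_j[p - m: n - m] for m in range(1, p + 1)]
    other_past = [x_i[p - m: n - m] for m in range(1, p + 1)]
    X_uni = np.column_stack(own_past)                 # regressors of Eq. 1
    X_biv = np.column_stack(own_past + other_past)    # regressors of Eq. 2
    res_uni = y - X_uni @ np.linalg.lstsq(X_uni, y, rcond=None)[0]
    res_biv = y - X_biv @ np.linalg.lstsq(X_biv, y, rcond=None)[0]
    return float(np.log(np.var(res_uni) / np.var(res_biv)))
```

Because the bivariate model nests the univariate one, the in-sample residual variance can only shrink, so the returned value is non-negative, as stated for Eq. 3; the paper used p = 20 at 1000 Hz.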
A most remarkable consequence of the adoption of this representation for the brain network is the introduction of several concepts and measures from Graph Theory, which allows us to achieve a better understanding of the network’s topology (van Wijk et al., 2010; Minati et al., 2013; Farahani et al., 2019). For this study, we focused on centrality indices that take into account the direction of connections, specifically authority, hubness, in degree, and out degree centralities. These indices, which will be detailed in the following, were specifically selected for their focus on the ROIs’ inputs and outputs, which we hypothesized could offer confirmatory evidence of connectivity patterns previously observed in individuals with low and high autistic traits (Tarasi et al., 2021). A graph is the mathematical abstraction of the relationships between some entities. The entities connected in a relationship are called “nodes” of the graph and are often represented graphically in the form of points. These nodes are connected by edges. While the simplest form of a graph is undirected (i.e., the edges do not have orientation), the graph we use to describe a brain network is a weighted directed graph (or digraph), i.e., it has oriented edges, each one with a weight representing the strength of the connection. To obtain the graphs, for each participant the connectivity matrix was normalized so that its elements provided a sum of 100 (i.e., each connectivity value was divided by the total sum of connections and multiplied by 100). Furthermore, the normalized 68×68 matrices (which we will be calling “complete” matrices for clarity) were turned into 68×68 sparse matrices by removing (i.e., setting to zero) any connection that was not significantly different between the High and Low AQ score Groups. 
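The two per-participant preprocessing steps just described, normalization to a fixed total and screening of individual connections for group differences, can be sketched as follows. This is our illustration: the paper uses a two-tailed Monte-Carlo permutation scheme, and here we assume the test statistic is the absolute difference of group means, which the text does not fully specify.

```python
import numpy as np

def normalize_connectivity(C):
    """Scale a participant's connectivity matrix so its elements sum to 100."""
    C = np.asarray(C, float)
    return 100.0 * C / C.sum()

def connection_pvalue(low, high, n_perm=5000, seed=None):
    """Two-tailed Monte-Carlo permutation p-value for one connection.
    low, high: 1-D arrays with that connection's value for each participant
    of the two groups. Group labels are shuffled n_perm times and the
    absolute difference of group means is compared with the observed one."""
    rng = np.random.default_rng(seed)
    low, high = np.asarray(low, float), np.asarray(high, float)
    pooled = np.concatenate([low, high])
    n_low = len(low)
    observed = abs(high.mean() - low.mean())
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[n_low:].mean() - pooled[:n_low].mean()) >= observed:
            count += 1
    return count / n_perm
```

Connections whose uncorrected p-value exceeds 0.05 would then be set to zero to build the sparse matrices.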
In particular, two-tailed Monte-Carlo testing was applied (5,000 permutations) and, based on its results, non-significant connections were defined as having an uncorrected p-value greater than 0.05. Forty graphs (one per participant) were obtained both for the complete normalized and the sparse matrices. For each of these graphs, centrality indices were then computed. Although a preliminary investigation was performed on the complete matrices, our analysis is mainly focused on sparse matrices since by excluding “similar” connections we expect to better capture differences in the connectivity patterns and in graph indices between the two groups. Graph theory defines a multitude of indices and coefficients that allow us to describe the topology of a network from different points of view. Centrality indices are part of these. They measure the importance of a particular node in the network. The four centrality indices considered in this study (in degree, out degree, authority, hubness) quantify the importance of a node as a source or a sink for the edges. In the following, we will first introduce the in degree and out degree centralities; then, authority and hubness will be described, stressing how they differ from in degree and out degree. In the following, A will always indicate a generic adjacency matrix (i.e., a matrix containing all edges’ weights). In particular, the element A[i,j] of the matrix will represent the weight of the edge connecting node i to node j. In degree is the sum of the weights of the edges entering a node. Out degree is the sum of the weights of the edges exiting from a node. As a result of their direct dependence on the strength of input and output connections, in degree and out degree provide an immediate description of the nodes most involved in the transmission (out degree) and reception (in degree) of information.
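With the convention A[i,j] = weight of the edge from node i to node j, the two degree centralities reduce to column and row sums of the adjacency matrix; a minimal sketch:

```python
import numpy as np

def in_out_degree(A):
    """In degree and out degree for a weighted digraph with
    A[i, j] = weight of the edge i -> j.
    In degree of node j  = sum of column j (edges entering j).
    Out degree of node i = sum of row i (edges leaving i)."""
    A = np.asarray(A, float)
    return A.sum(axis=0), A.sum(axis=1)
```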
Authority and hubness centralities embody a more refined concept than in degree and out degree centralities and have a distinctive feature of strict interdependence. Their mathematical formulation is as follows. Authority (x[i]) is proportional (through a constant α) to the sum of the weights of the edges entering a node, each multiplied by the hubness of the node the edge originates from. Hubness (y[i]) is proportional (through a constant β) to the sum of the weights of the edges exiting from a node, each multiplied by the authority of the node the edge points to. These indices were computed using the functions provided by Matlab’s libraries in the category “Graph and network algorithms” (Matlab R2021a), particularly the command “digraph/centrality.” This function sets both α and β equal to 1 and calculates authority and hubness via an iterative procedure. Similar to in degree and out degree, hubness and authority provide a measure of which nodes of the network are primarily involved in the transmission (hubness) and reception (authority) of information, but they also mutually account for the centrality of the receiving and sending nodes. In particular, since these two centrality indices point to each other (i.e., to compute authority, we use hubness, and vice versa), they imply that strong connections exist between nodes with high authority and nodes with high hubness, and these indices may be useful to further emphasize any existing directionality in the connectivity pattern. For each participant, starting from either the complete normalized or the sparse 68×68 matrix, the four centrality indices were computed at each of the 68 ROIs. Additionally, we computed the average complete and sparse connectivity matrix in the Low AQ score Group and in the High AQ score Group, and then their difference. Initially, we performed an analysis at the level of macro regions (encompassing several ROIs) rather than at single ROI level.
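The iterative procedure behind these two mutually defined scores can be sketched as a standard HITS-style power iteration (our illustration, not the Matlab implementation): starting from uniform scores, authorities are updated from the hubness of their predecessors, hubs from the authority of their successors, with normalization at each step.

```python
import numpy as np

def hits(A, n_iter=200):
    """Hub and authority scores for a weighted digraph with
    A[i, j] = weight of the edge i -> j. Assumes the graph has at
    least one edge (otherwise the normalization divides by zero)."""
    A = np.asarray(A, float)
    hub = np.ones(A.shape[0])
    auth = np.ones(A.shape[0])
    for _ in range(n_iter):
        auth = A.T @ hub              # x_i ~ sum of w(j->i) * hub(j)
        auth /= np.linalg.norm(auth)  # normalize to unit length
        hub = A @ auth                # y_i ~ sum of w(i->j) * auth(j)
        hub /= np.linalg.norm(hub)
    return hub, auth
```

On a toy graph where node 0 sends strong edges to nodes 1 and 2, node 0 emerges as the dominant hub and nodes 1 and 2 as the dominant authorities, illustrating how the two indices highlight senders and receivers of important connections.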
To this aim, we considered 8 regions corresponding to brain lobes (frontal, parietal, temporal, and occipital lobes, both left and right). Specifically, for each participant, the 68×68 connectivity matrix was transformed into an 8×8 connectivity matrix; the elements of the 8×8 matrix were filled in with the sum of all the connections going from one lobe to another. The elements of the 8×8 matrices were subsequently tested for statistical significance across the two groups of participants, by applying a two-tailed t-test (significance level 0.05, no correction), resulting in 64 comparisons. Furthermore, the 8×8 difference matrix was computed, by subtracting the 8×8 mean connectivity matrix of the Low AQ score Group from the 8×8 mean connectivity matrix of the High AQ score Group. Thus, the elements of the difference matrix greater than 0 represented stronger connectivity for the High AQ score Group, while elements of the difference matrix less than 0 represented stronger connectivity for the Low AQ score Group. Then, a more detailed analysis was performed at the level of each ROI. A first analysis was performed on the complete normalized connectivity matrix to understand the Granger flow in some key regions. Normalization of the connectivity matrix was necessary to prevent a few individuals with higher connectivity from strongly affecting the final results. In particular, we computed the authority and the hubness of each ROI in each individual subject, and evaluated the correlation between these centrality indices and the AQ score. In this way, we identified the ROIs which exhibited a significant correlation between the centrality indices (in particular authority and hubness) and the AQ score. The p-value is computed by transforming the correlation to create a t-statistic having N-2 degrees of freedom, where N is the number of data points.
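The reduction from the 68×68 ROI matrix to the 8×8 lobe matrix described above is a block sum; a minimal sketch (the lobe label of each ROI would follow the Table 1 mapping):

```python
import numpy as np

def lobe_connectivity(C, lobe_of):
    """Collapse an ROI-level connectivity matrix C into a lobe-level matrix:
    entry (a, b) is the sum of all connections going from the ROIs of lobe a
    to the ROIs of lobe b. `lobe_of` gives the lobe index (0..n_lobes-1) of
    each ROI."""
    C = np.asarray(C, float)
    lobe_of = np.asarray(lobe_of)
    n_lobes = int(lobe_of.max()) + 1
    L = np.zeros((n_lobes, n_lobes))
    for a in range(n_lobes):
        for b in range(n_lobes):
            L[a, b] = C[np.ix_(lobe_of == a, lobe_of == b)].sum()
    return L
```

The lobe-level difference matrix is then simply the element-wise difference of the mean High-group and mean Low-group lobe matrices; note that the block sum preserves the total connection strength.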
In the case of the sparse matrix, for each centrality index, we identified the ROIs that exhibited a significant statistical difference between the two groups. An ROI’s significance was defined as a Bonferroni-corrected p-value less than or equal to 0.05, where the p-value was obtained via Monte-Carlo testing. Then, for both the complete and the sparse matrix, once the significant ROIs were identified for each index, the connectivity differences between the Low and High AQ Score Group were plotted for the significant ROIs only, separately for each index (in particular for the authority and hubness indices); this serves to highlight differences between the two groups in the pattern of connections entering authority nodes and exiting from hub nodes.

The present paper analyzes the differences in brain connectivity between two groups of non-clinical individuals who differ in the degree of autistic traits (low vs. high), as classified based on the Autistic Quotient (Baron-Cohen et al., 2001) score. Results have two main important aspects of interest. First, we confirm that autistic traits can be observed within a wide spectrum encompassing both clinical and non-clinical populations. Specifically, the degree of autistic traits clearly differs in the non-clinical population between low and high AQ scores. Second, we show that these differences can be quantified as alterations in brain connectivity. In particular, we show that Granger Causality, computed from neuroelectric signals reconstructed in the cortex (Deshpande and Hu, 2012; Stokes and Purdon, 2017; Cekic et al., 2018), together with indices taken from the Graph Theory (van Wijk et al., 2010; Minati et al., 2013; Farahani et al., 2019), can represent a valuable tool to characterize differences in brain networks and deepen our analysis of the neurobiological bases of brain disorders.
Further, we confirm a previous hypothesis (Tarasi et al., 2021, 2022) that individuals with higher autistic traits are characterized by more evident bottom-up mechanisms for processing sensory information. A critical point may be the selection of the threshold used to discriminate between the two classes. Despite the inherent arbitrariness of the choice, we used as a discriminative threshold the average AQ score obtained in a nonclinical population from the large-sample work of Ruzich et al. (2015), and this seems the most natural choice. Moreover, using this value, the present population of 40 subjects is subdivided into 19 and 21 subjects, i.e., the threshold we chose is quite close to the median of the considered population. It is worth noting that similar approaches of partitioning the sample around a threshold have been used previously in the literature (Alink and Charest, 2020). In the following, we will first analyze methodological issues, then the neurophysiological significance of the obtained results will be explored. Finally, limitations of the present study will be discussed.

In this work, we have chosen temporal Granger causality as a tool to reconstruct brain connectivity from EEG data. This measure mathematically represents the impact that knowledge of an upstream signal can have on the prediction of a downstream temporal signal. Thus, it represents a causal directed index of connectivity. Indeed, Granger Causality is widely employed in neuroscience today (Deshpande and Hu, 2012; Seth et al., 2015; Stokes and Purdon, 2017; Cekic et al., 2018). Moreover, in a recent paper, using artificial signals produced by a neurocomputational model as ground truth, we demonstrated that Granger Causality outperformed other functional connectivity estimators in terms of accuracy and reproducibility (Ricci et al., 2021). This method has evident computational advantages compared with other suitable methods [such as Transfer Entropy, see Ursino et al. (2020)].
The analysis was initially performed (see Section “Analysis of the complete connectivity matrix”) on the complete normalized connectivity matrix, to show the main characteristics of the Granger flow in the two groups. Then, to improve the significance of the results, we considered only connections which exhibited a significant statistical difference between the two populations, thus working with a sparse matrix (i.e., all connections which did not show statistically significant differences between the two groups were set to zero). In other words, the graphs in Section “Analysis on the sparse connectivity matrix” do not represent the overall connectivity patterns, but rather highlight the differences between the two populations. The connectivity matrices so obtained were then used to compute some indices taken from Graph Theory. Several studies using Graph Theory in ASD have appeared in recent years: most of them suggest that ASD individuals exhibit alterations in modularity (i.e., densely connected modules that are more segregated), in global efficiency (i.e., related to the average path length required to go from one node to another), in betweenness (a measure of how often a node lies on the paths connecting other nodes) or in connection density (Rudie et al., 2012; Redcay et al., 2013; You et al., 2013; Keown et al., 2017; Chen et al., 2021). EEG and MEG connectivity studies using graph analysis generally report autism to be associated with sub-optimal network properties (less clustering, larger characteristic path, and architecture less typical of small-world networks) (Barttfeld et al., 2011; Tsiaras et al., 2011; Boersma et al., 2013; Peters et al., 2013; Leung et al., 2014; Takahashi et al., 2017; Soma et al., 2021). This, in turn, results in a less optimal balance between local specialization (segregation) and global integration (Sporns and Zwi, 2004).
Although of particular significance, these indices do not address the fundamental problem of directionality in the processing pathway, nor the different roles that bottom-up and top-down connectivity play in many brain processes. Accordingly, an essential novelty of the present study is the use of specific centrality indices (in degree, out degree, and above all, hubness and authority) to characterize group differences in network directionality. The basic idea is that the directionality of the processing streams plays a major role in determining group differences (at least as far as autistic traits are concerned), more so than indices like betweenness, path length, or clustering, which are more frequently adopted in the characterization of brain networks. In particular, when applied to macro-regions and sparse connectivity matrices, these indices yielded highly significant statistical differences and a precise scenario to distinguish the two groups. The connectivity analysis was performed at two levels. First, we concentrated on the connectivity among macro-regions (lobes) of the cortex (the frontal, parietal, temporal, and occipital zones) to discover the main traits of connectivity differences. This analysis confirms the result of a previous preliminary study (Tarasi et al., 2021): individuals with higher autistic traits exhibit stronger outgoing connections from the occipital regions and stronger incoming connections toward frontal areas (i.e., bottom-up) compared with individuals with lower autistic traits. In addition to confirming the results of our previous study, as a new significant result we propose that two other centrality measures, hubness and authority, allow a finer discrimination of connectivity directionality. The reason for this improvement will be critically analyzed in the next section.
If these two measures are used, significant statistical differences can be observed in the directionality of the connections in High AQ score vs. Low AQ score individuals. In particular, using sparse matrices, statistically significant differences were evident between the hubness of the occipital regions in the two classes, with much stronger hubness in individuals with high autistic traits. Looking at authority, a significant increase in the authority of the frontal region was observed in the group with higher autistic traits. The same patterns were confirmed by computing (from the sparse matrices) the connectivity among the macro-regions and plotting only those connections which exhibited a statistically significant difference. As shown in Figure 5, increased bottom-up connectivity from occipital to frontal regions was evident in individuals with high autistic traits. Besides the connectivity analysis at the lobe level, we performed a connectivity analysis at the single-ROI level. To this aim, centrality indices were computed by considering all 68 ROIs in the Desikan–Killiany atlas. Interestingly, the results obtained on the overall connectivity matrix and on the sparse matrix provide similar indications, emphasizing the presence of bottom-up connections in the high-score group and left-right connections in the low-score group. However, the analysis performed on the overall connectivity matrix did not reach significance, whereas greater significance was obtained from the sparse matrices. For this reason, in the following we will mainly refer to the sparse-matrix results. An important result of our study is that hubness and authority provided more significant differences than in degree and out degree, respectively; hence we suggest that these indices should be used to characterize the flow in a network of multiple ROIs. In particular, by comparing in degree vs.
authority in Figure 6, we can observe that the results are quite similar for the High AQ score Group (authority produces just one more significant frontal node compared with in degree), whereas clear differences can be observed in the Low AQ score Group (no significant node is evident if in degree is used, compared with five nodes using authority). Consequently, authority allowed the detection of a clear left-to-right connectivity in the Low AQ score Group. Similarly, only moderate differences can be observed using hubness vs. out degree in the High AQ score Group (Figure 8; hubness detects two additional regions in the frontal cortex, allowing a better analysis of top-down influences). Also in this case, hubness provided a significant improvement over out degree in the Low AQ score Group (nine significant ROIs are detected by hubness, mainly located in left and medial parietal and temporal regions, vs. no significant region by out degree). These differences suggest that the overall graph is more complex in the Low AQ score Group than in the High AQ score one, requiring more sophisticated indices for detecting the flow of transmitted information. To understand why authority and hubness are more powerful than in degree and out degree, recall that authority does not only take into account the number and strength of the connections entering a node, but also weights these connections by the hubness of the upstream nodes. Similarly, hubness does not only take into account the number and strength of connections exiting from a node, but also weights these connections by the authority of the downstream nodes. Of course, these measures need to be computed together via recursive formulas, as illustrated in Eqs 6, 7.
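Equations 6 and 7 are not reproduced in this excerpt, but the mutual recursion described here matches Kleinberg's classic HITS (hubs-and-authorities) iteration. As a hedged sketch (the paper's exact normalization and weighting may differ), it can be written in pure Python as:

```python
def hits(W, n_iter=100):
    """Hub/authority iteration on a weighted directed graph.
    W[i][j] is the connection strength from node i to node j."""
    n = len(W)
    hub, auth = [1.0] * n, [1.0] * n
    for _ in range(n_iter):
        # authority of j: sum of incoming weights, each scaled by the
        # hubness of the sending node i
        auth = [sum(W[i][j] * hub[i] for i in range(n)) for j in range(n)]
        # hubness of i: sum of outgoing weights, each scaled by the
        # authority of the receiving node j
        hub = [sum(W[i][j] * auth[j] for j in range(n)) for i in range(n)]
        # L2-normalize both score vectors so the iteration converges
        na = sum(a * a for a in auth) ** 0.5 or 1.0
        nh = sum(h * h for h in hub) ** 0.5 or 1.0
        auth = [a / na for a in auth]
        hub = [h / nh for h in hub]
    return hub, auth

# toy graph: nodes 0 and 1 both project onto node 2
hub, auth = hits([[0, 0, 1], [0, 0, 1], [0, 0, 0]])
```

In this toy graph, node 2 ends up with all the authority while nodes 0 and 1 share the hubness, a miniature version of the occipital-hub / frontal-authority pattern discussed in the text.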
Briefly, the importance of the information exiting from a node (or the importance of the information entering a node) is not simply the sum of its output connections (or the sum of its input connections), but also depends on the role played by the receiving nodes (or by the sending nodes). For instance, an outgoing connection of value 0.04 reaching an almost completely isolated node (one which does not send information to other nodes in the network) can be far less important than a connection of value 0.02 which reaches a crucial node. Hubness quantifies this difference, which a simple sum of outgoing connectivity cannot. Similarly, authority summarizes the effective significance of the incoming flow better than the simple sum of entering connections. Using these indices, we then mapped the stronger connections that exited from ROIs with higher hubness and entered ROIs with greater authority. These results, computed on each ROI, extend the lobe analysis in several respects: (i) The main hubs for High AQ score individuals were located in the left and right PCL regions. A pattern of bottom-up connections emerging from these two regions seems to be the dominant feature characterizing this group. The left and right PCL are the ROIs in which the primary visual cortex is located. These areas handle the transmission of incoming visual inputs from the thalamus to higher-order processing regions. The enhanced bottom-up signaling arising from this site resembles the pattern observed in individuals with clinical forms of autism, characterized by hyper-engagement of sensory regions (Jao Keehn et al., 2017, 2019), which could underpin the sensory and visuospatial peculiarities typically observed in ASD (Mottron et al., 2006; Samson et al., 2012). (ii) The leading authorities for High AQ score individuals were located in the frontal and prefrontal regions, particularly in the left and right lOF.
These two ROIs encapsulate frontal sites involved in high-level mechanisms such as emotional regulation, decision-making, and social cognition (Rolls, 2004). Crucially, these domains tend to be altered in ASD individuals. Excessive information inflow into brain areas related to emotional and social processing could be implicated in the difficulty in managing complex and multifaceted social interactions typically observed in this spectrum. This could also explain why ASD individuals tend to prefer less socially demanding environments, as these carry a lower risk of over-stimulation. (iii) The previous connections were distributed bilaterally, from both PCLs to both the homolateral and contralateral frontal hemispheres. (iv) Conversely, the pattern of connectivity in Low AQ score individuals exhibited a broader and less defined distribution, involving several connections in the temporal, parietal, and occipital lobes, with hubs mainly located in the left hemisphere and a left-to-right direction. This suggests that the pattern of inter-area communication in low-AQ individuals is more distributed and varied, and not rigidly channeled into narrow pathways. We stress, however, that these connectivity patterns reflect differences between the two groups, hence a relative role in one population vs. the other, not the absolute impact that connections have on the overall brain network. In other words, it is possible that some strong connections did not appear in our graph because they were equally relevant in both populations, hence without a significant difference (this is the reason why the overall connectivity matrix provides less significant results). Moreover, recall that recordings were performed at rest; thus, the examined connectivity reflects differences in the resting state.
In general, the present results support the findings obtained in our previous study on a smaller population (Tarasi et al., 2021), even though the exact position of the ROIs representing the increased bottom-up connectivity is not identical. In our previous study, we observed increased connectivity from the right PCL and the left LG (instead of the left PCL, as found here). Still, these differences can be explained by minor variations in source reconstruction and in the grouping of proximal voxels. Moreover, in our previous study, the bottom-up connectivity in High AQ score individuals was especially evident in the right hemisphere (particularly toward the right rMF, a region that still plays a significant role among the authorities in the present study), whereas this connectivity appears more bilaterally distributed in the current results. These results support the idea that the brain network in individuals with higher autistic traits, compared with individuals with lower autistic traits, is not characterized by a general reduction in connectivity (as hypothesized in some theoretical accounts), but rather by mixed patterns of under- and over-connectivity. Over-connectivity is evident along the fronto-posterior axis, involving bottom-up influences, whereas hypoconnectivity involves many temporo-parietal regions, especially in the left hemisphere. Several hypotheses on brain connectivity in ASD have been formulated in past years, with apparently contradictory outcomes: while some authors hypothesized stronger connectivity in ASD, others reported reduced connectivity (see Section “Introduction”). These contradictions, however, can be reconciled by considering that differences between controls and individuals within the autistic spectrum may especially reflect a directionality of the connections rather than the number and total strength of edges in the overall network.
Furthermore, a mixed pattern of increased connectivity among some regions and decreased connectivity among others probably characterizes the autistic brain. Directionality in the connectivity patterns, in turn, may reflect a hierarchical organization of the processing stream, with bottom-up connections (especially from the occipital toward the frontal lobes) involved in sensory processing, and top-down connections reflecting context modulation, prior knowledge, planning, and attention. This connectivity organization agrees with the so-called predictive coding theory, which assumes that environmental and internal signals are joined together to form a unified model of reality. In particular, the predictive coding theory of ASD (Van de Cruys et al., 2014; Tarasi et al., 2022) hypothesizes that people with ASD do not form accurate predictions of the external environment, since sensory information supersedes internal expectations. Our results support this theory, showing that differences in bottom-up connectivity (hence, in the impact that sensory input can have on the global internal model) are stronger in individuals with higher autistic traits, even within a population of healthy individuals. A limitation of the present study may be the limited sample size (19 vs. 21 participants). Indeed, this number is in line with (and in many cases higher than) the samples employed in published works that use similar experimental procedures and investigate similar phenomena [see Carter Leno et al. (2018) and Harris et al. (2021)]. However, the complexity of the analysis performed and, in particular, the study accomplished on the complete connectivity matrix reveal the necessity of a larger number of participants to achieve statistically more robust results. Hence, future studies on a larger cohort may allow a more detailed comprehension of the problem.
In this study, we did not include participants with a diagnosis of ASD; hence we cannot be confident that the present results would also hold in a clinical population. However, the results obtained go exactly in the direction hypothesized by theoretical and empirical work on connectivity features in clinical ASD. Moreover, substantial behavioral (Alink and Charest, 2020), genetic (Bralten et al., 2018), and neural (Massullo et al., 2020) evidence suggests that ASD is a continuum of conditions ranging from trait-like expression to the diagnosed clinical form of autism. Of course, additional studies on a clinical population are required to definitively support the present initial results and to validate the hypothesis of a continuous spectrum ranging from normality to ASD. An interesting point concerns the relationship between Granger connectivity, evaluated in this study, and structural connectivity (i.e., the physical tracts that connect brain regions, generally estimated by diffusion-weighted imaging). Some studies (e.g., Hermundstad et al., 2013) have shown that there is significant overlap between neuroanatomical connections and correlations of functional brain signals. Conversely, other recent studies by our group, using neural mass models as ground truth, showed that in some conditions the two aspects may differ as a consequence of non-linear phenomena (Ursino et al., 2020, 2021; Ricci et al., 2021). Hence, it is still unclear how the brain network interacts during specific tasks or at rest, accounting for all structural and functional aspects in terms of causality, given the many nonlinear dynamics that characterize brain functioning. Moreover, the present results show some connections crossing the midline.
Regarding this point, although the connections traveling through the corpus callosum typically connect homotopic areas, a substantial number of tracts connecting heterotopic areas in the two cerebral hemispheres have been observed (e.g., De Benedictis et al., 2016). Of course, without structural data, it remains difficult for the current study to formulate more precise hypotheses about this point. Finally, in the present study we have observed differences in bottom-up and top-down connectivity in the two groups. Works in the literature emphasize that these connections can be implicated in sensory processing, especially in multisensory conditions (Choi et al., 2018) or after sensory deprivation (Yusuf et al., 2022). Furthermore, several studies suggest that atypical sensory processing is a common characteristic of ASD and that sensory traits have important implications in the developmental phase of this condition (Marco et al., 2011; Robertson and Baron-Cohen, 2017). The present experiments were performed in a resting condition, so it would be difficult to make strong inferences about sensory processing from the current data. Further studies, examining the response to sensory stimuli, are required to test whether these neural signatures of autistic traits (more bottom-up processing in high AQ scorers, more top-down processing in low AQ scorers) have an impact at the behavioral level, for example to explain the observed differences in sensory profile.
On the second day, we talked about the group law on an elliptic curve based on our old script (in German) called "Das Gruppengesetz auf Elliptischen Kurven". Today, we explained to some people how elliptic curves can be used to factorize the product of two primes, i.e., attack weak RSA keys. When we were done, we left the flipchart paper taped to the wall and sat down. A few minutes passed before two people in passing showed an unusual interest in our notes. A few words into the conversation, I bluntly asked them about their mathematical background, which was met with an amused

> I'm a math professor, does that suffice?

These people turned out to be Tanja Lange and Daniel Bernstein, two scientists who are rather big shots in mathematical cryptography. I am ashamed to say that I did not even know that, but I certainly understood that they knew a lot more about elliptic curves than me, and they were willing to share. To be precise, they were really friendly, and good at explaining it. That's rare. Of course, I eagerly listened as they began elaborating on the advantages of Edwards coordinates on elliptic curves. The two of them had attended the talk by Edwards introducing the concept in 2007 and observed the cryptographic potential: Basically (and leaving out some details), it's all about the fact that \[ x^2+y^2 = 1+dx^2y^2 \] defines an elliptic curve, with all relevant points visible in the affine plane; check out the picture. Choosing $(0,1)$ as your neutral element, a group law on this curve is given by \[ (x_1,y_1) \oplus (x_2,y_2) := \left( \frac{x_1y_2+x_2y_1}{1+dx_1x_2y_1y_2}, \frac{y_1y_2-x_1x_2}{1-dx_1x_2y_1y_2} \right). \] With this shape, you never have to mention projective coordinates, the group law can be explained in a very elementary kind of way, and then it turns out that this shape also yields faster algorithms for point multiplication.
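To see the standard (Bernstein–Lange) form of the addition law in action, here is a small Python sketch over a toy prime field. The values of p and d below are illustrative only, not parameters of any real cryptographic curve; a complete addition law additionally requires d to be a non-square modulo p, which this toy choice does not guarantee:

```python
# Toy Edwards-curve arithmetic over GF(p): x^2 + y^2 = 1 + d*x^2*y^2
p = 1009
d = 2

def on_curve(P):
    x, y = P
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0

def add(P, Q):
    # the Edwards group law, with (0, 1) as the neutral element
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2
    x3 = (x1 * y2 + x2 * y1) * pow(1 + t, -1, p) % p
    y3 = (y1 * y2 - x1 * x2) * pow(1 - t, -1, p) % p
    return (x3, y3)

O = (0, 1)    # neutral element
P4 = (1, 0)   # a point that lies on every Edwards curve
assert on_curve(O) and on_curve(P4)
assert add(P4, O) == P4                      # (0, 1) really is neutral
assert add(P4, P4) == (0, p - 1)             # 2*P4 = (0, -1)
assert add(add(P4, P4), add(P4, P4)) == O    # 4*P4 = neutral
```

The point (1, 0) has order 4, which the assertions verify without ever leaving affine coordinates.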
In short: if you're into curves, cryptography, or both, I encourage you to check out their summary page about Edwards coordinates.
Random 4 Digit Number Generator in Excel: 5 Easy Methods In everyday situations, casual Excel users write numbers manually in the cells when they need to create random four-digit numbers. It takes so much time and effort. Yet there are faster ways to fill thousands of cells with random numbers. In this guide, we will discuss five techniques as a random 4-digit number generator in Excel for you. There are many circumstances when you may need to create random numbers. Suppose you must send a file to someone that includes confidential information. But you do not want the recipient to see some specific values. What do you do? You replace those cells with randomly generated four-digit numbers. This example is just one use of this number generator in Excel. Even for some testing purposes or data analysis, knowing how to generate four-digit numbers pays off massively. We understand this necessity, so we are creating this guide for you. Read each method carefully and practice all the steps as you see in the demonstrations. For a better understanding, use the practice workbook that we are sharing with you in this guide. Let’s jump in! Random 4 Digit Number Generator in Excel: 5 Quick Approaches Not many methods work most efficiently when talking about four-digit random number generators in Excel. However, we do not include anything in our guides unless it offers the best possible outcome. In this article, too, we are maintaining this idea. Here, we will explain the five best methods for generating random 4-digit numbers in Excel. We will attach pictures for each step in all the methods. And whenever we use a formula, we will break it down with proper explanations. The goal here is to ensure a seamless learning experience for you. Let’s start! Method 1: With the RANDARRAY Function As the first technique for generating 4-digit random numbers in Excel, we will show how you can use the RANDARRAY function in the best possible way. 
It is one of the latest functions added to Excel to make work easier. The RANDARRAY function works wonders when you want to generate four-digit numbers in Excel. The function returns random numbers in an array based on your input. The formula for the RANDARRAY function is =RANDARRAY([rows],[columns],[min],[max],[integer]). • RANDARRAY: The function. • rows: Optional argument. The number of rows you want to fill with random numbers. • columns: Optional argument. The number of columns you would like the function to fill up with random numbers. • min: Another optional argument. It refers to the minimum number you would like RANDARRAY to return. • max: Optional argument. It refers to the maximum number that will be returned. The min and max arguments work as the formula’s lower and upper limits. • integer: Boolean value for whether you want whole or decimal numbers from your operation. Now let’s use the RANDARRAY formula and generate an array of random four-digit numbers in Excel! 1. Select a cell in an empty column and then start writing the formula. The “rows” argument determines how many cells in a column you want to fill up with random numbers, and “columns” specifies the number of columns in your array to be filled up. Write the values you want. 2. If you want your random digits to be any four-digit numbers, then for the “min” argument, write “1000”. Then for the “max” argument, put “9999”. Next, select “TRUE” for the “integer” argument. In this demonstration, we wanted 12 random four-digit numbers in 1 column. And we limited the random numbers to below 4300 and above 4234. Therefore, all the random four-digit numbers we will get in this example will be between 4234 and 4300. 3. After completing the formula, close the parenthesis and press the Enter button. 4. The cells (or rows in the selected column) have been filled with random numbers. You can see our example in the picture below. 5.
If you want to randomize the numbers, select the cells and press F9. Every single time you hit the F9 button, the numbers will randomize. It is helpful if you are unsatisfied with the output and want new numbers by randomizing them even further. Easy-peasy! However, this dynamic RANDARRAY function is available only in MS Excel 365 and Excel 2021. If your Excel program is older than that, follow the methods we are discussing next. Method 2: By Combining RAND and ROUND Functions Combining RAND and ROUND functions allows you to generate random 4-digit numbers in Excel. Implementing this powerful combo ensures that you get your random numbers despite being in earlier versions of Excel. Anything from Excel 2007 and onward editions lets you work with these functions. Generally, the RAND function returns a random decimal number between 0 and 1. However, there is a way to generate random whole numbers with this RAND function. The formula is =RAND()*(b-a)+a. Here, “b” refers to the max number, and “a” is the min number. You can put values according to your needs. And finally, the ROUND function will help us eliminate the fraction numbers from the result. Therefore, we are nesting RAND within ROUND for this method. When combined for a completely random four-digit number, the formula stands =ROUND(RAND()*(9999-1000)+1000,0). The ROUND formula’s arguments are =ROUND(number,num_digits). We are putting the RAND formula, =RAND(), in the “number” argument of the ROUND formula. And we put “0” for the number of digits to remove the fractions. We will play with the max value and min value in this demonstration. Let’s see the process below. 1. Write the formula for this method: =ROUND(RAND()*(9999-1000)+1000,0). For this demonstration, we wanted to keep the numbers limited between 2000 and 4000. Therefore, we put “4000” as the “b” or max value and “2000” as the “a” or min value. 2. After writing the formula, press Enter. You will see the output of the formula. 
Click and drag the Fill Handle in the corner of your cell to fill up the column with the formula. 3. You will see that the column has been filled with random four-digit numbers. We wanted our numbers to be between 2000 and 4000, so our random digits are from that range. For any random four-digit output, make sure to write “9999” and “1000”, respectively, for the “b” and “a” values. 4. To randomize those values further, select the range of cells and press F9. There you go! It wasn’t difficult, was it? Method 3: Using the RANDBETWEEN Function The RANDBETWEEN function is excellent for generating four-digit random numbers in Excel. It is much simpler than what we have discussed so far, and it has been built into Excel since Excel 2007, so it works in older versions as well. The RANDBETWEEN function generates a random number between the values mentioned. The syntax for this function is =RANDBETWEEN(bottom, top). The “bottom” argument here refers to the range’s minimum value or lower limit. And the “top” argument is the function’s maximum value or upper limit. If you want to generate purely random four-digit numbers, put “1000” in the bottom argument and “9999” in the top argument. In that case, the formula will be =RANDBETWEEN(1000,9999). However, in this demonstration, we will generate four-digit random numbers between 1200 and 1300. We are doing it to show that you can limit your range any way you like! Let’s see how to do it, then. 1. Write the formula and then press Enter. 2. You will see that a random number has been generated. Again, if you want your numbers to be any random four-digit number, make sure to put “1000” as the “bottom” argument and “9999” as the top argument in the formula. 3. Fill the range of cells with the formula based on your needs using the Fill Handle. 4. If you want to randomize the numbers further, select the cells with the RANDBETWEEN formula and press the F9 key.
You can keep pressing the key until you are satisfied with your random numbers. That’s it! Method 4: Using the Analysis ToolPak Add-in If you like avoiding formulas in your work as much as possible, this method is especially for you! You will need to load an add-in called Analysis ToolPak within Excel first. Afterward, you can use it to generate random numbers. In two parts, we will show how to use Analysis ToolPak in this method. First, we will demonstrate the process for loading or activating this add-in. Then we will explain how it is used to generate random 4-digit numbers. Enabling Analysis ToolPak Add-in Go to File in your Excel, and then click Options. You may also use Alt + F + T key combinations to open the Excel Options window. And when this window opens up, click Add-ins in the left column. Now, on the right, under Inactive Applications Add-ins, click the Analysis ToolPak option, and then click the Go button as marked (3) in the picture below. A new mini-window will pop open. Make sure to check the checkbox beside the Analysis ToolPak option and then click OK. You will be taken back to the previous window. Click OK there as well to complete the process. You should be able to use the Analysis ToolPak add-in now in your Excel program! Using Analysis ToolPak to Generate 4-Digit Random Numbers Now we will show how to use the Analysis ToolPak add-in to generate random numbers. In this case, it is going to be four-digit random numbers. 1. First, go to the Data ribbon and click the Data Analysis option in the Analysis group. 2. In the list of Analysis Tools, find the Random Number Generation option, select it, and then click OK. 3. A new mini-window with options will pop open. Before you do anything else, click on the drop-down menu for Distribution and select Uniform from the list. 4. Now some options will change in the window. Time to input necessary values in all the options in this window. 
First, see the image below and then follow the explanations given next. We will explain all the options in the window one by one. • Number of Variables: It refers to the number of columns where you want to generate your random numbers. Put “1” if you are going to use a single column. • Number of Random Numbers: The number of cells or rows in your column(s). It may also simply mean the number of cells you want to occupy with randomly generated numbers. • Distribution: The option for this has been selected in the previous step. • Parameters: It refers to the range of numbers from where you want this add-in to generate your random numbers. If you wish to get any four-digit number, put “1000” in the first box and “9999” in the second box. We put “1200” and “1300” in those boxes to generate four-digit numbers between 1200 and 1300. • Output Range: The array or range of cells where you intend to put those randomly generated numbers. Click on the box beside it, and select your cell range. After completing this process, click the OK button. 5. Now you should see that the cells you selected in your Output Range in the previous step have been filled with four-digit numbers! However, the numbers may have decimals in them. In that case, select all those cells with randomly generated numbers and click the right button on your mouse. Then select the Format Cells option. Now, click on the Number tab in the just-opened Format Cells window. Then choose Number from the list under the Category. Put “0” on the Decimal places option and then click OK. Now your numbers will not have any decimal values in them. You have your ideally generated four-digit random numbers! Pretty cool, right? Method 5: With Visual Basic for Applications (VBA) Visual Basic for Applications, or VBA, offers an excellent way to generate 4-digit numbers in Excel randomly. It is quick and so effective. You don’t need to write anything at all. 
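Methods 1 through 4 all boil down to drawing uniform integers from a [min, max] range. If you want to sanity-check the Method 2 scaling, =ROUND(RAND()*(b-a)+a, 0), outside of Excel, here is a minimal Python mirror (note: Python's round() uses banker's rounding while Excel's ROUND rounds halves away from zero, a difference that does not affect the range check below):

```python
import random

def excel_round_rand(a, b):
    """Mirror of =ROUND(RAND()*(b-a)+a, 0): scale a uniform float
    in [0, 1) to the interval [a, b) and round to the nearest integer."""
    return round(random.random() * (b - a) + a)

draws = [excel_round_rand(1000, 9999) for _ in range(10_000)]
assert all(1000 <= n <= 9999 for n in draws)   # always four digits
assert all(isinstance(n, int) for n in draws)  # no decimals survive round()
```

One subtlety worth knowing: with this round-after-scaling approach, the two endpoint values are roughly half as likely as interior values; RANDBETWEEN (Method 3) avoids that small bias.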
Copy the formula that we will share with you in this method and then paste it into your VBA editor to be done with it! The best part of this method is that you can generate hundreds or thousands of numbers with the same code. You need to select the cells where you want to create four-digit numbers, go to the VBA editor, paste and run the code, and witness the ultimate sorcery! Let’s see the process below. 1. First, select the cells where you want to generate random four-digit numbers. Throughout this guide, we used 12 cells as an example. So, we also decided on the same 12 cells for this demonstration. After selecting the cells, head to the Developer tab in your Excel. Then, in the Code group, click Visual Basic. Suppose you like to use shortcuts or don’t see the Developer tab. In that case, you may press Alt + F11 on your keyboard to open Visual Basic. 2. Now, in the VBA window, click the Insert menu, and then from the list, click on Module. 3. A Module window will open. Copy the code from below.

Sub RandomFourDigitGeneratorForSelectedRange()
    Dim MySelection As Range, nmbr As Integer
    For Each MySelection In Selection
        'Rnd() returns a value in [0, 1); the "+ 1" makes 9999 reachable
        nmbr = Int((9999 - 1000 + 1) * Rnd() + 1000)
        MySelection = nmbr
    Next MySelection
End Sub

And now, paste it into the Module editor. Finally, click the little green play button marked (2) in the picture above to Run the code. Or you can press the F5 button to run it. Then close the window to return to your worksheet. 4. You will see that the cells you initially selected before going into the VBA editor have been filled with randomly generated four-digit numbers. 5. You can generate any amount of four-digit numbers. To do it, you must select the cells first, even if they are in thousands. Then go to the VBA editor and paste the code. Finally, run it by clicking the green Run button or pressing F5. After returning to your worksheet, you will see that four-digit random numbers have occupied your selected cells. So simple yet highly effective!
Additional Tip: Remember that the more cells you select, the longer VBA may take to complete the operation. It also depends on your computer's processing power. Your Excel program may stop responding temporarily, but do not lose patience; wait for VBA to fill up your cells with random four-digit numbers.

Final Thoughts

In this guide, we discussed the five best methods for generating random 4-digit numbers in Excel. Of those five methods, Method 1 and Method 3 require Excel 365 or Excel 2021. The other three methods work flawlessly in Excel 2007 and later versions. Although we talked about five different methods in this guide, they are all unique. Method 1 uses a dynamic array formula to generate random four-digit numbers. Method 2 may seem a bit complicated, but it works for everyone, irrespective of their Excel version. Method 3 utilizes a newly added function in Excel, and Method 4 uses a handy add-in. Last but not least, Method 5 offers you the best way to randomly generate four-digit numbers in Excel. The idea of using Visual Basic for Applications (VBA) may seem intimidating to some. However, you can copy and paste the code from our guide, and your work will be done in a matter of seconds! We encourage everyone to practice the methods as they read through the guide. Be sure to use the practice workbook we shared in this guide. With some effort, you will soon become proficient at generating four-digit random numbers in Excel. Happy Excelling!
utkaje fu'ivla
x[1] and x[2] are path-linked by directed binary predicate x[3] (ka) via intermediate steps x[4] (ordered list; ce'o) in graph x[5], such that (in the graph x[5]) both (A) no other node exists to which x[2] is connected in/by the same way/direction/relation and (B) no other node exists to which x[1] is connected in/by the opposite/(anti)symmetric/reversed way/direction/relation. Equivalent to "x[1] fo x[4] .utkaro fi x[3] gi'e se .utkaro fi x[3^-1]", where x[3^-1] is binary relation/predicate x[3] with the order of its two arguments exchanged (basically: "se"-converted). Multiple paths may connect x[1] and x[2]. There may be peripheral branches extending from x[2] which are acyclic (or cyclic) such that they contain a node which has a directed distance from x[2] which exceeds that of any path from x[1] to x[2]; it is just the case that for any path connecting x[1] and x[2] in either direction, x[1] and x[2] are root/leaf nodes thereof.
Determining Dutch dialect phylogeny using bayesian inference
Content uploaded by Peter Dekker on Apr 01, 2017.

Determining Dutch dialect phylogeny using bayesian inference
Peter Dekker
Bachelor's thesis (7.5 ECTS)
BSc Artificial Intelligence, Universiteit Utrecht
Supervisors: Alexis Dimitriadis and Martin Everaert
July 31, 2014

In this thesis, bayesian inference is used to determine the phylogeny of Dutch dialects. Bayesian inference is a computational method that can be used to calculate which phylogenetic tree has the highest probability, given the data. Dialect data from the Reeks Nederlandse Dialectatlassen, a corpus of words in several Dutch dialects, serves as input for the bayesian algorithm. The data was aligned and converted to phonological features. The trees generated by bayesian inference were evaluated by comparing them with an existing dialect map by Daan and Blok.

Contents
1 Introduction
  1.1 Bayesian inference
  1.2 Earlier research
  1.3 Applying bayesian inference to the Dutch dialects
2 Method
  2.1 Data
  2.2 Alignment
  2.3 Phonological mapping
  2.4 Bayesian inference
  2.5 Evaluation
3 Results
  3.1 Netherlandic dialects
    3.1.1 Equal rate variation model
    3.1.2 Gamma-distributed rate variation model
  3.2 Comparison: Belgian dialects
4 Discussion
5 Conclusion
6 Literature
7 Appendix
  7.1 Tree of Netherlandic dialects – equal rate variation
  7.2 Tree of Netherlandic dialects – gamma rate variation
  7.3 Tree of Belgian dialects – equal rate variation
  7.4 Tree of Belgian dialects – gamma rate variation
  7.5 Dialect map by Daan and Blok (1969)

1 Introduction

How are languages related? Languages are genetically related if they share a single ancestor from which they derive (Campbell, 1998). To prove a common ancestor, an array of methods can be applied. The phylogeny, or evolutionary relationship, of languages can be viewed as a tree, where a branching shows that two languages B and C derived from an ancestor A.

1.1 Bayesian inference

In recent years, computational methods have seen their advent in historical linguistics. One of them is bayesian phylogenetic inference. This method is inspired by Bayes' law: the probability of a hypothesis Hx for a certain phenomenon E can be given using the probability of the phenomenon given the hypothesis:

P(Hx | E) = P(Hx) · P(E | Hx) / Σ_{k=1}^{n} P(Hk) · P(E | Hk)

In our case, a phylogenetic tree is the hypothesis. A tree is a tuple ω = (τ, υ, φ) with (Larget and Simon, 1999):

• a tree topology τ;
• a vector of branch lengths υ associated with topology τ — each branch length in υ represents the distance between two adjacent nodes in τ;
• a substitution model φ, which determines the probability that a certain element of a language changes into a certain other element.

The tree topology τ is defined as:

• a set of vertices V;
• a set of edges E ∈ V × V;
• the graph that E describes on V is strongly connected;
• there are no cycles.

Branch lengths show how much distance there is between languages: a longer branch means that more substitutions have been made between the input strings. The substitution model determines how likely the change from a character in the input string to a certain other character is. The question is: what is the most probable tree to describe the linguistic data X?
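Numerically, the posterior over a discrete set of hypotheses is just the prior-weighted likelihood, normalized by the total probability of the data. A minimal sketch of Bayes' law for three candidate trees — the priors and likelihoods below are made-up numbers, not data from the thesis:

```python
# P(H_k | E) = P(H_k) * P(E | H_k) / sum_j P(H_j) * P(E | H_j)
priors = [0.5, 0.3, 0.2]          # P(H_k): prior probability of each candidate tree
likelihoods = [0.01, 0.04, 0.10]  # P(E | H_k): probability of the data under each tree
joint = [p * l for p, l in zip(priors, likelihoods)]
evidence = sum(joint)             # P(E), the normalizing constant
posterior = [j / evidence for j in joint]
print(posterior)                  # sums to 1; the third tree is now the most probable
```

Note that a tree with a small prior can still win if it explains the data much better.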
Our application of Bayes' law becomes (Ronquist and Huelsenbeck, 2003):

f(ω | X) = f(ω) · f(X | ω) / f(X)

f(ω) is the prior distribution, containing the a priori probabilities of the different trees. f(X | ω) is the likelihood function, which returns the probability that the data has been generated by a tree. f(X) is the total probability of the data. f(ω | X) is the posterior distribution, containing the probabilities of all the trees. Assumptions have to be made about the prior probabilities of trees, because they are generally unknown. Calculating the posterior probability means summing over all of the trees, whose posterior probabilities have not been calculated yet, and integrating over all the possible combinations of τ, υ and φ. The posterior probability distribution cannot be calculated directly (Huelsenbeck et al., 2001).

To address this issue, bayesian algorithms use a technique called Markov Chain Monte Carlo (MCMC) sampling. MCMC is an approximation of Bayes' law, with a number of simplifications. The prior probability distribution is determined by a Dirichlet distribution, inferring the prior probability of a tree from the data itself (Ronquist et al., 2011). The algorithm starts off with a random tree. Every generation, a small random change of the parameter values is proposed. It is accepted or rejected with a probability given by the Metropolis-Hastings algorithm (Larget and Simon, 1999). Every s generations (where s is the sample frequency), the accepted tree of the current generation is saved to the posterior sample. After a number of generations, the posterior sample should approximate the real posterior probability distribution (Huelsenbeck et al., 2001). From this distribution, a best tree can be drawn, according to the desired criteria.

In principle, the input for the bayesian method could be any kind of linguistic data. When applied to languages, it is common to use Swadesh lists, lists of words which are unlikely to be borrowed.
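The propose/accept/sample loop of MCMC described above can be illustrated on a toy one-parameter problem. This is only a sketch of the Metropolis-Hastings step, not MrBayes' actual implementation, which proposes changes to tree topologies, branch lengths and substitution parameters:

```python
import math
import random

random.seed(1)

def log_posterior(x):
    # Toy target: a standard normal stands in for f(tree | data).
    return -0.5 * x * x

x = 5.0            # arbitrary starting state, like the random starting tree
samples = []
for gen in range(10000):
    proposal = x + random.gauss(0.0, 1.0)       # small random change to the state
    delta = log_posterior(proposal) - log_posterior(x)
    # Metropolis acceptance for a symmetric proposal: always accept uphill
    # moves, accept downhill moves with probability exp(delta).
    if delta >= 0 or random.random() < math.exp(delta):
        x = proposal
    if gen % 20 == 0:                           # sample frequency 20, as in the thesis
        samples.append(x)

mean = sum(samples) / len(samples)
print(round(mean, 2))  # sample mean, close to the target's mean of 0
```

The early samples are biased by the arbitrary starting point; phylogenetic analyses discard them as "burn-in" before summarizing the posterior.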
A linguist manually classifies each word in a dialect as belonging to a certain cognate class. Crucial in the cognate classification is the Neogrammarian hypothesis, which states that sound changes are regular. Regularity means that if a sound in a word changed into another sound, it will do so in every other word. Two words can only be cognate if the common ancestor can be reached from both cognate candidates by a number of known sound changes (Campbell, 1998). Generally, in phylogenetic linguistics, a string of the cognate classifications of every word in a language serves as the input for the bayesian algorithm. For dialects, words in the different dialects are likely to be cognates (Dunn, 2008). If cognacy is assumed, manual classification can be omitted and an objective measure of distance between the phonological forms in different dialects can be used.

1.2 Earlier research

In earlier research, the bayesian method has been applied to Bulgarian dialects (Prokić et al., 2011). The focus in that research was on vowel change. The consonants were dropped and the vowels were classified into a limited number of classes to reduce computational cost. A broader approach will be used in this thesis, because consonants may also amount to important distinctions between dialect groups.

1.3 Applying bayesian inference to the Dutch dialects

I would like to evaluate the bayesian method when used on the Dutch dialects. My research question is: how well is bayesian inference suited to determine the phylogeny of Dutch dialects?

2 Method

The input for the bayesian inference is a string for every dialect, which uniquely describes that dialect. A corpus of words and their translations into different dialects was used as the basis for the input string. Each word was aligned with its counterparts in different dialects. All the aligned words for a dialect were concatenated. The concatenated strings were converted to phonological features.
The resulting feature strings were used as input for the bayesian inference.

2.1 Data

The dialect data was taken from the Reeks Nederlandse Dialectatlassen (RND). This is a corpus of transcribed speech in Dutch and Frisian dialects in the whole Dutch language area: the Netherlands, a neighbouring area in Germany, Flanders and the north of France. The corpus was recorded between 1925 and 1982. A selection of 166 words and 363 dialects has been made from this corpus and digitized (Heeringa, 2001). An interesting addition to the digital version is the Plautdietsch dialect from Protasovo, Siberia. This dialect descended from 16th-century Mennonites who migrated via Eastern Europe to Siberia. It maintained its Dutch character in Slavic surroundings (Nieuweboer, 1998). The data was written in X-SAMPA, an ASCII version of the International Phonetic Alphabet (IPA). The data was converted to IPA, represented in Unicode, using the cxs2ipa script (Theiling, 2008).¹ The dialect data was split into two subsets: one set of 269 Netherlandic (and neighbouring German) dialects and one set of 94 Belgian (and neighbouring French) dialects. It is interesting to use the parameters from the Netherlandic data set on the Belgian data, to see whether the setting of the parameters is generally applicable to different data sets.

2.2 Alignment

Figure 1: Alignment of translations of the lemma are. The sound classes (colors of the phones) enable comparable sounds to be matched.

In order to compare which sounds differ in the translation of a lemma in different dialects, the words need to be aligned. Comparable sounds are put in the same column (Figure 1). Making an alignment assumes that the words in different dialects are cognates. For most, but not all, words in the RND, this is the case. For example, the lemma chickens has entries which look like Dutch kippen and entries that look like German Hühner.

¹ The script was modified in order to convert the æ properly as well.
Aligning these with each other is less informative (Figure 2).

Figure 2: Alignment of translations of the lemma chickens. The entries are not cognate: there are entries which look like kippen and entries that look like Hühner. Still, they are aligned based on their phonological characteristics.

For some dialects, there was more than one translation for a certain lemma. In these cases, the first one was chosen as the only translation. There are also lemmas where the alignment may have been distracted by morphological rather than phonological differences. For example, the lemma sore throat has items which look like keelpijn (throat-sore) and other entries which look like pijn in de keel (sore-in-the-throat). The sounds are roughly the same, only the order of stems is different. This is not fully reflected in the alignment (Figure 3).

Figure 3: Alignment of translations of the lemma sore throat. The phonological similarity is not fully reflected in the alignment, because of the morphological difference in order. The part pin in both words is, however, aligned.

The alignment is done using LingPy (List and Moran, 2013). This is a program for multiple sequence alignment, which means that all words are aligned with each other at the same time. LingPy matches phones by classifying them into a number of sound classes (List, 2012). Phones in the same sound class have the highest probability of matching. Before the alignment, the data was tokenized: phones were grouped with diacritic signs to form one token. The list of possible tokens was based on Hoppenbrouwers and Hoppenbrouwers (1988). It supplies a list of IPA tokens and maps those to phonological features. The list is based on the RND data. Still, some combinations of vowels/consonants and diacritics that were used in my RND data were missing in the list. These tokens were omitted, because this means there is no phonological mapping for these tokens as well.
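LingPy aligns all words simultaneously and scores matches by sound class; the underlying idea can be illustrated with a minimal pairwise Needleman-Wunsch alignment. This sketch is a simplification, not LingPy's algorithm: it aligns only two words at a time and scores by plain equality instead of sound classes:

```python
def align(a, b, gap=-1, match=1, mismatch=-1):
    """Minimal Needleman-Wunsch pairwise alignment with equality scoring."""
    n, m = len(a), len(b)
    # Dynamic-programming score matrix, initialized with gap penalties.
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        S[i][0] = i * gap
    for j in range(m + 1):
        S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            S[i][j] = max(S[i - 1][j - 1] + s, S[i - 1][j] + gap, S[i][j - 1] + gap)
    # Traceback: recover the two gapped strings from the score matrix.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and S[i][j] == S[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and S[i][j] == S[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append('-'); i -= 1
        else:
            out_a.append('-'); out_b.append(b[j - 1]); j -= 1
    return ''.join(reversed(out_a)), ''.join(reversed(out_b))

print(align('kat', 'kaat'))
```

The `-` symbols are the gaps that also appear in Figures 1-3; with sound-class scoring, phonologically similar but unequal phones would be rewarded instead of penalized.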
The result is that the omitted diacritic signs are shown as a ? in the alignment, which means it can be aligned with a random phone. It seems this has not decreased the quality of the alignment heavily.

Standard Dutch   k Ip - @mEi n b l u m @-
Standard German  h y - n @mAi n b l u: m@n
Midsland         h E-n: - m i- - b l u m @n

Figure 4: Concatenation of the lemmas chickens, my and flowers for three dialects. The real concatenated strings are far longer; they contain 166 lemmas.

Figure 5: The feature strings which serve as input for the bayesian inference. This is the result of converting the concatenations from Figure 4 to phonological features.

After the alignment, all aligned words for a dialect were concatenated with each other, resulting in a long string of all words for that dialect (Figure 4).

2.3 Phonological mapping

A possibility would be to directly use the concatenated string of all aligned words as the input string for a dialect. The positions in the alignment, the phones, would then be the features on the basis of which a dialect can be compared with other dialects. However, the symbol alphabet of all phones used in the RND is too big to be computationally feasible. Furthermore, it would be nice if the algorithm also takes into account that phones that are phonologically close to each other can change more easily than phones that are further from each other. For these two reasons, the aligned phone strings were converted to an array of phonological features. Hoppenbrouwers and Hoppenbrouwers (1988) provide a mapping from each character to 21 binary phonological features, which was used. The result is a long string of 0's and 1's for every dialect (Figure 5).

2.4 Bayesian inference

The program used to execute the bayesian inference was MrBayes (Ronquist and Huelsenbeck, 2003). As described in the introduction, an MCMC analysis starts from a randomly chosen tree and proposes small random changes to this tree.
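MrBayes is driven by a NEXUS batch file. The sketch below shows roughly what such a file could look like for binary 0/1 feature data; the taxon names, character count and matrix contents are invented, and apart from the generation count, sample frequency and rate model the thesis does not specify which settings were used:

```text
#NEXUS
begin data;
  dimensions ntax=269 nchar=5000;   [number of dialects and binary features; nchar illustrative]
  format datatype=restriction;      [binary 0/1 characters]
  matrix
    StandardDutch  010110...
    Midsland       011100...
  ;
end;

begin mrbayes;
  lset rates=gamma;                 [or rates=equal for the equal rate variation model]
  mcmc ngen=1000000 samplefreq=20;  [1,000,000 generations, sampling every 20]
  sumt;                             [summarize the sampled trees as a consensus tree]
end;
```

The bracketed text is NEXUS comment syntax; MrBayes ignores it when executing the batch file.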
MrBayes runs two different MCMC analyses at the same time, starting from two different randomly chosen trees. By calculating the convergence between the two analyses, it is possible to get an indication of whether a stable posterior probability distribution has been reached. The algorithm was run for 1,000,000 generations. The sample frequency was set to 20, which means that every 20 generations, the most probable tree is saved.

The likelihood is determined by two parameters: the substitution model and the rate variation model. Together they provide the probability of the data, given a certain tree. The substitution model determines the chance that a certain character changes into another character. We have only two characters (0 and 1), so the only state changes are 0 → 1 and 1 → 0. A substitution model with equal probability for every state change was used. The rate variation model determines the chance that a certain feature changes state. For example, a rate variation model could state that letters at the end of a word have a higher chance of changing than letters in the middle (Prokić et al., 2011). Two rate variation models were tried: an equal rate variation model and a gamma-distributed rate variation model. In an equal rate variation model, every feature has the same chance of changing. In a gamma-distributed rate variation model, the bayesian algorithm infers from the data which features change more often than others. It categorizes the features in rate classes of higher or lower probability of change, according to a gamma distribution.

2.5 Evaluation

A dialect map by Daan and Blok (1969) is used as the gold standard to evaluate the results of the bayesian analysis. The map is based on the perception of speakers. In a questionnaire, people from villages in the Dutch language area were asked which dialects from other villages were (almost) the same. Arrows could be drawn between villages with roughly the same dialect.
Daan and Blok's map is based on this arrow method, combined with some linguistic knowledge, in cases where the arrow method did not match the known insights (Daan and Blok, 1969). Finally, the bayesian algorithm with the same settings was applied to the Belgian dialects. The results were also compared with Daan and Blok's map.

3 Results

The output of the bayesian inference is a set of trees, each with their own probability. The consensus tree is a tree that tries to reconcile all trees. If the branching is contradictory between trees, the consensus tree places the branch at a lower level (Dunn, 2008). The trees were shown graphically using the FigTree (Rambaut, 2013) program.

3.1 Netherlandic dialects

3.1.1 Equal rate variation model

The consensus tree that was outputted correctly shows groups of dialects that are connected locally, but does not generally show higher-order grouping between the local groups. This probably happens because the consensus tree could not decide between two specific branchings and places dialects at a lower level. The MCMC analysis has also not converged optimally, even after 1,000,000 generations.

Hardly any false groupings are made. Dialects that are grouped in the tree are generally also in one group on Daan's map. Sometimes dialects are linked with a dialect that is just across the border of a different group on the map, but still geographically close. This is visible in the grouping of the dialects of Zeeuws-Vlaanderen with some neighbouring dialects from Noord-Brabant (Figure 6). No strange groupings between different parts of the country are made. The price for this accuracy is that a lot of dialects remain ungrouped or are only connected with their direct neighbours (Figure 7).

Figure 6: Equal model. The dialects Clinge, Lamswaarde and Groenendijk from Zeeuws-Vlaanderen have been grouped together. They all belong to the Zeeuws group on the map.
The geographically close Zundert, Roosendaal and Ossendrecht dialects have been grouped together in the tree, although they belong to the different Noord-Brabant group on the map.

Some distinguishing groups can be seen, which correspond with groups on Daan and Blok's map. The dialects of Groningen (Figure 8) and southern Dutch Limburg (Figure 9) form groups which correspond with Daan and Blok's map. The dialects of northern Noord-Holland and the islands of Texel and Vlieland form one group, as the map would predict (Figure 10). There is a branch of the tree that splits into Frisian dialects and Frisian city dialects (Figure 11). It is good to see this clear division, but still close connection, between Frisian dialects and Frisian city dialects. The Frisian city dialects are dialects which originate from Frisian, but have been influenced by the dialects from Holland in the 16th century (Jansen, 2002). It is clear that Daan's Utrecht-Alblasserwaard group is not well visible in the tree. Many dialects are clustered with other groups. Utrecht and Amersfoort are unresolved (Figure 7). Dialects from eastern Noord-Brabant have been connected, but are not connected with dialects from the west of Noord-Brabant, which form one group on Daan's map. The dialects are however closely connected with two dialects from Zuid-Gelderland, a related, but different group on the map (Figure 12).

Figure 7: Equal model. A lot of dialects have not been grouped: dialects from the Utrecht-Alblasserwaard group like Utrecht and Amersfoort are on the same level as eastern dialects like Beilen and Emmen. Other dialects have been clustered into small groups: for example Goirle, Oirschot and Loon op Zand.

Figure 8: Equal model. The dialects of Groningen have been grouped according to Daan's map.

Figure 9: Equal model. The dialects of southern Dutch Limburg form a well-divided group that is coherent with the map.

Figure 10: Equal model.
The dialects of northern Noord-Holland are grouped with the dialects from the islands of Texel and Vlieland, as Daan predicts. The group is on the same level with a totally different, but also coherent group, that of Zeeland. The long branch length of Protasovo is remarkable and signifies a large distance compared to the other dialects.

Figure 11: Equal model. The dialects of Friesland. The Frisian dialects (red) and Frisian city dialects (green) are related, but it is clear that there is a division.

Figure 12: Equal model. Dialects from the eastern side of the Noord-Brabant group (red) have been grouped with dialects from the river region (green). Although these groups are related, it is remarkable that the Noord-Brabant dialects match with dialects from a different group, whereas they do not match with dialects from the western part of the same Noord-Brabant group.

The tree lacks some higher-order grouping. The dialects of the southern Netherlands are shown as a family in Daan's map using red shades. The Low-Saxon dialects of the eastern and northern Netherlands are shown as a family using green shades. These higher-order groupings are however not visible in the tree (Figure 13).

Figure 13: Equal model. The red group is a mix of dialects from the Utrecht-Alblasserwaard group (Oudewater, Soest, Driebergen, Polsbroek) and Zuid-Holland (Berkel, Wateringen, Nieuwveen, Langeraar, Warmond, Zoetermeer). Maybe the border between these groups is not really clear-cut, as Daan and Blok (1969) state. A second observation is that there is no sufficient higher-order grouping. The red group of western-central dialects is at the same level as the two blue groups of northeastern (Low-Saxon) dialects. These two blue groups would be expected to be on a different level, together with other Low-Saxon dialects.

The Protasovo (Plautdietsch) dialect has a very long branch, which shows that it differs a lot from the other dialects (Figure 10).
This seems reasonable, given that it is a form of Dutch that has not been in contact with other Dutch dialects for centuries. In conclusion, the dialects that have been grouped together form groups that are coherent with Daan and Blok. The groups have however not been grouped into higher-order groups that show relations between dialect regions. This makes the explanatory power of the tree smaller.

3.1.2 Gamma-distributed rate variation model

The consensus tree of the gamma-distributed rate variation model shows the same pattern as the consensus tree of the equal rate variation model. There are some differences in the groupings; sometimes these are improvements, sometimes degradations. It is not really clear whether these small differences are caused by the different rate variation models. Differences across different executions of the same rate variation model also occurred. The groups of southern Dutch Limburg and Groningen are also salient in this consensus tree. Some groupings from Daan's map are better under the gamma model. The dialects of Twente have been grouped together under this model, whereas they were spread across different groups in the equal model (Figure 14). The dialect groups of Noord-Holland and Zuid-Holland are related; this is shown to a greater extent in this tree (Figure 15).

Figure 14: Gamma model. The dialects of Twente (and the directly neighbouring places in Germany) form one group under the gamma model, whereas they were spread across several groups in the equal model.

Figure 15: Gamma model. Dialects of Noord-Holland and Zuid-Holland have been combined as one group under the gamma model.

The distinction between Frisian dialects and Frisian city dialects is still shown, but this time the Frisian dialects are shown as a subgroup of the Frisian city dialects (Figure 16). This is not correct from a historical point of view, since the Frisian city dialects split off the Frisian dialects. However, from a distance point of view, it is less remarkable. The Frisian city dialects are closer to the other Dutch dialects at the root of the tree, because they have been influenced by the dialects from Holland. An interesting result is that the dialect of Katwijk aan Zee, a coastal place in Zuid-Holland, is grouped with the dialects of Zeeland, a different group further to the south (Figure 17). Apparently there are some shared characteristics between these coastal areas. There are also dialects that were grouped in the consensus tree of the equal model, but are unresolved under the gamma model. Examples are the places Oldemarkt and Steenwijk. It is hard to say whether the gamma or the equal model is better. The gamma model shows a few interesting groups that the equal model does not show, but it also leaves dialects ungrouped which the equal model grouped. Furthermore, some differences can occur across different executions of the same model and are not caused by the model choice.

Figure 16: Gamma model. The Frisian dialects are shown as a subgroup of the Frisian city dialects.

Figure 17: Gamma model. Katwijk aan Zee, a coastal place in the Zuid-Holland dialect region, is grouped with dialects from the Zeeland group, further to the south.

3.2 Comparison: Belgian dialects

The Belgian data was kept apart to see whether the method works for a different data set as well. The Belgian data was processed in the same way as the Netherlandic data, and the bayesian algorithm was run with the same parameters (1,000,000 generations, sample frequency 20). Again, an equal rate variation model and a gamma-distributed rate variation model were tried. In the equal rate variation model, three important groups are seen. Only a few small groups are present and few dialects remain unresolved. This could mean there is less contradiction between the different trees than in the results of the Netherlandic data set. Also, the convergence between the runs is better.
The first group contains places from the east of Belgium (Figure 18). Most dialects belong to the Limburg group on Daan's map, two belong to the Brabant group and one belongs to the group of dialects between Brabant and Limburg. This group in the tree seems to be a coherent group of Limburg dialects with some other dialects which are geographically very close.

Figure 18: Equal model. This subtree contains dialects from the east of Belgium. The red dialects belong to the Limburg group, the green dialects belong to the Brabant group, the yellow dialect belongs to the group of dialects between Brabant and Limburg.

The second big group consists solely of Brabant dialects (Figure 19). There is a division into subgroups that are geographically close to each other. There are only two small groups of Brabant dialects that are not included in this big group and are connected separately in the tree. The grouping is stronger than in the Netherlandic tree.

Figure 19: Equal model. This subtree consists of the Brabant dialects.

The third group consists of dialects from the west of Belgium and northern France. The dialects are roughly in three groups from Daan's map: Western Flemish, Eastern Flemish and dialects between Western and Eastern Flemish. As can be seen in Figure 20, the tree is nicely subdivided into these three groups.

Figure 20: Equal model. This subtree contains dialects from the west of Belgium. The red dialects belong to the Western Flemish group, the green dialects belong to the Eastern Flemish group, the yellow dialects belong to the group of dialects between the Western and Eastern Flemish dialects.

The consensus tree under the gamma model shows the same three main groups. The only difference is that some dialects have split from the bigger groups and formed a smaller group.

4 Discussion

The data from the RND which was used seems to have given a reliable set of basic words.
However, there is no guarantee that the words are used as often in one area as in another area. The closest approximation to a list of words that is used in every area would be a Swadesh list. The alignment has been done using a system of sound classes, which gives good results. The quality of the alignments could possibly become even higher. To focus on phonology and filter out morphological effects, the stems of compound words could all be put in the same order. Furthermore, lemmas which contain words from different cognate sets could be split into several lemmas: one for every cognate set. Finally, more combinations of phones and their diacritics could be added to the token list. The diacritics that were not listed in Hoppenbrouwers and Hoppenbrouwers (1988) were not taken into account in the alignment.

As a summary of the tree sample, consensus trees were used. A characteristic of the consensus tree is that it is not guaranteed to be a real tree from the sample, but a reconciliation of the trees. Contradicting branchings are solved by placing a branch at a lower level. Other tree summaries are the maximum probability tree (Nichols and Warnow, 2008) and the maximum clade credibility tree (Dunn, 2008). Both methods pick a tree which exists in the tree sample: the tree with the highest probability or the tree with the highest sum of probabilities of the branchings, respectively. The phylogenetic program that was used (MrBayes) did not support the creation of these trees. It was possible to fetch a maximum probability tree topology, but without branch lengths. Furthermore, it was possible to attempt a maximum clade credibility tree with an external program, but this did not succeed. For these reasons, only consensus trees were used in my analysis.

Although bayesian inference is a quantitative method, which draws conclusions from large amounts of data, the evaluation of the method in this thesis was done qualitatively.
Ideally, an objective measure of distance between a Bayesian inference tree and Daan's dialect map would be used. Both representations would then have to be converted to the same format. Zhang and Shasha (1989) propose an algorithm for edit distance between trees. Implementing this algorithm and processing the tree data in such a way that it could be read by the algorithm could be a direction for future research. It would also have to be assessed whether the edit distance between language trees coincides with a linguistic feeling of similarity between language trees.

In earlier quantitative dialect research (Heeringa and Nerbonne, 2006), the evaluation of the tree was also done qualitatively. However, new methods are being applied to visualize the data in such a way that it is easier to do a human comparison with the gold standard. Nerbonne et al. (2011) present the Gabmap package, which has, among other features, the possibility to project a dialect tree onto a map.

Gamma-distributed and equal rate variation models were evaluated. The differences in the resulting trees of the models were not very large. It seems that for the current input format, long strings of phonological features, the choice of the rate variation model is not of utmost importance.

The results for the Bayesian inference on the Belgian dialects were better than the results on the Netherlandic dialects. The Bayesian analyses for the Belgian dialects had better convergence rates than the analyses for the Netherlandic dialects. It must be noted that the number of Belgian dialects was smaller than the number of Netherlandic dialects (94 vs. 269 dialects), but they ran the same number of generations. It may be that the Netherlandic analyses should have run for more generations, to compensate for the large number of dialects. This is however made unattractive by the long running times of the algorithm. There could also be other reasons for the better performance on the Belgian data set.
For example, it could be the case that the Belgian data set had clearer divisions between the dialects, making it easier to generate a tree.

5 Conclusion

The application of Bayesian inference on the Netherlandic dialects performed well on local groups. Distinctive groups from common linguistic theory were visible. The grouping was very accurate: hardly any groupings were made that were not coherent with the dialect map by Daan and Blok. However, many dialects remained ungrouped. Also, local groups were not grouped with other groups in order to get higher-order families. This limited the explanatory power of the results.

The Bayesian inference for the Belgian dialects gave surprisingly good results. Almost all dialects were grouped and there was higher-order grouping apparent. There were a few large groups, which contained dialects from a bounded geographical area, e.g. the east of Belgium. These large groups were divided into smaller groups, which mostly followed the dialect groups from the dialect map by Daan and Blok.

All in all, Bayesian inference seems to be a good addition to the tools used to determine the phylogeny of dialects. The performance of the method is not constant enough to use it as the only method to create a dialect tree. In this thesis, the method performed better on the Belgian dialects than on the Netherlandic dialects. However, once the results have been validated using a dialect map for the researched area, insights from Bayesian inference can be used to get a full image of dialect kinship. For example, even if no higher-order groupings are returned in a Bayesian inference tree, local groupings (as in Figure 17) can give interesting clues about relationships between dialects.

6 Literature

Campbell, L. (1998). Historical linguistics: An introduction. MIT press.

Daan, J. and Blok, D. (1969). Van Randstad tot Landrand: Toelichting bij de kaart: Dialecten en Naamkunde.
Bijdragen en mededelingen der Dialectenkommissie van de Koninklijke Nederlandse Akademie van Wetenschappen. Noord-Hollandsche Uitgevers Maatschappij.

Dunn, M. (2008). Language phylogenies (in press). Routledge handbook of historical linguistics. http://pubman.mpdl.mpg.de/pubman/

Heeringa, W. (2001). De selectie en digitalisatie van dialecten en woorden uit de reeks nederlandse dialectatlassen. TABU, Bulletin voor Taalwetenschap, 31, number 1/2:61–103.

Heeringa, W. and Nerbonne, J. (2006). De analyse van taalvariatie in het nederlandse dialectgebied: methoden en resultaten op basis van lexicon en uitspraak. Nederlandse Taalkunde, 11(3):18–257.

Hoppenbrouwers, C. and Hoppenbrouwers, G. (1988). De featurefrequentiemethode en de classificatie van nederlandse dialecten. TABU, Bulletin voor Taalwetenschap, Jaargang 18, nummer 2, 1988.

Huelsenbeck, J., Ronquist, F., Nielsen, R., and Bollback, J. (2001). Bayesian inference of phylogeny and its impact on evolutionary biology. Science,

Jansen, M. (2002). De dialecten van ameland en midsland in vergelijking met het stadsfries. Us Wurk. Tydskrift foar frisistyk, 51:128–152. Retrieved from http://depot.knaw.nl/9683 on 25-06-2014.

Larget, B. and Simon, D. (1999). Markov chain monte carlo algorithms for the Bayesian analysis of phylogenetic trees. Molecular Biology and Evolution,

List, J.-M. (2012). Multiple sequence alignment in historical linguistics. A sound class based approach. In Proceedings of ConSOLE XIX, pages 241–260.

List, J.-M. and Moran, S. (2013). An open source toolkit for quantitative historical linguistics. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, August 4-9, Sofia, Bulgaria, pages 13–18.

Nerbonne, J., Colen, R., Gooskens, C., Kleiweg, P., and Leinonen, T. (2011).
Gabmap – a web application for dialectology. Dialectologia, Special Issue

Nichols, J. and Warnow, T. (2008). Tutorial on computational linguistic phylogeny. Language and Linguistics Compass, 2.5:760–820.

Nieuweboer, R. (1998). The Altai dialect of Plautdiitsh (West-Siberian Mennonite Low German). Master's thesis, University of Groningen.

Prokić, J., Gray, R., and Nerbonne, J. (2011). Inferring sound changes using Bayesian MCMC. Submitted to Diachronica, 1/2011. Retrieved from http://urd.let.rug.nl/nerbonne/papers/inferring-sound-changes-Prokic-et-al-2011-Diachronica.pdf on 15-06-2014.

Rambaut, A. (2013). FigTree 1.4.1. Tree figure drawing tool.

Ronquist, F. and Huelsenbeck, J. (2003). MrBayes 3: Bayesian phylogenetic inference under mixed models. Bioinformatics, 19:1572–1574.

Ronquist, F., Huelsenbeck, J., and Teslenko, M. (2011). Draft MrBayes version 3.2 manual: Tutorials and model summaries. Retrieved from http://mrbayes.sourceforge.net/mb3.2 manual.pdf on 25-06-2014.

Theiling, H. (2008). cxs2ipa. An X-SAMPA to IPA converter. Retrieved from http://www.theiling.de/ipa/ on 16-06-2014.

Zhang, K. and Shasha, D. (1989). Simple fast algorithms for the editing distance between trees and related problems. SIAM Journal on Computing, 18(6):1245–

7 Appendix

7.1 Tree of Netherlandic dialects – equal rate variation

The tree spans two pages.

7.2 Tree of Netherlandic dialects – gamma rate variation

The tree spans two pages.

7.3 Tree of Belgian dialects – equal rate variation

7.4 Tree of Belgian dialects – gamma rate variation

7.5 Dialect map by Daan and Blok (1969)

The first page shows the map, the second page shows the legend.
Displacement, Time, Velocity
Study Guide Unit 1: Kinematics - Motion in One Direction

"Kinematics - Motion in One Direction" is unit one in a Physics study guide written by Mr. Roger Twitchell, a retired high school teacher from Western Maine. Mr. Twitchell used this textbook for several years in his own classroom as a supplement to the published Physics textbook. He has graciously permitted this site to publish his work for other teachers to use. Questions from the study guide are supplemented by some additional problems written by the site administrator. Some text, formulas, and diagrams have been reformatted and edited for web display.

Kinematics is that branch of physics that deals with a description of motion. It makes no effort to explain the causes of motion; indeed, the causes cannot be studied meaningfully until the motion can be correctly described. In this chapter, motion along a straight line will be studied. A good understanding of the principles in this chapter is necessary before attempting to study either the causes of motion or more complex motions.

Displacement and Position

Before attempting to describe the motion of an object it is necessary to be able to describe its position. This is usually done by reference to a Cartesian coordinate system, either a two or three dimensional one. For the type of motion studied in this chapter it is only necessary to refer to a number line consisting of an origin, the set of positive numbers running one way and the negative numbers going the other. If the object whose motion is being studied is moving on the surface of the earth, the number line will be a horizontal one. If the object is moving up or down the line will be a vertical one. The conventional number line used in algebra has positive numbers to the right of the origin and negative numbers on the left.
When a vertical line is used it is conventional to place positive numbers above the origin. However, in physics the sign conventions are often reversed if by doing so a problem is simplified. Examples of where this is convenient will be given later. It is usually best to take the origin of the number line at the point where the motion starts. Often the number line (usually called the reference system) is not explicitly referred to. In this case the problem solver must have a clear picture of the reference system being used in his or her mind.

It is necessary to make a clear distinction between the terms position and displacement. Position refers to where an object is on the reference system, while displacement refers to the change in position if an object is moved. Refer to Figure 1.1.1.

Figure 1.1.1

The position of point A is +4 and the position of point B is +14. If an object is moved from A to B its displacement is +10. If an object is moved from point B to point A its displacement is -10. The displacement can be written ΔX = -10. The symbol Δ is the Greek letter "delta" and stands for "the change in . . ." Be sure to watch signs, both on your values of position, X, and displacement, ΔX. Frequently you will be measuring displacement from the origin, in which case ΔX = X, but it is best to still refer to it in your equations as ΔX.

Time Intervals

In order to describe motion it will be necessary to measure time. Time is usually measured in seconds. It is necessary to distinguish between time, t, and a time interval, Δt. Time can refer to the time of day or the time elapsed since the start of a certain clock. The time interval, Δt, refers to the time elapsed between two events. If we start our clock when the motion starts, Δt will be equal to t. However, the two quantities are not always equal.
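The position/displacement distinction can be illustrated in a few lines of Python (the helper function is my own, not part of the study guide):

```python
def displacement(x_initial, x_final):
    """Displacement is the change in position: ΔX = X_final - X_initial.

    The sign carries the direction along the number line, so moving
    toward smaller coordinates gives a negative displacement.
    """
    return x_final - x_initial

# Moving from A (position +4) to B (position +14), as in Figure 1.1.1:
print(displacement(4, 14))   # ΔX = +10
# Moving back from B to A:
print(displacement(14, 4))   # ΔX = -10
```

Note that both motions involve the same two positions; only the order (and therefore the sign of ΔX) differs.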
Average Speed

Average speed is defined as the ratio of the displacement to the time interval:

average speed = ΔX / Δt     (eqn 1)

There are a number of important facts to note in relation to the use of this equation. If an object is moving at a constant speed, then the average speed is equal to the actual speed at any instant during the interval. If the object is not moving at a constant speed, the average speed may bear little resemblance to the actual speed at points during its motion. If the speed of the object is changing at a constant rate, the average speed is equal to the actual speed at the midpoint of the motion. If the value of the time interval is very short, the average speed is close to the actual speed at each point of the interval. As the time interval approaches zero the ratio approaches a limit which is defined as the instantaneous speed at that point.

To demonstrate the limitations of eqn 1 consider the following problem: A family on vacation travels 240 miles in 6 hours. Find their location and speed 3 hours after leaving home. By dividing 240 miles by 6 hours we obtain an average speed of 40 mph. We might then be tempted to answer the problem by saying that after three hours they are 120 miles from home (40 mph times 3 hrs) moving at a speed of 40 mph. However, these answers are not necessarily correct. The family undoubtedly did not travel at a constant speed for the entire trip. Three hours into the trip they may be stopped at McDonald's for lunch, in which case their speed would be zero. Depending on traffic conditions they might have traveled more or less than one half of the trip. There is no way to answer the questions from the information given.

Do not attempt to calculate an average speed by averaging several speeds; it may give a wrong answer. For example, if a car moves at a speed of 10 mph for 100 miles and at a speed of 100 mph for 100 miles, the average speed is not (10 + 100)/2 = 55 mph. It is the total distance traveled, 200 miles, divided by the total time required, 11 hours.
This comes out to be 18.2 mph, not 55 mph (the average of 10 and 100).

1. A car has an initial position of -12 miles, and a final position of 15 miles. What is ΔX?
2. A car has an initial position of 32 km, and a final position of 20 km. What is ΔX?
3. Explain the difference between displacement and position.
4. A car can travel 200 miles in 4 hours. What is the car's average speed?
5. What does the Greek letter Δ mean?
6. Describe the motion of an object with a velocity of 0.
7. Explain instantaneous velocity in your own words.
8. Mr. Physics walks 8 miles at 4 miles per hour, and then walks an additional 12 miles at 3 miles per hour. What is his average speed for the entire trip?
9. If a car is traveling at 88 , how many miles can it travel in two hours?
10. A car had an average velocity of 40 mph over the course of a four hour trip. For the first hour, the average velocity was 20 mph. What was the average velocity for the remaining 3 hours?
11. A car travels 1200 miles in 24 hours. What can you conclude about how far the car had traveled after 12 hours? Explain your answer.
12. A car travels at the constant speed of 30 mph for 20 miles, at a speed of 40 mph for the next 20 miles, and then travels the final 20 miles at 50 mph. What was the average speed for the trip?
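The total-distance-over-total-time rule above can be sketched in a few lines of Python (the function name is my own, not from the study guide):

```python
def average_speed(segments):
    """Average speed over several segments, each given as (distance, speed).

    Computed as total distance divided by total time. Averaging the
    speeds themselves gives the wrong answer whenever the segments take
    different amounts of time.
    """
    total_distance = sum(d for d, v in segments)
    total_time = sum(d / v for d, v in segments)
    return total_distance / total_time

# 100 miles at 10 mph, then 100 miles at 100 mph:
print(round(average_speed([(100, 10), (100, 100)]), 1))  # 18.2, not 55

# Mr. Physics: 8 miles at 4 mph, then 12 miles at 3 mph:
print(round(average_speed([(8, 4), (12, 3)]), 2))  # 3.33 mph
```

The first call reproduces the worked example: 200 miles over 11 hours is about 18.2 mph, far from the naive average of 55 mph.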
Styling Bar Charts in ggplot2

This article is designed to guide you through the process of creating bar charts using ggplot2. Starting with the basics, we'll guide you through each step to evolve a simple bar chart into one that meets academic standards. Effective data visualization requires both accuracy and aesthetics, a balance that can be achieved with the R package ggplot2. This package is the best practice and a versatile tool for custom styling of your visualizations. We'll explore customizing colors, themes, and labels, and provide practical tips for academic presentations. This includes handling standard errors, black-and-white formatting, and effective data grouping. Additionally, we'll cover saving charts in high-quality PNG or PDF formats, ensuring they're ready for academic publication.

Data Visualization Best Practices

Explore our article on Data Visualization, where we describe the most common chart types and conclude with best practices for plotting. Also look at this article to explore the inner workings of ggplot2, called the "Grammar of Graphics".

Data Retrieval

In this article we will use the PIAAC dataset to examine the wage premium of obtaining a higher education level. We will illustrate the variance in wage across different education levels and, later on, between genders in the Netherlands. Let's first load the required packages and download the data.

# Install the tidyverse package, which includes dplyr and ggplot2
install.packages("tidyverse")

# Load the tidyverse package into the R session
library(tidyverse)

# Load data
data_url <- "https://github.com/tilburgsciencehub/website/tree/master/content/topics/Visualization/data-visualization/graphs-charts/piaac.rda"
load(url(data_url)) # piaac.rda is loaded now

Step-by-Step Guide: Crafting Publishable Bar Charts

Step 1: Data Manipulation

A bar plot effectively displays the interaction between numerical and categorical variables.
The preparation of our dataset includes several steps:
• Using mutate() and factor() to order the categorical variable.
• Applying group_by() to group the data by the categorical variable, facilitating summary statistics.
• Calculating mean, standard deviation (SD), count (N), and standard error (SE) of the numerical variable for each group.

# Using dplyr for data manipulation
data_barplot <- data %>%
  # Organizing the categorical variable
  mutate(`Categorical Variable` = factor(`Categorical Variable`, levels = c("A", "B", "C"))) %>%
  # Grouping data by the categorical variable
  group_by(`Categorical Variable`) %>%
  # Summarizing statistical measures for the numerical variable
  summarise(
    mean = mean(`Numerical Variable`),
    sd = sd(`Numerical Variable`),
    N = length(`Numerical Variable`),
    se = sd / sqrt(N) # Calculating Standard Error (SE) for error bars
  )

The code above transforms a specified categorical variable into a categorical data type, orders its levels, and then groups the data by this variable. It calculates summary statistics (mean, standard deviation, count, and standard error) for a numerical variable within each category (level). These steps prepare the dataset for creating a bar plot with error bars, useful for visualizing differences between categories.

Refactoring Categorical Variables

Refactoring categorical variables is important for bar plot visualizations, as it allows you to decide in which order particular labels appear.
• Using the fct_relevel() function: this function allows for reordering factor levels using character strings, ensuring the categorical data is displayed in a preferred sequence.
• Utilizing Base R's factor() function: our approach involves factor(), where the levels argument sets the desired order.

Step 2: Creating your first bar chart

With the data in place, let's visualize it using a bar chart.
The code snippet below demonstrates how to create a bar chart with error bars using ggplot2:

data_barplot %>%
  ggplot(aes(`Categorical Variable`, mean)) + # Basic ggplot setup with the categorical variable on the x-axis and the mean on the y-axis
  geom_col(aes(fill = `Categorical Variable`)) + # Creating bars for each category and filling them based on the categorical variable
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.85) # Adding error bars to each bar

Key Points:
- geom_col() vs. geom_bar(): We use geom_col() here, as it's suitable when bar heights need to directly represent data values. In contrast, geom_bar() is used when you want to count cases at each x position, as it employs stat_count() by default.
- Adding Error Bars: geom_errorbar() is utilized to add error bars to the bar chart. This function takes ymin and ymax aesthetics, calculated here as mean - se and mean + se, respectively. The width parameter controls the width of the error bars. Error bars are essential in reporting the "confidence" of particular estimates (e.g., whether they are closer or further away from zero).
- Aesthetics: The fill aesthetic within aes() is set to the categorical variable to color the bars based on the categorical groups.

Step 3: Enhancing Aesthetics

The initial visualization is a solid starting point, but to meet publication standards, we need to refine it further. The primary issues are:
• Redundant x-axis text: The legend duplicates information already conveyed by the x-axis labels.
• Non-Descriptive Axis Titles: Axis titles need to be more informative for clarity.
• Lack of Contextual Information: The plot lacks a title.
• Color Scheme: The default color palette does not meet academic rigor.
Let's address these issues with the following enhanced visualization:

data_barplot %>%
  # [Insert previous code here]
  scale_fill_manual(values = c("COLOR1", "COLOR2", "COLOR3")) + # Customizing bar colors
  scale_y_continuous(limits = c(0, 40), expand = c(0, 0)) + # Adjusting y-axis limits
  theme_minimal() + # Applying a minimalistic theme
  labs(
    x = "Description of Categorical Variable",
    y = "Description of Numerical Variable",
    title = "Descriptive Title"
  )

Let's go over these changes.
- scale_fill_manual: This function customizes bar colors to specified ones.
- theme_minimal(): It removes the grey background panel, creating a cleaner look.
- scale_y_continuous: Adjusts the y-axis scale. limits sets the y-axis limit; since we still have to add the p-values, we need more space above the highest bar. expand sets the axis to start precisely at 0.
- labs(): Adds informative titles for the axes and the plot itself.

Step 4: Enhancing Bar Chart Aesthetics with Theme Customization

After the initial setup, further customization using the theme() function in ggplot2 can significantly enhance the visual appeal and clarity of your bar chart.
Here's how you can apply these adjustments:

data_barplot %>%
  # [Insert previous code here]
  theme(
    plot.title = element_text(
      size = 15, # Increase title font size for visibility
      face = "bold", # Make title text bold for emphasis
      margin = margin(b = 35), # Add a bottom margin to the title for spacing, especially for p-values
      hjust = 0.5 # Horizontally center the plot title
    ),
    plot.margin = unit(rep(1, 4), "cm"), # Set uniform margins of 1cm around the plot for balanced white space
    axis.text = element_text(
      size = 12, # Set axis text size for readability
      color = "#22292F" # Define axis text color for clarity
    ),
    axis.title = element_text(
      size = 12, # Ensure axis titles are sufficiently large
      hjust = 1 # Horizontally align axis titles to the end
    ),
    axis.title.x = element_blank(), # Remove x-axis title for a cleaner look
    axis.title.y = element_blank(), # Remove y-axis title to avoid redundancy
    axis.text.y = element_text(
      margin = margin(r = 5) # Add right margin to y-axis text for spacing
    ),
    axis.text.x = element_blank(), # Remove x-axis text for simplicity
    legend.position = "top", # Place the legend at the top of the plot
    legend.title = element_blank() # Remove the legend title for a cleaner appearance
  )

Now the bar chart is closer to academic standards. In the code, the function theme() allows you to adjust:
- Plot Title: Enhanced with a larger font size, boldface, and added bottom margin for clarity and spacing.
- Plot Margins: Uniform margins added around the plot to create balanced spacing.
- Axis Text and Titles: Changed text size and color for better readability; an additional margin for the y-axis text and removal of the axis titles to simplify the plot.
- Legend Customization: Removed the redundant legend title and repositioned the legend to the top for a cleaner layout and easier reading.

Step 5: Visualizing Statistical Significance in Bar Charts

Suppose you've analyzed how education levels and gender influence mean hourly wages and found significant differences.
To effectively communicate these results in your bar chart, it's essential to visualize the statistical significance. Automating statistical tests and visualization with ggpubr streamlines the process, making it more efficient, as explained here. However, customizing visualizations directly in ggplot2 offers greater flexibility and control over the final output. Let's dive into the latter.

Creating Data for Confidence Bounds

First, we need to set up data points to draw lines indicating significance. These lines typically connect the relevant categories with a central peak to denote significance.

# General code
p_value_one <- tibble(
  x = c("A", "A", "B", "B"),
  y = c("Value 1", "Value 2", "Value 2", "Value 1")
)

# Example code
p_value_one <- tibble(
  x = c("Low", "Low", "Medium", "Medium"),
  y = c(20, 22, 22, 20)
)

This setup creates a line that starts at the center of the 'Low' bar at y = 20, peaks at y = 22 (indicating significance), travels across to the 'Medium' bar, and descends back to y = 20.

Adding Significance Lines and Annotations

Next, we add these lines and annotations (like asterisks) to highlight the significance levels found in your statistical analysis.

# [Insert previous ggplot() + geom_() code here]
geom_line(data = p_value_one, aes(x = x, y = y, group = 1)) +
  annotate("text", x = 1.5, y = 23.5, label = "***", size = 8, color = "#22292F") +
  geom_text(aes(label = round(mean, 2)), vjust = -1.5) + # Add the average value for more information
# [Insert previous customization code here]

In this code:
- geom_line() uses the p_value_one data to draw the line. Setting group = 1 ensures it's treated as a continuous line.
- annotate() adds asterisks (***) at a specified point (here, x = 1.5, y = 23.5), a common symbol for statistical significance.
- geom_text() adds the mean value of each category to the plot.

Remember, the positions and labels are adjustable based on your specific data and results.
Experiment with the x and y values in the annotate() function to achieve the best placement in your bar chart. This approach provides a clear, customized way to denote significant findings in your visualization.

Saving Your Plots

Transferring plots via copy-paste can degrade their quality, but our article about saving plots in R provides a solution. It discusses how to use ggsave() from the ggplot2 package to save your visuals in high-quality formats. Discover how to maintain clarity in presentations and publications by customizing plot dimensions and resolutions.

Advanced Techniques for Multi-Group Bar Charts in ggplot2

When working with ggplot2 to visualize complex datasets with multiple categorical variables, creating faceted plots can pose unique challenges. This section focuses on the effective use of p-value annotations in such scenarios.

Visualizing Wage Differences by Education and Gender

Imagine plotting the mean hourly wage by education level, segmented by gender. Our goal is to highlight significant differences between education levels using p-values.

Step 1: Grouping the Data

Begin by organizing your data using group_by() to focus on the interaction between the chosen categories.

grouped_barplot <- data %>%
  group_by(`Categorical Variable 1`, `Categorical Variable 2`) %>%
  summarise(
    mean = mean(`Numerical Variable`),
    sd = sd(`Numerical Variable`),
    N = length(`Numerical Variable`),
    se = sd / sqrt(N) # Calculating SE for error bars
  )

Step 2: Faceted Visualization with facet_wrap

Employ facet_wrap() to create distinct plots for each gender, allowing for a clear comparison across categories.

grouped_plot <- ggplot(grouped_barplot, aes(x = edlevel3, y = mean, fill = edlevel3)) +
  geom_col() +
  facet_wrap(~ gender_r) # Faceting by gender

Step 3: Preparing P-Value Data for Annotations

To annotate your plot with p-values, prepare a separate dataset containing these values along with the corresponding coordinates and the faceting variable.
# Example p-value data
p_value_one <- tibble(
  x = c("Low", "Low", "Medium", "Medium"), # Education levels
  y = c(20, 22, 22, 20), # Y-coordinates for annotations
  gender_r = c("Male", "Male", "Male", "Male") # Corresponding to facets
)

# Example annotation data
annotations <- tibble(
  x = c(1.5, 2),
  y = c(21, 29),
  label = c("**", "***"), # P-value annotations
  gender_r = c("Male", "Male")
)

Step 4: Adding Annotations in ggplot2

Incorporate these annotations into your ggplot2 chart, ensuring they align accurately with the respective facets.

geom_line(data = p_value_one, aes(x = x, y = y, group = 1)) +
  geom_line(data = p_value_two, aes(x = x, y = y, group = 1)) +
  geom_text(data = annotations, aes(x = x, y = y, label = label, group = gender_r), size = 7)

This article uses ggplot2 for effective bar chart styling, crucial in academic data presentation.
- Bar charts in ggplot2 are ideal for categorical data, showcasing groups and their quantitative measures.
- Key functions covered include ggplot() for initial plot creation and geom_col() for constructing bar charts.
- Advanced customization is achieved using geom_errorbar() for error bars and scale_fill_manual() for color themes.
- Showcases how to add p-values inside your ggplot.

Interested in the source code used in this analysis? Download it here.
Problem solving with angles | Oak National Academy Hello there. You made a great choice with today's lesson. It's gonna be a good one. My name is Dr. Robson and I'm gonna be supporting you through it. Let's get started. Welcome to today's lesson from the unit of angles. This lesson is called Problem Solving with angles, and by the end of today's lesson we'll be able to use our knowledge of angles to solve problems. Here are some previous keywords that may be useful during today's lesson, so you may want to pause the video if you want to remind yourself what any of these words mean and then press play when you're ready to continue. This lesson is broken into two learning cycles where we'll be looking at problems involving angles in each learning cycle. In the second learning cycle, these problems will be more complex than in the first one, but let's start off with our first set of problems with angles. When solving a problem involving angles, you might not be able to find the angle you want straight away. Sometimes you may need to find other unknown angles before you can find the one that you are asked for. But the good news is that the more angles you find, the easier it can be to find other angles. You may need to bear in mind that you may need to use multiple angle facts in order to solve a problem and the angle facts that you use may vary between problems. It all depends on what information you are presented with in the first place. Sometimes when you see a problem, you might see something that looks like a line segment. In other words, something that looks straight, but, just because something looks like a line segment doesn't necessarily mean it is one. It might not be exactly straight, it might be two line segments joined together. An angle that is very close to 180 degrees, but not quite. Usually a problem will clarify whether or not this is a case somewhere within its information. 
Throughout this lesson you may assume that anything that looks like a line segment is a line segment. In other words, if something looks straight, it is in this lesson. Let's take a look at an example of a problem together. Now let's find the value of X in this diagram. It contains an isosceles triangle, which we can see because it has a hash mark on two of its sides and then we can see that there's an angle coming off one of the vertices, which is an exterior angle at that point, and that angle is 62 degrees. How could we find the value of X and which angle facts would help us solve this problem along the way? It may require a couple of steps in order to get to our final solution and we may need to find some other angles within that diagram before we can find the value of x. Pause the video while you think about how we might approach this and then press play when you're ready to continue. Let's take a look at this together. Now we could start with the fact that we have that 62 degree angle on a straight line and angles forming a straight line sum to 180 degrees. So we could find that angle on the inside of that triangle by doing 180, subtract 62 to get 118 degrees. Now we've done that, we can focus more so on the triangle itself, it's an isosceles triangle, which means that base angles in isosceles triangle are equal. In other words, the X angle and the other angle, we don't know, they're equal to each other. We also know that angles in a triangle sum to 180 degrees, so we could do 180, subtract the angle we know, 118, and then divide it by two to get 31 as our answer for the missing angle. And that means our answer for X is also 31. Problems can be made more complex by including more shapes. 
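The two steps the video just walked through can be written out as a short derivation, restating the arithmetic from the transcript:

```latex
\begin{aligned}
\text{angle inside the triangle} &= 180^\circ - 62^\circ = 118^\circ
  && \text{(angles forming a straight line sum to } 180^\circ\text{)} \\
x &= \frac{180^\circ - 118^\circ}{2} = 31^\circ
  && \text{(base angles of an isosceles triangle are equal)}
\end{aligned}
```

Dividing by two in the second line works because the two base angles are equal, so the remaining 62 degrees is shared equally between them.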
For example, here we can see the figure from the previous question, but with an additional quadrilateral attached to it. We can also see that there are a pair of arrows that indicates that the top side of the quadrilateral is parallel to the bottom side of the isosceles triangle. This is important because it may affect which angle facts we can use. Let's find the value of Y in this diagram. Pause the video while you think about how we might approach this and which angle facts might be useful this time. Are there any other angles you could find before you can find the value of Y? Pause the video while you think about it and press play when you're ready to continue. Well, let's think about this together. Now the angle which is marked Y is part of a quadrilateral, and we know that angles in a quadrilateral sum to 360 degrees, but we can't use that fact straight away because there is another unknown angle in that quadrilateral. If we can work out that unknown angle in the top left corner, then we could use the fact about angles in a quadrilateral to work out the value of Y. So how can we work out that unknown angle in the top left corner? Well, we know that alternate angles in parallel lines are equal, and can we see which angle is alternate to the one we just highlighted? The angle in the bottom right corner of the isosceles triangle, which is 31 degrees, is alternate to the one that we're trying to find here in the top left corner, and they are between parallel lines, so that means they must be equal. The angle is 31 degrees. So now that we know that, we can use the fact that angles in a quadrilateral sum to 360 degrees. We can make an equation by summing together all the angles that we know as well as the Y and put it equal to 360. We could then simplify that equation by adding together the angles we know, and then we could subtract 235 from both sides to get Y equals 125. Let's check what we've learned. Here, we have a diagram containing an isosceles triangle.
Please find the value of A and justify your answer with reasoning. Write down which angle facts you use along the way. Pause the video while you do this and press play when you're ready for an answer. A is equal to 66. We can get to it by first using the fact that angles that form a straight line sum to 180 degrees, and then we can do 180 subtract 114 to get 66. Let's make this problem a bit more complicated now. Let's find the value of B. Justify your answer with reasoning. Pause while you do it and press play when you're ready for an answer. The answer is 66. The base angles of an isosceles triangle are equal, so we know that the angle labelled B is equal to the angle labelled 66 degrees. Let's now find another angle in this diagram. Let's find this one. What is the value of C? Justify your answer with reasoning. Pause the video while you do it and press play when you're ready for an answer. The answer is 48. We can use the fact that angles in a triangle sum to 180 degrees, and then we can do C is equal to 180 subtract two lots of 66, which gives 48. Let's now find another angle by making this question a little bit more complex, like so. What is the value of D? Justify your answer with reasoning. Pause the video while you do this and then press play when you're ready for an answer. The answer is 48. Alternate angles in parallel lines are equal, and that's what we can see with this diagram here. That top line we just added is parallel to the bottom side of the isosceles triangle. Let's make this diagram a bit more complex again by joining it up to make a quadrilateral. What is the value of E now? Justify your answer with reasoning. Pause the video while you do this and press play when you are ready for an answer. The answer is 107. Angles in a quadrilateral sum to 360 degrees, so we can do 360 subtract the angles that we know.
Now, you've just done five short questions, but this question could have been posed as it is now, where we need to find the value of E and all those other unknown angles that you have previously found could be missing. The way you solve that problem is to break it down the way we've just done: work out one angle and then use that to work out another angle, then use that to work out another angle, and write down which facts you use along the way. Well done. Okay, it's over to you now for task A. This task contains one question and here it is: find the value of each unknown represented by a letter and justify your answers with reasoning. In other words, don't just find the value; also write down which angle facts you use along the way. Pause the video while you do this and press play when you're ready for answers. Let's go through some answers. Question one: we're finding A, B, and C here. Well, A is equal to 53 because angles that form a straight line sum to 180 degrees. B is equal to 53 as well, because base angles in an isosceles triangle are equal, and C is equal to 74 because angles in a triangle sum to 180 degrees and we do 180 subtract two lots of 53. And then we have D is equal to 51 here because base angles in an isosceles triangle are equal. E is equal to 78 because angles in a triangle sum to 180 degrees, so we subtract the angles we know from 180. F is 102 because angles that form a straight line sum to 180 degrees, so we can do 180 subtract the 78 we just worked out, and G is equal to H, which is equal to 39, because angles in a triangle sum to 180 degrees and base angles in an isosceles triangle are equal, which is why G is equal to H. And then we have this one. There's only one unknown here, but we can't work it out straight away. We need to work out some other angles. This angle is 62 degrees because it's vertically opposite the other angle, and vertically opposite angles are equal.
This angle is 51 degrees because alternate angles in parallel lines are equal, and it's alternate to the 51 at the top, and now we can work out I. I is 67, because angles in a triangle sum to 180 degrees. And in this question we need to work out the value of J, but we need to work out other angles before we can get to it. These two angles are 53 degrees because angles in a triangle sum to 180 degrees and then base angles in an isosceles triangle are equal. This angle is 127 degrees because angles that form a straight line sum to 180, and 180 subtract 53 gives you 127, and now we have enough information to work out the value of J. J is equal to 132 because angles in a quadrilateral sum to 360 degrees. So far so good. Now let's move on to the next part of this lesson, where we're gonna look at some more complex problems. Some angle problems may provide you with information about the relationships between sizes of different angles within the figure. For example, here we have two angles that are on a straight line, but we don't know how big one angle is compared to the other. These relationships may be multiplicative relationships, and they may be expressed in the following ways. They may be expressed as a multiplication: we may be told explicitly that one angle is equal to two times the other angle. Or it may be expressed as a ratio: we may be told that the ratio between the angles is equal to two to one. That says the same thing; it still says that one angle is two times the other angle. Or it may be expressed using algebra: one angle may be labelled X and the other may be labelled 2X. It means the same thing again, that one angle is two times the other angle. In each of these cases, the same equation can be formed. We know that one angle is equal to double the other angle, and we know that these two angles sum to 180 degrees, so we can write the equation X plus 2X equals 180. Let's apply this now to a problem that looks a bit like this.
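As an aside, the straight-line equation just formed is not solved in the transcript, but it follows in two lines:

```latex
x + 2x = 180 \implies 3x = 180 \implies x = 60, \qquad 2x = 120.
```

So the two angles on the straight line would be 60 degrees and 120 degrees.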
We have a pentagon where we are told that one angle is 90 degrees, but we're not told what the other angles are. However, we are provided with some information that tells us how big one angle is compared to another. Let's use this information to find the size of angle BCD. Pause the video while you think about what steps we might take and then press play when you're ready to look at this together. Well, we could start by calling angle BCD X degrees, and then we could use the information provided to us at the bottom of this diagram to express the other unknown angles in terms of X. For example, we're told that angle EAB is equal to two times angle BCD, so angle EAB must be 2X, and we're told that angle ABC is equal to two times the angle we've just written down, angle EAB, so angle ABC must be two times 2X, which is 4X, and we're told that angle CDE is equal to angle EAB, so that means they both must be equal to 2X. Now that we've done that, and we know that one of the angles is 90 degrees and the other angles are written in terms of X, we can think about the fact that angles in a pentagon sum to 540 degrees, and that way we can use this to make an equation that looks a bit like this: the sum of all those angles equals 540 degrees. We can then simplify this equation, subtract 90 from both sides and divide each side by nine to get X equals 50. And let's just double check which angle we're looking for. We're looking for angle BCD, which we have labelled just with X, not 2X or 4X. So that means the size of angle BCD is equal to 50 degrees. Now this same question could have also been posed using ratios. So rather than explicitly saying that one angle is two times another angle, we could write it like this: the ratio between those three angles is two to four to one. It still tells us the same thing. We can still see that angle EAB is two times angle BCD, because we have a two on the first part of the ratio and a one on the third part of the ratio.
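To spell out the pentagon equation described above (the angles are 2X, 4X, X, 2X and 90 degrees, summing to 540 degrees):

```latex
2x + 4x + x + 2x + 90 = 540
\implies 9x + 90 = 540
\implies 9x = 450
\implies x = 50.
```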
And we can use all those same relationships by labelling the angle we want at the start X and then writing all the other ones as 2X and 2X and 4X, to get to the same point we had earlier, where we can create an equation and find the angle we want. Here we have another problem where we have a pair of parallel lines and an isosceles triangle in between them. We're not told the size of any angles, but we are told that the ratio of angle EFD to angle DBC to angle BFG is equal to one to two to six. Let's use this information to find the value of angle EFD. Pause the video while you think about what steps we might take in order to solve this problem, even if the only thing you can think about at this point is how we might get started, and then we'll work through it together in a second. Pause while you think and press play to continue. Okay, let's start by labelling angle EFD as X degrees, and then we can use that ratio to write the other two angles that are part of that ratio in terms of X, like so. We have angle DBC is equal to 2X and angle BFG is equal to 6X. And then we need to think about how we can use angle facts in order to write other angles in terms of X until we get to a point where we have a relationship that sums to something, or where one thing is equal to another thing, and then we can use that to find the value of X itself. Hmm, let's see what we can do. We have a point that is in between these two parallel lines, and sometimes when we see this, a point in between two parallel lines joined by a zigzag-like shape, C to B to D to F, it can be helpful to draw an additional parallel line, like so. And then we can think about how alternate angles in parallel lines are equal, because we have two pairs of alternate angles here. We have these two pairs here. One of those angles, which is below our new line, is alternate to the X, and the angle above that additional line is alternate to the 2X, so they must be equal.
That means that the angle altogether in that isosceles triangle is 3X degrees, and we know that base angles in an isosceles triangle are equal. So if that angle is 3X degrees, so is the other angle. And now we have three angles at a point on a straight line: X, 3X and 6X. Can you think about what fact we can use next? Angles that form a straight line sum to 180 degrees. So we can write this as an equation where we've got the sum of those, which is 10X, equals 180, and then we can divide both sides by 10 to get X equals 18. And let's check what angle we're trying to work out. We're trying to work out angle EFD, and that one is labelled as X, not 2X or 3X, so that means that angle EFD is equal to 18 degrees. Let's check what we've learned by doing a problem together now, and we'll break this problem down into lots of smaller problems. What do the interior angles of this polygon sum to? Pause while you write it down and press play when you're ready for an answer. The answer is 720 degrees. It's got six sides; it's a hexagon. So now you're told this information: angle FAB is equal to 2X, angle CDE is equal to 1.5 times angle FAB, and angle ABC is equal to two times angle FAB. And what you need to do is use this information to write an expression in terms of X for angle CDE, and an expression in terms of X for the size of angle ABC. Pause the video while you do that and press play when you're ready for answers. Your answers are 3X for angle CDE, because that's 1.5 times two, and then you've got 4X for angle ABC because that's two times two, and we have X as well. So now you have all this information, find the value of X. Pause the video while you do this and press play when you're ready for an answer. The answer is 50. You can get to it by making an equation based on adding these angles together, and then you can simplify the equation to get X equals 50. Now let's find an actual angle.
You know that X is 50; find the size of each of these angles. Pause the video while you do this and press play when you're ready for answers. Angle FAB is 100 degrees; that is by doing two times 50. Angle ABC is 200 degrees; that's by doing four times 50. And angle CDE is 150 degrees; that's by doing three times 50. Okay, it's over to you now for task B. This task contains four questions, and here are questions one and two. Pause the video while you do these and press play when you're ready for questions three and four. And here are questions three and four. Pause the video while you do these and press play when you are ready for answers. Let's go through some answers. Question one: you're given this information and we find the value of X. Well, we could use the information to write the other angles in terms of X, and then we can use the fact that these angles sum to 720 degrees to make this equation here. We could simplify the equation, rearrange it, and solve to get X equals 40. And then question two: we've got a parallelogram with some line segments inside it, and we need to find the value of X. Let's start by writing down as many other angles in terms of X as we can until we get to the point where we can create an equation. The angle that is in the opposite diagonal corner to 4X is also equal to 4X degrees. That's angle DEA, because diagonally opposite angles in a parallelogram are equal. That 4X is inside an isosceles triangle, so we can work out an expression for the other two angles of the isosceles triangle by subtracting 4X from 180 and then dividing by two, to get 90 subtract 2X degrees for each of those angles. We can then work out this angle here, CDA, by using the fact that angles on a straight line sum to 180 degrees when they are adjacent to each other. That way we can do 180 subtract one of the angles we just worked out, to get 90 plus 2X. That angle is also inside an isosceles triangle, so we can use the same process again to work out the other two missing angles.
In the isosceles triangle we can do 180 subtract the angle we've just worked out and divide it by two, to get 45 minus X degrees. And then let's look at the angle in the bottom right corner, angle BCD. We could write an expression for the whole of that angle by summing together the two parts we've worked out, and that'll give 3X plus 40 for the whole of that angle. Now that angle BCD is co-interior to the angle which is labelled 4X at the start. Co-interior angles between parallel lines sum to 180 degrees, so we can make this equation: 4X plus 3X plus 40 equals 180. We could simplify the equation and then rearrange it to solve to get the value of X as 20. And then question three: we have a parallelogram and we have two angles given to us as a ratio. However, neither of those two angles is equal to the angle labelled X. So what we may need to do here is use another letter to help us work out some other angles first. For example, we could use one unit of this ratio and call it Y, which means that angle DEF would be equal to 2Y degrees and angle BDE would be equal to 3Y degrees. We could use this fact, and that co-interior angles in parallel lines sum to 180 degrees, to create this equation and solve to get the value of Y as 36. Now that we know that Y is 36, we could work out some of those angles. For example, the angle in the bottom right corner, angle DEF, is 2Y; that would be two times 36, which is 72. And we could use that to find other angles in this diagram, because some of the angles are equal to that one. For example, in the top left corner of this parallelogram, that angle is diagonally opposite the one in the bottom right corner, so it must be equal; that one must be 72 degrees as well. And it's part of an isosceles triangle, which means the other base angle in the isosceles triangle is also 72 degrees, and because we have a third angle in the isosceles triangle, we could work that out to be 36 degrees, and then we have X on a straight line with the 36 degrees.
So we could use the fact that they add up to 180 degrees to find the value of X. Then we have question four. We have a pair of parallel lines with an isosceles triangle between them. Let's start by drawing an extra line which is parallel to the other two parallel lines. We can use alternate angles between parallel lines to write an expression for each of these: one is X degrees, the other is 2X degrees. The whole of that angle is equal to 3X degrees, and the triangle is isosceles, which means the angle in the top left corner of this isosceles triangle is also equal to 3X degrees. And then we could find the third angle of this isosceles triangle by using the fact that all three of those angles sum to 180 degrees. So we could do 180 subtract the sum of 3X, 2X and X to get an expression: 180 subtract 6X degrees. Now all three angles together on that straight line will sum to 180 degrees: 100 plus the angle we just worked out, 180 subtract 6X, plus X will make 180 degrees. We could simplify this equation and then rearrange it to solve it, getting X equals 20. There are other ways we could have done that as well. We could have used co-interior angles with the 100 degrees to create an equation on that top parallel line instead, and I'm sure you can probably find other ways as well. Great work today. Now let's summarise what we've learned during this lesson. Problems can be solved using a variety of angle facts. These may include facts about angles in parallel lines, interior and exterior angles in polygons, and many more. Problems can be solved and the solutions justified using these angle facts. Sometimes, though, you might not be able to find the angle you want straight away, and you may need to find other angles first. That's okay; we can do that. And finding unknown angles can make it easier to find other unknown angles. So the more angles you find, the easier it can be to find the answer you want. Well done today. Have a great day.
Research index for Jean-Baptiste Caillau

Journal papers
Edited volumes
Proceedings and book chapters

This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
C++ Program to Display Fibonacci Series using Loops, Recursion, Dynamic Programming - BTech Geeks In the previous article, we discussed the C++ Program to Reverse a Number. Let us learn how to display the Fibonacci series in a C++ program. Know the different methods to print the Fibonacci series in C++, explained step by step with sample programs. You can use the technique of your choice to display the Fibonacci series in no time. You can use this quick tutorial as a reference to resolve any doubts of yours. Methods to display the Fibonacci series in C++ In this article, we will understand how to display the Fibonacci series in C++. The methods that we will discuss are given below. Before looking at these methods, let us first understand what the Fibonacci series is. The Fibonacci numbers are the numbers in the following integer sequence. 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, ... Here we can see that the next number in the series is the sum of the previous two numbers. Now we will see different methods to display the series. Method 1: Using Loops In this method, we are given the number of terms up to which we have to print the series. Since the next number in the series is the sum of the previous two numbers, we have to store the first two numbers in variables ourselves, because these numbers are fixed, and from them we can generate the rest of the series. Let's write the code for this.
#include <iostream>
using namespace std;

int main() {
    int n = 10, t1 = 0, t2 = 1, nextTerm;
    cout << "Fibonacci Series: ";
    cout << t1 << " " << t2 << " ";    // the first two terms are fixed
    for (int i = 0; i < n - 2; i++) {  // print the remaining n - 2 terms
        nextTerm = t1 + t2;            // next term is the sum of the previous two
        t1 = t2;
        t2 = nextTerm;
        cout << nextTerm << " ";
    }
    return 0;
}

Output:

Fibonacci Series: 0 1 1 2 3 5 8 13 21 34

Method 2: Using Recursion

Since the next number in the series is the sum of the previous two, we trust that the recursive calls for the previous two terms return the correct values and simply add them together, with the first two terms as base cases. Let's write the code for this.

#include <iostream>
using namespace std;

// Returns the i-th term of the series (term 1 is 0, term 2 is 1).
int fibonacci(int n) {
    if (n == 1)
        return 0;
    if (n == 2)
        return 1;
    return fibonacci(n - 1) + fibonacci(n - 2);
}

int main() {
    int n = 10;
    for (int i = 1; i <= n; i++)
        cout << fibonacci(i) << " ";
    return 0;
}

Method 3: Using Dynamic Programming

When we analyse the recursive code above, we see that the recursive function is called for the same number many times, which is wasteful. In this approach, instead of recomputing the result for a number again and again, we store the result for that number in an array, so that if the number comes up again we simply return its result from the array instead of making further recursive calls. Let's write the code for this.

#include <iostream>
using namespace std;

// Memoised recursion: f[] caches results so each term is computed only once.
int fibonacci(int n, int f[]) {
    if (n == 1)
        return 0;
    if (n == 2)
        return 1;
    if (f[n] != -1)    // already computed: return the cached result
        return f[n];
    f[n] = fibonacci(n - 1, f) + fibonacci(n - 2, f);
    return f[n];
}

int main() {
    int n = 10;
    int f[11];                  // needs n + 1 slots so that index n is valid
    for (int i = 0; i <= n; i++)
        f[i] = -1;              // -1 marks "not computed yet"
    for (int i = 1; i <= n; i++)
        cout << fibonacci(i, f) << " ";
    return 0;
}

So these are the methods to display the Fibonacci series in C++.
What does the zero stand for? Zero is the integer denoted 0 that, when used as a counting number, means that no objects are present. It is the only integer (and, in fact, the only real number) that is neither negative nor positive. A number which is not zero is said to be nonzero. A root of a function is also sometimes known as "a zero of the function." What is an example of a placeholder? noun. something that marks or temporarily fills a place (often used attributively): I couldn't find my bookmark, so I put a coaster in my book as a placeholder. We're using placeholder art in this mock-up of the ad layout. What is 2 to the power? A power of two is a number of the form 2^n where n is an integer, that is, the result of exponentiation with the number two as the base and the integer n as the exponent. Powers of two whose exponents are powers of two (sequence A001146 in the OEIS):

n | 2^n | 2^(2^n)
4 | 16 | 65,536
5 | 32 | 4,294,967,296

What divided by 3 gives you 2? The answer to what divided by 3 equals 2 is 6. You can prove this by taking 6 and dividing it by 3, and you will see that the answer is 2. Tip: For future reference, when you are presented with a problem like "What divided by 3 equals 2?", all you have to do is multiply the two known numbers together. What is 2 called? In math, the squared symbol (²) is an arithmetic operator that signifies multiplying a number by itself. The "square" of a number is the product of the number and itself. Multiplying a number by itself is called "squaring" the number. What is the definition of divided? 1a : separated into parts or pieces. b of a leaf : cut into distinct parts by incisions extending to the base or to the midrib. c : having a barrier (such as a guardrail) to separate lanes of traffic going in opposite directions: a divided highway. What is a placeholder, short answer? A placeholder is also called dummy text or filler text. It is a character, word, or string of characters that temporarily holds the place of the final data.
Example: In the below screenshot, Email or phone is a placeholder. What does 6 to the power of 2 mean? If you are asked to take 6 and multiply it by 2, you are really doubling 6. In other words, 6 times 2 is like saying you have two 6's. When you take 6 and square it (raise it to the power of 2), you are taking 6 and multiplying it by itself. So, 6² = 6 * 6 = 36. What is two as a number? 2 (two) is a number, numeral and digit. It is the natural number following 1 and preceding 3. It is the smallest and only even prime number. What is the definition of division? Give an example of why you can't divide by zero. In ordinary arithmetic, the expression a/0 has no meaning, as there is no number which, when multiplied by 0, gives a (assuming a ≠ 0), and so division by zero is undefined. Since any number multiplied by zero is zero, the expression 0/0 is also undefined; when it is the form of a limit, it is an indeterminate form. Is dividing by zero acceptable? There is no number that you can multiply by 0 to get a non-zero number. There is NO solution, so any non-zero number divided by 0 is undefined. Why is 2 the best number? Two is the smallest even number, usually with the meaning of 'double', 'twinned' and 'again'. It is an auspicious number in Chinese culture, because Chinese people believe that 'good things come in pairs'. What does :3 mean? :3 is an emoticon which represents a "coy smile." What is the definition of placeholder? 1 : a person or thing that occupies the position or place of another person or thing. The bill would empower the governor to appoint a placeholder to a vacant U.S. Senate seat, to serve through the next general election cycle. Is zero just a placeholder? 0 (zero) is a number, and the numerical digit used to represent that number in numerals. It fulfills a central role in mathematics as the additive identity of the integers, real numbers, and many other algebraic structures.
As a digit, 0 is used as a placeholder in place value systems. What's the definition of a divisor? A divisor is a number that divides another number either completely or with a remainder. A divisor is represented in a division equation as: Dividend ÷ Divisor = Quotient. On dividing 20 by 4, we get 5. Here 4 is the number that divides 20 completely into 5 parts and is known as the divisor. What does zero as a placeholder mean? The zero is called a placeholder. It's not worth anything on its own, but it changes the value of other digits. In this case zero changes the number 52 to a much larger number, 502. If there are no units there will be a zero in the units column. For example, in the number 20 there is a zero in the units column. What is the definition of a remainder? Remainder means something which is 'left over' or 'remaining'. When one number cannot divide another number completely, it leaves a remainder. Does 1 count as a divisor? 1 and −1 divide (are divisors of) every integer. Every integer (and its negation) is a divisor of itself. A non-zero integer with at least one non-trivial divisor is known as a composite number, while the units −1 and 1 and prime numbers have no non-trivial divisors. What is the definition of quotient? 1 : the number resulting from the division of one number by another. What is the purpose of a placeholder? Answer: A placeholder is a character, word, or string of characters that temporarily takes the place of the final data. Placeholders can also be commented out to prevent the computer program from executing part of the code. What is 0 divided by 3? 0 divided by 3 is 0. Is zero a real number? What Are Real Numbers? Real numbers consist of zero (0), the positive and negative integers (−3, −1, 2, 4), and all the fractional and decimal values in between (0.4, 3.1415927, 1/2). Real numbers are divided into rational and irrational numbers. What is the smallest number that you know? What is 3 to the power of 4?
Examples: 3 raised to the power of 4 is written 3^4 = 81.
tcl: Testing in Conditional Likelihood Context An implementation of hypothesis testing in an extended Rasch modeling framework, including sample size planning procedures and power computations. Provides 4 statistical tests, i.e., gradient test (GR), likelihood ratio test (LR), Rao score or Lagrange multiplier test (RS), and Wald test, for testing a number of hypotheses referring to the Rasch model (RM), linear logistic test model (LLTM), rating scale model (RSM), and partial credit model (PCM). Three types of functions for power and sample size computations are provided. Firstly, functions to compute the sample size given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha, and the power of the test. Secondly, functions to evaluate the power of the tests given a user-specified (predetermined) deviation from the hypothesis to be tested, the level alpha of the test, and the sample size. Thirdly, functions to evaluate the so-called post hoc power of the tests. This is the power of the tests given the observed deviation of the data from the hypothesis to be tested and a user-specified level alpha of the test. Power and sample size computations are based on a Monte Carlo simulation approach. It is computationally very efficient. The variance of the random error in computing power and sample size arising from the simulation approach is analytically derived by using the delta method. Draxler, C., & Alexandrowicz, R. W. (2015), <doi:10.1007/s11336-015-9472-y>. 
Version: 0.2.1
Depends: R (≥ 3.5.0)
Imports: eRm, psychotools, ltm, numDeriv, graphics, grDevices, stats, methods, MASS, splines, Matrix, lattice, rlang
Suggests: knitr, rmarkdown
Published: 2024-09-16
DOI: 10.32614/CRAN.package.tcl
Author: Clemens Draxler [aut, cre], Andreas Kurz [aut]
Maintainer: Clemens Draxler <clemens.draxler at umit-tirol.at>
License: GPL-2
NeedsCompilation: no
CRAN checks: tcl results
Reference manual: tcl.pdf
Vignettes: Testing in Conditional Likelihood Context: The R Package tcl (source, R code)
Package source: tcl_0.2.1.tar.gz
Windows binaries: r-devel: tcl_0.2.1.zip, r-release: tcl_0.2.1.zip, r-oldrel: tcl_0.2.1.zip
macOS binaries: r-release (arm64): tcl_0.2.1.tgz, r-oldrel (arm64): tcl_0.2.1.tgz, r-release (x86_64): tcl_0.2.1.tgz, r-oldrel (x86_64): tcl_0.2.1.tgz
Old sources: tcl archive
Please use the canonical form https://CRAN.R-project.org/package=tcl to link to this page.
{"url":"https://cloud.r-project.org/web/packages/tcl/index.html","timestamp":"2024-11-02T09:30:35Z","content_type":"text/html","content_length":"9008","record_id":"<urn:uuid:2d9ce9dd-28ce-401b-9721-62af2b1a3813>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00628.warc.gz"}
What Are Polynomial Models?

Polynomial Model Structure

A polynomial model uses a generalized notion of transfer functions to express the relationship between the input, u(t), the output y(t), and the noise e(t) using the equation:

$A(q)y(t)=\sum_{i=1}^{nu}\frac{B_{i}(q)}{F_{i}(q)}u_{i}(t-nk_{i})+\frac{C(q)}{D(q)}e(t)$

The variables A, B, C, D, and F are polynomials expressed in the time-shift operator q^-1. u_i is the ith input, nu is the total number of inputs, and nk_i is the ith input delay that characterizes the transport delay. The variance of the white noise e(t) is assumed to be $\lambda$. For more information about the time-shift operator, see Understanding the Time-Shift Operator q.

In practice, not all the polynomials are simultaneously active. Often, simpler forms, such as ARX, ARMAX, Output-Error, and Box-Jenkins, are employed. You also have the option of introducing an integrator in the noise source so that the general model takes the form:

$A(q)y(t)=\sum_{i=1}^{nu}\frac{B_{i}(q)}{F_{i}(q)}u_{i}(t-nk_{i})+\frac{C(q)}{D(q)}\frac{1}{1-q^{-1}}e(t)$

For more information, see Different Configurations of Polynomial Models.

You can estimate polynomial models using time or frequency domain data. For estimation, you must specify the model order as a set of integers that represent the number of coefficients for each polynomial you include in your selected structure: na for A, nb for B, nc for C, nd for D, and nf for F. You must also specify the number of samples nk corresponding to the input delay (dead time), given by the number of samples before the output responds to the input. The number of coefficients in denominator polynomials is equal to the number of poles, and the number of coefficients in the numerator polynomials is equal to the number of zeros plus 1.
When the dynamics from u(t) to y(t) contain a delay of nk samples, then the first nk coefficients of B are zero. For more information about the family of transfer-function models, see the corresponding section in System Identification: Theory for the User, Second Edition, by Lennart Ljung, Prentice Hall PTR.

Understanding the Time-Shift Operator q

The general polynomial equation is written in terms of the time-shift operator q^-1. To understand this time-shift operator, consider the following discrete-time difference equation:

$y(t)+a_{1}y(t-T)+a_{2}y(t-2T)=b_{1}u(t-T)+b_{2}u(t-2T)$

where y(t) is the output, u(t) is the input, and T is the sample time. q^-1 is a time-shift operator that compactly represents such difference equations using $q^{-1}u(t)=u(t-T)$:

$y(t)+a_{1}q^{-1}y(t)+a_{2}q^{-2}y(t)=b_{1}q^{-1}u(t)+b_{2}q^{-2}u(t)$

or

$A(q)y(t)=B(q)u(t)$

In this case, $A(q)=1+a_{1}q^{-1}+a_{2}q^{-2}$ and $B(q)=b_{1}q^{-1}+b_{2}q^{-2}$. This q description is completely equivalent to the Z-transform form: q corresponds to z.

Different Configurations of Polynomial Models

These model structures are subsets of the following general polynomial equation:

$A(q)y(t)=\sum_{i=1}^{nu}\frac{B_{i}(q)}{F_{i}(q)}u_{i}(t-nk_{i})+\frac{C(q)}{D(q)}e(t)$

The model structures differ by how many of these polynomials are included in the structure. Thus, different model structures provide varying levels of flexibility for modeling the dynamics and noise. The following table summarizes common linear polynomial model structures supported by the System Identification Toolbox™ product.
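The difference equation above is easy to simulate directly by rearranging it for y(t); a minimal Python sketch (the coefficient values and step input are made-up illustrations, not from the toolbox):

```python
# y(t) + a1*y(t-T) + a2*y(t-2T) = b1*u(t-T) + b2*u(t-2T)
# rearranged for simulation: y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + b2*u(t-2)
a1, a2 = -1.5, 0.7        # example A(q) coefficients (a stable choice)
b1, b2 = 1.0, 0.5         # example B(q) coefficients
u = [1.0] * 50            # unit step input
y = [0.0, 0.0]            # zero initial conditions
for t in range(2, len(u)):
    y.append(-a1 * y[t - 1] - a2 * y[t - 2] + b1 * u[t - 1] + b2 * u[t - 2])

# steady-state gain of B(q)/A(q) at q = 1: (b1 + b2) / (1 + a1 + a2) = 7.5
gain = (b1 + b2) / (1 + a1 + a2)
print(round(gain, 3), round(y[-1], 3))
```

The simulated step response settles at the steady-state gain predicted by evaluating the polynomials at q = 1.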
If you have a specific structure in mind for your application, you can decide whether the dynamics and the noise have common or different poles. A(q) corresponds to poles that are common for the dynamic model and the noise model. Using common poles for dynamics and noise is useful when the disturbances enter the system at the input. F_i determines the poles unique to the system dynamics, and D determines the poles unique to the disturbances.

Model | Equation | Description

ARX | $A(q)y(t)=\sum_{i=1}^{nu}B_{i}(q)u_{i}(t-nk_{i})+e(t)$ | The noise model is $\frac{1}{A}$ and the noise is coupled to the dynamics model. ARX does not let you model noise and dynamics independently. Estimate an ARX model to obtain a simple model at good signal-to-noise ratios.

ARIX | $Ay=Bu+\frac{1}{1-q^{-1}}e$ | Extends the ARX structure by including an integrator in the noise source, e(t). This is useful in cases where the disturbance is not stationary.

ARMAX | $A(q)y(t)=\sum_{i=1}^{nu}B_{i}(q)u_{i}(t-nk_{i})+C(q)e(t)$ | Extends the ARX structure by providing more flexibility for modeling noise using the C parameters (a moving average of white noise). Use ARMAX when the dominating disturbances enter at the input. Such disturbances are called load disturbances.

ARIMAX | $Ay=Bu+C\frac{1}{1-q^{-1}}e$ | Extends the ARMAX structure by including an integrator in the noise source, e(t). This is useful in cases where the disturbance is not stationary.

Box-Jenkins (BJ) | $y(t)=\sum_{i=1}^{nu}\frac{B_{i}(q)}{F_{i}(q)}u_{i}(t-nk_{i})+\frac{C(q)}{D(q)}e(t)$ | Provides completely independent parameterization for the dynamics and the noise using rational polynomial functions. Use BJ models when the noise does not enter at the input, but is primarily a measurement disturbance. This structure provides additional flexibility for modeling noise.

Output-Error (OE) | $y(t)=\sum_{i=1}^{nu}\frac{B_{i}(q)}{F_{i}(q)}u_{i}(t-nk_{i})+e(t)$ | Use when you want to parameterize dynamics, but do not want to estimate a noise model. In this case, the noise model is $H=1$ in the general equation and the white noise source e(t) affects only the output.

The polynomial models can contain one or more outputs and zero or more inputs. The System Identification app supports direct estimation of ARX, ARMAX, OE and BJ models. You can add a noise integrator to the ARX, ARMAX and BJ forms. However, you can use polyest to estimate all five polynomials or any subset of polynomials in the general equation. For more information about working with pem, see Using polyest to Estimate Polynomial Models.

Continuous-Time Representation of Polynomial Models

In continuous time, the general frequency-domain equation is written in terms of the Laplace transform variable s, which corresponds to a differentiation operation. In the continuous-time case, the underlying time-domain model is a differential equation and the model order integers represent the number of estimated numerator and denominator coefficients. For example, na=3 and nb=2 correspond to the following model:

$A(s)=s^{3}+a_{1}s^{2}+a_{2}s+a_{3}$
$B(s)=b_{1}s+b_{2}$

You can only estimate continuous-time polynomial models directly using continuous-time frequency-domain data. In this case, you must set the Ts data property to 0 to indicate that you have continuous-time frequency-domain data, and use the oe command to estimate an Output-Error polynomial model.
Continuous-time models of other structures such as ARMAX or BJ cannot be estimated. You can obtain those forms only by direct construction (using idpoly), conversion from other model types, or by converting a discrete-time model into continuous time (d2c). Note that the OE form represents a transfer function expressed as a ratio of numerator (B) and denominator (F) polynomials. For such forms, consider using the transfer function models, represented by idtf models. You can estimate transfer function models using both time and frequency domain data. In addition to the numerator and denominator polynomials, you can also estimate transport delays. See idtf and tfest for more information.

Multi-Output Polynomial Models

For a MIMO polynomial model with ny outputs and nu inputs, the relation between inputs and outputs for the lth output can be written as:

$\sum_{j=1}^{ny}A_{lj}(q)y_{j}(t)=\sum_{i=1}^{nu}\frac{B_{li}(q)}{F_{li}(q)}u_{i}(t-nk_{i})+\frac{C_{l}(q)}{D_{l}(q)}e_{l}(t)$

The A polynomial array (A_ij; i=1:ny, j=1:ny) is stored in the A property of the idpoly object. The diagonal polynomials (A_ii; i=1:ny) are monic, that is, the leading coefficients are one. The off-diagonal polynomials (A_ij; i ≠ j) contain a delay of at least one sample, that is, they start with zero. For more details on the orders of multi-output models, see Polynomial Sizes and Orders of Multi-Output Polynomial Models. You can create multi-output polynomial models by using the idpoly command or estimate them using ar, arx, bj, oe, armax, and polyest. In the app, you can estimate such models by choosing a multi-output data set and setting the orders appropriately in the Polynomial Models dialog box.

See Also: idpoly | ar | arx | bj | oe | armax | polyest
{"url":"https://fr.mathworks.com/help/ident/ug/what-are-polynomial-models.html","timestamp":"2024-11-14T07:27:17Z","content_type":"text/html","content_length":"98477","record_id":"<urn:uuid:59f8563e-cb6a-4564-9de4-6be629220a47>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00080.warc.gz"}
Gibbs Sampling of Continuous Potentials on a Quantum Computer

Proceedings of the 41st International Conference on Machine Learning, PMLR 235:36322-36371, 2024.

Gibbs sampling from continuous real-valued functions is a challenging problem of interest in machine learning. Here we leverage quantum Fourier transforms to build a quantum algorithm for this task when the function is periodic. We use the quantum algorithms for solving linear ordinary differential equations to solve the Fokker–Planck equation and prepare a quantum state encoding the Gibbs distribution. We show that the efficiency of interpolation and differentiation of these functions on a quantum computer depends on the rate of decay of the Fourier coefficients of the function. We view this property as a concentration of measure in the Fourier domain, and also provide functional analytic conditions for it. Our algorithm makes zeroth order queries to a quantum oracle of the function and achieves polynomial quantum speedups in mean estimation in the Gibbs measure for generic non-convex periodic functions. At high temperatures the algorithm also allows for exponentially improved precision in sampling from Morse functions.
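For contrast with the quantum algorithm in the abstract, classical Gibbs sampling from a periodic potential can be sketched with a simple Metropolis chain; the potential, temperature, and proposal width below are illustrative choices, not taken from the paper:

```python
import math, random

random.seed(0)
beta = 1.0                      # inverse temperature (illustrative)
V = lambda x: math.cos(x)       # an arbitrary periodic potential on [0, 2*pi)
# target Gibbs density: p(x) proportional to exp(-beta * V(x))

x, samples = 0.0, []
for _ in range(200_000):
    prop = (x + random.gauss(0.0, 1.0)) % (2 * math.pi)
    # symmetric proposal + Metropolis accept/reject preserves the Gibbs measure
    if random.random() < math.exp(-beta * (V(prop) - V(x))):
        x = prop
    samples.append(x)

# probability mass should concentrate near x = pi, where V is minimal
near_min = sum(abs(s - math.pi) < 1.0 for s in samples) / len(samples)
print(round(near_min, 2))
```

A classical chain like this needs many sequential steps to mix; the paper's point is that a quantum algorithm can prepare a state encoding the same Gibbs distribution with polynomial speedups in mean estimation.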
{"url":"https://proceedings.mlr.press/v235/motamedi24a.html","timestamp":"2024-11-04T23:52:34Z","content_type":"text/html","content_length":"16099","record_id":"<urn:uuid:4bb9ef67-882f-4f0f-803c-b486d204e787>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00281.warc.gz"}
difference between a square and a rhombus - Sinaumedia

Understanding the Key Differences Between a Square and a Rhombus

When it comes to geometry, two shapes that are often confused with each other are the square and the rhombus. At first glance, both shapes appear to be very similar, since both have four sides of equal length. However, there are some key differences between them that set them apart.

Definition of a Square

A square is a four-sided polygon where all sides are equal in length and all angles are right angles (90 degrees). It can be considered a special type of rectangle, but with all sides of equal length. In other words, every square is a rectangle, but not every rectangle is a square.

Definition of a Rhombus

A rhombus is also a four-sided polygon. Its defining feature is that all four sides are of the same length, but unlike in a square, the angles are not necessarily right angles. The opposite angles in a rhombus are congruent. In fact, every square is a rhombus, but not every rhombus is a square.

Difference in Shape

The most noticeable difference between a square and a rhombus is the shape. A square has four sides of equal length with right angles at each of the four corners; that is, all four corners are 90° angles. In contrast, a non-square rhombus has four sides of equal length, but its corners are not right angles: its sides meet to form two acute and two obtuse angles.

Difference in Area

While a square and a rhombus may have four sides of equal length, their areas are not necessarily equal. A square has the maximum area of any quadrilateral with the same perimeter, whereas a non-square rhombus with the same perimeter has a smaller area. The reason is that the area of a rhombus with side s and interior angle θ is s² sin θ, which is largest when θ = 90°, that is, when the rhombus is a square; tilting the rhombus away from the square shape flattens it and reduces the area.

Difference in Diagonal Lengths

Another notable difference between a square and a rhombus is the length of their diagonals.
The two diagonals of a square are equal in length (each is √2 times the side, so longer than a side) and bisect each other at right angles. In a rhombus, on the other hand, the two diagonals are generally not of equal length, although they still bisect each other at right angles. Each diagonal divides the rhombus into two congruent triangles. In conclusion, the square and the rhombus are two shapes that may look similar but are different in several ways. A square has four sides of equal length with right angles at each corner, while a rhombus has four sides of equal length but with angles that need not be right angles. They also generally have different areas and diagonal lengths. By understanding these differences, you can easily tell a square from a rhombus.

Table: difference between a square and a rhombus

Definition
- Square: a quadrilateral with all sides equal in length and all angles equal to 90 degrees.
- Rhombus: a quadrilateral with all sides equal in length and opposite angles equal to each other.

Properties
- Square: all sides are equal in length; all angles are equal to 90 degrees; diagonals are equal in length and bisect each other at right angles.
- Rhombus: all sides are equal in length; opposite angles are equal to each other; diagonals bisect each other at right angles.

Shape
- Square: four equal sides and four right angles; its shape is therefore a rectangle with all sides equal.
- Rhombus: four equal sides and opposite angles that are equal; its shape is usually represented as a leaning square.

Examples
- Square: a chessboard square, a post-it note, a Rubik's cube.
- Rhombus: a playing card diamond, a street sign, a kite.
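The area and diagonal comparisons can be checked numerically with the standard formulas area = s² sin θ and diagonals 2s cos(θ/2), 2s sin(θ/2); a short Python sketch (the side length and angle are arbitrary examples):

```python
import math

def rhombus(side, angle_deg):
    """Area and diagonals of a rhombus with the given side and interior angle."""
    theta = math.radians(angle_deg)
    area = side ** 2 * math.sin(theta)       # s^2 * sin(theta)
    d1 = 2 * side * math.cos(theta / 2)      # the diagonals bisect the angles
    d2 = 2 * side * math.sin(theta / 2)
    return area, d1, d2

# the square is the special case angle = 90: maximal area, equal diagonals
sq_area, sq_d1, sq_d2 = rhombus(4, 90)
assert abs(sq_area - 16) < 1e-9 and abs(sq_d1 - sq_d2) < 1e-9

# a leaning rhombus with the same side length: smaller area, unequal diagonals
rh_area, rh_d1, rh_d2 = rhombus(4, 60)
assert rh_area < sq_area and abs(rh_d1 - rh_d2) > 1e-9
assert abs(rh_d1 * rh_d2 / 2 - rh_area) < 1e-9   # area = product of diagonals / 2
print(round(sq_area, 3), round(rh_area, 3))      # → 16.0 13.856
```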
{"url":"https://sinaumedia.com/difference-between-a-square-and-a-rhombus/","timestamp":"2024-11-08T04:07:23Z","content_type":"text/html","content_length":"139122","record_id":"<urn:uuid:759a7ea5-72fd-484d-88cc-ddd2260a1640>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00372.warc.gz"}
On May 17, 2018 the Faculty of Fundamental Sciences of Bauman Moscow State Technical University will hold the second meeting of the Scientific Workshop "BMSTU Mathematical Colloquium". It addresses a broad mathematical audience, including keen students. The workshop aims to give listeners a general view of various areas of modern mathematics. The workshop will be held on Thursday at 5:30 pm, room 1108, BMSTU Laboratory Building, 2/18, Rubtsovskaya embankment.

Workshop instructors:
· Manturov Vasiliy Olegovich
· Fedorovskiy Konstantin Yurevich
· Filinovskiy Aleksey Vladislavovich

Workshop secretary:
· Gargyants Lidia Vladimirovna

If you are going to come, or if you have any questions, please write to the workshop's e-mail address (shown on the site; JavaScript is required to view it).

The workshop topic: Groups G_{n}^{k}, high-dimensional analogs of braids and invariants of topological spaces. The report will be given by Professor of the Russian Academy of Sciences, Doctor of Physical and Mathematical Sciences, BMSTU Professor Manturov Vassily Olegovich. The abstract of his report is available as a PDF file.
{"url":"http://fn.bmstu.ru/en/workshops-fn-en/item/781-the-fourth-meeting-of-the-scientific-workshop-bmstu-mathematical-colloquium-fs-en","timestamp":"2024-11-09T22:41:15Z","content_type":"application/xhtml+xml","content_length":"45636","record_id":"<urn:uuid:7b343117-384c-40f0-be0a-4cd09d7ef580>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00569.warc.gz"}
Curl - Calculus 3

All Calculus 3 Resources / Example Questions

Example Question #1 : Curl
Calculate the curl for the following vector field.
Correct answer:
In order to calculate the curl, we need to recall the formula. Now let's apply this to our situation. Thus the curl is

Example Question #1 : Curl
Calculate the curl for the following vector field.
Correct answer:
In order to calculate the curl, we need to recall the formula. Now let's apply this to our situation. Thus the curl is

Example Question #1 : Curl
Find the curl of the force field
Possible Answers: None of the other answers
Correct answer:
Curl is probably best remembered by the determinant formula

Example Question #1 : Curl
Correct answer:
For any vector field, the curl gives us the amount of rotation in the field, while the divergence tells us how much the vectors move in a linear fashion. When vectors move in purely circular motion, no linear motion is possible. Thus the divergence of the curl of any arbitrary vector field is zero.

Example Question #1 : Curl
Evaluate the curl of the force field
Correct answer:
To evaluate the curl of a force field, we use Curl

Example Question #3 : Curl
Correct answer:
The curl of a vector is defined by the determinant of the following 3x3 matrix:
For the given vector, we can calculate this determinant

Example Question #7 : Curl
Given that F is a vector function and f is a scalar function, which of the following operations results in a scalar?
Correct answer:
For each of the given expressions: The remaining answer is:

Example Question #8 : Curl
Given that F is a vector function and f is a scalar function, which of the following expressions is undefined?
Correct answer:
The cross product of a scalar function is undefined. The expression in the parentheses of: is the cross product of a scalar function, therefore the entire expression is undefined.
For the other solutions:

Example Question #9 : Curl
Compute the curl of the following vector function:
Correct answer:
For a vector function, we calculate the curl as:

Example Question #10 : Curl
Compute the curl of the following vector function:
Correct answer:
For a vector function, we calculate the curl as:
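The determinant formula and the identity div(curl F) = 0 can also be checked numerically; the vector field below is an arbitrary smooth example (not one of the quiz problems), differentiated with central differences:

```python
h = 1e-3   # central-difference step

def F(x, y, z):
    # an arbitrary smooth example field, not one of the quiz problems
    return (x * y * z, x + y ** 2 * z, x * z ** 3 + y)

def partial(f, i, p):
    lo, hi = list(p), list(p)
    lo[i] -= h
    hi[i] += h
    return (f(*hi) - f(*lo)) / (2 * h)

def curl(p):
    Fx = lambda *q: F(*q)[0]
    Fy = lambda *q: F(*q)[1]
    Fz = lambda *q: F(*q)[2]
    return (partial(Fz, 1, p) - partial(Fy, 2, p),   # dFz/dy - dFy/dz
            partial(Fx, 2, p) - partial(Fz, 0, p),   # dFx/dz - dFz/dx
            partial(Fy, 0, p) - partial(Fx, 1, p))   # dFy/dx - dFx/dy

def div_curl(p):
    comps = [lambda *q, i=i: curl(q)[i] for i in range(3)]
    return sum(partial(comps[i], i, p) for i in range(3))

c = curl((1.0, 2.0, 0.5))
print([round(v, 6) for v in c])          # exact curl here is (1 - y^2, x*y - z^3, 1 - x*z)
print(abs(div_curl((1.0, 2.0, 0.5))))    # div(curl F) = 0 up to floating-point roundoff
```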
{"url":"https://www.varsitytutors.com/calculus_3-help/curl","timestamp":"2024-11-13T21:23:25Z","content_type":"application/xhtml+xml","content_length":"180417","record_id":"<urn:uuid:d4b483cd-fabb-450b-9788-10af438ea521>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00835.warc.gz"}
Congruent Triangles Worksheet Answers - TraingleWorksheets.com

Congruent Triangles Worksheet Answers – Triangles are among the fundamental shapes in geometry. Understanding the triangle is essential to learning more advanced geometric concepts. In this blog we will go over the different types of triangles, triangle angles, and how to calculate the perimeter and area of a triangle, and provide examples of each.

Types of Triangles

There are three kinds of triangles: equilateral, isosceles, and scalene.
• Equilateral triangles have three equal sides and three angles of 60 degrees.
• Isosceles triangles have two equal sides and two equal angles.
• Scalene triangles have three sides of different lengths and three different angles.
Examples of each type of triangle will be shown.

Triangle Angles

There are three kinds of angles in triangles: acute, right, and obtuse.
• Acute angles are less than 90 degrees.
• Right angles are exactly 90 degrees.
• Obtuse angles exceed 90 degrees.
Examples of each angle will be shown.

Perimeter of Triangles

The perimeter of a triangle is the total of the lengths of its three sides. To calculate the perimeter of a triangle, you add up the lengths of the three sides. The formula for the perimeter of a triangle is:

Perimeter = Side A + Side B + Side C

A few examples of how to determine the perimeter of a triangle will be provided using various kinds of triangles.

Area of Triangles

The area of a triangle is the amount of space contained within the triangle. To determine the area of a triangle, you need to know the length of its base and the height of the triangle. The formula for the area of a triangle is:

Area = (Base × Height) / 2

Examples of how to determine the area of a triangle will be provided using various triangles. Understanding triangles is an essential element of learning about geometry.
Understanding the different types of triangles, triangle angles, and how to calculate the perimeter and area of a triangle will help you to solve more difficult geometric problems. By using our comprehensive triangle worksheets you can boost your knowledge of these fundamental concepts and elevate your geometry skills to the next level.

Gallery of Congruent Triangles Worksheet Answers
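The perimeter and area formulas above translate directly to code; a short Python sketch (the 3-4-5 triangle is an illustrative example, and Heron's formula is an extra check not covered in the article):

```python
import math

def perimeter(a, b, c):
    # Perimeter = Side A + Side B + Side C
    return a + b + c

def area(base, height):
    # Area = (Base x Height) / 2
    return base * height / 2

def heron(a, b, c):
    # Heron's formula: area from the three side lengths alone
    s = (a + b + c) / 2          # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# example: the 3-4-5 right triangle (its legs serve as base and height)
print(perimeter(3, 4, 5))        # → 12
print(area(3, 4))                # → 6.0
print(round(heron(3, 4, 5), 6))  # → 6.0
```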
{"url":"https://www.traingleworksheets.com/congruent-triangles-worksheet-answers/","timestamp":"2024-11-13T12:43:53Z","content_type":"text/html","content_length":"84018","record_id":"<urn:uuid:9d545d75-7981-43c7-ab6c-89983c44a061>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00669.warc.gz"}
My Quantum Spin: Qubit Ordering and Operations Picking up from Learning Quantum Computing, I continue chronicling my learning journey through quantum computing. This post serves two purposes. First, I had a personal a-ha moment while manually working through the math to find the resulting statevector of a simple circuit. Namely, how to correctly and consistently compute tensor products of unitary matrices. Second, in association with that moment, I investigated the ramifications of how one may order the qubits in a circuit. In this post, I will be working with the two similar but slightly different circuits shown here: The difference resides in the order of the qubits. The first qubit, q0, is on top in the first diagram and on bottom in the second. Why is this important? In all of my learning resources, except one, the first qubit is on top. The exception insisted on putting it at the bottom. As a beginner, the situation bothered me. How could I reconcile this in my mind? Stay tuned. First Circuit Let's start by examining the first circuit. The circuit has no special significance (that I know of). It is simple enough to demonstrate what I learned. As stated, it is the qubit order I see in most learning resources and diagramming tools: top to bottom. In this diagram, q0 is the first qubit, q1 is the second, and q2 is the third. I want to find the resulting statevector of this circuit. The correct answer depends on correctly defining the intermediate unitary operations. I placed a barrier down the middle of the diagram to cleanly separate the operations. The operations before the barrier may be considered to be executed simultaneously. They consist of a Pauli-Z gate on q0, a Hadamard gate in q1, and no operation on q2. The "no operation" can be represented mathematically by an identity gate. To consider the effect of these simultaneous operations on the circuit we need to take a tensor (Kronecker) product of including Z, H, and I (the identity operation). 
One might go ahead and do this:

Z ⊗ H ⊗ I

This will eventually lead to the wrong statevector. The correct product is:

I ⊗ H ⊗ Z

How do we keep this straight? Remember that the system's initial state is |000> where the rightmost digit represents the first qubit (q0). The order of arguments in the tensor product is the same from right to left. This was my a-ha moment. How about the effective operator to the right of the barrier? This time we have a multi-qubit operation, the CNOT gate. The operation on that side of the barrier is:

CNOT ⊗ I

Based on the previous IHZ operation this seems to make sense; the operation on q0 is the identity matrix and it is the righthand argument. Following the math through from the initial state, we get a final statevector of:

(|000> + |110>)/√2

This is the correct result. The result makes intuitive sense, with the only possible states being |000> and |110>. Qubit q0 only had a phase flip and qubits q1 and q2 are entangled in a Bell state. All we had to do was remember the correct ordering of the operators.

Second Circuit

Things get interesting when we reverse the numbered order of the qubits: now q0 is on the bottom. Again we start from the initial |000> state. Let's look at the resulting statevector first before we look at the math that gets us there:

(|000> + |011>)/√2

There might be momentary shock due to the different result, but again it is easy to see why this makes sense. The possible states of the system are |000> and |011>. The first two qubits, q0 and q1, are in a Bell state while the third qubit q2 only experienced a phase flip. The result is different from the first circuit but consistent. Evaluating the circuit, we start to the left of the barrier just as we did previously. This time, the ZHI tensor product is the correct operator:

Z ⊗ H ⊗ I

Following the principle according to the ordering of qubits I discovered, we should expect the effective operator after the barrier to be:

I ⊗ CNOT

This looks good but wait, that isn't the same CNOT matrix we used in the first circuit.
Here are the two CNOT matrices used in the first and second circuits, respectively:

First circuit (4x4 block acting on |q2 q1>, control q1 in the right-hand position):
1 0 0 0
0 0 0 1
0 0 1 0
0 1 0 0

Second circuit (4x4 block acting on |q1 q0>, control q1 in the left-hand position):
1 0 0 0
0 1 0 0
0 0 0 1
0 0 1 0

How am I going to remember which to use? The rule of thumb states that if the first qubit in CNOT is the control, use the first matrix. Otherwise, if the first qubit is the target, use the second matrix. Yuk. I wasn't satisfied. A formal mathematical derivation would be better. Moreover, what about the following situation? The CNOT operator to be applied after the barrier now involves all three qubits! There is no simple two-factor tensor product to express the operator. Of course, we may imagine more complex situations with more qubits. To answer my questions, I found several Stack Exchange threads to be helpful. Memorizing multi-qubit matrices will lead me astray. I need to formulate the correct matrix based on the situation. Strictly speaking, a practitioner may rely on simulators to derive statevectors and the operators needed to achieve them. I always want to know why something works, so doing the math is valuable. I hope this post was thought-provoking.
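The whole first-circuit calculation can be reproduced in a few lines of plain Python; building CNOT as a permutation of basis states (rather than memorizing a 4x4 matrix) sidesteps the matrix-ordering question, which matches the conclusion that the correct matrix should be derived from the situation:

```python
import math

# single-qubit gates
I = [[1.0, 0.0], [0.0, 1.0]]
Z = [[1.0, 0.0], [0.0, -1.0]]
r = 1.0 / math.sqrt(2)
H = [[r, r], [r, -r]]

def kron(A, B):
    # Kronecker product: kron(A, B)[i][j] = A[i//p][j//q] * B[i%p][j%q]
    p, q = len(B), len(B[0])
    return [[A[i // p][j // q] * B[i % p][j % q]
             for j in range(len(A[0]) * q)] for i in range(len(A) * p)]

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(v))]

def cnot(state, control, target):
    # act directly on basis indices: if the control bit is set, flip the target bit
    out = [0.0] * len(state)
    for b, amp in enumerate(state):
        out[b ^ (1 << target) if (b >> control) & 1 else b] += amp
    return out

# initial |000>; basis index bits read as (q2 q1 q0), q0 rightmost
state = [0.0] * 8
state[0] = 1.0

# layer 1: Z on q0, H on q1, nothing on q2  ->  I (x) H (x) Z
state = apply(kron(kron(I, H), Z), state)

# layer 2: CNOT with control q1 and target q2
state = cnot(state, control=1, target=2)

# amplitude 1/sqrt(2) at |000> (index 0) and |110> (index 6)
print([round(a, 3) for a in state])
```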
{"url":"https://hamacher.cloud/my-quantum-spin-qubit-ordering-and-operations","timestamp":"2024-11-09T01:33:52Z","content_type":"text/html","content_length":"145428","record_id":"<urn:uuid:ebfc7500-806a-400e-a35a-6d14da3795de>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00638.warc.gz"}
Abstract Algebra 9-15 Feedback • Several of the definitions in the glossary have examples that are not actually examples for the word being defined. Someone should go through and correct these. • There are also several definitions that are completely missing examples and nonexamples. • In Hw1 Problem 14, if you get a sum greater than 7 when adding in $\mathbb{Z}_{7}$, you should show the subtraction. For example, $6+_74=10-7=3$. • In HW1 Problem 4, you should show work to justify your answers. Also, (a) is incorrect. • On many of the pages, periods are missing at the end of sentences. Don't forget your periods! • Nice use of red text on HW2 Problem 3! • When using sine and cosine in Latex, type [[$\sin \theta$]] so that the function name is upright ($\sin \theta$ versus $sin \theta$). • In the example for identity element, you want to show that $a*1=a$ and $1*a=a$. What you've stated just says that $1$ is a right identity element. • In HW1 Problem 1, when you are using induction, say that's what you are using. The case $n=0$ is your base case. By induction, you are able to say that there are $2\cdot 2^{k+1}$ subsets of a set with $k+1$ elements. • The example and nonexample for commutative could use a full sentence. What operation is or isn't commutative here? (Also, just because an operation is commutative on two given elements doesn't mean it's commutative on the entire set $S$.) • Please be careful not to take problems that have already been claimed on the To Do page.
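For the induction in HW1 Problem 1, the subset counts are easy to sanity-check by brute force (a quick sketch, not part of the assignment):

```python
from itertools import combinations

def num_subsets(n):
    # count the subsets of an n-element set by brute-force enumeration
    return sum(1 for k in range(n + 1) for _ in combinations(range(n), k))

# a set with n elements has 2^n subsets; adding one element doubles the count,
# which is exactly the inductive step
for n in range(8):
    assert num_subsets(n) == 2 ** n
    assert num_subsets(n + 1) == 2 * num_subsets(n)
print(num_subsets(4))   # → 16
```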
{"url":"http://algebra2014.wikidot.com/9-15-feedback","timestamp":"2024-11-09T07:48:09Z","content_type":"application/xhtml+xml","content_length":"28987","record_id":"<urn:uuid:2bbd2c1f-7030-4ef8-84d7-308ced00d64c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00573.warc.gz"}
How Draw Ellipse

How Draw Ellipse - An ellipse is an oval shape with the major and minor axes crossing it at the center. An ellipse has an eccentricity less than one, and it represents the locus of points the sum of whose distances from the two foci is a constant value. The fixed points are known as the foci (singular: focus), which are surrounded by the curve. Unlike the circle, an ellipse is oval in shape. The major axis is the longest diameter of an ellipse; half of it is called the semi-major axis. The standard form for an ellipse centered at the origin is x²/a² + y²/b² = 1. Before we begin, let's do a little analysis of what we are looking at when we see an ellipse: sketch the general shape of the ellipse first. You can also buy ellipse templates at your art store.

String method: We can draw an ellipse using a piece of cardboard, two thumbtacks, a pencil, and string. Place the thumbtacks in the cardboard to form the foci of the ellipse. Cut a piece of string longer than the distance between the foci, put a loop of string around them, insert a pencil into the loop, stretch the string so it forms a triangle, and draw a curve. On a workpiece, loop the string around the pencil's kerf and slowly pass the pencil along the workpiece, starting outside of one nail and ending outside of the other nail. The string length is set from the points P and Q: a + b, the length of the string, is equal to the major axis length PQ of the ellipse.

Compass method: This manner of drawing an ellipse uses a simple compass/rounder and basic geometry (tools: 2 pens/pencils, 1 compass/rounder, 1 scale, paper). With the compasses' point on the center, set the compasses' width to half the width (major axis) of the desired ellipse; this is done by taking the length of the major axis and dividing it by two (this is called the ellipse semi-major axis). Move the compasses' point to one end of the minor axis of the desired ellipse and draw two arcs across the major axis; these arcs locate the two foci.

Router jig: Attach a rigid plywood arm to both bolts, attach a router to the plywood arm, and then attach it to the two sliding blocks.

Gallery: How To Draw Ellipse In Engineering Graphics (FerisGraphics); How to Draw Ellipse by Four Centre Method in Engineering Drawing; Drawing an Ellipse - The String Method (THISisCarpentry); Draw an Ellipse with Two Given Circles; How To Draw An Ellipse Step By Step; How to Hand Draw an Ellipse (wikiHow); Drawing Ellipses (Instructables); Ellipse Standard Equation; How to Draw Ellipse with Compass.
What Is FSM? Write Mealy and Moore State Machines Using Verilog
by Tanya Bansal

What is FSM?
• A Finite State Machine, or FSM, is a computation model that can be used to simulate sequential logic, or, in other words, to represent and control execution flow.
• A sequential logic circuit is one that has a memory unit in it, unlike combinational logic. It also has a clock. In simple terms, we can say it is combinational logic along with memory.
• Common examples of sequential circuits include registers and flip-flops.
• We have a fixed set of states that the machine can be in.
• The machine can only be in one state at a time; it has to transition from one state to another in order to perform different actions.
• A sequence of inputs is sent to the machine.
• Every state has a set of transitions, and every transition is associated with an input and points to a state.

Let's take a real-life example of a traffic light to better understand how an FSM actually works:

Traffic Light
• States: Red, Yellow, Green
• Transitions: After a given time, Red will change to Green, Green to Yellow, and Yellow to Red.

The light will only be in one state at a time, and to perform the next action, it has to transition from one state to another. Since the set of possible states is finite, this is a case of an FSM.

There are two types of FSMs: the Mealy State Machine and the Moore State Machine. Their basic block diagrams are shown below (Mealy State Machine; Moore State Machine).

Let's understand both types, write their Verilog code and test benches, and generate the output tables and waveforms. For both the Mealy and the Moore state machine, I have taken two cases while writing the test benches.
First, we directly give a set of values in the test bench; second, we generate a set of random test values.

Mealy State Machine: its output is based on both the present input and the present state.
• 0/0, 1/0, 1/1, 0/1 represent input/output.
• In the above figure, there are two transitions from each state based on the value of the input, x.

Verilog Code for Mealy State Machine

`timescale 1ns/1ns
module mealy(clk, rst, inp, out);
  input clk, rst, inp;
  output reg out;
  reg [1:0] state;

  always @(posedge clk, posedge rst) begin
    if (rst) begin
      out   <= 0;
      state <= 2'b00;
    end else begin
      case (state)
        2'b00: if (inp) begin state <= 2'b01; out <= 0; end
               else     begin state <= 2'b10; out <= 0; end
        2'b01: if (inp) begin state <= 2'b00; out <= 1; end
               else     begin state <= 2'b10; out <= 0; end
        2'b10: if (inp) begin state <= 2'b01; out <= 0; end
               else     begin state <= 2'b00; out <= 1; end
        default: begin state <= 2'b00; out <= 0; end
      endcase
    end
  end
endmodule

Testbench for Mealy State Machine: giving direct test values

// directly given test values
`timescale 1ns/1ns
`include "mealy.v"
module mealy_tb;
  wire out;
  reg clk, rst, inp;
  reg [15:0] seq;
  integer i;

  mealy instance22(clk, rst, inp, out);

  initial begin
    clk = 0;
    rst = 1;
    seq = 16'b1010110010100011; // example sequence (any 16-bit pattern)
    #5 rst = 0;
  end

  always begin
    for (i = 0; i <= 15; i = i + 1) begin
      inp = seq[i];
      #2 clk = 1;
      #2 clk = 0;
      $display("state = ", instance22.state, " | input = ", inp, " | output = ", out);
    end
    #100 $finish();
  end
endmodule

Output Table for Mealy State Machine: giving direct test values
The table includes 3 columns. The first one gives the current state; there are three possible states: 0, 1, and 2. The next column displays the input, which can be either 0 or 1; these are the inputs given by us in the testbench as a sequence. The last is the output column for each specific input.
The waveform for Mealy State Machine: giving direct test values

Testbench for Mealy State Machine: generating random test values

// randomly generated test values
`timescale 1ns/1ns
`include "mealy.v"
module mealy_tb_2;
  wire out;
  reg clk, rst, inp;
  integer i;

  mealy instance22(clk, rst, inp, out);

  initial begin
    clk = 0;
    rst = 1;
    #5 rst = 0;
  end

  always begin
    for (i = 0; i <= 15; i = i + 1) begin
      inp = $random % 2;
      #2 clk = 1;
      #2 clk = 0;
      $display("state = ", instance22.state, " | input = ", inp, " | output = ", out);
    end
    #100 $finish();
  end
endmodule

Output Table for Mealy State Machine: generating random test values
Here, the inputs are generated randomly using a for loop.

The waveform for Mealy State Machine: generating random test values

Moore State Machine: its output is based only on the present state and not on the present input.
• There are two transitions from each state based on the value of the input, x.

Verilog Code for Moore State Machine

`timescale 1ns/1ns
module moore(clk, rst, inp, out);
  input clk, rst, inp;
  output reg out;
  reg [1:0] state;

  always @(posedge clk, posedge rst) begin
    if (rst) begin
      state <= 2'b00;
    end else begin
      case (state)
        2'b00: begin
          if (inp) state <= 2'b01;
          else     state <= 2'b10;
        end
        2'b01: begin
          if (inp) state <= 2'b11;
          else     state <= 2'b10;
        end
        2'b10: begin
          if (inp) state <= 2'b01;
          else     state <= 2'b11;
        end
        2'b11: begin
          if (inp) state <= 2'b01;
          else     state <= 2'b10;
        end
        default: state <= 2'b00;
      endcase
    end
  end

  always @(posedge clk, posedge rst) begin
    if (rst)                 out <= 0;
    else if (state == 2'b11) out <= 1;
    else                     out <= 0;
  end
endmodule

Testbench for Moore State Machine: giving direct test values

// directly given test values
`timescale 1ns/1ns
`include "moore.v"
module moore_tb;
  wire out;
  reg clk, rst, inp;
  reg [15:0] seq;
  integer i;

  moore instance22(clk, rst, inp, out);

  initial begin
    clk = 0;
    rst = 1;
    seq = 16'b1010110010100011; // example sequence (any 16-bit pattern)
    #5 rst = 0;
  end

  always begin
    for (i = 0; i <= 15; i = i + 1) begin
      inp = seq[i];
      #2 clk = 1;
      #2 clk = 0;
      $display("state = ", instance22.state, " | input = ", inp, " | output = ", out);
    end
    #100 $finish();
  end
endmodule

Output Table for Moore State Machine: giving direct test values
As in the case
of the Mealy State Machine, here we give a 16-bit sequence for the input values, consisting of only 0's and 1's.

The waveform for Moore State Machine: giving direct test values

Testbench for Moore State Machine: generating random test values

// randomly generated test values
`timescale 1ns/1ns
`include "moore.v"
module moore_tb_2;
  wire out;
  reg clk, rst, inp;
  integer i;

  moore instance22(clk, rst, inp, out);

  initial begin
    clk = 0;
    rst = 1;
    #5 rst = 0;
  end

  always begin
    for (i = 0; i <= 15; i = i + 1) begin
      inp = $random % 2;
      #2 clk = 1;
      #2 clk = 0;
      $display("state = ", instance22.state, " | input = ", inp, " | output = ", out);
    end
    #100 $finish();
  end
endmodule

Output Table for Moore State Machine: generating random test values
In this case, values for the input are randomly generated using the for loop. This is done 16 times to get a long sequence.

The waveform for Moore State Machine: generating random test values

This was all about FSMs and their two types, the Mealy State Machine and the Moore State Machine.
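The Mealy/Moore distinction can also be sketched in plain Python. The machine below is an illustrative "detect two consecutive 1s" recognizer, not a translation of the Verilog modules above: the Mealy output is a function of the present state and the present input, while the Moore output is a function of the present state only, which shows up as a one-cycle delay in the output stream.

```python
# Illustrative Mealy vs. Moore machines (detect two consecutive 1s).
# State names S0/S1/S2 are arbitrary and chosen for this sketch.

def mealy_step(state, inp):
    # Output depends on BOTH the present state and the present input.
    if state == "S0":
        return ("S1", 0) if inp else ("S0", 0)
    # state == "S1": one 1 seen so far
    return ("S1", 1) if inp else ("S0", 0)

def moore_step(state, inp):
    # Next state depends on the input...
    if state == "S0":
        return "S1" if inp else "S0"
    if state == "S1":
        return "S2" if inp else "S0"
    return "S2" if inp else "S0"   # S2: two consecutive 1s seen

def moore_output(state):
    # ...but the output depends on the present state ONLY.
    return 1 if state == "S2" else 0

def run_mealy(bits):
    state, outs = "S0", []
    for b in bits:
        state, out = mealy_step(state, b)
        outs.append(out)
    return outs

def run_moore(bits):
    state, outs = "S0", []
    for b in bits:
        outs.append(moore_output(state))  # output visible this cycle
        state = moore_step(state, b)      # state updates at the clock edge
    return outs

bits = [1, 1, 0, 1, 1, 1]
print(run_mealy(bits))  # [0, 1, 0, 0, 1, 1]
print(run_moore(bits))  # [0, 0, 1, 0, 0, 1] -- same 1s, one cycle later
```

Comparing the two printed lists shows the classic behavior: the Moore machine raises its output one cycle after the Mealy machine does.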
What are the four methods of factoring? The following factoring methods will be used in this lesson:
• Factoring out the GCF.
• The sum-product pattern.
• The grouping method.
• The perfect square trinomial pattern.
• The difference of squares pattern.

What happened to the pelican who stuck his head into the wall socket? However, there has been no recorded case of a pelican sticking its head into an electrical outlet. When a pelican attempts to stick its head into an electrical socket, nothing happens.

Is 10x a polynomial? Not a polynomial. A polynomial is an expression composed of variables, constants and exponents combined with mathematical operations. Obviously, the expression 10x does not meet the qualifications to be a polynomial.

What are the classifications of polynomials? Polynomials are classified according to their number of terms. 4x³ + 3y + 3x² has three terms, −12zy has 1 term, and 15 − x² has two terms. As already mentioned, a polynomial with 1 term is a monomial. A polynomial with two terms is a binomial, and a polynomial with three terms is a trinomial.

How do you classify by the number of terms? Example: a polynomial is classified by its number of terms as:
1. Monomial – one term – 3x.
2. Binomial – two terms – 7a − 5.
3. Trinomial – three terms.

What does y = mx + c mean? The general equation of a straight line is y = mx + c, where m is the gradient, and y = c is the value where the line cuts the y-axis. This number c is called the intercept on the y-axis. Key point: the equation of a straight line with gradient m and intercept c on the y-axis is y = mx + c.

What is the standard form of y = −(1/4)x + 4? Answer: The standard form of this slope-intercept equation is x + 4y = 16.

What does the m in y = mx + b stand for? The m stands for the slope of the line.

What is standard form in math? Standard form is a way of writing down very large or very small numbers easily. 10³ = 1000, so 4 × 10³ = 4000. So 4000 can be written as 4 × 10³.
This idea can be used to write even larger numbers down easily in standard form.

What are the 3 types of polynomials? The three types of polynomials are:
• Monomial.
• Binomial.
• Trinomial.

What is the standard form of a circle? The graph of a circle is completely determined by its center and radius. Standard form for the equation of a circle is (x − h)² + (y − k)² = r². The center is (h, k) and the radius measures r units.

What did the boy say to the girl tree? 4 answers. I pine fir yew!

How do you convert to standard form? To convert from slope-intercept form y = mx + b to standard form Ax + By + C = 0, note that m = −A/B, collect all terms on the left side of the equation, and multiply through by the denominator of m to get rid of the fractions.

What is the y-intercept in the equation y = 2x − 5? This equation, y = 2x − 5, is already in slope-intercept form, so the number in front of the x is the slope (2) and the constant is the y-intercept (−5); it's negative because of the minus sign in front of the number. So, the slope is 2 and the y-intercept is −5.

How do you turn a polynomial into standard form? In order to write any polynomial in standard form, you look at the degree of each term. You then write each term in order of degree, from highest to lowest, left to right. Let's look at an example: write the expression 3x − 8 + 4x⁵ in standard form.

Is y = 7 − 6x a linear equation? This equation is linear, because each term is either a constant or the product of a constant and the first power of a single variable.

Can A be negative in standard form? In the standard form of a linear equation, A shouldn't be negative, A and B shouldn't both be zero, and A, B and C should be integers.

What are the rules of standard form? The standard form for a linear equation in two variables, x and y, is usually given as Ax + By = C where, if at all possible, A, B, and C are integers, A is non-negative, and A, B, and C have no common factors other than 1.
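The slope-intercept-to-standard-form recipe above (clear the fractions, collect terms, normalize signs and common factors) can be sketched in Python. The function name and the Ax + By = C output convention are my own choices for this illustration:

```python
from fractions import Fraction
from math import gcd

def slope_intercept_to_standard(m, b):
    """Convert y = m*x + b to A*x + B*y = C with integers,
    A >= 0 and gcd(A, B, C) = 1.  m and b may be ints or Fractions."""
    m, b = Fraction(m), Fraction(b)
    # m*x - y = -b; multiply through by the lcm of the denominators
    d = m.denominator * b.denominator // gcd(m.denominator, b.denominator)
    A = m.numerator * (d // m.denominator)
    B = -d
    C = -b.numerator * (d // b.denominator)
    if A < 0 or (A == 0 and B < 0):          # keep A non-negative
        A, B, C = -A, -B, -C
    g = gcd(gcd(abs(A), abs(B)), abs(C))     # strip common factors
    return A // g, B // g, C // g

print(slope_intercept_to_standard(2, 5))                # 2x - y = -5
print(slope_intercept_to_standard(Fraction(-1, 4), 4))  # x + 4y = 16
```

The second call reproduces the worked example on this page: y = −(1/4)x + 4 becomes x + 4y = 16.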
What do all of the functions of the form y = mx + b have in common? Since they all have the same basic relationship between x and y, they can be called a family of functions.

What does f(x) mean? f(x), read "f of x", denotes the output of the function f for a given input value x.

How do you do standard form in math? How to write numbers in standard form (for example, 81 900 000 000 000):
1. Write the first digit: 8.
2. Add a decimal point after it: 8.
3. Now count the number of digits after the 8. There are 13 digits.
4. So, in standard form, 81 900 000 000 000 is 8.19 × 10¹³.

What is the y-intercept of y = 2x − 3? The x-intercept is 1.5, and the y-intercept is −3.

How do you find the degree of a polynomial? To find the degree of the polynomial, add up the exponents of the variables in each term and select the highest sum.

What is the standard form of 12345? The standard form of the number 12345 is 1.2345 × 10⁴.

What formula is y = mx + b? The equation of any straight line, called a linear equation, can be written as y = mx + b, where m is the slope of the line and b is the y-intercept. The y-intercept of this line is the value of y at the point where the line crosses the y-axis.

What is the standard form of 200,000? 200,000 = 2 × 10⁵.

How do you do slope-intercept form? Slope-intercept form, y = mx + b, of linear equations emphasizes the slope and the y-intercept of the line.

What is the standard form of y = 2x + 5? The problem "y = 2x + 5" is in standard form and no modification is necessary. For a parabolic equation, the standard form is y = a(x − h)² + k, from which direction (polarity of a) and axis of symmetry (value of h), etc. may be determined by inspection.

Is y = 4 a linear equation? As long as y never changes value (it is always c), then you have a solution. In that case, you will end up with a horizontal line. Example 3: Graph the linear equation y = 4. It fits the form y = c:

x | y | (x, y)
0 | 4 | (0, 4)
1 | 4 | (1, 4)
2 | 4 | (2, 4)
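The "standard form of a number" steps above can be checked with a short Python sketch. Decimal is used so the mantissa comes out exactly; the function name is my own:

```python
from decimal import Decimal

def to_standard_form(x):
    """Return (a, k) with x == a * 10**k and 1 <= a < 10, for positive x."""
    d = Decimal(str(x)).normalize()
    sign, digits, exp = d.as_tuple()
    k = exp + len(digits) - 1                        # the power of ten
    a = Decimal((sign, digits, -(len(digits) - 1)))  # mantissa in [1, 10)
    return a, k

print(to_standard_form(4000))            # (Decimal('4'), 3)
print(to_standard_form(12345))           # (Decimal('1.2345'), 4)
print(to_standard_form(200000))          # (Decimal('2'), 5)
print(to_standard_form(81900000000000))  # (Decimal('8.19'), 13)
```

The printed pairs match the worked answers on this page: 4 × 10³, 1.2345 × 10⁴, 2 × 10⁵, and 8.19 × 10¹³.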
Re: st: general statistical reasoning question in biomedical statistics (no Stata content)

From: Tim Wade <[email protected]>
To: [email protected]
Subject: Re: st: general statistical reasoning question in biomedical statistics (no Stata content)
Date: Thu, 11 Dec 2003 10:40:10 -0800 (PST)

Here are my 2 cents: It is my understanding that p-values are now often discouraged from "Table 1", at least in part for the reason that you describe. I think in part this is a confusion of validity vs.

What do you mean by your randomization being "solid"? Randomization *does not* (and can never) guarantee a completely balanced sample. So if there is convincing evidence, qualitative or quantitative, that the two arms are not the same, then it does call into question the equivalence of the two groups. A perfectly randomized study could produce by chance two very different groups (although it is not likely to do so) and thus call into question the validity of the results, but that would not necessarily invalidate the results. Especially if numerous factors were compared in Table 1, there may have been a difference simply by chance, in two groups that are actually equivalent.

--- "Christopher W. Ryan" <[email protected]> wrote:

Having read the Statalist FAQ, and previous correspondence about general statistical questions, I hope no one minds . . . .

Among my teaching duties in my medical school and family practice residency is "critical appraisal of the medical literature." I try to go over principles of good design and valid analysis. A question frequently comes up when we discuss randomized controlled trials. In these articles, there is almost always a "Table 1," that describes the baseline demographic and clinical variables of the two arms (say, placebo and active drug, for example.)
There are usually *a lot* of baseline measurements. Each one is usually listed with a "P value," indicating whether the placebo and active drug subjects differed on that measurement. Then the manuscript goes on to describe the rest of the study, and the results . . .

If the results show an advantage for the active drug, readers (including my students and residents) will often go back to "Table 1" and say, "Oh but look, the samples were not identical. Blah-blah was significantly higher in the placebo arm to begin with. Therefore I can't accept these results as valid."

I've never agreed with that. So I want to outline my chain of reasoning here and see if I've got it straight.

There are two premises in a randomized controlled trial with two arms:
1. The two samples are drawn randomly from the same population
2. The active drug actually has no effect (the null hypothesis)

And then there are the results (R).

If 1 and 2 are both true, we can look at R and calculate how likely we were to see results that "extreme" or more so. That's the P value. If P < the conventional 0.05, we say, "Gee, if 1 and 2 are both true, we *might* have seen results R, but only 5% of the time or less, and that's pretty unlikely. But we *did* see R. Therefore either 1 or 2 must be untrue. And I'm confident my randomization was solid. Therefore 2 must be untrue, and the drug really does have an effect."

There is nothing in this chain of reasoning that requires the samples to be identical/indistinguishable. And for every 20 baseline variables compared, you'd *expect* about 1 of those baseline variables to have a P of < 0.05. The statistical techniques have "built-in" accommodation for this. This does not invalidate the conclusions.

It is a difficult concept for my learners to grasp. Or maybe I've got it wrong?

Thanks.
--Chris Ryan

*
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
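The "expect about 1 in 20" point from the thread can be quantified with one line of arithmetic. Assuming 20 independent baseline comparisons on two arms drawn from the same population (a simplification, since baseline variables are rarely independent):

```python
# Under the null, each baseline comparison has a 5% chance of p < 0.05.
# With 20 independent baseline variables in "Table 1":
p_any = 1 - 0.95 ** 20       # P(at least one "significant" imbalance)
expected_hits = 20 * 0.05    # expected count of p < 0.05 rows

print(f"P(at least one p < 0.05): {p_any:.3f}")   # about 0.64
print(f"Expected number of p < 0.05 rows: {expected_hits:.0f}")
```

So even with a perfectly solid randomization, roughly two trials in three will show at least one "significant" baseline difference purely by chance.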
Challenge description

Inventing hash functions based on elliptic curves is not easy, but now I found out that with the quadratic twist I can lift any x! This should be enough to create a secure one.

Note: the challenge is running with SageMath 10.2 on the remote server.

nc invention.challs.open.ecsc2024.it 38011

We are given invention.sage and a connection to the server which is running it. The core of the program is the function elliptic_hash, which receives a bytestring and returns two elliptic curve points.

The protocol for communicating with the server is as follows:
• You send your username
• The server sends you a random token
• You send a password which has to start with said token
• The server hashes your password and saves it
• The server registers an admin user and you are shown the token and password of the admin
• Finally we get to log in. In order to get the flag we must log in as our user (hence our password has to start with our token), but the password hash must be that of the admin.

Essentially we must find a hash collision with the admin's password, with a set prefix (our token).

The algorithm

Let's look at how elliptic_hash is structured. Two different elliptic curves are used: a curve E and its quadratic twist, which will be referred to as E_T. These two curves have the generators G and G_T. We will not need any deep knowledge of elliptic curves to solve this; these are the main relevant principles:
• If the point G is a generator of E then any point on E can be written as k·G for some integer k.
• If a given x-coordinate does not correspond to a point on E then it will correspond to a point on E_T, and vice versa.

When the program is initialized two random integers u and u_T are generated. These are then multiplied by the corresponding generator point of each curve to yield two new points P = u·G and P_T = u_T·G_T (Pu and PTu in the source code). We are shown P and P_T and can hence calculate both points ourselves.
The message is first divided up into blocks of 20 bytes, which corresponds to the size of the x-coordinate of a point on the curve. When referring to blocks from now on we are interested in their integer representation. The first two blocks are special and will be denoted b_0 and b_1; the remaining blocks are m_i. The first step is to calculate C_E = b_0·P and C_T = b_1·P_T (Ci and CTi in the source code). Afterwards, for each m_i, we check if its value corresponds to the x-coordinate of a point on E. If that is the case we lift the x-coordinate to the corresponding point Q_i and add it to C_E (i.e. we modify C_E). If the x-coordinate is not on E, and hence on E_T, it is lifted to a point on E_T and added to C_T instead. The final hash is the tuple (C_E, C_T), which will equal:

    C_E = b_0·P + Σ_{m_i on E} Q_i
    C_T = b_1·P_T + Σ_{m_i on E_T} Q_i

The final crux is that when we and the admin first register, all the blocks we use are saved. Later, when logging in, we are only allowed to use blocks which were already used in the registration phase (from before we even know the admin's password). Since we are given the admin's password we can calculate the corresponding hash ourselves. The core problem is that we are forced to use the given token as our first block (i.e. b_0 = token). Since the hash is split up into two parts (C_E and C_T), let's first focus on the easier one: C_T.

CT - the easy part

Since we are given full control of b_1 we can simply set it to be the same as in the admin's password. From there we grab all blocks from the admin's password which lift to points on E_T and add them to our password as well. Our C_T will now be identical to the admin's. We are allowed to do this since the admin's blocks were added to the allowed ones during registration.

CE - the hard part

Now for C_E. The blocks which lift to points on E can be added from the admin's password to ours as well, just as in the C_T case, so we can ignore all but the first block. Let's call our given token t and the admin's token a. Before we even gain control, C_E will equal t·P. Recall that the contribution of the first admin block will be a·P.
We thus want to find points Q_i such that

    t·P + Σ_i Q_i = a·P,

or equivalently

    Σ_i Q_i = (a − t)·P.

If we rewrite each Q_i as some multiple k_i·P of the known point P we get

    Σ_i k_i·P = (a − t)·P.

It thus suffices to find integers k_i such that

    Σ_i k_i ≡ a − t (mod n),

where n is the order of P. This is a problem in integers which is a lot nicer to work with. Normally we could just set k_1 = a − t and be done, but recall that we had the restriction that only blocks which had been used during registration may be used now, and when registering we don't know a yet.

An important thing to note is that we may use one point several times by just repeating the block in the password, essentially multiplying the point by a small scalar. So if we predefine a set of points P_1, …, P_ℓ during registration we can then build up any linear combination of them later, which will be of the form Σ_i a_i·P_i for reasonably small a_i (otherwise the password will be too long).

There are several ways to choose a basis here. An easy approach would be to collect all powers of 2 times the base point, i.e. P, 2P, 4P, 8P, …. We could then build any multiple of P by just looking at its binary representation. This will require as many points as there are bits in n.

An annoying detail

A problem you will encounter is that due to implementation details in the source code our password is required to be valid utf-8, else the code will raise an error. You can get around this by for example brute-forcing pairs of points which sum to the desired point and are each valid utf-8, but from my experience this takes a looong time. We thus want to have as few basis points as possible since finding each one requires extensive brute force.

My approach was to simply generate random multiples of P and check if the resulting block decodes properly. I could then save the point together with that multiple, and after around 17 points I could reliably reach most points on the curve using quite small coefficients (in the base-2 case we restricted ourselves to coefficients of 0 and 1, we can now use any non-negative integer).

Solving the instance

Finding small non-negative coefficients k_i such that Σ_i k_i·r_i ≡ a − t (mod n), where the r_i are the known multiples behind the basis points, is not completely trivial. A common method is to use lattice reduction algorithms like LLL, but configuring them so that all the k_i come out non-negative can be a bit cumbersome (although definitely possible). If you're interested, the lattice will resemble the ones presented here. I've encountered this problem several times before, so I have made a utility which combines lattice reduction with some more exact linear programming methods. I was happy to be able to test it out so soon and it worked well for this purpose; you can check it out here.

This covers all the ideas used in my solve script, read it for more details. This was a very fun challenge which combined several known primitives into an interesting problem, thanks to the author for writing it and I hope we will see more like this in the ECSC finals! 🇮🇹🤌🍕🍍
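The powers-of-two basis bookkeeping can be sketched with plain integers mod n standing in for multiples of P (the curve arithmetic is irrelevant to the bookkeeping; the modulus n below is an arbitrary placeholder, not the challenge's group order):

```python
# Sketch of the "powers of 2 times P" basis: the integer 1 plays the role
# of P, and repeating the i-th registered block coeffs[i] times adds
# coeffs[i] * 2^i * P to the hash.

n = 2**61 - 1  # placeholder group order (assumed for this sketch)

def basis_coefficients(m, nbits=64):
    """0/1 coefficients of m over the registered basis P, 2P, 4P, ..."""
    return [(m >> i) & 1 for i in range(nbits)]

def assemble(coeffs):
    """Recombine the basis contributions: sum of coeffs[i] * 2^i * P."""
    return sum(c * pow(2, i, n) for i, c in enumerate(coeffs)) % n

target = 123456789  # plays the role of (a - t) mod n
coeffs = basis_coefficients(target)
assert assemble(coeffs) == target % n
```

With the 17 random-multiple basis points actually used in the attack, the 0/1 coefficients above are replaced by small non-negative integers found via lattice reduction, but the recombination step is the same.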
Dataset for Quantifier Elimination and CAD examples in Maple

Quantifier Elimination over the reals is a problem in Computer Algebra consisting of quantified boolean formulae of polynomial constraints. This dataset provides a collection of real-world examples amalgamated from a range of sources, most not seen together before. This database is useful for the purposes of benchmarking and testing, and in particular is an accompaniment to the Maple package 'QuantifierElimination', to be released in the future. While the main contribution is the collection of Quantifier Elimination problems, auxiliary functions are provided for various other purposes, including viewing and building the examples. Also, examples for pure CAD are provided, i.e. just semi-algebraic formulae. Lastly, a document detailing references for the sources of these examples is included.

This dataset provides the following:
- 'QE Example Database.mpl': a file that can be read into Maple that loads an interactive database of QE examples, along with functions to build and print them,
- 'CADDatabase.mm': a file that can be read into Maple that loads a table of purely unquantified examples for CAD, 'CADExamples', with no auxiliary functions. These examples are of various types, but are compatible with the input semantics of 'CylindricalAlgebraicDecompose' from the package 'QuantifierElimination' for Maple.
- 'TarskiFormulaLaTeXTools.mpl': a file that can be read into Maple that allows Maple to better format Tarski formulae (type 'TarskiFormula', arising from the package 'QuantifierElimination') for LaTeX when passed into Maple's inbuilt function 'latex'.
- 'Example Database Info.pdf': a PDF documenting reference and origin information about all examples from the databases included.

All formulae or otherwise semi-algebraic sets produced by usage of these files are of 'RationalTarskiFormula' or 'TarskiFormula' type, for compatibility with 'QuantifierElimination'.
They are amenable to usage with the Maple packages 'RegularChains' or 'SyNRAC', after some conversion.
- 'QuantifierEliminationConversionTools.mpl': a file that can be read into Maple that loads three functions for conversion of Tarski formulae from 'QuantifierElimination' format: 'convertQEtoRC', 'convertQEtoSyNRAC', and 'convertQEtoQEPCAD', which convert to a format amenable to 'RegularChains', 'SyNRAC', or 'QEPCAD' respectively. 'QEPCAD' requires bespoke input, so one can write the produced string to a file before redirection into QEPCAD.

More information about each file is in the metadata for each file.

Keywords: Quantifier Elimination, Cylindrical Algebraic Decomposition, Maple, Symbolic Computation, Computer Algebra

Cite this dataset as: Tonks, Z., 2023. Dataset for Quantifier Elimination and CAD examples in Maple. Bath: University of Bath Research Data Archive. Available from: https://doi.org/10.15125/BATH-00746.

A file that can be read into Maple loading a table of unquantified semi-algebraic expressions called 'CADExamples', and nothing else. No auxiliary functions are provided in this file.

A file that can be read into Maple which, having done so, will enable Maple to better produce the LaTeX for Tarski formulae ('TarskiFormula' type, enabled by the package 'QuantifierElimination'). Hence the Maple inbuilt 'latex' will produce better LaTeX for such formulae.

A file that can be read into Maple which loads three functions, 'convertQEtoRC', 'convertQEtoSyNRAC', and 'convertQEtoQEPCAD', which will convert a QE example written in 'QuantifierElimination' format to that of RegularChains, SyNRAC, or QEPCAD respectively. Note that the Quantifier Elimination procedure in RegularChains is RegularChains:-SemiAlgebraicSetTools:-QuantifierElimination. Note that these functions require the original problem in prenex form. Note that QEPCAD will accept redirection of a problem from a file.
Note QEPCAD will accept redirection of a problem A file that can be read into Maple that loads tables containing explicit QE problems, and tables and functions from which QE examples can be built. Also loads auxillary functions that can be used to view or interact with such tables, and a "Help" function explaining how to use everything further. All QE problems produced are of format amenable to usage with the package 'QuantifierElimination', which enables the 'TarskiFormula' type. University of Bath Rights Holder Data processing and preparation activities: The CAD and QE examples contained in the relevant database come from a wide range of sources, documented in the contained pdf. Where appropriate, these may have been reformatted into Maple format, eg. from Mathematica format for the economics examples. Many of the examples arise from David Wilson's CAD database (reference found in the .pdf), and where appropriate they have been reformatted as the relevant QE problem for the QE database. Otherwise, they may have had some slight reformatting such that they form lists of relations, or sets of polynomials. Technical details and requirements: Maple 2019.0 or newer is sufficient to view the examples. Previous versions of Maple may suffice, although cannot be guaranteed. The examples are all written to be directly compatible for functions contained in the package 'QuantifierElimination' for Maple, to be released. Conversion for usage with Maple packages 'RegularChains' or 'SyNRAC' is available by the included conversion routines. A version of 'RegularChains' is included with any consumer version of Maple, but the newest version is also available online. 'SyNRAC' is only available online. Documentation Files Provides references for sources of all semi-algebraic formulae contained in the files in this repository. 
PhD Studentship, Hybrid Techniques in Quantifier Elimination Related papers and books Tonks, Z., 2020. A Poly-algorithmic Quantifier Elimination Package in Maple. In: Communications in Computer and Information Science. Springer International Publishing, 171-186. Available from: https: Davenport, J. H., Nair, A. S., Sankaran, G. K., and Uncu, A. K., 2023. Lazard-style CAD and Equational Constraints. In: Proceedings of the 2023 International Symposium on Symbolic and Algebraic Computation. ACM. Available from: https://doi.org/10.1145/3597066.3597090. Davenport, J. H., Tonks, Z. P., and Uncu, A. K., 2023. A Poly-algorithmic Approach to Quantifier Elimination. In: 2023 25th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC). IEEE. Available from: https://doi.org/10.1109/synasc61333.2023.00013. Contact information Please contact the Research Data Service in the first instance for all matters concerning this item. Contact person: Zak Tonks Faculty of Science Computer Science
Non-binary Treatment Specification Binary Treatment When creating a Design, handling binary treatment variables is straightforward. If the treatment variable is either numeric with only values 0/1, or is logical, then lmitt() will estimate a treatment effect as the difference between the outcome in the treated group (1 or TRUE) and the control group (0 or FALSE). Missing treatment status In all cases (binary and non-binary), missing values are allowed, and any units of assignment with missing treatment values are excluded from models fit via lmitt(). Non-binary Treatment However, the _design() functions can take in any (reasonable) form of treatment assignment. If the treatment variable is a numeric with non-binary values, it is treated as a continuous treatment, and lmitt(y ~ 1, ...) will estimate a single coefficient on treatment. If the treatment variable is a character, it is treated as a multi-level treatment variable, and lmitt(y ~ 1, ...) will estimate treatment effects against a reference category. The reference category is the first level defined according to R’s comparison of characters. factor and ordered objects are tricky to deal with, so while a Design can be created with factor or ordered treatment variables, lmitt() will refuse to estimate a model unless it is also provided a dichotomy (see below). Dichotomizing a Non-binary Treatment Studies may offer treatment to units at different times or provide treatment to units in varying intensities. Researchers, however, may be interested in estimating treatment effects at particular times or above a certain threshold of provided treatment.
propertee accommodates these needs by storing the time or intensity of treatment for treated units in the Design, then offering a dichotomy= argument to the weights calculation functions ett()/ate() and the assignment creation function assigned(). A dichotomy is presented as a formula, where the left-hand side is a logical statement defining inclusion in the treatment group, and the right-hand side is a logical statement defining inclusion in the control group. For example, if dose represents the intensity of a given treatment, we could set a threshold of, say, 200 mg: all units of assignment with dose above 200 are treated units, and all units of assignment with dose of 200 or below are control units. A . can be used to define either group as the inverse of the other. For example, the above dichotomy could be defined as either of

dose > 200 ~ .
. ~ dose <= 200

Any units of assignment not assigned to either treatment or control are assumed to have NA for a treatment status and will be ignored in the estimation of treatment effects.

dose >= 300 ~ dose <= 100

In this dichotomy, units of assignment with dose in the range (100, 300) are ignored. An Example

#>  50 100 200 250 300
#>  10  10  10  10  10

des1 <- rct_design(dose ~ uoa(uoa1, uoa2), data = simdata)
#> Randomized Control Trial
#> Structure          Variables
#> ---------          ---------
#> Treatment          dose
#> Unit of Assignment uoa1, uoa2
#> Number of units per Treatment group:
#> Txt Grp Num Units
#>      50         2
#>     100         2
#>     200         2
#> ...
#> 2 smaller treatment groups excluded.
#> Use `dtable` function to view full results.

head(ate(des1, data = simdata, dichotomy = dose >= 300 ~ dose <= 100))
#> [1] 1.5 1.5 1.5 1.5 0.0 0.0

head(assigned(des1, data = simdata, dichotomy = dose >= 300 ~ dose <= 100))
#> [1] 0 0 0 0 NA NA
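The selection rule a dichotomy encodes is easy to sketch outside of R. This Python fragment is illustrative only, not propertee code, and all names are invented; it reproduces the treated/control/NA assignment of the `dose >= 300 ~ dose <= 100` example:

```python
import numpy as np

def dichotomize(dose, treated, control):
    """Apply a dichotomy: units satisfying `treated` become 1, units
    satisfying `control` become 0, and all others become NaN, mirroring
    how unassigned units are ignored in effect estimation."""
    z = np.full(dose.shape, np.nan)
    z[treated(dose)] = 1.0
    z[control(dose)] = 0.0
    return z

doses = np.array([50, 100, 200, 250, 300])
z = dichotomize(doses, lambda d: d >= 300, lambda d: d <= 100)
# doses 50 and 100 -> control (0), 300 -> treated (1), 200 and 250 -> NaN
```

The NaN entries correspond to the units propertee excludes from treatment-effect estimation.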
Mathematical Statistics (Estadística Matemática) Organizers: Florencia Leonardi (florencia@usp.br), Pamela Llop (lloppamela@gmail.com), Daniela Rodriguez (drodrig@dm.uba.ar) • Tuesday 14 15:00 - 15:45 Differentially private inference via noisy optimization Marco Avella Medina (Columbia University, United States) We propose a general optimization-based framework for computing differentially private M-estimators and a new method for the construction of differentially private confidence regions. Firstly, we show that robust statistics can be used in conjunction with noisy gradient descent and noisy Newton methods in order to obtain optimal private estimators with global linear or quadratic convergence, respectively. We establish global convergence guarantees, under both local strong convexity and self-concordance, showing that our private estimators converge with high probability to a neighborhood of the non-private M-estimators. The radius of this neighborhood is nearly optimal in the sense that it corresponds to the statistical minimax cost of differential privacy up to a logarithmic term. Secondly, we tackle the problem of parametric inference by constructing differentially private estimators of the asymptotic variance of our private M-estimators. This naturally leads to the use of approximate pivotal statistics for the construction of confidence regions and hypothesis testing. We demonstrate the effectiveness of a bias correction that leads to enhanced small-sample empirical performance in simulations. 15:45 - 16:30 Adjusting ROC curves for covariates: a robust approach Ana M. Bianco (Universidad de Buenos Aires, Argentina) ROC curves are a popular tool to describe the discriminating power of a binary classifier based on a continuous marker as the threshold is varied. They become an interesting strategy to evaluate how well an assignment rule based on a diagnostic test distinguishes one population from the other.
Under certain circumstances the marker's discriminatory ability may be affected by certain covariates. In this situation, it seems sensible to include this information in the ROC analysis. This task can be accomplished either by the induced or the direct method. In this talk we will focus on ROC curves in the presence of covariates. We will show the impact of outliers on the conditional ROC curves and we will introduce a robust proposal. We follow a semiparametric approach where we combine robust parametric estimators with weighted empirical distribution estimators based on an adaptive procedure that downweights outliers. We will discuss some aspects concerning consistency, and through a Monte Carlo study we will compare the performance of the proposed estimators with the classical ones, both in clean and contaminated samples. 16:45 - 17:30 Adaptive regression with Brownian path covariate Karine Bertin (Universidad de Valparaíso, Chile) In this talk, we will study how to obtain optimal estimators in problems of non-parametric estimation. More specifically, we will present the Goldenshluger-Lepski (2011) method that allows one to obtain estimators that adapt to the smoothness of the function to be estimated. We will show how to extend this statistical procedure to regression with functional data when the regressor variable is a Wiener process \(W\). Using the Wiener-Ito decomposition of m(W), where \(m\) is the regression function, we will define a family of estimators that satisfy an oracle inequality, are proved to be adaptive, and converge at polynomial rates over specific classes of functions. 17:30 - 18:15 A non-asymptotic analysis of certain high-dimensional estimators for the mean Roberto Imbuzeiro Oliveira (Instituto de Matemática Pura e Aplicada, Brazil) Recent work in Statistics and Computer Science has considered the following problem.
Given a distribution \(P\) over \({\bf R}^d\) and a fixed sample size \(n\), how well can one estimate the mean \(\mu = \mathbf{E}_{X\sim P} X\) from a sample \(X_1,\dots,X_n\stackrel{i.i.d.}{\sim}P\) while only requiring finite second moments and allowing for sample contamination? It turns out that the best estimators are not related to the sample mean. In this talk we present a new analysis of certain approaches to this problem, and reproduce or improve previous results by several authors. • Wednesday 15 15:00 - 15:45 Least trimmed squares estimators for functional principal component analysis Holger Cevallos-Valdiviezo (Escuela Superior Politécnica del Litoral, Ecuador, and Ghent University, Belgium) Classical functional principal component analysis can yield erroneous approximations in the presence of outliers. To reduce the influence of atypical data we propose two methods based on trimming: a multivariate least trimmed squares (LTS) estimator and its coordinatewise variant. The multivariate LTS minimizes the multivariate scale corresponding to \(h-\)subsets of curves while the coordinatewise version uses univariate LTS scale estimators. Consider a general setup in which observations are realizations of a random element on a separable Hilbert space \(\mathcal{H}\). For a fixed dimension \(q\), we aim to robustly estimate the \(q\)-dimensional linear space in \(\mathcal{H}\) that gives the best approximation to the functional data. Our estimators use smoothing to first represent irregularly spaced curves in a high-dimensional space and then calculate the LTS solution on these multivariate data. The solution of the multivariate data is subsequently mapped back onto \(\mathcal{H}\). Poorly fitted observations can therefore be flagged as outliers. Simulations and real data applications show that our estimators yield competitive results when compared to existing methods when a minority of observations is contaminated.
When a majority of the curves is contaminated at some positions along their trajectories, coordinatewise methods like coordinatewise LTS are preferred over multivariate LTS and other multivariate methods, since the latter break down in this case. 15:45 - 16:30 Stick-breaking priors via dependent length variables Ramsés Mena Chávez (Universidad Nacional Autónoma de México, Mexico) In this talk, we present new classes of Bayesian nonparametric prior distributions. By allowing length random variables in stick-breaking constructions to be exchangeable or Markovian, appealing models for discrete random probability measures appear. As a result, tuning the stochastic dependence in such length variables allows one to recover extreme families of random probability measures, i.e. Dirichlet and Geometric processes. As a byproduct, the ordering of the weights in the species sampling representation can be controlled and thus tuned for efficient MCMC implementations in density estimation or unsupervised classification problems. Various theoretical properties and illustrations will be presented. 16:45 - 17:30 Modelling in pandemic times: using smart watch data for early detection of COVID-19 Mayte Suarez-Farinas (Icahn School of Medicine at Mount Sinai, United States) The COVID-19 pandemic brought many challenges to statisticians and modelers across all quantitative disciplines: from accelerated clinical trials, to modelling of epidemiological interventions at a feverish pace, to the study of the impact of the pandemic on mental health outcomes and racial disparities. In this talk, we would like to share our experience using classical statistical modelling and machine learning to use data obtained from wearable devices as digital biomarkers of COVID-19 infection.
Early in the pandemic, health care workers in the Mount Sinai Health System (New York City) were prospectively followed in an observational study using the custom Warrior Watch Study app, to collect weekly information about stress, symptoms and COVID-19 infection. Participants wore an Apple Watch for the duration of the study, measuring heart rate variability (HRV), a digital biomarker previously associated with infection in other settings, throughout the follow-up period. The HRV data collected through the Apple Watch was characterized by a circadian pattern, with sparse sampling over a 24-hour period, and non-uniform timing across days and participants. These characteristics preclude us from using easily derived features (e.g. mean, maximum, CV, etc.) with machine learning methods to develop a diagnostic tool. As such, suitable modelling of the non-uniform, sparsely sampled circadian rhythm data derived from wearable devices is an important step to advance the use of integrated wearable data for prediction of health outcomes. To circumvent such limitations, we introduced the mixed-effects COSINOR model, where the daily circadian rhythm is expressed as a non-linear function with three rhythm characteristics: the rhythm-adjusted mean (MESOR), half the extent of variation within a cycle (amplitude), and an angle relating to the time at which peak values recur in each cycle (acrophase). The longitudinal changes in the circadian patterns can then be evaluated by extending the COSINOR model to a mixed-effects model framework, allowing for random effects and interaction between COSINOR parameters and time-varying covariates. In this talk, we will discuss our model framework, bootstrap-based hypothesis testing and prediction approaches, as well as our evaluation of HRV measures as early biomarkers of COVID-19 diagnosis. To facilitate future use of the mixed-effects COSINOR model, we have implemented it in the R package cosinoRmixedeffects.
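The COSINOR parametrization described in the abstract is linear in disguise: expanding A·cos(ω(t − t_peak)) into cosine and sine components turns a single-subject fit into ordinary least squares. A minimal sketch under that assumption follows; the study itself used a mixed-effects extension, and the function and variable names here are invented for illustration (acrophase reporting conventions also vary across the literature):

```python
import numpy as np

def fit_cosinor(t, y, period=24.0):
    """Least-squares fit of y = MESOR + A*cos(w*(t - t_peak)), w = 2*pi/period.
    Returns (MESOR, amplitude A, acrophase expressed as peak time t_peak)."""
    w = 2 * np.pi / period
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    mesor, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(b1, b2)
    peak_time = (np.arctan2(b2, b1) / w) % period  # time of the daily peak
    return mesor, amplitude, peak_time

# Noise-free check: MESOR 60, amplitude 5, peak at hour 8
t = np.arange(0, 48, 0.5)
y = 60 + 5 * np.cos(2 * np.pi * (t - 8) / 24)
mesor, amp, peak = fit_cosinor(t, y)
```

With noisy, irregularly timed samples the same design matrix applies; the mixed-effects version adds per-subject random effects on these three parameters.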
Movement on the Orrery may be described/perceived in terms of several “forms” of speed… Linear speed: this is the speed that links the distance travelled (a length) and the time taken to cover that distance. The average linear velocity is calculated over a journey from an initial position at the time of departure to a final position at the time of arrival. This speed is equal to the ratio of the distance travelled between the two positions to the duration between the two instants. On the Orrery, it is possible to take any pair of discs in an orbit to calculate this speed. The duration is then known. On the other hand, measuring the distance can pose a problem because the exact path between two discs is unknown (it is not a straight line), and the position should be taken as the centre of the discs, which is not always marked. There is, therefore, uncertainty in measuring the distance and calculating the velocity. In the case of an orbit, the mean linear velocity is often calculated as the perimeter divided by the period. Instantaneous linear velocity is the mathematical interpretation of velocity at a given moment. It is calculated as a limit of the average speed as the time interval (or distance) tends towards zero. It therefore cannot be calculated this way on the Human Orrery. Assuming that the linear velocities of the planets are constant (which would be the case for circular orbits), the instantaneous linear velocity corresponds to the slope of the graph associating distance and time. This graph can be constructed on the Orrery using several pairs of discs in the same orbit (as proposed in the session on velocity). Angular velocity: to define this velocity, one needs to agree on a central point from which to measure an angle. This will logically be the Sun. The quantity observed is then the angle defined by the two half-lines [Sun, starting position] and [Sun, arrival position].
The angular velocity equals the ratio between this angle and the duration between the departure and arrival times. In the case of an orbit, the mean angular velocity is often calculated as 360° divided by the period. Finally, the speed felt by the body is probably instantaneous, but it is not obvious whether it is linear or angular, or whether it is a comparison of speeds between two people or between a person and the Sun. The trio (speed, distance, duration) can therefore be associated with (linear speed, perimeter, period) or with (angular speed, complete revolution of 360°, period). The student’s reasoning will refer to one or other of these concepts of speed, and they will make comparisons by fixing one of the three parameters (this is known as causal linear reasoning). All these difficulties suggest that the speeds should not be described too early in the use of the Orrery and that a separate session, along with some “preparatory or warm-up” exercises, should be devoted to them!
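As a worked example of the two mean speeds, take Earth's orbit approximated as a circle of radius 1 au (rounded constants; a sketch, not Orrery-specific values):

```python
import math

AU_KM = 149.6e6        # mean Earth-Sun distance in km (rounded)
PERIOD_DAYS = 365.25   # orbital period in days

# Mean linear speed: perimeter of the orbit divided by the period
perimeter_km = 2 * math.pi * AU_KM
linear_kms = perimeter_km / (PERIOD_DAYS * 86400)  # km per second

# Mean angular speed: one full revolution divided by the period
angular_deg_per_day = 360.0 / PERIOD_DAYS

print(f"mean linear speed  ~ {linear_kms:.1f} km/s")              # ~ 29.8 km/s
print(f"mean angular speed ~ {angular_deg_per_day:.3f} deg/day")  # ~ 0.986 deg/day
```

The same two ratios can be formed from any pair of discs on the Orrery, with the caveats about distance measurement noted above.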
The cut locus of a C^2 surface in the Heisenberg group Socionovo, Alessandro. The cut locus of a C^2 surface in the Heisenberg group. [Laurea magistrale], Università di Bologna, Corso di Studio in Matematica [LM-DM270] My master thesis deals with some fine properties of the natural Carnot-Carathéodory distance in the Heisenberg group. In particular, we are interested in properties related to the cut locus of a smooth surface (for us, the term smooth indicates a second order differentiability). The cut locus of a closed subset S of the n-dimensional Euclidean space, denoted by cut(S), is the set containing the endpoints of maximal segments that minimize the distance to S. In the Euclidean case many properties of the cut locus are well known if S is the smooth boundary of an open set. For example the cut loci of such surfaces are closed (this is the fact we will be most interested in). Such results are still valid if the Euclidean metric is replaced with any smooth Riemannian metric. Not all the known properties of the cut locus in Riemannian geometry are also known to hold in the sub-Riemannian case. So our goal is to generalize and prove some of them in the Heisenberg group, which has the simplest sub-Riemannian structure. In particular we are very interested in proving that the cut loci of surfaces which are the smooth boundary of open sets in the Heisenberg group are closed sets. At the moment we are not able to give a proof of the closure of the cut locus.
So we are looking for new properties of the Carnot distance which may be related with the cut locus of a smooth surface in the Heisenberg group and that may be useful to prove its closure. Precisely, we are investigating points conjugate to the surface S, since they are strictly connected with the cut locus in the Riemannian case. Just about conjugate points, we will show some new small results at the end of this work.
fractional delay filter for non real time applications 4 years ago ●14 replies● latest reply 4 years ago 402 views Hi guys, I would like to create a fractional delay filter that can be applied to a signal for non-real-time applications. My approach is to apply a linear phase shift via multiplication with an ideal filter (rectangular window) with linear phase in the frequency domain. In the next step, I transform the signal back to the time domain using a symmetric ifft to obtain a real-valued signal. The Matlab code looks like this:

N = 1e3;                     % number of samples
n0 = 0.3;                    % fractional delay in samples
x = randn(1,N);              % time domain signal
X = fft(x);                  % frequency domain
k = 0:N-1;                   % frequency index vector
Y = X.*exp(-1i*2*pi*k/N*n0); % linear phase shift wrt frequency
y = ifft(Y,'symmetric');     % back to time domain

Due to the rectangular window of the filter I will get ringing artifacts at the frequency boundaries. Nevertheless, is this a valid approach to delay a signal by a fractional number of samples?
[ - ] Reply by ●June 22, 2020
The ringing you refer to is because your approach creates a filter frequency response which has a discontinuity. I'm using the word discontinuity loosely because we are in sampled time, but the point is that your linear phase changes abruptly at the transition between the highest frequency and the zero frequency. For a non-fractional delay, the discontinuity is a multiple of 2*pi, which is OK since the phase is an angle and angles are normally mod 2*pi. Most people would place that discontinuity at the half-sampling frequency rather than at zero, but I'm not sure that isn't just a matter of taste. But a discontinuity in the frequency domain always leads to a Gibbs phenomenon in the time domain, and to avoid that you should probably apply a time domain window. That means that the signal you are delaying is the windowed signal.
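For readers following along without Matlab, here is a rough NumPy port of the same idea (an illustrative sketch, not the poster's exact code). Using signed bin indices places the phase break at the half-sampling frequency, as the reply suggests, so a real input stays essentially real; for integer delays the method reduces to an exact circular shift.

```python
import numpy as np

def frac_delay_fft(x, n0):
    """Circularly delay x by n0 samples (possibly fractional) using a
    frequency-domain linear phase ramp with the break at Nyquist."""
    N = len(x)
    k = np.fft.fftfreq(N) * N  # signed bins: 0..N/2-1, then -N/2..-1
    Y = np.fft.fft(x) * np.exp(-2j * np.pi * k * n0 / N)
    # For fractional n0 and even N, the Nyquist bin's phase slightly breaks
    # Hermitian symmetry; taking the real part discards that residue.
    return np.fft.ifft(Y).real

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
y_int = frac_delay_fft(x, 3.0)   # integer delay: an exact circular shift
y_frac = frac_delay_fft(x, 0.3)  # fractional delay, subject to the ringing discussed
```

Note the delay is circular: samples shifted past the end wrap around, which is the "fast convolution" caveat raised later in the thread.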
[ - ] Reply by ●June 22, 2020
Does this mean that the approach I showed leads to the Gibbs phenomenon but yields perfect linear phase over the whole band with no group delay error? For my application I need a very precise delay filter (error < 10⁻⁴ relative to the sample interval). The amplitude response isn't that important to me. Does this make sense?
[ - ] Reply by ●June 22, 2020
I think you have to express your requirement more carefully. What exactly is a precise delay filter? If the input and the output of the filter have the same sampling rate then you need a precise definition of what you mean by a delay. Let's do a thought experiment. Say you have a random number sequence X that is a million samples long, where the assumed sampling interval is 1 microsecond. Now select every ten-thousandth sample, samples 0, 10000, 20000, ..., and store it in another file Y. Imagine some process that creates a 100 sample output file Z which duplicates a different set of samples, identical to the original million sample sequence X at samples 5, 10005, 20005, ... Clearly you can't expect to do that with truly random samples. So instead, start the million sample sequence with a constrained set. One way to constrain it would be to take X's DFT and zero out most of the samples, keeping only DFT bins 0 to 49 and their mirror values -1 to -49 (mod 1 million). The inverse FFT is a million sample sequence X' with only frequency components between +/- 49 Hz, so it can be sampled with 100 samples without violating the Nyquist sampling criterion. In other words it should be possible to recover the whole set of a million samples of X' from 100 samples Y'. You just need to put the 100 samples into a million spaces, with all the other 999900 spaces having zero, and then convolve it with h(n) = (sin Nn)/(sin n). The result of the convolution is a million sample sequence Z' which matches X' everywhere.
But you don't want it everywhere, so you don't need to compute all the points of the convolution everywhere, only at points 5, 10005, 20005, etc. That's your baseline reference for what you mean by a delay. Now I think it is fair to assume that your original X and its reduced bandwidth X' were both real sequences. Real sequences have Fourier transforms with real part even and imaginary part odd. The scheme you proposed will give Z' with some imaginary part. That, at a minimum, represents an error introduced by your technique. By Parseval's theorem the total energy of the output is the total energy of the input, so there must be another part of the error in the real part. How would you modify your scheme to eliminate any imaginary part? You have to multiply the transform by the transform of another real sequence, which has the required symmetry. In other words, let the linear phase function have its break at the folding frequency rather than at frequency 0. That eliminates the imaginary part, which is all error. But it's not going to reproduce Z' at the desired 100 points. To do that you need to preserve ALL the frequency components, and the Gibbs phenomenon will produce ringing which is strongest near the phase discontinuity at 50 Hz.
[ - ] Reply by ●June 22, 2020
You need to zero-pad your x signal before the FFT. Delaying the signal by linear phase-shifting the spectrum is exactly an example of using the FFT to do what we sometimes call "fast convolution". Now this assumes non-real-time. For real-time, if you have memory for a large table, I might recommend using a Kaiser-windowed sinc() function and then linear interpolation between the subsamples. Farrow is okay for what it is, but it is limited to Lagrange interpolation, which is a specific polynomial-based interpolation.
[ - ] Reply by ●June 22, 2020
This isn't accurate. The Farrow structure is generic and can implement any polynomial type and order.
The coefficients determine which polynomial is implemented: Lagrange, Hermite, spline, or even custom-designed to meet given time domain or frequency specifications.
[ - ] Reply by ●June 22, 2020
You are absolutely correct. I was reverberating what I read at https://ccrma.stanford.edu/~jos/Interpolation/Farrow_Interpolation_Features.html and I didn't drill in enough to see that it could be any polynomial. But it does have to be a polynomial-based interpolation, and I am not sure how easily you can jump over integer sample boundaries using this Farrow structure. As for me, since all of the applications in which I have ever done fractional delay could easily provide an 8K word lookup table, I just designed the bestest brickwall FIR filter I could with 32 taps and 512 phases (or fractional delays) and use linear interpolation between adjacent fractional delays. Reasonably efficient (64 MACs plus one more for linear interpolation) and clean as a whistle. And I could jump to whatever precision delay (as long as it was at least 15 samples) I wanted to every single output sample.
[ - ] Reply by ●June 22, 2020
Farrow is just a generic structure for implementing polynomials (including linear interp). Now the frequency response of a polynomial-based filter doesn't improve much for order >= 3, so adding a fixed filter prior to Farrow is a good idea instead of cranking up the order. From memory, a good fixed filter up by 128x followed by linear interp was about the same as a good 8x followed by cubic (100 dB-ish image/alias attenuation). Looks like this is in line with what you are doing. What is best mostly depends on the hardware.
[ - ] Reply by ●June 22, 2020
That's interesting that Farrow has that limitation. Everyone calls out Farrow anytime decimation/interp factors are impractical or some other reason makes fractional resampling necessary. It's like a reflex. I didn't hear about any drawbacks until now. Thanks.
[ - ] Reply by ●June 22, 2020
As others have said, you can do it in the time domain.
You could use a Farrow interpolator, or for better accuracy of the amplitude response, you could use a fractional-delay FIR. See my article on Fractional Delay FIR at:
[ - ] Reply by ●June 22, 2020
Thanks Neil, that is very valuable for me! I checked out your scripts and had a look at the group delay error (magnitude of input group delay minus output group delay). The error lies at around 10⁻³ for a fractional delay of 0.123. The error increases with increasing fractional delay. Do you know how I could create a fractional delay filter that has an error below 10⁻⁴?
[ - ] Reply by ●June 22, 2020
Hi JGruber, You can try changing the dB ripple parameter "r" of the Chebyshev window in my function. It is set at 70 dB -- you can improve accuracy by using a higher value, e.g.: win = chebwin(ntaps,80); For a given FIR order, as you increase r, the delay ripple decreases at the expense of reduced bandwidth of the filter. Another option would be to use a Farrow interpolator (piecewise polynomial interpolation), which would give flat, accurate group delay, although the amplitude response is not perfectly flat.
[ - ] Reply by ●June 22, 2020
Thank you very much Neil.
[ - ] Reply by ●June 22, 2020
fft then ifft is very expensive for that. I would just use a Farrow resampler.
[ - ] Reply by ●June 22, 2020
If you need to vary the fractional delay in real time, a Farrow resampler is a good choice. If you can afford to recompute the coefficients for each different fractional delay, a sinc interpolation filter could also work (essentially, the time-domain equivalent of your FFT approach).
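The windowed-sinc fractional-delay FIR discussed in the later replies can be sketched in a few NumPy lines. This is a sketch under simplifying assumptions: a Hamming window is used here where the article mentioned above uses a Chebyshev window, and the tap count and test frequency are illustrative choices.

```python
import numpy as np

def frac_delay_fir(frac, ntaps=33):
    """Fractional-delay FIR: an ideal sinc shifted by `frac` samples, then
    windowed. The total delay is (ntaps - 1) / 2 + frac samples."""
    center = (ntaps - 1) / 2
    h = np.sinc(np.arange(ntaps) - center - frac) * np.hamming(ntaps)
    return h / h.sum()  # normalize for unity gain at DC

# frac = 0 degenerates to a pure (ntaps - 1) / 2 sample delay
h0 = frac_delay_fir(0.0)

# Delay a slow sinusoid by 16 + 0.3 samples and check accuracy mid-signal
h = frac_delay_fir(0.3)
n = np.arange(400)
y = np.convolve(np.cos(0.05 * np.pi * n), h)
```

With SciPy available, `scipy.signal.windows.chebwin(ntaps, 70)` could be swapped in for the Hamming window to get the tunable-ripple behavior described in the thread.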
Algebra tiles worksheets

Author / Message

Cbhin
Posted: Friday 18th of Sep 11:06

Guys and Gals! Ok, we’re doing algebra tiles worksheets and I was absent in my last algebra class, so I have no notes, and my teacher teaches stuff so badly that I didn’t get to understand it very well when I went to our algebra class a while ago. To make matters worse, we will have our quiz at our next meeting, so I can’t afford not to study algebra tiles worksheets. Can somebody please help me try to understand how to answer a couple of questions about algebra tiles worksheets so that I can prepare for the examination? I’m hoping that someone could help me as soon as possible.

From: Land of the IlbendF
Posted: Saturday 19th of Sep 20:07

What in particular is your difficulty with algebra tiles worksheets? Can you give some more details? One way of beating your difficulty with stumbling upon a tutor at an affordable cost is for you to go in for a suitable program. There are a range of programs in math that are to be had. Of all those that I have tried out, the best is Algebra Master. Not only does it solve the algebra problems, the good thing about it is that it explains every step in an easy-to-follow manner. This makes sure that not only do you get the correct answer but also you get taught how to get to the answer.

LifiIcPoin
Posted: Sunday 20th of Sep 11:27

Hi, just a year ago, I was stuck in a similar situation. I had even considered the option of dropping math and selecting some other course. A colleague of mine told me to give it one last try and sent me a copy of Algebra Master. I was at ease with it within a few minutes. My grades have really improved within the last month.

From: Way Way Behind
ovh
Posted: Tuesday 22nd of Sep 07:27

It’s amazing that a software can do that. I didn’t expect something like that could be helpful in math. I’m used to being taught by a tutor but this really sounds cool. Do you have any links for this program?
From: india
Hiinidam
Posted: Thursday 24th of Sep 07:29

Don’t worry, dude. Just visit https://algebra-test.com/demos.html and see all the details about it. Good luck with your studies!
Equilateral Triangle

Equilateral Triangle Calculator

Calculations for an equilateral triangle or regular trigon. This is the simplest regular polygon (a polygon with equal sides and angles). Enter one value and choose the number of decimal places. Then click Calculate.

h = √3 / 2 * a
p = 3 * a
A = a² * √3 / 4
r[c] = √3 / 3 * a
r[i] = √3 / 6 * a
Angle: 60°
0 diagonals

Length, height, perimeter and radius have the same unit (e.g. meter); the area has this unit squared (e.g. square meter).

Heights, bisecting lines, median lines, perpendicular bisectors and symmetry axes coincide, so the equilateral triangle is axially symmetric with respect to these lines. They meet with the centroid, circumcircle center and incircle center in one point. The equilateral triangle is also rotationally symmetric under a rotation of 120° or multiples of it, so it has rotational symmetry of order 3.

Formulas: perimeter p, area A; heights h[a], h[b], h[c]; incircle and circumcircle; angles and bisecting lines; median lines; perpendicular bisectors

The equilateral or regular triangle has three angles of 60 degrees each. The root of 3 appears in its height, incircle and circumcircle radii as well as its area, so all of these values are irrational. The equilateral triangle forms the faces of three of the five Platonic solids: tetrahedron, octahedron and icosahedron. With equilateral triangles, the plane can be tiled without any gaps, without creating a right angle at any point. Regular polygons get closer and closer to a circle as the number of corners increases. The equilateral triangle has the fewest corners of all these polygons, so it can be understood as the regular shape that differs most from the circle. Since circular objects can roll, and polygonal shapes roll better as the number of corners increases, shapes based on the equilateral triangle are the regular objects that resist rolling the most and therefore offer the best protection against unwanted rolling.

© Jumk.de Webprojects | Online Calculators
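The formulas above translate directly into code; a small sketch (function and key names are mine):

```python
import math

# Properties of an equilateral triangle from its side length a,
# mirroring the formulas h, p, A, r[c], r[i] above.
def equilateral(a):
    return {
        "height": math.sqrt(3) / 2 * a,
        "perimeter": 3 * a,
        "area": a * a * math.sqrt(3) / 4,
        "circumradius": math.sqrt(3) / 3 * a,
        "inradius": math.sqrt(3) / 6 * a,
    }
```

Note that the circumradius is always exactly twice the inradius, since √3/3 = 2 · √3/6 — a consequence of the coinciding center points described above.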
Computational Chemistry Highlights

Eugene E. Kwan and Richard Y. Liu (2015)
DOI 10.1021/acs.jctc.5b00856

This paper presents a novel way of computing vibrational effects on chemical shifts. Existing methods generally compute such effects by displacing the coordinates along the normal modes to create PESs, computing the vibrational wavefunctions, computing chemical shifts for the displaced coordinates, and computing the chemical shifts as expectation values. However, it can be challenging to use this approach on low-frequency modes, since they require relatively large displacements, which tend to distort the molecule in unphysical ways. This problem can, in part, be solved by using internal coordinates, but results can vary greatly depending on which internal coordinates one chooses. Kwan and Liu propose instead to perform a short (125 fs) B3LYP/MIDI! quasi-classical MD simulation "along" each normal mode and then compute an average chemical shift based on the trajectory. The MD simulation is "quasi-classical" in the sense that it is initialized based on the vibrational harmonic oscillator energy levels. The energy level is randomly selected from a Boltzmann distribution, and 25 trajectories were found to be sufficient. Since the forces are computed at each point and used to determine the next displacement point, unphysical distortions of the molecule are avoided. Another difference is that the chemical shifts are computed using a higher level of theory (B3LYP/cc-pVDZ) than that used to construct the PES. The various choices for method, basis set, simulation length, number of trajectories, etc. are tested extensively in the supplementary materials, which also contain a more thorough description of the steps in the algorithm (page 96).
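The "quasi-classical" initialization can be illustrated with a toy sketch. This is not the authors' code; it only shows how a vibrational quantum number might be drawn from a Boltzmann distribution for one harmonic mode (the zero-point offset cancels in the Boltzmann ratios, leaving a geometric distribution over n).

```python
import math
import random

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s

def sample_level(freq_hz, temp_k, rng=random):
    """Sample a harmonic-oscillator quantum number n with P(n) ∝ exp(-n h nu / k T).

    E_n = (n + 1/2) h nu, so the level populations follow a geometric
    distribution with ratio q = exp(-h nu / k T).
    """
    q = math.exp(-H * freq_hz / (K_B * temp_k))
    u = rng.random()
    n = 0
    cdf = 1 - q          # P(0) = 1 - q
    while u > cdf:
        n += 1
        cdf += (1 - q) * q ** n
    return n
```

For a stiff mode (say 10^14 Hz, well above kT at 300 K) essentially every draw gives n = 0, while soft, low-frequency modes at the same temperature populate excited levels — which is exactly why the low-frequency modes carry most of the thermal/anharmonic action discussed above.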
Here the authors also state that they "are in the process of developing a user-friendly package for carrying out these calculations, which will be reported in due course."

Compared to the previously published methods for computing vibrational corrections, this method is significantly more expensive in terms of energy and gradient evaluations. However, I wonder if the methods can be combined, such that this method is used only for the lower-frequency modes that can be problematic for the "displacement" methods. I also wonder if the method can be adapted to compute the anharmonic corrections to the enthalpy and entropy, which also can prove challenging for displacement-based methods.

Liu, J.; Ravat, P.; Wagner, M.; Baumgarten, M.; Feng, X.; Müllen, K. Angew. Chem. Int. Ed. 2015, 54, 12442-12446
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

Feng, Müllen and co-workers have prepared a bistetracene analogue 1.^1 This molecule displays some interesting features. While a closed-shell Kekule structure can be written, a biradical structure results in more closed Clar rings, suggesting that perhaps the molecule is a ground-state singlet biradical. The loss of NMR signals with increasing temperature, along with an EPR signal that increases with temperature, both support the notion of a ground-state singlet biradical with a triplet excited state. The EPR measurement suggests a singlet-triplet gap of 3.4 kcal mol^-1. The optimized B3LYP/6-31G(d,p) geometries of the biradical singlet and triplet states are shown in Figure 1. The singlet is lower in energy by 6.7 kcal mol^-1. The largest spin densities are on the carbons that carry the lone electron within the diradical-type Kekule structures.

Figure 1. B3LYP/6-31G(d,p) optimized geometries of the biradical singlet and triplet states of 1.

(1) Liu, J.; Ravat, P.; Wagner, M.; Baumgarten, M.; Feng, X.; Müllen, K.
"Tetrabenzo[a,f,j,o]perylene: A Polycyclic Aromatic Hydrocarbon With An Open-Shell Singlet Biradical Ground State," Angew. Chem. Int. Ed. 2015, 54, 12442-12446, DOI: 10.1002/anie.201502657.

1: InChI=1S/C62H56/c1-33-25-35(3)51(36(4)26-33)53-45-17-13-15-19-47(45)57-56-44-24-22-42(62(10,11)12)30-40(44)32-50-54(52-37(5)27-34(2)28-38(52)6)46-18-14-16-20-48(46)58(60(50)56)55-43-23-21-41(61

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Törk, L.; Jiménez-Osés, G.; Doubleday, C.; Liu, F.; Houk, K. N. J. Am. Chem. Soc. 2015, 137, 4749-4758
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

Houk and Doubleday report yet another example of dynamic effects in reactions that appear to be simple, ordinary organic reactions.^1 Here they look at the Diels-Alder reaction of tetrazine 1 with cyclopropene 2. The reaction proceeds by first crossing the Diels-Alder transition state 3 to form the intermediate 4. This intermediate can then lose the anti or syn N[2], through 5a or 5s, to form the product 6. The structures and relative energies, computed at M06-2X/6-31G(d), of these species are shown in Figure 1.

Figure 1. M06-2X/6-31G(d) optimized geometries and energies (relative to 1 + 2) of the critical points along the reaction of tetrazine with cyclopropene. (Relative energies in kcal mol^-1: 3: 17.4; 4: -33.2; 5a: -28.9; 5s: -20.0.)

The large difference in the activation barriers between crossing 5a and 5s (nearly 9 kcal mol^-1) suggests, by transition state theory, a preference of more than a million for loss of the anti N[2] over the syn N[2]. However, quasiclassical trajectory studies, using B3LYP/6-31G(d), find a different situation. The anti pathway is preferred, but only by a 4:1 ratio! This dynamic effect arises from a coupling of the v[3] mode, which involves a rocking of the cyclopropane ring that brings a proton near the syn N[2] functionality, promoting its ejection.
In addition, the trajectory studies find short residence times within the intermediate neighborhood for the trajectories that lead to the anti product, and longer residence times for the trajectories that lead to the syn product. All together, a very nice example of dynamic effects playing a significant role in a seemingly straightforward organic reaction.

(1) Törk, L.; Jiménez-Osés, G.; Doubleday, C.; Liu, F.; Houk, K. N. "Molecular Dynamics of the Diels–Alder Reactions of Tetrazines with Alkenes and N[2] Extrusions from Adducts," J. Am. Chem. Soc. 2015, 137, 4749-4758, DOI: 10.1021/jacs.5b00014.

1: InChI=1S/C2H2N4/c1-3-5-2-6-4-1/h1-2H
2: InChI=1S/C3H4/c1-2-3-1/h1-2H,3H2
4: InChI=1S/C5H6N4/c1-2-3(1)5-8-6-4(2)7-9-5/h2-5H,1H2
6: InChI=1S/C5H6N2/c1-4-2-6-7-3-5(1)4/h2-5H,1H2

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Chen, S.; Yamasaki, M.; Polen, S.; Gallucci, J.; Hadad, C. M.; Badjić, J. D. J. Am. Chem. Soc. 2015, 137, 12276
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

Badjić, Hadad, and coworkers have prepared^1 an interesting host molecule that appears like two cups joined at the base, with one cup pointed up and the other pointed down. A slightly simplified analogue 1 of the synthesized host is shown in Figure 1. The actual host is found to bind one molecule of 2, but does not appear to bind a second molecule. Seemingly, only one of the cups can bind a guest, and this somehow deters a second guest from being bound into the other cup.

Figure 1. B3LYP/6-31G* optimized geometry of host molecule 1. (Visualization of this molecule and the structures below is greatly enhanced by clicking on each image, which will invoke the molecular viewer Jmol.)

To address this negative allosterism, the authors optimized the structure of 1 at B3LYP/6-31G* (shown in Figure 1).
They then optimized the geometry with the constraint that the three arms in the top cup were moved progressively inward. This had the consequent effect of moving the three arms of the bottom cup farther apart. They next optimized (at M06-2X/6-31G(d)) the structures of 1 holding one molecule of guest 2 and with two molecules of guest 2. These structures are shown in Figure 2. In the structure with one guest, the arms are brought in towards the guest in the cup where the guest is bound, and this consequently draws the arms in the other cup farther apart, making it less capable of binding a second guest. The structure with two guests shows that the arms are not able to get sufficiently close to either guest to form strong non-covalent interactions.

Figure 2. M06-2X/6-31G(d) optimized structures of 1 with one or two molecules of 2.

Thus, the negative allosterism results from a geometric change created by the induced fit of the first guest that results in an unfavorable environment for a second guest.

(1) Chen, S.; Yamasaki, M.; Polen, S.; Gallucci, J.; Hadad, C. M.; Badjić, J. D. "Dual-Cavity Basket Promotes Encapsulation in Water in an Allosteric Fashion," J. Am. Chem. Soc. 2015, 137, 12276-12281, DOI: 10.1021/jacs.5b06041.

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Sung, Y. M.; Oh, J.; Kim, W.; Mori, H.; Osuka, A.; Kim, D. J. Am. Chem. Soc. 2015, 137, 11856
Contributed by Steven Bachrach
Reposted from Computational Organic Chemistry with permission

What is the relationship between a ground state and the first excited triplet (or first excited singlet) state regarding aromaticity? Baird^1 argued that there is a reversal, meaning that a ground-state aromatic compound is antiaromatic in its lowest triplet state, and vice versa. It is suggested that the same reversal is also true for the second singlet (excited singlet) state.
Osuka, Kim and coworkers have examined the geometrically constrained hexaphyrins 1 and 2.^2 1 has 26 electrons in the annulene system and thus should be aromatic in the ground state, while 2, with 28 electrons in its annulene system, should be antiaromatic. The ground-state and lowest-triplet structures, optimized at B3LYP/6-31G(d,p), of each of them are shown in Figure 1.

Figure 1. B3LYP/6-31G(d,p) optimized geometries of the singlet and triplet states of 1 and 2.

NICS computations were made in the centers of each of the two rings formed by the large macrocycle and the bridging phenyl group (sort of in the centers of the two lenses of the eyeglasses). The NICS values for 1 are about -15 ppm, indicative of aromatic character, while they are about +15 ppm for 2, indicative of antiaromatic character. However, for the triplet states, the NICS values change sign, showing the aromatic character reversal between the ground and excited triplet states. The aromatic states are also closer to planarity than the antiaromatic states (which can be seen by clicking on the images in Figure 1, which will launch the JMol applet so that you can rotate the molecular images). They also performed some spectroscopic studies that support the notion of aromatic character reversal in the excited singlet state.

(1) Baird, N. C. "Quantum organic photochemistry. II. Resonance and aromaticity in the lowest ^3ππ* state of cyclic hydrocarbons," J. Am. Chem. Soc. 1972, 94, 4941-4948, DOI: 10.1021/ja00769a025.

(2) Sung, Y. M.; Oh, J.; Kim, W.; Mori, H.; Osuka, A.; Kim, D. "Switching between Aromatic and Antiaromatic 1,3-Phenylene-Strapped [26]- and [28]Hexaphyrins upon Passage to the Singlet Excited State," J. Am. Chem. Soc. 2015, 137, 11856-11859, DOI: 10.1021/jacs.5b04047.
1: InChI=1S/C60H18F20N6/c61-41-37(42(62)50(70)57(77)49(41)69)33-23-8-4-19(81-23)31-17-2-1-3-18(16-17)32(21-6-10-25(83-21)35(29-14-12-27(33)85-29)39-45(65)53(73)59(79)54(74)46(39)66)22-7-11-26(84-22)
2: InChI=1S/C60H20F20N6/c61-41-37(42(62)50(70)57(77)49(41)69)33-23-8-4-19(81-23)31-17-2-1-3-18(16-17)32(21-6-10-25(83-21)35(29-14-12-27(33)85-29)39-45(65)53(73)59(79)54(74)46(39)66)22-7-11-26(84-22)

This work is licensed under a Creative Commons Attribution-NoDerivs 3.0 Unported License.

Albrecht Goez and Johannes Neugebauer
J. Chem. Theory Comput., Article ASAP, DOI: 10.1021/acs.jctc.5b00832
Contributed by Christoph Jacob

Fragment-based methods nowadays make it possible to perform quantum-chemical calculations for rather large biomolecules, for instance light-harvesting protein systems [1]. Such methods are based on the idea of splitting a protein into smaller fragments, such as its constituent amino acids. This leads to a linear scaling of the computational effort with the size of the protein. Popular examples of fragment-based electronic structure methods include the fragment molecular orbital (FMO) method [2] and a generalization of the frozen-density embedding scheme (3-FDE) [3].

In a recent article in JCTC, Goez and Neugebauer from the University of Münster (Germany) address an additional bottleneck that appears in such calculations. Usually, it is necessary to include a solvent environment in the calculations, in particular if charged amino-acid side chains are present. The simplest way of doing so is to use continuum solvation models, such as COSMO or PCM. These models represent the solvent in terms of apparent charges on the surface of a cavity enclosing the protein. However, for proteins the number of apparent surface charges becomes rather large - for ubiquitin, a protein with only 78 amino acids, already 20,000 charges are needed. Updating these apparent surface charges involves solving a linear system of equations of size 20,000 x 20,000.
When doing so in each SCF cycle for each of the fragments, the continuum solvation model becomes the bottleneck of the calculation. To solve this problem, Goez and Neugebauer developed a local variant of the COSMO model (LocCOSMO). In each fragment calculation, they update only those apparent surface charges that are close to this fragment. This reduces the computational effort significantly, but because every fragment is updated at some point, it eventually yields the same final result. This is demonstrated by the authors for several test cases. They can reduce the computational time required for a 3-FDE calculation of ubiquitin in a solvent environment by a factor of 30, without compromising the quality of the result.

The efficient combination of fragment-based quantum chemistry with continuum solvation models provides an important tool for studies of biomolecules. It will make such calculations more robust by alleviating convergence problems for charged amino acids and will allow for a more realistic inclusion of protein environments in studies of spectroscopic properties of chromophores in biomolecular systems.

[1] A. Goez, Ch. R. Jacob, J. Neugebauer, "Modeling environment effects on pigment site energies: Frozen density embedding with fully quantum-chemical protein densities", Comput. Theor. Chem. 1040–1041, 347–359 (2014).
[2] D. G. Fedorov, K. Kitaura, "Extending the Power of Quantum Chemistry to Large Systems with the Fragment Molecular Orbital Method", J. Phys. Chem. A 111, 6904–6914 (2007).
[3] Ch. R. Jacob, L. Visscher, "A subsystem density-functional theory approach for the quantum chemical treatment of proteins", J. Chem. Phys. 128, 155102 (2008).
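The LocCOSMO idea of refreshing only nearby surface charges can be caricatured in a few lines. This is purely illustrative and not from the paper; `update_charge` stands in for the actual COSMO equations, which are far more involved.

```python
import math

def dist(p, q):
    """Euclidean distance between two 3D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def local_update(surface_points, charges, fragment_atoms, cutoff, update_charge):
    """Refresh only the apparent surface charges within `cutoff` of the
    active fragment's atoms; all other charges keep their previous values."""
    new_charges = list(charges)
    for i, pt in enumerate(surface_points):
        if any(dist(pt, atom) <= cutoff for atom in fragment_atoms):
            new_charges[i] = update_charge(i, pt)
    return new_charges
```

Because each fragment gets its turn as the active fragment, every surface charge is eventually refreshed, which is the intuition behind LocCOSMO converging to the same final answer as the full update.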
Stoichiometry Part I: Tackling standard A Level / Prelims examination questions

Stoichiometry and the mole concept are covered at O level and are revisited during the first few months of Junior College (JC) Chemistry. Therefore, many students usually assume that this topic is easy and thus do not spend much time revising it for the A-level Chemistry paper. Contrary to what many think, some Chemistry exam questions that came out in past-year A-level or preliminary examinations are not so easy to solve. We visit one such question here.

H2 Chemistry A levels 2010 Paper 3

Alcohol J, C[x]H[y]OH, is a volatile fungal metabolite whose presence, when detected in air, can indicate hidden fungal attack on the timbers of a house. When 0.1 cm^3 of liquid J was dissolved in an inert solvent and an excess of sodium metal added, 10.9 cm^3 of gas (measured at 298 K) was produced. When 0.1 cm^3 of liquid J was combusted in an excess of oxygen in an enclosed vessel, the volume of gas (measured at 298 K) was reduced by 54.4 cm^3. The addition of an excess of NaOH(aq) caused a further reduction in gas volume of 109 cm^3 (measured at 298 K). Use these data to calculate values for x and y in the molecular formula C[x]H[y]OH for J.

There are 2 ways to solve this question. One of the methods is presented below:

Method 1:

There are two reactions in this question:
A. Reaction of the alcohol J with sodium.
B. Reaction of the alcohol J with oxygen.

Reaction A: C[x]H[y]OH + Na → C[x]H[y]ONa + ½H[2]

Moles of H[2] formed at RTP = 10.9 / 24000 = 4.542 x 10^-4 mol (taking the molar gas volume at 298 K as 24000 cm^3)
Thus, the moles of C[x]H[y]OH reacted = 2(4.542 x 10^-4) = 9.083 x 10^-4 mol

Since the amounts of alcohol J used in reactions A and B are the same, 9.083 x 10^-4 mol of alcohol J was used in Reaction B as well.

Reaction B: C[x]H[y]OH (l) + (x + (y+1)/4 − ½) O[2] (g) → x CO[2] (g) + ((y+1)/2) H[2]O (l)

Note that the question mentioned that O[2] is in excess.
Thus at the end of the reaction, the gases left at RTP are CO[2] and excess O[2].

Since the volume of the gas was reduced by 54.4 cm^3:
Initial volume of O[2] − (volume of O[2] left + volume of CO[2] formed) = 54.4 cm^3 (THIS IS THE PART MANY STUDENTS DO NOT KNOW HOW TO INTERPRET!)
⇒ Volume of O[2] reacted − volume of CO[2] formed = 54.4 cm^3 ... (1)

Volume reduction after adding NaOH = 109 cm^3. Since only the CO[2] in the final mixture reacts with NaOH, volume of CO[2] formed = 109 cm^3.

Substituting in equation (1): Volume of O[2] reacted = 109 + 54.4 = 163.4 cm^3

What we have gotten so far:
Moles of C[x]H[y]OH reacted = 9.083 x 10^-4
Moles of O[2] reacted = 163.4 / 24000 = 6.808 x 10^-3
Moles of CO[2] formed = 109 / 24000 = 4.541 x 10^-3

Moles of C[x]H[y]OH : moles of CO[2] = 1 : x
9.083 x 10^-4 : 4.541 x 10^-3
x = 5

Moles of C[x]H[y]OH : moles of O[2] = 1 : (x + (y+1)/4 − ½)
9.083 x 10^-4 : 6.808 x 10^-3
So x + (y+1)/4 − ½ = 7.495; doubling both sides, 2(5) + (y+1)/2 = 2(7.495) + 1
y = 11

Thus J is C[5]H[11]OH.

Points of discussion:
a. Some students will guess what the alcohol J is after finding out that x = 5. A natural answer is y = 11, which gives the formula of pentanol. However, take note that this is very dangerous, and y should be determined mathematically, since y can be less than 11 if:
I. The alcohol is unsaturated
II. The alcohol is alicyclic
III. The alcohol is both alicyclic and unsaturated

Method 2

Question: Can we use volume ratios to solve the question, without once determining the number of moles of any reactant or product?
Answer: Of course we can!!
1. Avogadro's law states that equal volumes of gases (regardless of the type of gas) at the same temperature and pressure contain the same number of particles (or moles!!).
2. But how about alcohol J? Is its volume of 0.1 cm^3 also proportional to the volume of H[2] formed in Reaction A and the CO[2] formed (and O[2] used) in Reaction B? Is it not a liquid?

For more insights into the 2nd method of solving this question without, even once, needing to calculate moles, find out more at www.FocusChemistry.com NOW!!
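For readers who want to check the arithmetic of Method 1, here is a small script (names are mine; it assumes a molar gas volume of 24000 cm^3 at 298 K, consistent with the working above):

```python
# Reproducing Method 1 numerically.
MOLAR_VOL = 24000.0  # molar gas volume at RTP (298 K) in cm^3/mol

def solve_alcohol(v_h2, v_reduction, v_co2):
    """Return (x, y) for C[x]H[y]OH from the three gas volumes in cm^3."""
    n_alcohol = 2 * v_h2 / MOLAR_VOL          # 1 alcohol releases 1/2 H2 with Na
    n_co2 = v_co2 / MOLAR_VOL                 # CO2 absorbed by NaOH
    n_o2 = (v_reduction + v_co2) / MOLAR_VOL  # O2 reacted - CO2 formed = reduction
    x = n_co2 / n_alcohol                     # 1 alcohol gives x CO2
    # O2 coefficient is x + (y+1)/4 - 1/2, so:
    y = 4 * (n_o2 / n_alcohol - x + 0.5) - 1
    return round(x), round(y)
```

Feeding in the question's data, 10.9, 54.4 and 109 cm^3, recovers x = 5 and y = 11 as derived above.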
Which of the following about a normal distribution is true?

- The sample mean equals the population mean if the sample and the population are both normally distributed
- The area under the curve to the right of the mean is .50
- It is a classical probability distribution where each outcome is equally likely
- The smaller the standard deviation, the flatter the curve will be

Answer:

"The area under the curve to the right of the mean is .50" is true. The mean (the perpendicular line down the center of the curve) of the normal distribution divides the curve in half, so that 50% of the area under the curve is to the right of the mean and 50% is to the left.

The other statements are false:

"It is a classical probability distribution where each outcome is equally likely" — false. That describes a uniform (classical) distribution; in a normal distribution, outcomes near the mean are more likely than outcomes in the tails.

"The sample mean equals the population mean if the sample and the population are both normally distributed" — false in general. The sample mean is only an estimate of the population mean; it equals it in expectation, not in every sample.

"The smaller the standard deviation, the flatter the curve will be" — false. A smaller standard deviation indicates that the data are concentrated around the mean, so the curve is taller and narrower; a larger standard deviation spreads the data out around the mean and makes the curve flatter and wider.
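The 50/50 split about the mean can be checked numerically with the standard normal CDF, and the effect of the standard deviation on the peak height with the PDF (a small illustration, not part of the original answer):

```python
import math

# Normal CDF via math.erf: normal_cdf(mu, mu, sigma) = 0.5 expresses the
# fact that half of the area under the curve lies on each side of the mean.
def normal_cdf(x, mu=0.0, sigma=1.0):
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

# Normal PDF: a smaller sigma gives a taller, narrower peak, not a flatter one.
def normal_pdf(x, mu=0.0, sigma=1.0):
    z = (x - mu) / sigma
    return math.exp(-z * z / 2) / (sigma * math.sqrt(2 * math.pi))
```

For example, normal_cdf(0.0) evaluates to 0.5, and comparing normal_pdf(0, 0, 0.5) with normal_pdf(0, 0, 2) shows the smaller standard deviation producing the taller peak.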
Kilobase Pair to Gigabase Pair Converter (kbp to Gbp) | Kody Tools

1 Kilobase Pair = 0.000001 Gigabase Pairs

One Kilobase Pair is Equal to How Many Gigabase Pairs?

The answer is that one Kilobase Pair is equal to 0.000001 Gigabase Pairs, which means we can also write it as 1 Kilobase Pair = 0.000001 Gigabase Pairs. Feel free to use our online unit conversion calculator to convert the unit from Kilobase Pair to Gigabase Pair. Simply enter the value 1 in Kilobase Pair and see the result in Gigabase Pair.

Manually converting Kilobase Pair to Gigabase Pair can be time-consuming, especially when you don't have enough knowledge about DNA length unit conversions. Since there is some complexity and a learning curve involved, most users end up using an online Kilobase Pair to Gigabase Pair converter tool to get the job done as soon as possible. There are many online tools available to convert Kilobase Pair to Gigabase Pair, but not every online tool gives an accurate result, and that is why we have created this online Kilobase Pair to Gigabase Pair converter tool. It is a very simple and easy-to-use tool. Most importantly, it is beginner-friendly.

How to Convert Kilobase Pair to Gigabase Pair (kbp to Gbp)

By using our Kilobase Pair to Gigabase Pair conversion tool, you know that one Kilobase Pair is equivalent to 0.000001 Gigabase Pair. Hence, to convert Kilobase Pair to Gigabase Pair, we just need to multiply the number by 0.000001. We are going to use a very simple Kilobase Pair to Gigabase Pair conversion formula for that. Please see the calculation example given below.

\(\text{1 Kilobase Pair} = 1 \times 0.000001 = \text{0.000001 Gigabase Pairs}\)

What Unit of Measure is Kilobase Pair?

Kilobase pair is a unit of measurement for DNA length. Kilobase pair is a multiple of the DNA length unit base pair. One kilobase pair is equal to 1000 base pairs.

What is the Symbol of Kilobase Pair?

The symbol of Kilobase Pair is kbp.
This means you can also write one Kilobase Pair as 1 kbp.

What Unit of Measure is Gigabase Pair?

Gigabase pair is a unit of measurement for DNA length. Gigabase pair is a multiple of the DNA length unit base pair. One gigabase pair is equal to 10^9 base pairs.

What is the Symbol of Gigabase Pair?

The symbol of Gigabase Pair is Gbp. This means you can also write one Gigabase Pair as 1 Gbp.

How to Use Kilobase Pair to Gigabase Pair Converter Tool

• As you can see, we have 2 input fields and 2 dropdowns.
• From the first dropdown, select Kilobase Pair and in the first input field, enter a value.
• From the second dropdown, select Gigabase Pair.
• Instantly, the tool will convert the value from Kilobase Pair to Gigabase Pair and display the result in the second input field.

Kilobase Pair to Gigabase Pair Conversion Table

Kilobase Pair [kbp] → Gigabase Pair [Gbp]
1 kbp = 0.000001 Gbp
2 kbp = 0.000002 Gbp
3 kbp = 0.000003 Gbp
4 kbp = 0.000004 Gbp
5 kbp = 0.000005 Gbp
6 kbp = 0.000006 Gbp
7 kbp = 0.000007 Gbp
8 kbp = 0.000008 Gbp
9 kbp = 0.000009 Gbp
10 kbp = 0.00001 Gbp
100 kbp = 0.0001 Gbp
1000 kbp = 0.001 Gbp

Kilobase Pair to Other Units Conversion Table

1 Kilobase Pair = 1000 Base Pair
1 Kilobase Pair = 0.001 Megabase Pair
1 Kilobase Pair = 0.000001 Gigabase Pair
1 Kilobase Pair = 333.33 Amino Acid
1 Kilobase Pair = 333.33 Codon
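The conversion tables above boil down to one multiplication through a common base unit; a small sketch (names are mine):

```python
# Conversion factors to base pairs, mirroring the tables above.
UNITS_IN_BP = {
    "bp": 1.0,
    "kbp": 1e3,
    "Mbp": 1e6,
    "Gbp": 1e9,
}

def convert_dna_length(value, src, dst):
    """Convert a DNA length between units, e.g. kbp -> Gbp."""
    return value * UNITS_IN_BP[src] / UNITS_IN_BP[dst]
```

So convert_dna_length(1, "kbp", "Gbp") reproduces the headline factor of 0.000001, and other rows of the table follow the same way.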
1969 Apollo 11 used these 30 lines of assembly language code to calculate transcendental functions for navigation

Posted On June 1, 2020

The successful launch of SpaceX Dragon over the weekend has sparked interest among young people and others. Many people are asking how the Apollo computers evaluated transcendental functions like sine, arctangent, and log. Thanks to NASA for sharing the source code and posting it in the public domain on GitHub. The code is part of the source code for Luminary 1A, build 099. Versions exist for the Command Module (CM) and for the Lunar Module (LM), both running on the Apollo Guidance Computer (AGC) of Apollo 11.

The AGC is a digital computer produced for the Apollo program that was installed on board each Apollo command module (CM) and Apollo Lunar Module (LM). The AGC provided computation and electronic interfaces for guidance, navigation, and control of the spacecraft. The AGC software was written in AGC assembly language and stored in rope memory. The AGC was designed at the MIT Instrumentation Laboratory under Charles Stark Draper, with hardware design led by Eldon C. Hall. The flight hardware was fabricated by Raytheon, whose Herb Thaler was also on the architectural team.

The following code was used to implement the sine and cosine functions: see here for the command module and here for the lunar module (it looks like it is the same code).

# Page 1102
                BLOCK   02
                COUNT*  $$/INTER
SPCOS           AD      HALF            # ARGUMENTS SCALED AT PI
SPSIN           TS      TEMK
                TCF     SPT
                CS      TEMK
SPT             DOUBLE
                TS      TEMK
                TCF     POLLEY
                XCH     TEMK
                INDEX   TEMK
                AD      LIMITS
                AD      TEMK
                TS      TEMK
                TCF     POLLEY
                TCF     ARG90
POLLEY          EXTEND
                MP      TEMK
                TS      SQ
                EXTEND
                MP      C5/2
                AD      C3/2
                EXTEND
                MP      SQ
                AD      C1/2
                EXTEND
                MP      TEMK
                DDOUBL
                TS      TEMK
                TC      Q
ARG90           INDEX   A
                CS      LIMITS
                TC      Q               # RESULT SCALED AT 1.
Below is an explanation of the code. Hats off to Nathan Tuggy at Stack Exchange.

The comment indicates that the following is indeed an implementation of the sine and cosine functions. Information about the type of assembler used can be found on Wikipedia.

Partial explanation of the code: The subroutine SPSIN actually calculates sin(πx), and SPCOS calculates cos(πx). The subroutine SPCOS first adds one half to the input, and then proceeds to calculate the sine (this is valid because cos(πx) = sin(π(x + 1/2))). The argument is doubled at the beginning of the SPT subroutine, so we now have to calculate sin((π/2)y) for y = 2x. The subroutine POLLEY calculates an almost-Taylor polynomial approximation of sin((π/2)x). First, we store x² in the register SQ (where x denotes the input); this is used to evaluate the polynomial. The values of the constants C1/2, C3/2, and C5/2 can be found in the same GitHub repository, and they look like the first Taylor coefficients of the function (1/2)·sin((π/2)x). These values are not exact! So this is a polynomial approximation which is very close to the Taylor approximation, but even better (see below; also thanks to @uhoh and @zch). Finally, the result is doubled with the DDOUBL command, and the subroutine POLLEY returns an approximation to sin((π/2)x).

Note that this polynomial approximation needs only four multiplications and two additions (MP and AD in the code). For the Apollo Guidance Computer, memory and CPU cycles were available only in small amounts. There are some ways to increase accuracy and input range which would have been available to them, but they would have cost more code and more computation time. For example, exploiting the symmetry and periodicity of sine and cosine, using the Taylor expansion for cosine, or simply adding more terms of the Taylor expansion would have improved the accuracy and would also have allowed arbitrarily large input values.
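To see the scheme in action, here is a minimal Python sketch of the same Horner-style evaluation (my illustration, not NASA's code). The AGC's actual tuned constants are not reproduced in the text above, so the coefficients below are stand-ins: the plain Taylor coefficients of (1/2)·sin((π/2)y), which the explanation says the real constants closely resemble. The range-reduction branches of the assembly are omitted, so this is only valid for |x| ≤ 1/2.

```python
import math

# Taylor coefficients of (1/2)*sin((pi/2)*y) = C1*y + C3*y**3 + C5*y**5.
# The flight code used slightly tuned values; these stand-ins are
# assumptions for illustration only.
C1 = (math.pi / 2) / 2            # ~0.785398
C3 = -((math.pi / 2) ** 3) / 12   # ~-0.322982
C5 = ((math.pi / 2) ** 5) / 240   # ~0.039871

def spsin(x):
    """Approximate sin(pi*x) for |x| <= 1/2, mirroring SPSIN -> SPT -> POLLEY."""
    y = 2 * x                          # SPT: DOUBLE the argument
    s = y * y                          # TS SQ: store the square
    half = ((C5 * s + C3) * s + C1) * y  # POLLEY: 4 multiplications, 2 additions
    return 2 * half                    # DDOUBL: final doubling

def spcos(x):
    """SPCOS adds one half to the argument and falls through to the sine."""
    return spsin(x + 0.5)
```

Evaluating the polynomial this way costs exactly the four multiplications and two additions counted above.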
Below is an old NASA video which shows how the DSKY worked, about 6 minutes into the program.
Aptitude Q&A on Ratios and Proportions

On this page you can find the "Aptitude Q&A on Ratios and Proportions". Answers to these aptitude questions are given at the bottom of this page. If you want to download "Ratios and Proportions", please search in our aptitude free download section.

11) Rs. 850 was divided among three sons Prasath, Baskar, Chadru. If each of them had received Rs. 35 less, their shares would have been in the ratio of 2:3:5. What was the amount received by Prasath? A) 184 B) 174 C) 164 D) 154

12) Prasath divided Rs. 2500 and gave it to his three kids A, B, C. If their shares are reduced by Rs. 5, Rs. 10 and Rs. 15 respectively, the ratio of the remainders will be 3:4:5. Find A's share. A) 600 B) 617.50 C) 627 D) 618

13) 75 kg of alloy A is mixed with 100 kg of alloy B. If alloy A has lead and tin in the ratio of 3:5, and alloy B has tin and copper in the ratio of 2:5, then what is the amount of tin in the new alloy? A) 79 B) ~75 C) ~77 D) None of these

14) The sides of a triangle are in the ratio 1/3 : 1/4 : 1/5 and its perimeter is 204 cm. What is the length of the longest side? A) 76.8 B) 85.8 C) 86.8 D) 98

15) Tin and zinc are melted together in the ratio of 9:11. What is the weight of the melted mixture if 25.5 kg of zinc has been consumed? A) 66.6 B) 76.6 C) 54.6 D) 56.6

16) There are three bottles which contain milk and water together. The ratio of the volumes of the three bottles is 2:3:4. The mixtures contain milk and water in the ratios of 3:1, 4:1, and 5:1 respectively. If the contents are poured together into a fourth bottle, what is the ratio of milk to water in the fourth bottle? A) 218:52 B) 217:53 C) 215:54 D) 219:59

17) Two numbers are respectively 60% and 40% more than a third number. What is the ratio between the two numbers? A) 7:8 B) 9:8 C) 8:7 D) 6:5

18) One piece of cloth 21 meters long is to be cut into two pieces, with the lengths of the pieces being in a 2:5 ratio. What is the length of the longer piece?
A) 15 B) 18 C) 19 D) 20

19) If 12 inches correspond to 30.48 cm, how many centimeters are there in 30 inches? A) 76.2 B) 77.2 C) 78.2 D) 88.2

20) The sum of three numbers is 98. If the ratio of the first to the second is 2:3, and that of the second to the third is 5:8, what is the value of the second number? A) 30 B) 40 C) 50 D) 60

ANSWERS: 11) A 12) B 13) B 14) C 15) D 16) B 17) C 18) A 19) A 20) A
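As a sanity check (not part of the original quiz), the ratio reasoning behind a few of these answers can be verified directly in Python:

```python
from fractions import Fraction

# Q18: 21 m cut in a 2:5 ratio -> the longer piece is 5/7 of the total.
longer = Fraction(5, 7) * 21
assert longer == 15  # matches answer A

# Q19: 12 inches = 30.48 cm, so 30 inches scale linearly.
cm = Fraction("30.48") / 12 * 30
assert float(cm) == 76.2  # matches answer A

# Q20: first:second = 2:3 and second:third = 5:8.
# Scale so the "second" terms agree: 10:15 and 15:24 -> 10:15:24.
parts = [10, 15, 24]
unit = Fraction(98, sum(parts))
second = unit * 15
assert second == 30  # matches answer A
```

Using `Fraction` keeps every intermediate ratio exact, so rounding cannot hide an arithmetic mistake.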
Probability interpretations: Examples — LessWrong

(Written for Arbital in 2016.)

Betting on one-time events

Consider evaluating, in June of 2016, the question: "What is the probability of Hillary Clinton winning the 2016 US presidential election?"

On the propensity view, Hillary has some fundamental chance of winning the election. To ask about the probability is to ask about this objective chance. If we see a prediction market in which prices move after each new poll — so that it says 60% one day, and 80% a week later — then clearly the prediction market isn't giving us very strong information about this objective chance, since it doesn't seem very likely that Clinton's real chance of winning is swinging so rapidly.

On the frequentist view, we cannot formally or rigorously say anything about the 2016 presidential election, because it only happens once. We can't observe a frequency with which Clinton wins presidential elections. A frequentist might concede that they would cheerfully buy for $1 a ticket that pays $20 if Clinton wins, considering this a favorable bet in an informal sense, while insisting that this sort of reasoning isn't sufficiently rigorous, and therefore isn't suitable for being included in science journals.

On the subjective view, saying that Hillary has an 80% chance of winning the election summarizes our knowledge about the election or our state of uncertainty given what we currently know. It makes sense for the prediction market prices to change in response to new polls, because our current state of knowledge is changing.

A coin with an unknown bias

Suppose we have a coin, weighted so that it lands heads somewhere between 0% and 100% of the time, but we don't know the coin's actual bias. The coin is then flipped three times where we can see it. It comes up heads twice, and tails once: HHT. The coin is then flipped again, where nobody can see it yet.
An honest and trustworthy experimenter lets you spin a wheel-of-gambling-odds — reducing the worry that the experimenter might know more about the coin than you, and be offering you a deliberately rigged bet — and the wheel lands on (2 : 1). The experimenter asks if you'd enter into a gamble where you win $2 if the unseen coin flip is tails, and pay $1 if the unseen coin flip is heads.

On a propensity view, the coin has some objective probability between 0 and 1 of being heads, but we just don't know what this probability is. Seeing HHT tells us that the coin isn't all-heads or all-tails, but we're still just guessing — we don't really know the answer, and can't say whether the bet is a fair bet.

On a frequentist view, the coin would (if flipped repeatedly) produce some long-run frequency f of heads that is between 0 and 1. If we kept flipping the coin long enough, the actual proportion of observed heads is guaranteed to approach f arbitrarily closely, eventually. We can't say that the next coin flip is guaranteed to be H or T, but we can make an objectively true statement: the observed proportion of heads will approach f to within epsilon if we continue to flip the coin long enough.

To decide whether or not to take the bet, a frequentist might try to apply an unbiased estimator to the data we have so far. An "unbiased estimator" is a rule for taking an observation and producing an estimate of f, such that the expected value of the estimate is f. In other words, a frequentist wants a rule such that, if the hidden bias of the coin was in fact to yield 75% heads, and we repeat many times the operation of flipping the coin a few times and then asking a new frequentist to estimate the coin's bias using this rule, the average value of the estimated bias will be 0.75. This is a property of the estimation rule which is objective.
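The unbiasedness property just described is easy to check by simulation (this sketch is an illustration added here, not from the original post): if the coin's hidden bias is 0.75 and each observer reports the proportion of heads seen in three flips, the reports average out to 0.75.

```python
import random

random.seed(0)

TRUE_BIAS = 0.75
TRIALS = 20000

def estimate_from_three_flips():
    # Flip the coin 3 times; the unbiased estimator is heads / 3.
    heads = sum(random.random() < TRUE_BIAS for _ in range(3))
    return heads / 3

estimates = [estimate_from_three_flips() for _ in range(TRIALS)]
average = sum(estimates) / TRIALS
# Any single estimate is one of 0, 1/3, 2/3, 1 -- yet the average is ~0.75.
print(round(average, 3))
```

Note that no single estimate can ever be correct for a coin biased 75% heads; unbiasedness is purely a statement about the average over many repetitions.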
We can't hope for a rule that will always, in any particular case, yield the true f from just a few coin flips; but we can have a rule which will provably have an average estimate of f, if the experiment is repeated many times. In this case, a simple unbiased estimator is to guess that the coin's bias is equal to the observed proportion of heads, or 2/3. In other words, if we repeat this experiment many, many times, and whenever we see h heads in 3 tosses we guess that the coin's bias is h/3, then this rule definitely is an unbiased estimator. This estimator says that a bet of $2 vs. $1 is fair, meaning that it doesn't yield an expected profit, so we have no reason to take the bet.

On a subjectivist view, we start out personally unsure of where the bias f lies within the interval [0, 1]. Unless we have any knowledge or suspicion leading us to think otherwise, the coin is just as likely to have a bias between 33% and 34% as to have a bias between 66% and 67%; there's no reason to think it's more likely to be in one range or the other. Each coin flip we see is then evidence about the value of f, since a flip H happens with different probabilities depending on the different values of f, and we update our beliefs about f using Bayes' rule. For example, H is twice as likely if f = 2/3 than if f = 1/3, so by Bayes' rule we should now think f is twice as likely to lie near 2/3 as it is to lie near 1/3.

When we start with a uniform prior, observe multiple flips of a coin with an unknown bias, see h heads and t tails, and then try to estimate the odds of the next flip coming up heads, the result is Laplace's Rule of Succession, which estimates odds of (h + 1) : (t + 1) for heads vs. tails, i.e. a probability of (h + 1) / (h + t + 2) that the next flip comes up heads. In this case, after observing HHT, we estimate odds of 2 : 3 for tails vs. heads on the next flip. This makes a gamble that wins $2 on tails and loses $1 on heads a profitable gamble in expectation, so we take the bet.
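The subjectivist's calculation can be checked numerically. The sketch below (an illustration, not from the original post) integrates over all possible biases under a uniform prior after seeing HHT, recovers the Rule-of-Succession probability of 3/5 for heads, and confirms the $2-vs-$1 gamble has positive expected value.

```python
# Posterior predictive after HHT under a uniform prior on the bias f:
# P(next = H) = integral of f * f^2 * (1-f) df over integral of f^2 * (1-f) df,
# approximated here with a fine midpoint grid.
N = 100_000
grid = [(i + 0.5) / N for i in range(N)]

likelihood = [f * f * (1 - f) for f in grid]   # P(HHT | f), up to a constant
posterior_mass = sum(likelihood)
p_heads = sum(f * L for f, L in zip(grid, likelihood)) / posterior_mass

# Laplace's Rule of Succession: (h + 1) / (h + t + 2) with h = 2, t = 1.
laplace = (2 + 1) / (2 + 1 + 2)

# Expected value of the gamble: win $2 on tails, lose $1 on heads.
ev = 2 * (1 - p_heads) - 1 * p_heads
```

Both routes give P(heads) = 3/5, hence 2 : 3 odds of tails vs. heads and an expected gain of $0.20 per play.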
Our choice of a uniform prior over f was a little dubious — it's the obvious way to express total ignorance about the bias of the coin, but obviousness isn't everything. (For example, maybe we actually believe that a fair coin is more likely than a coin biased 50.0000023% towards heads.) However, all the reasoning after the choice of prior was rigorous according to the laws of probability theory, which is the only method of manipulating quantified uncertainty that obeys obvious-seeming rules about how subjective uncertainty should behave.

Probability that the 98,765th decimal digit of π is 0

What is the probability that the 98,765th digit in the decimal expansion of π is 0? The propensity and frequentist views regard as nonsense the notion that we could talk about the probability of a mathematical fact. Either the 98,765th decimal digit of π is 0 or it's not. If we're running repeated experiments with a random number generator, and looking at different digits of π, then it might make sense to say that the random number generator has a 10% probability of picking numbers whose corresponding decimal digit of π is 0. But if we're just picking a non-random number like 98,765, there's no sense in which we could say that the 98,765th digit of π has a 10% propensity to be 0, or that this digit is 0 with 10% frequency in the long run.

The subjectivist considers probabilities to just refer to their own uncertainty. So if a subjectivist has picked the number 98,765 without yet knowing the corresponding digit of π, and hasn't made any observation that is known to them to be entangled with the 98,765th digit of π, and they're pretty sure their friend hasn't yet looked up the 98,765th digit of π either, and their friend offers a whimsical gamble that costs $1 if the digit is non-zero and pays $20 if the digit is zero, the Bayesian takes the bet. Note that this demonstrates a difference between the subjectivist interpretation of "probability" and Bayesian probability theory.
A perfect Bayesian reasoner that knows the rules of logic and the definition of π must, by the axioms of probability theory, assign probability either 0 or 1 to the claim "the 98,765th digit of π is a 0" (depending on whether or not it is). This is one of the reasons why perfect Bayesian reasoning is intractable. A subjectivist that is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of π. Formalizing the rules of subjective probabilities about mathematical facts (in the way that probability theory formalized the rules for manipulating subjective probabilities about empirical facts, such as which way a coin came up) is an open problem; this is known as the problem of logical uncertainty.

"The propensity and frequentist views regard as nonsense the notion that we could talk about the probability of a mathematical fact" - couldn't a frequentist define a reference class using all the digits of Pi? And then assume that the person knows nothing about Pi, so that they throw away the place of the digit?

Having read through the above discussion, I don't think you have distinguished between the claim that there are mathematical entities, and the claim that there are mathematical facts. The latter can mean nothing more than that different mathematicians will find the same solutions to a given problem, which you accept. Call the second claim epistemological realism, and the first metaphysical realism. To argue that convergence on a set of facts can only be, or be explained by, a form of metaphysical realism is to give too much credence to realism. Metaphysical realism about mathematical entities, Platonism, is much more controversial than realism about physical bodies.

Right, never mind for a moment what your discourse style is. Disengaging.
23 comments

A perfect Bayesian reasoner that knows the rules of logic and the definition of π must, by the axioms of probability theory, assign probability either 0 or 1 to the claim "the 98,765th digit of π is a 0" (depending on whether or not it is). This is one of the reasons why perfect Bayesian reasoning is intractable. A subjectivist that is not a perfect Bayesian nevertheless claims that they are personally uncertain about the value of the 98,765th digit of π.

The term "perfect Bayesian" sounds misleading; there is nothing perfect about one's inability to make good probability estimates. This is like saying a "perfect two-boxer". On a related note, what you call the open problem of logical uncertainty is one of the cases where postulating an objective reality (in this case, a mathematical reality), also known on this site as "the territory", runs into limitations. Once you stop insisting that any yet-unmeasured value or unproven theorem is either true or false (or undecidable), and instead go with the more intuitionist approach, the made-up contradiction between "but there is a 98,765th digit of π out there that has a definite value" and "before calculating the 98,765th digit of π (in effect, making an observation) the best model of π predicts equal probability of all digits" dissolves.

I think I understand what your view means with respect to physical uncertainty, but I'm not sure what it means w.r.t. logical uncertainty. Surely, there must be some fact of the matter about what the ratio of a circle's circumference to its diameter is? Or is there not? And if there is, does that not imply some fact of the matter about any given digit of π, even if I don't know what said digit is?
The mathematical territory. This is a useful belief at times, but not in this case, as it gets in the way of making otherwise obvious predictions about observations, such as "how likely that a randomly picked digit of π is zero, once it is picked, but not yet calculated?" Well, let me put it another way. Suppose that I calculate the 98,765th digit of π. And my friend Hasan, who lives on the other side of the world, also, separately, calculates the 98,765th digit of π. Can we get different results? (Other than by making some mistake in writing the code that does the calculation, or some such.) Is that a thing that can happen? What is the probability of the 98,765th digit of π being one thing when calculated by one person, but something else when calculated by someone else, elsewhere? (And if nonzero, how far does this go—could the 1,500th digit of π vary from person to person? The 220th? The 30th? The 3rd?!) If you say that this sort of thing can happen, well, then you’re certainly saying something novel and strange. I guess all I have to say to that is “[citation needed]”. But, if (as seems more likely) you agree that such a thing cannot happen, then my question is: just what exactly is it that makes the 98,765th of π be the same thing when calculated by me, or by Hasan, or by anyone else? Whatever that thing is, what is wrong with calling it “a fact of the matter about what the 98,765th digit of π is”? You seem to be conflating two different questions: What is your best estimate of probability of the currently unknown to you 98,765th digit of π coming out zero, once someone calculates it? What is your best estimate of probability of the 98,765th digit of π calculated by two different people being different? Once enough people reliably do the same calculation (or if there is another reliable way to perform the observation of the 98,765th digit of π), then it can be added to the list of performed observations and, if needed used to predict future observations. 
just what exactly is it that makes the 98,765th of π be the same thing when calculated by me, or by Hasan, or by anyone else? Whatever that thing is, what is wrong with calling it “a fact of the matter about what the 98,765th digit of π is” This goes back to realism vs anti-realism, not anything I had invented. Anti-realism is a self-consistent epistemology, it pops up in many areas independently. According to Wikipedia, in science an example of it in science is instrumentalism, and in math it is intuitionism: "there are no non-experienced mathematical truths". There is no difference between logical uncertainty and environmental uncertainty in anti-realism. OP seems to have reinvented the juxtaposition of realism and anti-realism in the setting of the probability theory, calling it "perfect Bayesianism" and "subjective Bayesianism" respectively. And "perfect Bayesianism" runs into trouble with logical vs environmental uncertainties, because of the extra (and unnecessary, in the anti-realist view) postulate of objective reality. I still don't think you've answered Said's question. The question is whether two people can observe different values of pi. Or, to put it differently, why is it that, whenever anyone computes a value of pi, it seems to come out to the same value (3.14159...). Doesn't that indicate that there is some kind of objective reality, to which our mathematics corresponds? One of the questions that Wigner brings up in The Unreasonable Effectiveness of Mathematics in the Natural Sciences is why does our math work so well at predicting the future? I would put the same question to you, but in a more general form. If there is no such thing as non-experienced mathematical truths, then why does everyone's experience of mathematical truths seem to be the same? Doesn't that indicate that there is some kind of objective reality, to which our mathematics corresponds? A reality behind repeatable observations is a good model, as long as it works. 
My point is that it doesn't always work, like in the confusion about logical uncertainty. And I disagree with the assumptions behind the Wigner's question, "why does our math work so well at predicting the future?", specifically that math's effectiveness is "unreasonable". Human and animal brains do complicated calculations all the time in real time to get through life, like solving what amounts to non-linear partial differential equations to even get a bite of food into your mouth. Just because it is subconscious, it is no less of a math than proving theorems. What most humans mean by math is constructing conscious, not subconscious meta-models and using them in multiple contexts. But we subconscious meta-modeling like this all the time in other areas of human experience, so my answer to Wigner's question is "you are committing a mind projection fallacy, the apparently unreasonable effectiveness of mathematics is a statement about human mind, not about the world". If there is no such thing as non-experienced mathematical truths, then why does everyone's experience of mathematical truths seem to be the same? In general, however, your questions about the intuitionist approach to math is best directed to professional mathematicians who are actually intuitionists, though. Human and animal brains do complicated calculations all the time in real time to get through life, like solving what amounts to non-linear partial differential equations to even get a bite of food into your mouth. Just because it is subconscious, it is no less of a math than proving theorems. I agree. So if there is no "objective" reality, apart from that which we experience, then why is it that we all seem to experience the same reality? When I shoot a basketball, or hit a tennis ball, both I and the referee see the same trajectory and are in approximate agreement about where the ball lands. 
When I lift a piece of food to my mouth and eat it, it would surprise me if someone across the table said that they saw it spill from my fork and stain my shirt. In the absence of an external reality, why is it that everyone's model of the world appears to be in such concordance with everyone else's? So if there is no "objective" reality, apart from that which we experience, then why is it that we all seem to experience the same reality? I am not saying that there is no objective reality, just that I am agnostic about it. In the example you describe, it is a useful meta-model, though not all the time. You may notice that, despite a video review and slow motion hi-res cameras, fans of different teams still argue about what happened, and the final decision is in the hands of a referee. You and your partner (especially ex partner) may disagree about "what really happened" and there is often no way to tell "who is right". One instead has to accept that what one person experienced is not necessarily what another did, and, at least instrumentally, arguing about whose reality is the "true" is likely to be not useful at all. One may as well accept the model where somewhat different things happened to different actors. In the absence of an external reality, why is it that everyone's model of the world appears to be in such concordance with everyone else's? Does it? Who won the World War II, Americans, British or Russians? Is Trump a hero or a villain? Did Elon Musk disclose material information or not in his tweets? Do mathematical infinities exist? Are the laws of physics invented or discovered? Was Jesus a son of God? The list of disagreements about "objective reality" is endless. Sure, there is some "concordance" between different people's views of the world, but it is much less strong than one naively assumes. The examples you use reinforce my point. We argue about extremely fine details. 
When supporters of opposing teams argue over whether a point was or was not scored, they're disputing whether the ball was here or there by a few millimeters. You won't find very many people arguing that actually, the ball was clear on the other side of the field and in reality, the disputed point is one that would have been scored by the other team. Similarly, we might argue about whether the British, Americans or Russians were primarily responsible for the United Nations' victory in World War 2, but I don't think you'll find very many people arguing that actually it was the Italians who won World War 2. The fact that our perceptions of reality match each other 99.999% of the time, to me, indicates that there's something out there that exists regardless of whether I perceive it or not. I call that I can see your point, and it's the one most people implicitly accept. Observations are predictable, therefore there is a shared reality out there generating those observations. It works most of the time. But in the edge cases (or "extremely fine details") this implicit assumption breaks down. Like in the case of "objective mathematical facts waiting to be discovered", such as the 98,765th of π before you measure it. So why insist on applying this assumption outside of its realm of applicability? Isn't it sort of like insisting that if you shoot a bullet from a ship moving with nearly the speed of light, it will travel faster than light? You seem to be saying that "external shared reality" is an approximation in the same way that Newtonian mechanics is an approximation for Einsteinian relativity. That's fine. So what is "external shared reality" an approximation of? Just what exactly is out there generating inputs to my senses, and by what mechanism does it remain in sync with everyone else (approximately)? Just what exactly is out there generating inputs to my senses, and by what mechanism does it remain in sync with everyone else (approximately)? 
Sometimes the "out there" can be modeled as a shared reality, sure. The key word is "modeled". Sometimes this model is not a good one. If you insist on privileging one model over all others as the true objective external reality, valid everywhere, you pay the price where it fails. Like in the OP's case.

"Sometimes this model is not a good one." What do you mean by "good" here? And, given some definition of good, what alternative model is better in that sort of situation?

By "good" I mean (as always) "fitting the available observations and producing accurate predictions". In the OP's case of the 98,765th digit of π, the model is that "a randomly picked digit is uniformly distributed", and it is a "good" (i.e. accurate) one.

The 98,765th digit of π isn't a random digit, it's the 98,765th digit. There's a puzzle about how probability theory would apply to something that's basically determinate, but the question of how randomly selected digits of pi are distributed isn't it, because the process of picking a digit randomly brings indeterminacy in. People pose the problem with a specific digit to make the problem determinate, and focus on the paradoxical aspect.

The paradox only arises if you ignore the view I've been presenting. The 98,765th digit of π is a random digit in the same way that a 98,765th reading of rand() is. Until you do some work to measure it, it's not determined.

It is determined in the sense of having only one possible value. The same applies to a call to rand(), so long as it is a deterministic PRNG. We don't know what the answer is, until we have done some work, in either case, but that doesn't mean anything indeterministic is going on. Determinism is defined in terms of inevitability, i.e. lack of possible alternatives. We do not regard the future as undetermined just because it has not happened yet.
"We do not regard the future as undetermined just because it has not happened yet." I don't argue with that; in fact, the statement above makes my point: there is no difference between an as-yet-unknown-to-you (but predetermined) digit of pi and anything else that is not yet known to you, like the way a coin lands when you flip it.

It doesn't make your point, since I don't agree with it. Given any degree of realism, you can differentiate between determined but unknown things and undetermined things.

Well, you're an anti-realist. But that doesn't give you the right to interpret what other people, if there are any other people, are saying in anti-realist terms.
111 research outputs found

In this text we describe the spectral nature (pure point or continuous) of a self-similar Sturm-Liouville operator on the line or the half-line. This is motivated by the more general problem of understanding the spectrum of Laplace operators on unbounded finitely ramified self-similar sets. In this context, this furnishes the first example of a description of the spectral nature of the operator in the case where the so-called "Neumann-Dirichlet" eigenfunctions are absent. Comment: 20 pages, 1 figure.

We consider random walks in random Dirichlet environment (RWDE), which is a special type of random walk in random environment where the exit probabilities at each site are i.i.d. Dirichlet random variables. On ${\mathbb Z}^d$, RWDE are parameterized by a 2d-tuple of positive reals called weights. In this paper, we characterize for $d\ge 3$ the weights for which there exists an absolutely continuous invariant probability for the process viewed from the particle. We can deduce from this result and from [27] a complete description of the ballistic regime for $d\ge 3$. Comment: 18 pages. arXiv admin note: text overlap with arXiv:1205.5709 by other authors without attribution.

The aim of this text is to establish some relations between Markov chains in Dirichlet environments on directed graphs and certain hypergeometric integrals associated with a particular arrangement of hyperplanes. We deduce from these relations, and from the computation of the connection obtained by moving one hyperplane of the arrangement, some new relations on important functionals of the Markov chain. Comment: 6 pages, preliminary note.

We consider random walks in a random environment of the type $p_0+\gamma\xi_z$, where $p_0$ denotes the transition probabilities of a stationary random walk on $\mathbb{Z}^d$, to nearest neighbors, and $\xi_z$ is an i.i.d. random perturbation.
We give an explicit expansion, for small $\gamma$, of the asymptotic speed of the random walk under the annealed law, up to order 2. As an application, we construct, in dimension $d\ge2$, a walk which goes faster than the stationary walk under the mean environment. Comment: Published at http://dx.doi.org/10.1214/009117904000000739 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org).

Random Walks in Dirichlet Environment (RWDE) correspond to Random Walks in Random Environment (RWRE) on $\mathbb{Z}^d$ where the transition probabilities are i.i.d. at each site with a Dirichlet distribution. Hence, the model is parametrized by a family of positive weights $(\alpha_i)_{i=1, \ldots, 2d}$, one for each direction of $\mathbb{Z}^d$. In this case, the annealed law is that of a reinforced random walk, with linear reinforcement on directed edges. RWDE have a remarkable property of statistical invariance by time reversal, from which can be inferred several properties that are still inaccessible for general environments, such as the equivalence of the static and dynamic points of view and a description of the directionally transient and ballistic regimes. In this paper we give a state of the art on this model and several sketches of proofs presenting the core of the arguments. We also present a new computation of the large deviation rate function for one-dimensional RWDE. Comment: 35 pages.

This paper states a law of large numbers for a random walk in a random i.i.d. environment on ${\mathbb Z}^d$, where the environment follows some Dirichlet distribution.
Moreover, we give explicit bounds for the asymptotic velocity of the process and also an asymptotic expansion of this velocity at low disorder. Comment: Change in theorem; 6 pages, preliminary note.
Statistics for Research Students

Chapter Nine – Nonparametric Statistics

Hello everyone, and welcome to the ninth chapter of the University of Southern Queensland's online, open-access textbook. The aim of this chapter is to introduce nonparametric statistics: test statistics, with their related formulas, that can be used to estimate associations between two or more variables without basing those associations on deviations from the mean. The arithmetic mean can be seriously influenced by extreme values and by values that are dispersed in non-normal ways. In short, when a collection of data does not follow the normal distribution, and when researchers can be reasonably sure that the actual distribution of variable values in the population is not normal, nonparametric statistics can be used to better estimate associations between variables. There are some slides that appear via links within Chapter Nine; please look for these as you review the chapter.
How Much Energy and Cost Does It Take To Make An Ice Cube?

Watching things make things is fascinating, but it also leads to my mind keeping me up over the strangest of things, the latest being "how much power does it take to make an ice cube?" Thankfully I'm a nerd with an expensive university degree, so let's figure this out! Join me on this adventure down the rabbit hole!

How Much Does an Ice Cube Weigh?

You'd think this would be an easy answer, wouldn't you? Unfortunately, not being a drug dealer and lacking a set of small digital scales, we will have to work this out manually. Amazon suggests an ice cube tray makes ice cubes that are 4.4cm x 3cm x 2.6cm.

1. First we calculate the volume of the ice cube using the tray dimensions:

Volume = 4.4cm × 3cm × 2.6cm = 34.32 cubic centimeters

2. Ice is less dense than water, which is why ice cubes float! We correct for this by multiplying the volume by ice's density of 0.92 grams per cubic centimeter to find the mass (weight) of the ice cube:

Mass = 0.92 g/cm³ × 34.32 cm³ = 31.6 grams

We now know that if you perfectly fill an Amazon ice cube tray, an ice cube will weigh 31.6g. Because it was 2am when I wrote this, and because everyone spills ice cube trays, let's settle on 30 grams.

Let's agree on some basic details:

• A standard ice cube is 30 grams
• Room temperature is 25°C
• The cost of energy is 23.4p/kWh
• Water is free
• Our freezer or ice maker is 100% efficient, which it sadly isn't

Now onto the fun calculations!

1. Required Energy to Cool Water to Freezing Point

To freeze water we must first cool it from 25°C down to 0°C:

30g × 4.186 J/g°C (specific heat capacity of water) × 25°C = 3139.5 J

2. Required Energy to Freeze Water at 0°C

To turn liquid water into a solid it must change phase, which means removing the latent heat of fusion: 334 J of energy per gram.

30g × 334 J/g = 10020 J

3.
Combine both energy values, for cooling the water and for freezing it:

3139.5 J + 10020 J = 13159.5 J

By combining the two values we now know it takes 13159.5 joules of energy to make a single ice cube! This is equivalent to running a 60W light bulb for about 3 minutes 39 seconds, or 3.14 calories in food speak.

How Much Does It Cost to Make an Ice Cube?

We can now express this in kWh:

13159.5 J / 3,600,000 J (the energy in 1 kWh) = 0.00365 kWh

Convert into British pounds:

0.00365 kWh × 23.4p/kWh ≈ 0.085p (£0.00085)

It costs £0.00085 to make a single ice cube! So how many ice cubes can we make for £1 (100p)?

100p / 0.085p ≈ 1176 ice cubes

We can make 1176 ice cubes for £1! By the same method, it costs about £0.0285 to make 1kg of ice.

I'm not sure what conclusions and practical uses this article has, but the takeaway is that creating ice is simply the sum of the energy it takes to cool and then freeze the water (which sounds obvious on reflection), and it isn't very expensive. At scale, though, the energy costs could certainly add up.
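For anyone who wants to rerun the numbers, here's a small Python sketch of the calculation above. The constants are the assumptions from this post, and the function names are mine:

```python
# Back-of-envelope ice cube economics, using the assumptions above:
# 30 g cube, 25 °C room temperature, 23.4 p/kWh, 100% efficient freezer.
MASS_G = 30.0
ROOM_TEMP_C = 25.0
SPECIFIC_HEAT = 4.186     # J per gram per °C (liquid water)
LATENT_HEAT = 334.0       # J per gram (latent heat of fusion)
PRICE_P_PER_KWH = 23.4
J_PER_KWH = 3_600_000

def energy_per_cube_j(mass_g=MASS_G, temp_c=ROOM_TEMP_C):
    """Energy removed to cool the water to 0 °C and then freeze it."""
    cooling = mass_g * SPECIFIC_HEAT * temp_c   # sensible heat
    freezing = mass_g * LATENT_HEAT             # latent heat of fusion
    return cooling + freezing                   # ≈ 13159.5 J for 30 g

def cost_per_cube_p(mass_g=MASS_G, temp_c=ROOM_TEMP_C):
    """Electricity cost in pence for one cube."""
    return energy_per_cube_j(mass_g, temp_c) / J_PER_KWH * PRICE_P_PER_KWH
```

Passing `mass_g=1000` gives the cost of a kilogram of ice directly, which is a handy sanity check on the per-cube figure.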
CEILING.MATH Function in Excel – Usage with Examples

The CEILING.MATH function is one of the important mathematical functions in Excel. The function is available in Excel 2013 and later versions. Let's see how this function works.

When to Use the CEILING.MATH Function?

The CEILING.MATH function rounds a number up. For example, when we apply the CEILING.MATH function with 23.19 as the input parameter, it returns 24, which is the upper (ceiling) round-off.

The CEILING.MATH function also provides additional functionality:

• We can round up to the nearest multiple of a number.
• We can choose the direction of rounding for negative numbers: the result can go either away from zero or towards zero.

Syntax and Arguments

The following points explain the input parameters of the CEILING.MATH formula:

• number – The number that we want to round off.
• [significance] – The multiple to round to; the rounded result will be a multiple of this number. This argument is optional, with a default value of 1.
• [mode] – This has an effect only when we supply a negative number to the function. It can take two values:
  □ FALSE (0, the default) – the negative number is rounded towards zero, e.g. =CEILING.MATH(-0.1,1,0) returns 0.
  □ TRUE (1) – the negative number is rounded away from zero, e.g. =CEILING.MATH(-0.1,1,1) returns -1.

Examples to Learn the CEILING.MATH Formula

In this section of the blog, we will learn to apply the CEILING.MATH formula in Excel.

Example 1 – Simplest Example of the CEILING.MATH Formula

In this example, we round off positive and negative numbers with the default values of the significance and mode arguments.
Here cells B1, B2, and B3 contain the number, significance, and mode arguments respectively. We take the value in cell B1 as 65.9 and pass it as the number argument of the function. As a result, the function returns the ceiling round-off, 66.

Example 2 – Using the CEILING.MATH Function for Negative Numbers

The mode argument defines two cases for negative numbers. If we specify the mode as 1 (TRUE), the round-off goes away from zero. If the mode argument is FALSE (0, the default value), the round-off goes towards zero.

For example, with the argument values shown in the CEILING.MATH formula in cell B6, the function returns 0. If we replace the cell B3 value with TRUE, the function returns -1. The different results are due to the rounding-off directions. This is explained in the figure below.

Example 3 – Rounding Off to a Multiple of N

We can make the rounded result a multiple of the number N defined in the significance argument. For example, the ceiling round-off of 4.1 is 5, but if we specify that the result must be a multiple of 3, the function gives a different result: it returns 6, the nearest round-off of 4.1 that is also a multiple of 3.

This brings us to the end of the blog. Thank you for reading.
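Outside Excel, the same rounding logic is easy to mirror. Here is a hypothetical Python sketch of CEILING.MATH-style behaviour (illustrative only, not Microsoft's exact implementation, which handles further edge cases such as negative significance):

```python
import math

def ceiling_math(number, significance=1, mode=0):
    """Round `number` up to a multiple of `significance`, mimicking
    Excel's CEILING.MATH. `mode` only matters for negative numbers:
    0 rounds towards zero, non-zero rounds away from zero."""
    if significance == 0:
        return 0
    significance = abs(significance)
    if number >= 0 or not mode:
        # round towards +infinity to the nearest multiple
        return math.ceil(number / significance) * significance
    # negative number with mode set: round away from zero
    return math.floor(number / significance) * significance
```

With these semantics, `ceiling_math(23.19)` gives 24, `ceiling_math(4.1, 3)` gives 6, and `ceiling_math(-0.1, 1, 1)` gives -1, matching the worked examples above.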
Using approximate Bayesian inference for a "steps and turns" continuous-time random walk observed at regular time intervals

1 INIBIOMA (CONICET-Universidad Nacional del Comahue), Rio Negro, Argentina
2 Facultad de Ciencias Económicas, Universidad Nacional de Rosario, Rosario, Argentina
3 Department of Statistics, North Carolina State University, Raleigh, United States of America
4 Department of Forestry and Environmental Resources, North Carolina State University, Raleigh, NC, United States of America
5 Universidad de la República, Montevideo, Uruguay

Subject Areas: Animal Behavior, Computational Biology, Ecology, Zoology, Statistics
Keywords: Movement Ecology, Approximate Bayesian Computation, Observation Time-Scale, Random walk, Simulated Trajectories, Animal Movement

© 2020 Ruiz-Suarez et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, reproduction and adaptation in any medium and for any purpose provided that it is properly attributed. For attribution, the original author(s), title, publication source (PeerJ) and either DOI or URL of the article must be cited.

Cite this article: Ruiz-Suarez et al. 2020. Using approximate Bayesian inference for a "steps and turns" continuous-time random walk observed at regular time intervals. PeerJ 8:e8452 https://doi.org/10.7717/peerj.8452

Abstract

The study of animal movement is challenging because movement is a process modulated by many factors acting at different spatial and temporal scales. In order to describe and analyse animal movement, several models have been proposed which differ primarily in the temporal conceptualization, namely continuous and discrete time formulations. Naturally, animal movement occurs in continuous time but we tend to observe it at fixed time intervals.
To account for the temporal mismatch between observations and movement decisions, we used a state-space model where movement decisions (steps and turns) are made in continuous time. That is, at any time there is a non-zero probability of making a change in movement direction. The movement process is then observed at regular time intervals. As the likelihood function of this state-space model turned out to be intractable yet simulating data is straightforward, we conduct inference using different variations of Approximate Bayesian Computation (ABC). We explore the applicability of this approach as a function of the discrepancy between the temporal scale of the observations and that of the movement process in a simulation study. Simulation results suggest that the model parameters can be recovered if the observation time scale is moderately close to the average time between changes in movement direction. Good estimates were obtained when the scale of observation was up to five times that of the scale of changes in direction. We demonstrate the application of this model to a trajectory of a sheep that was reconstructed in high resolution using information from magnetometer and GPS devices. The state-space model used here allowed us to connect the scales of the observations and movement decisions in an intuitive and easy to interpret way. Our findings underscore the idea that the time scale at which animal movement decisions are made needs to be considered when designing data collection protocols. In principle, ABC methods allow to make inferences about movement processes defined in continuous time but in terms of easily interpreted steps and turns. The way in which animals move is of fundamental importance in ecology and evolution. 
It plays important roles in the fitness of individuals and in gene exchange (Nathan et al., 2008), in the structuring of populations and communities (Turchin, 1998; Matthiopoulos et al., 2015; Morales et al., 2010), and in the spread of diseases (Fèvre et al., 2006). The study of movement is challenging because it is a process modulated by many factors acting at different spatial and temporal scales (Gurarie & Ovaskainen, 2011; Hooten et al., 2017). In general, the process of animal movement occurs in continuous time, but we observe individual locations at time intervals dictated by logistical constraints such as battery life. When modeling trajectories, it is common to assume that the time scale at which animals decide to change their movement is the same as that of the observation. However, this is not always the case, and it is necessary to be aware of this fact in order to avoid drawing conclusions about the movement process that depend on the time scale at which the observations were taken. In this context, state-space models provide a convenient tool for movement data analysis (Patterson et al., 2008). The main idea is to estimate the latent movement process, or characteristics of it, given the observational process. Thus, they consist of two stochastic models: a latent (or state) model and an observation model. The first describes the state of the animal (which could be its location, behaviour, etc.) and the second describes the observation of the state, possibly with some measurement error. In the case of this study, the aim is to estimate the parameters that govern the state process given the information from the observations. Several state-space models have been proposed for animal movement, differing primarily in the temporal conceptualization of the movement process, namely discrete and continuous time formulations (McClintock et al., 2014).
On the one hand, discrete-time models describe movement as a series of steps and turns (or movement directions) that are performed at regular occasions (Morales et al., 2004; Jonsen, Flemming & Myers, 2005; McClintock et al., 2012). The advantage of this approach is that it allows the dynamics involved in the movement process to be conceptualized in a simple and intuitive way, which facilitates implementation and interpretation. Typically, in these models the temporal scales of both the state and the observation process are the same. Thus, the observation times coincide with the times at which the animals are assumed to make movement decisions. On the other hand, continuous-time models have been proposed (Blackwell, 1999; Jonsen, Flemming & Myers, 2005; Harris & Blackwell, 2013) in which the movement process is defined for any time; these are usually expressed through stochastic differential equations that account for the dependence between successive locations. Consequently, they are not tied to a particular timescale and, within reasonable limits, a continuous-time analysis yields the same results regardless of the temporal resolution of observations. The continuous-time approach has the advantage of being more realistic and of avoiding dependence on a particular timescale. A possible drawback is the interpretation of instantaneous movement parameters (e.g., those related to Ornstein–Uhlenbeck processes and other diffusion models). However, as mentioned in McClintock et al. (2014), for any continuous- or discrete-time approach to be useful, the temporal resolution of the observed data must be relevant to the specific movement behaviors of interest. While both approaches have advantages and disadvantages, discrete-time models are more intuitive and easy to interpret, yet can be considered less realistic than continuous-time models (McClintock et al., 2014).
We present a state-space model that formulates the movement process in continuous time and the observation in discrete time (regular intervals). Although the formulation is based on steps and turns, the probability of a change in the trajectory is positive for all times t, and thus the movement process is continuous over time. In most cases, when trying to make inferences about movement processes, only location data observed at regular times is available. For example, it is common to obtain GPS fixes at time intervals that are dictated by logistical constraints such as battery life. Therefore, the number and times of changes in movement between successive recorded locations is information that is not available and that is usually ignored. It is common to assume that the time scale at which animals make their movement decisions is the same as the time scale at which the location data was taken. However, this is not always the case and, under certain circumstances, this assumption may lead to incorrect inferences and interpretations of the movement process. Our goals here are to combine the ease of interpretation of models based on steps and turns with the realism of continuous-time models, and to analyze the relationship between the scales of observations and movement decisions. We use a random walk where the movement decisions (steps and turns) can be made at any point in time while the movement process is observed at regular time intervals. In this model there are two different time scales: one for the state process and one for the observation. The advantage here is that this model allows us to differentiate between the times at which the animals make movement decisions and the times at which the observations are made. We then assessed the capacity of Approximate Bayesian Computation (ABC) methods to recover the parameters that govern the state process as a function of these two time scales.
Parton & Blackwell (2017) present a continuous time movement model which has some similarities to the model we present here. Parton and Blackwell’s model is formulated using stochastic differential equations in terms of bearings and speed. The animal’s locations are then observed at discrete times with certain error. Both models are similar as they assume that changes in the direction of the trajectory can be at any time, that the trajectory is correlated, and that observations are at discrete times. However, the formulations are different as in Parton and Blackwell’s model speed is stochastic and there is an observation error. They notice that the likelihood is intractable due to the complicated relationship between the locations and parameters when the changes in bearing and speed are unobserved. For inference they develop a Markov Chain Monte Carlo algorithm which involves augmenting observed locations with a reconstruction of the underlying movement process. As in Parton and Blackwell’s model, our proposed model formulation has an intractable likelihood. However, simulating the movement and its observation is straightforward, suggesting that likelihood-free methods such as Approximate Bayesian Computation could be useful (Beaumont, 2010; Csilléry et al., 2010). These techniques have been used to fit models that involve movement processes such as overall rate of movement (Van der Vaart et al., 2015), or movement among patches in individual based models of meta-population dynamics (Sirén et al., 2018). However, as far as we are aware, they have not been used to make inferences about movement trajectories. Here we describe, formalize, and expose the possible complications of a state-space movement model with two different temporal scales. We use stochastic simulations to evaluate the ability of three ABC techniques to recover the parameter values driving the movement process. 
Keeping in mind the ecological purpose behind implementing such a model, we assess the quality of these estimations with regard to the relationship between the two temporal scales. Finally, we apply the model to a high resolution trajectory of a sheep to evaluate the performance of the ABC inference with real data.

Movement model with random time between movement decisions

Many animal movement studies assume discrete time correlated random walks (CRW) as building blocks for more complex models (Turchin, 1998; Morales et al., 2004). At every time step, an individual chooses a turning angle (the difference between the previous and new movement direction) and a speed. The turning angle distribution is concentrated around zero, resulting in short-term correlations in the direction of movement (persistence in movement direction). Here, we allow changes in direction to occur at any point in time. An individual moves in a certain direction for a certain period of time, then makes a turn and starts moving in a new direction for another period of time. Our movement model is a form of the velocity jump process in which the motion consists of a sequence of "runs" separated by reorientations, during which a new velocity is chosen (Othmer, Dunbar & Alt, 1988; Codling, Plank & Benhamou, 2008). In our model, the speed of movement during the active phase is constant, and the waiting time of the reorientation phase is considered instantaneous. Since, in practice, the path of an animal is usually observed at particular sampling occasions, we consider that the observation occurs at regular time intervals. Therefore, the observations consist of the location of the individual at every time Δt. As a simplification, we assume that there is no observation error.
Assuming constant (unit) movement speed, let the variable $M_i$ describe the position of the state process at step $i$ in x–y coordinates, i.e., $M_i = (m_{i,1}, m_{i,2})$, where $i = 0, \dots, N_{steps}$ indexes time over the steps. Given $m_{0,1} = 0$ and $m_{0,2} = 0$, we have for $i = 1, \dots, N_{steps}$ that

(1)
$m_{i,1} = m_{i-1,1} + \cos(\varphi_{i-1})\, t_{i-1}$
$m_{i,2} = m_{i-1,2} + \sin(\varphi_{i-1})\, t_{i-1}$
$\varphi_i = \sum_{k=1}^{i} \omega_k$

where $t_i$ is the duration of step $i$ and $\omega_i$ is the turning angle between steps $i$ and $i+1$, so that $\varphi_i$ represents the direction of step $i$. Each $t_i$ is assumed to be independently drawn from an exponential distribution with parameter $\lambda$, and each $\omega_i$ from a von Mises distribution with fixed mean $\nu = 0$ and concentration parameter $\kappa$ (Wu et al., 2000; Codling, Plank & Benhamou, 2008). While the model can be extended to allow $\kappa$ and $\lambda$ to depend on the landscape, environment, or animal behaviour, for this work we consider only this basic case as a starting point.

Next, we define the observation and its links with the state movement process (Eq. (2)). Let $O_j = (o_{j,1}, o_{j,2})$ denote the position of observation $j$ in x–y coordinates, with $j = 0, \dots, N_{obs}$; the second index $j$ tracks time over the observations. We define $T_i$ as the time at which the change of direction $i$ took place:

$T_0 = 0$
$T_i = \sum_{k=0}^{i-1} t_k \quad \text{for } i = 1, \dots, N_{steps}.$

In addition, it is necessary to determine the number of changes in direction that occurred before a given observation; we define $N_j$ as the number of steps (or changes in direction) that the animal took up to time $j\Delta t$.
We have $O_0 = M_0$, and for $j = 1, \dots, N_{obs}$,

(2)
$o_{j,1} = m_{N_j,1} + \cos(\varphi_{N_j})\,(j\Delta t - T_{N_j})$
$o_{j,2} = m_{N_j,2} + \sin(\varphi_{N_j})\,(j\Delta t - T_{N_j}).$

Note that $N_j$ is the index corresponding to the maximum time $T_m$ less than or equal to $j\Delta t$, i.e., $N_j = \max\{m : T_m \le j\Delta t\}$. Therefore, location $j$ is the last location of the state process, given by $N_j$, plus the displacement over the remaining time $j\Delta t - T_{N_j}$ in the direction $\varphi_{N_j}$.

To better understand this relationship, consider a minimal example of a few steps. Assuming the step durations and turning angles of Table 1 and $\Delta t = 0.5$, Fig. 1 presents a short trajectory showing the locations of changes in direction and the observations at regular times. In that case $N_1 = 2$, because $T_1 = t_0 = 0.2 \le 1\Delta t$ and $T_2 = t_0 + t_1 = 0.4 \le 1\Delta t$, but $T_3 = t_0 + t_1 + t_2 = 1.1 > 1\Delta t$. With the same reasoning, $N_2 = 2$, $N_3 = 4$, $N_4 = 5$, $N_5 = 5$, etc.

Table 1: Step durations and turning angles for the minimal example.

$i$                        0     1     2     3     4     5
Duration of step $t_i$     0.2   0.2   0.7   0.4   0.4   0.8
Turning angle $\omega_i$   0.32  5.65  5.81  0.02  0.11  5.81

[Figure 1: Example of a few steps. Minimal example of a simulation from the continuous-time "steps and turns" model. Changes in direction are indicated with gray dots, and observations with red crosses. The trajectory starts at coordinates (0, 0). Before the first and second red crosses (the initial one, $j = 0$, does not count) there are two gray points, so the number of changes before observations 1 and 2 is $N_1 = N_2 = 2$. Before the third red cross there are four gray points, so $N_3 = 4$, etc.]

Expression for the likelihood function

In order to construct the likelihood we first present the complete-data likelihood, i.e., we assume that we know all the values of $t_i$ and $\omega_i$.
In that case, as $O \mid M$ is deterministic, we have

$L(\kappa, \lambda; M, O) = P(O, M \mid \kappa, \lambda) = P(O \mid M, \kappa, \lambda)\, P(M \mid \kappa, \lambda) = P(M \mid \kappa, \lambda) = P(t_1, \dots, t_{N_{steps}} \mid \lambda)\, P(\omega_1, \dots, \omega_{N_{steps}} \mid \kappa).$

Suppose instead that we do not know the values of $t_i$ and $\omega_i$, but do know the number of steps that the animal took between consecutive observations, $N_j$ for all $j$. In that case, it would be necessary to obtain the distributions of the $M_i$ (Eq. (1)), or at least of a proportion of them, which are not available in closed form. Finally, to formulate the marginal likelihood $L(\kappa, \lambda)$, it is further necessary to integrate over all possible values of $N_j$, by determining $P(N_j = r)$ for $r \in \mathbb{N}$, which can be, in principle, infinite. Obtaining an expression for, and evaluating, the likelihood turns out to be a complex task. Likelihood-free methods that circumvent the need to evaluate the likelihood, such as ABC, have proven useful in these cases.

Inference using Approximate Bayesian Computation

Approximate Bayesian Computation (ABC) is a family of simulation-based techniques for obtaining posterior samples in models with an intractable likelihood function. In recent years, ABC has become popular in a diverse range of fields (Sisson, Fan & Beaumont, 2018) such as molecular genetics (Marjoram & Tavaré, 2006), epidemiology (Tanaka et al., 2006; McKinley, Cook & Deardon, 2009; Lopes & Beaumont, 2010), evolutionary biology (Bertorelle, Benazzo & Mona, 2010; Csilléry et al., 2010; Baudet et al., 2015), and ecology (Beaumont, 2010; Sirén et al., 2018). This approach is also useful when the computational effort to calculate the likelihood is large compared to that of simulating the model of interest.
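The premise exploited below is that, although the likelihood is intractable, simulating the state process (Eq. (1)) and its observation (Eq. (2)) is straightforward. The following Python sketch (our own illustration, not the paper's code) draws a trajectory with exponential step durations, von Mises turns, unit speed, and error-free observation every Δt:

```python
import math
import random

def simulate(lam, kappa, n_steps, dt, n_obs, rng=random):
    """Simulate the continuous-time steps-and-turns model and observe it
    at regular intervals dt. lam is the rate of the exponential step
    durations; kappa the von Mises concentration of the turning angles."""
    x = y = 0.0
    phi = 0.0  # initial heading (phi_0 = 0)
    t = 0.0
    # (T_i, position at the change, heading taken from that change on)
    changes = [(0.0, 0.0, 0.0, phi)]
    for _ in range(n_steps):  # n_steps should cover n_obs * dt in time
        dur = rng.expovariate(lam)
        x += math.cos(phi) * dur
        y += math.sin(phi) * dur
        t += dur
        phi += rng.vonmisesvariate(0.0, kappa)  # turn at the change point
        changes.append((t, x, y, phi))
    obs = []
    for j in range(n_obs + 1):
        tj = j * dt
        # N_j = max{m : T_m <= j*dt}: last change no later than tj
        m = max(i for i, c in enumerate(changes) if c[0] <= tj)
        tm, xm, ym, heading = changes[m]
        # Eq. (2): move from the last change along its heading for tj - T_m
        obs.append((xm + math.cos(heading) * (tj - tm),
                    ym + math.sin(heading) * (tj - tm)))
    return obs
```

With unit speed, the straight-line displacement between consecutive observations can never exceed Δt, which is a useful sanity check on any implementation.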
The likelihood function described earlier turns out to be complex to calculate, yet it is easy to simulate trajectories from the statistical model, based on independent draws from exponential and von Mises distributions combined with observation at regular time intervals.

Let $\theta$ denote the vector of parameters of interest and $y$ the observed data. The posterior distribution $p(\theta \mid y)$ is proportional to the product of the prior distribution $p(\theta)$ and the likelihood function $p(y \mid \theta)$:

$p(\theta \mid y) \propto p(\theta)\, p(y \mid \theta)$

The basic idea of ABC methods is to obtain simulations from the joint distribution $p(y, \theta)$ and retain the parameter values that generate simulated data close to the observed data $y$. In this way, ABC methods aim to replace the likelihood function with a measure of similarity between simulated and actual data. The rejection algorithm is the simplest and first proposed ABC method (Tavaré et al., 1997; Pritchard et al., 1999). It can be described as follows:

1. Compute a vector of summary statistics from the observed data, $S(y)$.
2. Simulate parameters $\theta^*$ sampled from $p(\theta)$ and data $y^*$ sampled from $p(\cdot \mid \theta^*)$.
3. Compute the vector of summary statistics of the simulated data, $S(y^*)$.
4. Accept $\theta^*$ as a posterior sample if $\rho(S(y^*), S(y)) < \delta$, for some distance measure $\rho$ and threshold $\delta$.
5. Repeat steps 2–4 $K$ times.

This rejection algorithm produces samples from $p(\theta \mid \rho(S(y), S(y^*)) < \delta)$, which is an approximation of $p(\theta \mid y)$. In particular, when the summary statistics are sufficient or near-sufficient for $\rho$, the approximate posterior distribution converges to the true posterior distribution as $\delta$ goes to 0 (Marjoram et al., 2003). Instead of selecting a value for $\delta$, it is common practice to set a threshold $\epsilon$ as a tolerance level defining the proportion of accepted simulations. For a complete review of ABC methods and techniques see Csilléry et al. (2010), Beaumont (2010), and Sisson, Fan & Beaumont (2018). We consider two regression-based correction methods.
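The rejection scheme just described can be sketched in a few lines of Python. This is a toy version with a plain Euclidean distance and an ϵ-style acceptance fraction; all names are ours:

```python
import math
import random

def abc_rejection(s_obs, prior_sampler, simulator, summarize,
                  n_sims=5000, accept_frac=0.01, rng=random):
    """Basic ABC rejection: draw theta from the prior, simulate data,
    summarize, and keep the draws whose summaries fall closest to the
    observed summaries s_obs (an accept_frac fraction of the draws)."""
    draws = []
    for _ in range(n_sims):
        theta = prior_sampler(rng)
        s_sim = summarize(simulator(theta, rng))
        draws.append((math.dist(s_obs, s_sim), theta))
    draws.sort(key=lambda d: d[0])
    k = max(1, int(accept_frac * n_sims))
    return [theta for _, theta in draws[:k]]  # approximate posterior sample
```

For instance, with a uniform prior on the mean of a normal distribution with known unit variance and the sample mean as the summary, the accepted draws concentrate around the observed sample mean.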
These correction methods implement an additional step to correct the imperfect match between the accepted and observed summary statistics. One uses local linear regression (Beaumont, Zhang & Balding, 2002), and the other is based on neural networks (Blum & François, 2010). To make the correction, both methods use the regression equation

$\theta_i = r(S(y_i)) + \xi_i,$

where $r$ is the regression function and the $\xi_i$ are centered random variables with equal variance. For the linear correction, $r$ is assumed to be a linear function; for the neural network correction, $r$ need not be linear. A weight $K(d(S(y_i), S(y_0)))$ (for $K$ a statistical kernel) is assigned to each simulation, so that simulations closer to the observed summary statistics are given greater weight. The values of $r$ and $\xi$ can be estimated by fitting a linear regression in the first case and a feed-forward neural network regression in the second. Then, a weighted sample from the posterior distribution is obtained by considering

$\theta_i^{corr} = \hat{r}(S(y_0)) + \hat{\xi}_i,$

where $\hat{r}(\cdot)$ is the estimated conditional mean and the $\hat{\xi}_i$ are the empirical residuals of the regression.

After a preliminary analysis, in which 20 summary statistics were assessed, we chose four that characterize the trajectories according to parameter values. Looking for summaries that capture diverse features of the movement, we plotted the proposed summaries against known parameters and kept those that changed monotonically with the parameter values. Plots of all the summaries assessed are provided in Appendix S1.
Finally, the four selected summaries were: the inverse of the observed average step length (where an observed step is the distance between the positions at consecutive observation times); a point estimate of $\kappa$, calculated by applying the inverse of the ratio of the first- and zero-order Bessel functions of the first kind to the mean cosine of the observed turning angles (where observed turning angles are the differences between consecutive observed directions); the standard deviation of the observed turning angles; and the standard deviation of the observed step lengths (Table 2).

Table 2: Summary statistics. Here $\ell_{obs,j} = \sqrt{(o_{j+1,1}-o_{j,1})^2 + (o_{j+1,2}-o_{j,2})^2}$ denotes the observed step length and $\omega_{obs,j}$ the observed turning angle between consecutive observed directions.

(1) Inverse of the observed average step length: $1 / \bar{\ell}_{obs}$.
(2) Point estimate of $\kappa$: $A^{-1}\left(\frac{1}{N_{obs}} \sum_{j=1}^{N_{obs}} \cos(\omega_{obs,j})\right)$, where $A(x) = I_1(x)/I_0(x)$ (Hornik & Grün, 2014).
(3) Standard deviation of the observed turning angles: $\sqrt{\sum_j (\omega_{obs,j} - \bar{\omega}_{obs})^2 / (N_{obs}-1)}$.
(4) Standard deviation of the observed step lengths: $\sqrt{\sum_j (\ell_{obs,j} - \bar{\ell}_{obs})^2 / (N_{obs}-1)}$.

We used the R package "abc" (Csilléry, François & Blum, 2012; http://cran.r-project.org/web/packages/abc/index.html) to perform the analysis. This package uses a standardized Euclidean distance to compare the observed and simulated summary statistics. We present results for the two regression-based correction methods and for the basic rejection ABC method. We did two simulation experiments. First we assessed the performance of the three ABC methods for our model.
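As an aside, the four summaries can be computed directly from an observed track. The sketch below is our own code, with the Bessel-ratio inverse $A^{-1}$ replaced by a standard piecewise approximation (Fisher, 1993) rather than an exact numerical inversion:

```python
import math

def summaries(obs):
    """The four summary statistics of Table 2, from observed locations
    obs = [(x0, y0), (x1, y1), ...]."""
    steps = [math.dist(a, b) for a, b in zip(obs, obs[1:])]
    headings = [math.atan2(b[1] - a[1], b[0] - a[0])
                for a, b in zip(obs, obs[1:])]
    # turning angles wrapped to (-pi, pi]
    turns = [(h2 - h1 + math.pi) % (2 * math.pi) - math.pi
             for h1, h2 in zip(headings, headings[1:])]
    mean = lambda xs: sum(xs) / len(xs)
    sd = lambda xs: math.sqrt(sum((x - mean(xs)) ** 2 for x in xs)
                              / (len(xs) - 1))
    r = mean([math.cos(w) for w in turns])
    # A^{-1}(r) via the piecewise approximation of Fisher (1993),
    # standing in for inverting I_1(x)/I_0(x) exactly
    if r < 0.53:
        kappa_hat = 2 * r + r ** 3 + 5 * r ** 5 / 6
    elif r < 0.85:
        kappa_hat = -0.4 + 1.39 * r + 0.43 / (1 - r)
    else:
        kappa_hat = 1 / (r ** 3 - 4 * r ** 2 + 3 * r)
    return (1 / mean(steps), kappa_hat, sd(turns), sd(steps))
```

On a track with constant step length and right-angle turns, for example, the estimate of $\kappa$ is close to zero and both standard deviations vanish, as expected.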
Then, we evaluated how well these methods approximate posterior probabilities depending on the relation between the temporal scale of the simulated trajectories and that of their observations. For both experiments we used a set of one million simulated trajectories, with parameters κ (dispersion parameter for the turning angles) and λ (rate parameter for the times between changes of direction) drawn from the priors p(κ) = U[0, 100] and p(λ) = U[0, 50]. We chose uniform priors because the aim of our simulation study was to analyze the performance of the ABC methods under a diversity of scenarios; uniform priors consider different parameter values with equal probability. The number of simulated steps was such that all trajectories had at least 1,500 observations. All trajectories were observed at regular times of Δt = 0.5. The R code is available at https://github.com/sofiar/ABC-steps-and-turns-.

Assessment of the inference capacity of the ABC methods

We assessed the performance of the three ABC versions: simple rejection, rejection corrected via linear regression, and rejection corrected via neural network. For different threshold values (ϵ) and for each algorithm version we conducted an ABC cross-validation analysis. That is, we selected one trajectory from the million-trajectory reference set and treated it as the real one. The selection was random, with the condition that the chosen parameters were not close to the upper limits of the prior distributions: to avoid the bias that uniform priors produce near their artificial upper limits, we only considered estimations with λ ≤ 25 and κ ≤ 70. The parameters were then estimated with the three algorithms for different threshold values (ϵ), using all simulations except the chosen one. This process was replicated N_rep = 100 times. For each method and ϵ value, we recorded the posterior samples obtained for both λ and κ.
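A minimal simulator of the step-and-turn process observed at regular times might look as follows (Python sketch; the authors' own R implementation is in the linked GitHub repository). Exponential waiting times with rate λ, von Mises(0, κ) turning angles, and a constant speed of 1 are assumed here.

```python
import numpy as np

def simulate_track(lam, kappa, t_end, dt, speed=1.0, rng=None):
    """Step-and-turn process in continuous time, observed every dt.
    Durations ~ Exp(lam); turning angles ~ von Mises(0, kappa)."""
    rng = rng or np.random.default_rng()
    t, heading, pos = 0.0, rng.uniform(0, 2 * np.pi), np.zeros(2)
    obs_times = np.arange(0.0, t_end, dt)
    obs, k = np.empty((len(obs_times), 2)), 0
    while k < len(obs_times):
        dur = rng.exponential(1.0 / lam)          # time until the next turn
        # record any observation times falling inside this straight segment
        while k < len(obs_times) and obs_times[k] <= t + dur:
            obs[k] = pos + speed * (obs_times[k] - t) * np.array([np.cos(heading),
                                                                  np.sin(heading)])
            k += 1
        pos = pos + speed * dur * np.array([np.cos(heading), np.sin(heading)])
        t += dur
        heading += rng.vonmises(0.0, kappa)       # turn at the change point
    return obs
```

Prior draws for a reference table would then be, e.g., `lam = rng.uniform(0, 50)` and `kappa = rng.uniform(0, 100)`, matching the priors above.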
We then calculated the prediction error as

(3) $\sqrt{ \sum_i (\tilde{\theta}_i - \theta_i)^2 / N_{rep} }, \quad \theta = (\lambda, \kappa);\ i = 1, \dots, N_{rep}$

where $\theta_i$ is the true parameter value of the ith synthetic data set and $\tilde{\theta}_i$ is the posterior median of the parameter. We also computed a measure of the dispersion of the errors relative to the magnitude of the parameters for each method and tolerance value. We call this quantity the dispersion index (DI) and calculate it as

(4) $DI = \left[ \sum_i \frac{|\tilde{\theta}_i - \theta_i|}{\theta_i} \right] / N_{rep}, \quad \theta = (\lambda, \kappa);\ i = 1, \dots, N_{rep}$

Furthermore, to assess whether the spread of the posterior distributions was neither overly large nor overly small, we computed the empirical coverage of the α = 95% credible interval for the two parameters and for different thresholds (ϵ). The empirical coverage is the proportion of simulations for which the true parameter value falls within the α% highest posterior density (HPD) interval. If the nominal confidence levels were accurate, this proportion should be near 0.95. If this holds for all α, the analysis is said to satisfy the coverage property. One way to test this property, which is also a useful way to choose the threshold value ϵ, is the coverage test introduced by Prangle et al. (2014). The basic idea is to perform ABC analyses on many data sets simulated from known parameter values and, for each, compute p, the proportion of the estimated posterior distribution smaller than the true parameter. Ideally these values should be distributed as U(0, 1). For a complete description of this test see Prangle et al. (2014).
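In code, Eqs. (3) and (4) amount to the following (Python sketch; the input names are ours):

```python
import numpy as np

def prediction_error(theta_true, theta_med):
    """Eq. (3): root mean squared difference between the posterior medians
    and the true values over the N_rep cross-validation replicates."""
    t, m = np.asarray(theta_true, float), np.asarray(theta_med, float)
    return np.sqrt(np.mean((m - t) ** 2))

def dispersion_index(theta_true, theta_med):
    """Eq. (4): mean absolute error relative to the magnitude of the true parameter."""
    t, m = np.asarray(theta_true, float), np.asarray(theta_med, float)
    return np.mean(np.abs(m - t) / t)
```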
In order to analyze all possible α values, we performed this test using the package “abctools” (Nunes & Prangle, 2015; https://cran.r-project.org/web/packages/abctools/index.html).

Relative scale of observations and accuracy of the posterior density

We continued the analysis by evaluating how well these methods approximate posterior probabilities as a function of the ratio, R, between the temporal scale of observation (Δt) and the temporal scale of changes in direction (1∕λ) (Fig. 2). For instance, if R = 1 then λ = 1∕Δt, which means that the time between consecutive observations equals the mean time between changes in direction. Conversely, if R < 1 the time scale between consecutive observations is smaller than the time scale at which animals decide to change direction (over-sampling case), and the opposite occurs if R > 1 (sub-sampling case) (Fig. 2). We considered different values of R (between 0.06 and 5) and for each we simulated 50 trajectories with values of κ ∈ {10, 20, 30, …, 70}. Then, using the original million trajectories, the estimations for the three ABC methods were computed treating these new trajectories as the true observations. We calculated the prediction error for κ and λ for every combination of R and κ.

Figure 2: Simulation examples. Sampling schemes and the temporal scale of changes in direction. Black lines show the movement process; red points show the observations. (A) Over-sampling case, (B) balanced case, (C) sub-sampling case.

Sheep data example

When only location data are available, i.e., GPS data at each time Δt, it is not possible to know how many changes of direction occurred between consecutive observations; the likelihood function is therefore intractable, making ABC techniques useful for inference about the model. Suppose, however, that there exists a source of information that permits estimation of the values of t[i] and ω[i] for all i.
Considering these estimated values as the truth, we have samples t[1], …, t[n] and ω[1], …, ω[n], and the estimation of κ and λ is then straightforward. We compared both estimations for a real trajectory. As presented thus far, we fit the model using only information from the GPS locations with ABC techniques. In addition, we used information from DailyDiary devices to reconstruct the high-resolution trajectory. This allowed us to infer the times at which the animal changed direction, obtain samples of t and ω, and use MCMC to obtain draws from the posterior distributions of the parameters. The data were collected from one sheep in Bariloche, Argentina, during February and March of 2019. The sheep was equipped with a collar containing a GPS (CatLog-B, Perthold Engineering, http://www.perthold.de; USA), programmed to record location data every five minutes, and a DailyDiary (DD; Wilson, Shepard & Liebsch, 2008), programmed to record 40 acceleration readings per second (40 Hz) and 13 magnetometer readings per second (13 Hz). The DD is an electronic device that measures acceleration and magnetism in three dimensions, which can be described relative to the body of the animal. Such data allow the dead-reckoned (DR) path of an animal to be reconstructed at high resolution. In some cases no such data exist, and only the location at certain times is available. The goal here is to compare the ability of the ABC methods to estimate the model parameters when only location data at certain times are available against estimation via MCMC when the extra information is available. From the original data, we randomly selected one segment of six hours. Using the DD information, we first estimated the path traveled (pseudotrack) by the sheep using the dead-reckoning technique (Wilson & Wilson, 1988; Wilson et al., 2007).
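When the durations t[i] and turning angles ω[i] are observed directly, the likelihood factorizes into exponential and von Mises terms. The paper samples the resulting posterior with Stan; as a simpler stand-in for illustration, the maximum-likelihood point estimates can be computed directly (Python sketch; this is our simplification, not the paper's MCMC):

```python
import numpy as np
from scipy.special import i0e, i1e
from scipy.optimize import brentq

def mle_lambda_kappa(t, omega):
    """ML point estimates when change-of-direction times t_i ~ Exp(lambda)
    and turning angles omega_i ~ von Mises(0, kappa) are observed directly
    (e.g. extracted from a dead-reckoned track)."""
    lam_hat = 1.0 / np.mean(t)                 # MLE of an exponential rate
    c = np.mean(np.cos(omega))                 # mean resultant cosine
    A = lambda x: i1e(x) / i0e(x)              # A(x) = I1(x)/I0(x)
    # kappa solves A(kappa) = mean cos(omega); invert numerically
    kappa_hat = brentq(lambda x: A(x) - c, 1e-8, 1e4) if 0.0 < c < A(1e4) else 0.0
    return lam_hat, kappa_hat
```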
In this step we made use of the R package “TrackReconstruction” (https://cran.r-project.org/web/packages/TrackReconstruction/index.html). After that, we corrected the bias of the estimations using the data from the GPS (Liu et al., 2015); this correction was made using the R package “BayesianAnimalTracker” (https://cran.r-project.org/web/packages/BayesianAnimalTracker/index.html). In this manner, we obtained a trajectory sampled at a resolution of 1 s. To satisfy the hypotheses of the model, we selected a part of that trajectory that appeared to come from the same behaviour, i.e., a piece that visually appeared to have the same distribution of turning angles and step lengths. To determine the points at which there was a change of movement direction, we applied the algorithm of Potts et al. (2018), which detects the turning points of the trajectory using data on the animal's headings, and subsequently calculated steps and turning angles. With that information it is straightforward to infer the parameters' values via MCMC and obtain samples from the joint posterior distribution directly using the software Stan, as the likelihood can be written in closed form (Carpenter et al., 2017). We also calculated the summary statistics of the trajectory observed at Δt = 50 s (one observation every 50 s of the reconstructed trajectory) and applied the three ABC algorithms. We finally compared both estimations. The data used are available at Figshare, DOI 10.6084/m9.figshare.9971642.v2.

Assessment of the inference capacity of the ABC methods

Figure 3 shows the values of the prediction errors and the dispersion index (DI) for each method and ϵ value obtained from the ABC cross-validation analysis. In all cases the prediction errors decreased as the threshold (ϵ) decreased. However, for the algorithms corrected via linear regression and neural networks, larger threshold levels (ϵ) can still produce low prediction errors.
Something similar happened with the DI values: lower threshold values imply lower values of this index (Fig. 3). These values give an idea of the width of the posterior distributions. It is evident that for the rejection algorithm the posterior distributions are quite wide, especially for ϵ = 0.1. For the corrected algorithms, in the best case (ϵ = 0.001), the difference between the estimated and true parameters is at most approximately 1.3 units for κ and 0.3 for λ. We also analyzed the relationship between the true parameter values and the median of the estimated posterior (Fig. 4). The estimate of λ improves when it takes lower values, especially for the algorithm corrected via linear regression. That behaviour is not as clear for the parameter κ; we elaborate on this point in the discussion. Based on these results, the algorithm corrected via linear regression seems to perform best.

Figure 3: Prediction errors and dispersion index from the ABC cross-validation analysis. Values of the prediction error (Eq. (3)) and of the dispersion index DI (Eq. (4)) for the rate parameter for the duration of steps, λ (A–B), and the concentration parameter for the turning angle between consecutive steps, κ (C–D), for each method and threshold ϵ.

Figure 4: Cross-validation analysis for the rejection ABC algorithm and the two corrections. (A) and (B): results for both parameters for the rejection ABC; (C) and (D): the linear ABC; (E) and (F): the neural network ABC. Shown is the relationship between the true parameter values and the median of the estimated posterior. Colors denote different threshold values (ϵ): purple for ϵ = 0.1, blue for ϵ = 0.01, green for ϵ = 0.005, and red for ϵ = 0.001. The black line indicates the identity line x = y, the ideal relation.

Figure 5: Coverage analyses for parameter estimation.
Coverage test for λ (rate parameter for the duration of steps) and κ (concentration parameter for the turning angle between consecutive steps). Shown is the relative frequency (p) of accepted parameter values that were less than the true value in the ABC analyses. If the spread of the posterior distributions is neither overly large nor overly small, these values should be distributed as U(0, 1). From (A) to (X): columns give the results for different ϵ values; the first two rows correspond to the rejection algorithm, the next two to the ABC with linear correction, and the last two to the ABC corrected via neural networks.

We estimated the empirical coverage of the 95% HPD intervals for both parameters (κ and λ), for the three ABC algorithms and for different threshold (ϵ) values. These coverages were almost always greater than 0.95, except for the highest threshold value (ϵ = 0.1) with the simple rejection algorithm and the one corrected via neural network, for which the empirical coverages were slightly below 0.95. The plots for this analysis are provided in Appendix S1. Finally, in order to check the coverage property we performed a coverage test for both parameters (Fig. 5). In most cases, the distributions obtained do not show a clear approximation to U(0, 1). However, there is an evident difference between the histograms obtained with the simple rejection ABC and those obtained with the other two algorithms. The rejection ABC histograms are the farthest from uniform: for both parameters the distributions of the p values are left-skewed, indicating that the algorithm tends to overestimate the parameters. For the other two algorithms the left skew is much more moderate, and for the lowest ϵ values of the linear algorithm the histograms are more uniform, indicating that coverage may be attained.
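The p values of the coverage test are cheap to compute once the cross-validation posteriors are stored (Python sketch; the paper used the R package "abctools", and checking uniformity with a Kolmogorov–Smirnov test is our choice for the sketch):

```python
import numpy as np
from scipy.stats import kstest

def coverage_pvalues(posterior_samples, true_values):
    """p_i = fraction of the i-th ABC posterior sample below the true value.
    Under correct coverage the p_i should look like draws from U(0, 1)."""
    p = np.array([np.mean(post < truth)
                  for post, truth in zip(posterior_samples, true_values)])
    return p, kstest(p, "uniform").pvalue    # KS test against the uniform CDF
```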
However, not rejecting that coverage holds does not unequivocally demonstrate that the ABC posterior approximation is accurate: if the empirical data are uninformative, ABC will return posterior distributions very similar to the priors, which would also produce uniform coverage plots.

Relative scale of observations and accuracy of the posterior density

In order to evaluate the importance of the relationship between the time scale of observation and the time scale at which changes occur in the movement process, we evaluated how well the two parameters were estimated in relation to the ratio R. The prediction errors for λ increased as the value of R increased (Fig. 6). For κ, this relation can be seen when the true value of the parameter is large (Fig. 7).

Figure 6: Prediction errors for λ. Scatter-plot-and-smoother graphs (by local regression) of the prediction errors for the rate parameter for the duration of steps (λ) for different ratios between the temporal scale of observation and the scale of changes in direction (R). (A) Rejection ABC, (B) linear ABC, (C) neural network ABC. High R values indicate that the temporal scale of observation is larger than the temporal scale of the movement decision process. Black dots are the prediction errors for λ for each R value; the blue line is the smoothed curve for those values; 95% intervals are shown in gray; the vertical dotted line indicates R = 1.

Figure 7: Prediction errors for κ. From (A) to (O): scatter-plot-and-smoother graphs (by local regression) of the prediction errors for the dispersion parameter of the turning angles (κ) for different ratios between the temporal scale of observation and the scale of changes in direction (R). Rows correspond to different values of κ and columns to different ABC algorithms.
High R values indicate that the temporal scale of observation is larger than the temporal scale of the movement decision process. Black dots are the prediction errors of κ for each R value; the blue line is the smoothed curve for those values; 95% intervals are shown in gray; the vertical dotted line indicates R = 1.

For κ, this relation can be seen when the true value of the parameter is large. Again, the corrected algorithms have the smallest errors for both parameters. According to the results shown in Figs. 6 and 7, it is evident that there is a relationship between the ratio R and the capacity of these methods to estimate the parameters. For ratios approximately less than 5 the errors are small and it is possible to obtain good estimates; this requires the time scale of observation to be at most about 5 times the time scale at which the animals decide to change direction. For higher values of Δt it would be more difficult to make inferences using this technique.

Sheep data

The selected trajectory was from February 27th, 2019, from 19:01:21 to 20:02:00, a total of 1.01 hours (Fig. 8). For the estimations made using the reconstructed high-resolution trajectory, the posterior distribution of each parameter was estimated from a sample of 3 × 1000 MCMC draws. As the estimated parameter values were near the lower limits of the prior distributions, to conduct inference via ABC with only the location data we simulated a new set of trajectories with priors p(κ) = U[0, 10] and p(λ) = U[0, 10].

Figure 8: Trajectory from February 27th, 2019, from 19:01:21 to 20:02:00, reconstructed at one-second resolution by dead-reckoning and corrected using the GPS information.

Draws from the posterior distributions obtained through MCMC and through the ABC algorithms gave similar results (Fig. 9). Again, the rejection ABC algorithm produced the least exact estimation, i.e., its posterior is the furthest from the one obtained by MCMC.
Although this trajectory is just a simple example, it shows that it is possible to apply this model to actual animal data.

Figure 9: Results for the sheep data example. Comparison between parameter estimates for the sheep movement trajectory based on the detailed trajectory reconstructed using magnetometer and GPS data, and those estimated from the trajectory sampled at regular time intervals. In red, the posteriors obtained for both parameters κ and λ through MCMC on the detailed trajectory; in gray, those obtained with each ABC algorithm on the observed locations. The purple lines correspond to the prior distributions. (A–C) Results for λ for each ABC algorithm; (D–F) results for κ for each ABC algorithm.

Animal movement modeling and analysis is carried out in either continuous or discrete time. Continuous-time models are more realistic but often harder to interpret than the discrete versions (McClintock et al., 2014). A compromise between these approaches is to model movement as steps and turns but to have step duration (or the times at which turns are made) occur in continuous time; in this manner one can have the best of both worlds, so to speak. Here we considered an underlying movement process, evolving in continuous time, that is observed at regular time intervals, as would be standard for a terrestrial animal fitted with a GPS collar. The likelihood function turned out to be intractable, but it is feasible to quickly generate simulations from the process and observation models; thus, we proposed to use ABC methods. Even though these techniques showed certain limitations, it was possible to obtain accurate parameter estimates when the temporal scale of observations was not too coarse compared to the scale of changes in direction. Our simulation study showed that simple rejection ABC does not perform well for the proposed state-space model, but the two corrected versions of this algorithm improve the estimations (Figs. 3 and 4).
Overall, the best performance was obtained with the linear correction. However, the applicability of these methods depends strongly on the ratio between the observation scale and the mean time between changes in movement direction. We found that when this ratio is smaller than 5 it is possible to make inferences about the parameters (Figs. 6 and 7). That is, the time between observations should be less than 5 times the average time between changes of direction in order to generate good estimates. Beyond our findings about the capacity to make inference with these techniques in a simulation study, it is important to note that in an applied case more informative priors could be considered. Here, our aim was to evaluate the performance of the ABC techniques over many parameter combinations, generating trajectories and then sampling from those trajectories. In order to optimize computing time, we simulated a million trajectories, sampling their parameters from uniform distributions, and then randomly chose one of them as the observed data while the rest of the simulations were used to perform the ABC computations; this justifies the use of uniform priors for our parameters. As we did in our real data example, in applied cases it would be relatively straightforward to come up with more informative priors, especially for the expected time between changes in movement direction. As new technologies allow us to obtain very detailed movement data, we can get better estimates of the temporal scales at which animals make movement decisions: high-frequency data from accelerometers and magnetometers combined with GPS data can be used to obtain trajectories with sub-second temporal resolution and then to detect fine-scale movement decisions such as changes in direction. These detailed trajectories could be used to elicit informative priors for when only location data at longer time intervals are available.
The movement model presented here is quite simple, as we assume constant movement speed and turning angles with zero mean. Nevertheless, the model is an improvement over discrete-time versions, where the temporal scale of movement has to match the scale of observations. Further developments of these methods should be considered, for example adding a stochastic error term to the observations, or allowing for the presence of missing values in them. It would also be important to consider additional features that are common in movement studies, such as the inclusion of more than one movement behavior and the effect of habitat features on both movement parameters and changes among behaviors (Morales et al., 2004; Hooten et al., 2017). Even though these extensions would mean estimating several more parameters, such models imply further structure in the trajectories that could be used as part of the summary statistics characterizing the data. This might reduce the set of parameter combinations capable of reproducing the features present in the observations, allowing for ABC inference. In general, the processes behind the realized movement of an individual and the processes that affect how we record the trajectory usually operate at different time scales, making it challenging to analyze and understand the former given the latter. The state-space model used here allowed us to connect these two scales in an intuitive and easy-to-interpret way. Our results indicate that, when designing data collection protocols, it is crucial to be aware of the differences between the time scale at which animals make their movement decisions and the time scale at which the data will be collected, in order to avoid incorrect interpretations of the system. In addition, very fine time scales may not be necessary to obtain good estimates of certain movement processes.
© 2005,2010-2013,2015,2018 John Abbott, Anna M. Bigatti
GNU Free Documentation License, Version 1.2
CoCoALib Documentation Index

User documentation for SmallFpImpl

The class SmallFpImpl is a very low level implementation class for fast arithmetic in a small, prime finite field. It is not intended for use by casual CoCoALib users, who should instead see the documentation in QuotientRing (in particular the function NewZZmod), or possibly the documentation in RingFp, RingFpLog, and RingFpDouble.

The class SmallFpImpl offers the possibility of efficient arithmetic in small, prime finite fields. This efficiency comes at a cost: the interface is rather unnatural. The emphasis is on speed rather than convenience; this speed depends on many functions being inlined. The overall structure is modelled on that of ring and RingElem: namely, operations on values are via member functions of SmallFpImpl. The class SmallFpImpl records the modulus, while the actual values are of type SmallFpImpl::value, and record only the residue class. Also see below for the special type SmallFpImpl::NonRedValue.

Constructors and pseudo-constructors

The ctor for a SmallFpImpl object takes 1 or 2 args:
• SmallFpImpl(p) -- create a SmallFpImpl for prime p; error if p is not prime, or too large.
• SmallFpImpl(p,conv) -- specify export convention conv: either SymmResidues or NonNegResidues

The default export convention is SymmResidues (unless changed in the GlobalManager). This convention may be either GlobalSettings::SymmResidues or GlobalSettings::NonNegResidues; the default convention is determined by the GlobalManager. Note: if the first argument is of type SmallPrime then the constructor skips testing for primality.

Queries and views

Let ModP be a SmallFpImpl object.
• SmallFpImpl::IsGoodCtorArg(p) -- returns true if p is a valid SmallFpImpl ctor arg; otherwise false
• SmallFpImpl::ourMaxModulus() -- returns largest ctor arg allowed by the implementation
• ModP.myModulus() -- returns the prime p (as a long)
• ModP.myMaxIters() -- see section on unnormalized computation

Operations on Values

All operations (except for zero, one, IsZero, IsOne, == and !=) must be effected by calling member functions of the SmallFpImpl class. The member function myReduce is effectively a ctor. Here is a brief summary.

    long n;
    BigInt N;
    BigRat q;
    SmallFpImpl::value a, b, c;

    a = zero(SmallFp);          // equiv to a = ModP.myReduce(0);
    b = one(SmallFp);           // equiv to b = ModP.myReduce(1);
    IsZero(a);                  // equiv to (a == ModP.myReduce(0))
    IsOne(b);                   // equiv to (b == ModP.myReduce(1))
    a == b;                     // test for equality
    a != b;                     // logical negation of (a == b)

    ModP.myReduce(n);           // reduce mod p
    ModP.myReduce(N);           // reduce mod p
    ModP.myReduce(q);           // reduce mod p

    ModP.myExportNonNeg(a);     // returns the least non-negative preimage (of type long), between 0 and p-1
    ModP.myExportSymm(a);       // returns a symmetric preimage (of type long), between -p/2 and p/2
    ModP.myExport(a);           // returns a preimage (of type long) between -p/2 and p-1; see note below!

    ModP.myNegate(a);           // -a mod p, additive inverse
    ModP.myRecip(a);            // inv(a), multiplicative inverse
    ModP.myAdd(a, b);           // (a+b)%p
    ModP.mySub(a, b);           // (a-b)%p
    ModP.myMul(a, b);           // (a*b)%p
    ModP.myDiv(a, b);           // (a*inv(b))%p; where inv(b) is inverse of b
    ModP.myPower(a, n);         // (a^n)%p; where ^ means "to the power of"
    ModP.myIsZeroAddMul(a,b,c); // a = (a+b*c)%p; result is (a==0)
    ModP.myAddMul(a,b,c);       // (a+b*c)%p

We suggest using the function myExport principally for values to be printed; in other contexts we recommend using myExportNonNeg if possible.
Code calling myExport should assume only that the value returned is between -p/2 and p-1; the actual range of return values is determined by the convention specified when the SmallFpImpl object was constructed.

Advanced Use: Unnormalized Computation

The normal mod p arithmetic operations listed above always produce a normalized result, but this normalization incurs a run-time cost. In some loops (e.g. for an inner product) it may be possible to compute several iterations before having to normalize the result. SmallFpImpl supports this by offering the type SmallFpImpl::NonRedValue for unnormalized values; this type is effectively an unsigned integer, and such values may be added and multiplied without normalization (but also without overflow checks!) using the usual + and * operators (and also += and *=). SmallFpImpl offers the following three functions to help implement a delayed normalization strategy.

    SmallFpImpl::NonRedValue a;
    ModP.myNormalize(a);     -- FULL normalization of a, result is a SmallFpImpl::value
    ModP.myHalfNormalize(a); -- *fast*, PARTIAL normalization of a, result is a NonRedValue
    ModP.myMaxIters();       -- see comment below

The value of myMaxIters() is the largest number of unnormalized products (of normalized values) which may safely be added to a "half normalized" value without risking overflow. The half normalization operation is quick (at most a comparison and a subtraction). Naturally, the final result must be fully normalized. See example program ex-SmallFp1.C for a working implementation.

Maintainer documentation for SmallFpImpl

Most functions are implemented inline, and no sanity checks are performed (except when CoCoA_DEBUG is enabled). The constructor does do some checking. SmallFpImpl::value_t must be an unsigned integral type; it is a typedef to a type specified in CoCoA/config.H -- this should allow fairly easy platform-specific customization.
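The delayed-normalization strategy described under "Advanced Use" can be sketched outside C++ too. The following Python illustration mirrors the bookkeeping (Python integers never overflow, so the 64-bit bound is enforced by hand; a full `% P` stands in for the cheaper partial reduction that myHalfNormalize performs, and the prime 46337 is merely an example value):

```python
P = 46337                               # example prime; (P-1)^2 fits in 64 bits
MAX64 = 2**64 - 1
MAX_ITERS = MAX64 // ((P - 1) ** 2)     # analogue of ModP.myMaxIters()

def inner_product_mod_p(u, v, max_iters=MAX_ITERS):
    """Inner product mod P with delayed normalization: accumulate
    unnormalized products (NonRedValue style) and reduce only when the
    simulated 64-bit accumulator could otherwise overflow."""
    acc, iters = 0, 0
    for a, b in zip(u, v):              # a, b assumed normalized in [0, P)
        acc += a * b                    # unnormalized add-mul
        iters += 1
        if iters == max_iters:          # reduce before a possible overflow
            acc %= P                    # stand-in for myHalfNormalize
            iters = 0
    return acc % P                      # full normalization of the result
```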
This code is valid only if the square of myModulus can be represented in a SmallFpImpl::value_t; the constructor checks this condition. Most functions do not require myModulus to be prime, though division becomes only a partial map if it is composite; and the function myIsDivisible is correct only if myModulus is prime. Currently the constructor rejects non-prime moduli. The code assumes that each value modulo p is represented as the least non-negative residue (i.e. the values are represented as integers in the range 0 to p-1 inclusive). This decision is linked to the fact that SmallFpImpl::value_t is an unsigned type. The constants myResidueUPBValue and myIterLimit are to allow efficient exploitation of non-reduced multiplication (e.g. when trying to compute an inner product modulo p); see example program ex-SmallFp1.C. The return type of NumBits is int even though the result is always non-negative -- I do not like unsigned values.

Bugs, Shortcomings, and other ideas

Should there be a myIsMinusOne function?
Genetic Programming Anyone?

01-30-2015, 05:03 PM Post: #1
Namir | Posts: 1,115 | Senior Member | Joined: Dec 2013
Genetic Programming Anyone?
Has anyone who visits this site done some work on or studied Genetic Programming? I am curious to see a programming example of how GP works to, for example, develop a good regression model.

01-31-2015, 01:12 PM Post: #2
Csaba Tizedes | Posts: 609 | Senior Member | Joined: May 2014
RE: Genetic Programming Anyone?
Hi Namir, I think this paper will be interesting for you:

01-31-2015, 03:09 PM Post: #3
Jeff_Kearns | Posts: 148 | Member | Joined: Dec 2013
RE: Genetic Programming Anyone?
That is a very interesting paper - and topic.

01-31-2015, 09:26 PM Post: #4
Namir | Posts: 1,115 | Senior Member | Joined: Dec 2013
RE: Genetic Programming Anyone?
The trick is to be able to dynamically alter the expression tree! It's NOT easy. The alternative approach that I have used is to test regression models between multiple variables, with EACH variable having a set of different possible transformations. This approach can test hundreds if not THOUSANDS of regression models. The approach is deterministic, as it gives you the same exact results when repeated. Genetic Programming (GP for short), a variant of the Genetic Algorithm, has a stochastic element in it. So repeating calculations, which involve random numbers, will most likely give you different results. Matrices can easily model graphs and trees--both of which contain nodes. The graphs in GP contain nodes that are constants, variables, operators, or even functions. Dynamically knowing what sub-graph to add and what part of the current graph to remove remains a mystery to me. My goal is to learn and master the algorithm that dynamically grows and prunes expression trees.

02-04-2015, 12:42 PM Post: #5
Martin Hepperle | Posts: 419 | Senior Member | Joined: May 2014
RE: Genetic Programming Anyone?
While not running on an HP calculator, I have used a program which was initially called "Eureqa" (like the old Borland math tool) and now sails under the name "Nutonian Formulize". This is a handy tool for creating analytical approximations of data sets. It constructs and evaluates equations from basic building blocks using a genetic algorithm. You can watch it at work and finally select a compromise between accuracy and complexity for your approximation equation. Maybe you can get some ideas for an implementation on a calculator. I would expect rather high memory requirements.

02-04-2015, 02:44 PM | Post: #6
Massimo Gnerucci | Posts: 2,684 | Senior Member | Joined: Dec 2013

RE: Genetic Programming Anyone?

(02-04-2015 12:42 PM) Martin Hepperle Wrote: while not running on an HP calculator I have used a program which was initially called "Eureqa" (like the old Borland math tool) and now sails under the name "Nutonian Formulize"

Borland's Eureka -> Real Software's Mercury

-+×÷ ↔ left is right and right is wrong
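For anyone who wants to experiment, the "grow and prune" step discussed in this thread can be sketched in a few lines of Python. Everything below (the tuple encoding, the operator set, the depth limit) is an illustrative choice, not taken from any particular GP package:

```python
import random

OPS = {'+': lambda a, b: a + b,
       '-': lambda a, b: a - b,
       '*': lambda a, b: a * b}

def grow(depth):
    """Randomly grow an expression tree: a leaf ('x' or a constant)
    or a node ('op', left, right)."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', random.randint(1, 9)])
    op = random.choice(list(OPS))
    return (op, grow(depth - 1), grow(depth - 1))

def nodes(tree, path=()):
    """Yield the path (sequence of child indices) to every node."""
    yield path
    if isinstance(tree, tuple):
        yield from nodes(tree[1], path + (1,))
        yield from nodes(tree[2], path + (2,))

def replace(tree, path, new):
    """Return a copy of tree with the subtree at `path` replaced by `new`."""
    if not path:
        return new
    children = list(tree)
    children[path[0]] = replace(tree[path[0]], path[1:], new)
    return tuple(children)

def mutate(tree, max_depth=3):
    """Grow-and-prune mutation: swap a random subtree for a fresh random one."""
    target = random.choice(list(nodes(tree)))
    return replace(tree, target, grow(max_depth))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))
    return tree  # a numeric constant
```

A full GP run would wrap this in a population loop with crossover and fitness-based selection, but mutation is the piece that "dynamically alters the expression tree".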
{"url":"https://hpmuseum.org/forum/thread-2970.html","timestamp":"2024-11-13T06:41:25Z","content_type":"application/xhtml+xml","content_length":"30869","record_id":"<urn:uuid:d792f5e6-462a-4e12-a744-12a8e5b986eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00379.warc.gz"}
Analytical Mechanics

Code: 02ANM | Completion: Z,ZK | Credits: 4 | Range: 2P+2C

In order to register for the course 02ANM, the student must have successfully completed, or received credit for and not exhausted all examination dates for, the course 02MECHZ. The course 02ANM can be graded only after the course 02MECHZ has been successfully completed.

The course is an introduction to analytical mechanics. The students acquire knowledge of the basic concepts of the Lagrange and Hamiltonian formalism as well as different approaches to the description of dynamics (Newton's, Lagrange, Hamilton and Hamilton-Jacobi equations). The efficiency of these methods is illustrated on elementary examples like the two-body problem, the motion of a system of constrained mass points, and of a rigid body. Advanced parts of the course cover differential and integral principles of mechanics.

Syllabus of lectures:
1. Mathematical formalism
2. Newtonian mechanics
3. The Lagrange function, constraints, generalized coordinates
4. Lagrange equations
5. Symmetries of the Lagrange function and conservation laws
6. Static equilibrium, the principle of virtual displacements
7. Differential principles
8. Integral principles
9. Hamilton's formalism
10. Poisson bracket and conservation laws
11. Canonical transformations
12. Hamilton-Jacobi equation

Syllabus of tutorials:
Solving problems to illustrate the theory from the lecture.

Study Objective:
Learn the basics of analytical mechanics and of the Lagrange and Hamilton formalism. Solve problems in mechanics with the Lagrange and Hamilton formalism.

Study materials:
Key references:
[1] L.D. Landau, E.M. Lifšic, Course of Theoretical Physics, Elsevier, 2013
[2] F. Strocchi, A Primer of Analytical Mechanics, Springer International, New York 2018
Recommended references:
[3] G. Joos, I. Freeman: Theoretical Physics, Courier Corp. 2013.
{"url":"https://bilakniha.cvut.cz/en/predmet6897906.html","timestamp":"2024-11-11T07:34:52Z","content_type":"text/html","content_length":"9111","record_id":"<urn:uuid:5fc45ae4-9698-43ac-8503-214305641d74>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00890.warc.gz"}
Definitions and Types of Integral Equations - Solving Integrals Definitions and Types of Integral Equations In this article, I will explain what Integral Equations are, how they are structured and what are certain types of Integral Equations. What is an Integral Equation? An integral equation is an equation in which an unknown function appears under one or more integration signs. Any integral calculus statement like — $ y= \int_a^b \phi(x) dx$ can be considered as an integral equation. If you noticed, I have used two integration limits (a and b) in the above integral equation – they are special in this discussion, and their significance will be discussed later in the article. Linear Integral Equations A general type of integral equation, $ g(x) y(x) = f(x) + \lambda \int_a^\Box K(x, t) y(t) dt$ is called linear integral equation as only linear operations are performed in the equation. The one, which is not linear, is obviously called a "Non-linear integral equation". We generally mean linear integral equation when we say integral equation. When a non-linear integral equation is to be called, it is called exclusively. In the general type of the linear equation $ g(x) y(x) = f(x) + \lambda \int_a^\Box K(x, t) y(t) dt$ we have used a 'box $ \Box$' to indicate the higher limit of the integration. Types of Integral Equations Integral Equations can be of two types according to whether the box $ \Box$ (the upper limit) is a constant, $b$ or a variable, $x$. The first type of integral equations which involve constants as both the limits — are called Fredholm Type Integral equations. On the other hand, when one of the limits is a variable ($x$, the independent variable of which $y$, $f$ and $K$ are functions), such integral equations are called Volterra’s Integral Equations. • $ g(x) y(x) = f(x) + \lambda \int_a^b K(x, t) y(t) dt$ is a Fredholm Integral Equation and • $ g(x) y(x) = f(x) + \lambda \int_a^x K(x, t) y(t) dt$ is a Volterra Integral Equation. 
In an integral equation, $ y$ is to be determined with $ g$, $ f$ and $ K$ being known and $ \lambda$ being a non-zero complex parameter. The function $ K (x,t)$ is called the ‘kernel’ of the integral equation.

Structure of an Integral Equation

Types of Fredholm Integral Equations

As the general form of the Fredholm Integral Equation is $ g(x) y(x) = f(x) + \lambda \int_a^b K(x, t) y(t) dt$, there may be the following other types of it according to the values of $ g$ and $ f$:

1. Fredholm Integral Equation of First Kind — when $ g(x) = 0$: $ f(x) + \lambda \int_a^b K(x, t) y(t) dt=0$

2. Fredholm Integral Equation of Second Kind — when $ g(x) = 1$: $ y(x) = f(x) + \lambda \int_a^b K(x, t) y(t) dt$

3. Fredholm Integral Equation of Homogeneous Second Kind — when $ f(x)=0$ and $ g(x)=1$: $ y(x) = \lambda \int_a^b K(x, t) y(t) dt$

4. The general Fredholm equation is also called the Fredholm Equation of Third/Final Kind, with $ f(x) \neq 0$, $ 1 \neq g(x) \neq 0$.

Types of Volterra Integral Equations

As the general form of the Volterra Integral Equation is $ g(x) y(x) = f(x) + \lambda \int_a^x K(x, t) y(t) dt$, there may be the following other types of it according to the values of $ g$ and $ f$:

1. Volterra Integral Equation of First Kind — when $ g(x) = 0$: $ f(x) + \lambda \int_a^x K(x, t) y(t) dt=0$

2. Volterra Integral Equation of Second Kind — when $ g(x) = 1$: $ y(x) = f(x) + \lambda \int_a^x K(x, t) y(t) dt$

3. Volterra Integral Equation of Homogeneous Second Kind — when $ f(x)=0$ and $ g(x)=1$: $ y(x) = \lambda \int_a^x K(x, t) y(t) dt$

4. The general Volterra equation is also called the Volterra Equation of Third/Final Kind, with $ f(x) \neq 0$, $ 1 \neq g(x) \neq 0$.

Singular Integral Equations

In the general Fredholm/Volterra integral equations, two singular situations arise:

• the limit $ a \to -\infty$ and $ \Box \to \infty$;
• the kernel $ K(x,t) = \pm \infty$ at some points in the integration interval $ [a, \Box]$.
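Equations of the second kind are usually solved numerically by discretising the integral. As a concrete illustration (my own sketch, using NumPy and the Nyström method with a trapezoidal rule, neither of which is prescribed by the text above), consider the Fredholm equation of the second kind $y(x) = x + \int_0^1 x\,t\,y(t)\,dt$; since the kernel $xt$ is separable, one can check by hand that the exact solution is $y(x) = 3x/2$:

```python
import numpy as np

def solve_fredholm2(f, K, a, b, lam=1.0, n=200):
    """Nystrom method for y(x) = f(x) + lam * int_a^b K(x,t) y(t) dt:
    discretise the integral with the trapezoidal rule and solve the
    linear system (I - lam * K * W) y = f at the quadrature nodes."""
    x = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)          # trapezoidal weights
    w[0] = w[-1] = (b - a) / (2 * n)
    A = np.eye(n + 1) - lam * K(x[:, None], x[None, :]) * w[None, :]
    return x, np.linalg.solve(A, f(x))

# y(x) = x + int_0^1 x*t*y(t) dt has exact solution y(x) = 3x/2
x, y = solve_fredholm2(lambda x: x, lambda x, t: x * t, 0.0, 1.0)
err = np.max(np.abs(y - 1.5 * x))
```

Replacing the trapezoidal weights with Gauss-Legendre weights would converge much faster; the trapezoidal rule is used here only for transparency.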
Then, such integral equations are called Singular (Linear) Integral Equations. Based on these two singular situations, here are two examples of singular integral equations.

Type-1: $ a \to -\infty$ and $ \Box \to \infty$

General Form: $ g(x) y(x) = f(x) + \lambda \int_{-\infty}^{\infty} K(x, t) y(t) dt$

Example: $ y(x) = 3x^2 + \lambda \int_{-\infty}^{\infty} e^{-|x-t|} y(t) dt$

Type-2: $ K(x,t) = \pm \infty$ at some points in the integration interval $ [a, \Box]$

Example: $ y(x) = f(x) + \int_0^x \dfrac{1}{(x-t)^n} y(t) dt$ is a singular integral equation, as the integrand tends to $ \infty$ at $ t=x$.

The nature of the solution of an integral equation depends solely on the nature of its kernel $ K(x,t)$. Kernels are of the following special types:

Symmetric Kernel

A kernel $ K(x,t)$ is called symmetric (or complex symmetric, or Hermitian) if $ K(x,t)= \bar{K}(t,x)$, where the bar in $ \bar{K}(t,x)$ denotes the complex conjugate of $ K(t,x)$. If the kernel has no imaginary part, this reduces to $ K(x, t) = K(t, x)$. For example, $ K(x,t)= \sin (x+t)$ is a symmetric kernel.

Separable or Degenerate Kernel

A kernel $ K(x,t)$ is called separable if it can be expressed as the sum of a finite number of terms, each of which is the product of ‘a function’ of $x$ only and ‘a function’ of $t$ only, i.e., $ K(x,t) = \displaystyle{\sum_{i=1}^{n}} \phi_i (x) \psi_i (t)$.

Difference Kernel

When $ K(x,t) = K(x-t)$, the kernel is called a difference kernel.

Resolvent or Reciprocal Kernel

The solution of the integral equation $ y(x) = f(x) + \lambda \int_a^\Box K(x, t) y(t) dt$ is of the form $ y(x) = f(x) + \lambda \int_a^\Box \mathfrak{R}(x, t;\lambda) f(t) dt$. The kernel $ \mathfrak{R}(x, t;\lambda)$ of the solution is called the resolvent or reciprocal kernel.
Integral Equations of Convolution Type

The integral equation $ g(x) y(x) = f(x) + \lambda \int_a^\Box K(x, t) y(t) dt$ is called an integral equation of convolution type when the kernel $ K(x,t)$ is a difference kernel, i.e., $ K(x,t) = K(x-t)$.

Let $ y_1(x)$ and $ y_2(x)$ be two continuous functions defined for $ x \in E \subseteq\mathbb{R}$; then the convolution of $ y_1$ and $ y_2$ is given by $$ y_1 * y_2 = \int_E y_1 (x-t) y_2(t) dt = \int_E y_2 (x-t) y_1(t) dt$$ For the standard convolution, the limits are $ -\infty$ and $ \infty$.

Eigenvalues and Eigenfunctions of Integral Equations

The homogeneous integral equation $ y(x) = \lambda \int_a^\Box K(x, t) y(t) dt$ has the obvious solution $ y(x)=0$, which is called the zero solution or the trivial solution of the integral equation. Apart from this, the values of $ \lambda$ for which the integral equation has a non-zero solution $ y(x) \neq 0$ are called the eigenvalues of the integral equation, or eigenvalues of the kernel. Every non-zero solution $ y(x)\neq 0$ is called an eigenfunction corresponding to the obtained eigenvalue $ \lambda$.

• Note that $ \lambda \neq 0$.
• If $ y(x)$ is an eigenfunction corresponding to the eigenvalue $ \lambda$, then $ c \cdot y(x)$ is also an eigenfunction corresponding to $ \lambda$.

Leibnitz Rule of Differentiation under the Integral Sign

Let $ F(x,t)$ and $ \dfrac{\partial F}{\partial x}$ be continuous functions of both $x$ and $t$, and let the first derivatives of $ G(x)$ and $ H(x)$ also be continuous; then $ \dfrac{d}{dx} \displaystyle {\int_{G(x)}^{H(x)}} F(x,t) dt = \displaystyle {\int_{G(x)}^{H(x)}}\dfrac{\partial F}{\partial x} dt + F(x, H(x)) \dfrac{dH}{dx} - F(x, G(x)) \dfrac{dG}{dx}$. This formula is called Leibnitz’s Rule of differentiation under the integration sign.
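The convolution definition above is easy to sanity-check with a small discrete approximation. The sketch below (an illustration of the definition, not taken from the article) samples $y_1$ and $y_2$ on a uniform grid, treats them as zero off the grid, and verifies the stated symmetry $y_1 * y_2 = y_2 * y_1$:

```python
def convolve(y1, y2, h):
    """Discrete approximation of (y1 * y2)(x_k) = sum_j y1[x_k - t_j] y2[t_j] h,
    i.e. a Riemann-sum version of the convolution integral, for samples on a
    uniform grid of spacing h, taken as zero outside the grid."""
    n = len(y1)
    return [h * sum(y1[k - j] * y2[j] for j in range(n) if 0 <= k - j < n)
            for k in range(n)]
```

Convolving with a discrete delta (1 at the first node, 0 elsewhere) returns the other function scaled by $h$, mirroring the continuous identity $\delta * y = y$.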
In the special case when $G(x)$ and $H(x)$ are both constants (let $ G(x) =a$, $ H(x)=b$, so that $ dG/dx =0=dH/dx$), the rule reduces to $ \dfrac{d}{dx} \displaystyle {\int_a^b} F(x,t) dt = \displaystyle {\int_a^b}\dfrac{\partial F}{\partial x} dt$.

Changing an Integral Equation with Multiple Integrals into a Standard Simple Integral (Multiple Integral Into Simple Integral — The magical formula)

Say the integral of order $n$ is given by $ \displaystyle{\int_{\Delta}^{\Box}} f(x) dx^n$. We can prove that $ \displaystyle{\int_{a}^{t}} f(x) dx^n = \displaystyle{\int_{a}^{t}} \dfrac{(t-x)^{n-1}}{(n-1)!} f(x) dx$

Example: Solve $ \int_0^1 x^2 dx^2$.

Solution: $ \int_0^1 x^2 dx^2 = \int_0^1 \dfrac{(1-x)^{2-1}}{(2-1)!} x^2 dx$ (since $t=1$) $ =\int_0^1 (1-x) x^2 dx =\int_0^1 (x^2-x^3) dx = 1/12$
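The "magical formula" is the Cauchy formula for repeated integration, and it can be verified numerically. The pure-Python sketch below (my own check, not part of the original article) computes the iterated double integral of $f(x) = x^2$ over $[0,1]$ and the single-integral form $\int_0^1 (1-x)x^2\,dx$; both come out close to $1/12$:

```python
def integrate(g, a, b, n=2000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    return h * (0.5 * (g(a) + g(b)) + sum(g(a + i * h) for i in range(1, n)))

f = lambda x: x ** 2

# iterated form: integral over [0,1] of ( integral of f over [0,s] ) ds
iterated = integrate(lambda s: integrate(f, 0.0, s, 200), 0.0, 1.0, 200)

# Cauchy single-integral form with n = 2: integral of (1-x)^1/1! * f(x)
single = integrate(lambda x: (1.0 - x) * f(x), 0.0, 1.0)
```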
{"url":"https://gauravtiwari.org/solving-integral-equations-1-definitions-and-types/","timestamp":"2024-11-08T05:06:08Z","content_type":"text/html","content_length":"148446","record_id":"<urn:uuid:b0d71cf5-f301-4261-9d30-d7f4503caa05>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00764.warc.gz"}
Collaborative Filtering Techniques for Recommender Systems (KNN – continued)

Quite recently, I started dabbling in Recommender Systems. My interest was piqued as a result of the pervasiveness of these systems in major technology platforms. Think of any site you use which offers suggestions to you based on your past activity: Amazon recommends books similar to ones you have liked and purchased; Netflix recommends movies similar to ones you have already enjoyed watching; Instagram’s explore feature curates images analogous to ones you have interacted with; the list goes on. Underlying these recommender systems is the basic idea that if a user likes a kind of item, we can recommend similar items to them which they haven’t yet encountered.

Recommender systems are a subclass of Information Filtering Systems. IFSs filter a stream of data using some dynamic logic to ensure that the data which the user encounters is relevant to them, based on the user’s characteristics or preferences. Currently, there are three major types of recommender systems:

1. Collaborative Filtering Systems
2. Content-Based Filtering Systems
3. Hybrid Systems

In this note, I chose to explore Collaborative Filtering Systems. Collaborative Filtering is a method of filtering information to be presented to a user by drawing on insights extracted from different sources. Some CF techniques could make recommendations to a user based on information drawn from other users’ preferences. Instagram, for instance, uses collaborative filtering to draw on accounts you follow, and uses the preferences of those user accounts to recommend images in your explore page. Collaborative Filtering techniques assume that if two users share similar preferences on a bunch of subjects, then they are very likely to share preferences on other subjects. Pretty cool huh? Here’s what it looks like:

An illustration of Collaborative Filtering

Steps to Collaborative Filtering

1. Gather information on users’ preferences for the different items. (You will need to design an efficient representation of preferences.)
2. The system should match user preferences and identify the users that are most similar in terms of preference (more on this later).
3. Recommend items to the user that were rated highly by other similar users.

In practice, we could have a tabular matrix comprising users on the y-axis and items on the x-axis. Each row instance is a vector of the user’s rating for each item. Such a vector space is bound to have a very high number of dimensions. This is because we have created a new dimension for each item that can be rated by users. In a previous note, I delved into the Euclidean Distance algorithm for evaluating the distance between two vectors. For very high dimensional spaces, Euclidean distance can fall short, because the differences in distance among vectors become somewhat negligible. We will need a more appropriate method for evaluating similarity.

One method that can work is the Cosine Similarity method. Cosine Similarity of two vectors evaluates how much the vectors are in alignment. The method is more interested in vector alignment as opposed to magnitude of distance. That means two vectors that are very closely aligned will have a small angle between them, which will give them a cosine similarity value closer to 1. The value ranges from -1 to +1, with cos(0°) = 1 for perfectly similar orientation, cos(90°) = 0 for perpendicular orientation, and cos(180°) = -1 for perfectly opposite orientation.

With our users organised as vectors, we can evaluate cosine similarity among users and identify the N most similar users to a user U, in terms of item preferences (ratings). For those items not yet encountered by User U, we can take the ratings of the N most similar users and average them to produce a ‘pseudo-rating’ for User U for that item. We then recommend the items with the highest ratings to User U.
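The cosine similarity described above is just the dot product of the two rating vectors divided by the product of their lengths. A minimal pure-Python version (the vectors and names here are illustrative, not from any dataset):

```python
import math

def cosine_similarity(u, v):
    """cos(theta) between vectors u and v: 1 for identical direction,
    0 for orthogonal, -1 for opposite direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# hypothetical rating vectors over four movies
alice = [5, 4, 0, 1]
bob   = [10, 8, 0, 2]   # same tastes as alice, just a different scale
carol = [0, 0, 5, 4]    # very different tastes
```

Note that `bob` is a scalar multiple of `alice`, so their similarity is exactly 1 even though their Euclidean distance is large: alignment, not magnitude, is what cosine similarity measures.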
Using the movie recommender problem as an example, let’s define an elaborate algorithm for employing collaborative filtering.

1. Organise a matrix of user ratings for movies. Each row instance is a vector of a user’s ratings per movie. Each column belongs to a movie. We assume a rating scale of 1-5, with 5 meaning the user really loved the movie.
2. When a user rates a movie, insert the actual rating value in the matrix.
3. For a user (Henry) to whom we wish to recommend movies, compute the cosine similarity between Henry’s ratings vector and every other user’s ratings vector.
4. Filter the N most similar users to Henry.
5. For each of the movies Henry has not yet watched, compute the average of the ratings from the N most similar users and set that value as Henry’s pseudo-rating.
6. Recommend the movies with the highest pseudo-ratings to Henry.

a snapshot of User’s movie ratings

Disadvantages of Collaborative Filtering

1. We cannot make proper recommendations until a substantial number of users have made ratings.
2. The system is not reliable with few users on the system.

There’s another way to apply Collaborative Filtering, where we organise our ratings matrix to have movies on the vertical axis and users on the horizontal axis. Something like this:

What this means is we now have our rows as movie vectors, with each vector comprising all user ratings for that movie. Using the idea of Collaborative Filtering, we can assume that when two movies have very similar user ratings, they will be more closely aligned in the vector space by cosine similarity. With this knowledge, I was able to assemble some code to parse two CSV files from the movielens archive: movies.csv containing the movie data, and ratings.csv containing user ratings. The goal is to build a matrix of movie vectors from the inputted data samples. The matrix, which is really a scipy sparse matrix, will be used to train a KNN model from which we will make our recommendations.
I also went ahead and utilised a fuzzy search algorithm to allow us to input a search query which will be run against the movie titles. Then I select the first search result and use it to query the KNN model for the nearest neighbours.

The last two paragraphs mention a few concepts that need to be expatiated on. To understand a Scipy Sparse Matrix, first we need to appreciate why it exists. As you may know, a matrix is a 2-dimensional array of items, usually denoted m by n, where m is the number of rows and n is the number of columns. When working with matrices for real-world problem solving, more often than not we encounter sparse matrices, i.e. matrices with A LOT of zero values. In fact, these sparse matrices have so many zero values that it becomes computationally inefficient to hold them in memory as though they were part of a dense matrix. Hence, Scipy sparse matrices allow us to efficiently represent 2-dimensional arrays in memory.

Fuzzy Search is an algorithm that employs a fairly loose approach to string matching. As opposed to regular expressions or substring matching algorithms, fuzzy search is less rigid and can match misspelled words or even words that appear out of order. I rely on the fuzzywuzzy package for implementing fuzzy search. Fuzzy search is very similar to a more popular algorithm, Levenshtein distance, though it appears to be more powerful.

Another thing I tried to do was restrict the number of movies involved in the recommendation process to those movies that received a sufficient number of ratings by users. Why this is necessary is to avoid unpopular movies clogging our recommendation process, and to further prune our sparse matrix so it only holds the most relevant data.

Here’s what the code looks like:

import pandas as pd
from sklearn.neighbors import NearestNeighbors
from scipy.sparse import csr_matrix
from fuzzywuzzy import fuzz


class DataFrameReader:
    """Reads the movie and ratings files and organizes them into Pandas DataFrames."""

    def __init__(self, movies_path, ratings_path):
        self.movies_path = movies_path
        self.ratings_path = ratings_path

    def load_data(self):
        movies_df = pd.read_csv(
            self.movies_path,
            usecols=['movieId', 'title'],
            dtype={'movieId': 'int', 'title': 'str'})
        ratings_df = pd.read_csv(
            self.ratings_path,
            usecols=['userId', 'movieId', 'rating'],
            dtype={'userId': 'int', 'movieId': 'int', 'rating': 'float'})
        return movies_df, ratings_df


class MovieMatrixBuilder:
    """Builds a Scipy sparse matrix holding ratings by users on the most rated movies."""

    def __init__(self, movie_ratings_threshold):
        self.movie_ratings_threshold = movie_ratings_threshold

    def build_knn_matrix(self, movies_df, ratings_df):
        filtered_movies = ratings_df[self.get_filtered_movies(ratings_df)]
        movies_matrix = filtered_movies.pivot(
            index='movieId', columns='userId', values='rating').fillna(0)
        sparse_movies_matrix = csr_matrix(movies_matrix.values)
        return sparse_movies_matrix

    def get_filtered_movies(self, ratings_df):
        """Filters movies by the supplied threshold, which is applied to the
        count of ratings made on each movie."""
        movies_df_count = pd.DataFrame(
            ratings_df.groupby('movieId').size(), columns=['count'])
        most_rated_movies = movies_df_count.query(
            'count >= @self.movie_ratings_threshold').index
        return ratings_df.movieId.isin(most_rated_movies).values


class FuzzyWuzzyMatcher:
    def __init__(self, ratio_threshold):
        self.ratio_threshold = ratio_threshold

    def fuzzy_search(self, movie, movie_map):
        matching_items = []
        for index, title in movie_map.items():
            fuzz_ratio = fuzz.ratio(title.lower(), movie.lower())
            if fuzz_ratio >= self.ratio_threshold:
                matching_items.append((title, index))
        return matching_items


class MovieRecommender:
    def __init__(self, n_neighbors, movies_path, ratings_path,
                 movie_ratings_threshold):
        self.n_neighbors = n_neighbors
        data_reader = DataFrameReader(movies_path, ratings_path)
        self.movies_df, self.ratings_df = data_reader.load_data()
        self.matrix_builder = MovieMatrixBuilder(movie_ratings_threshold)

    def prepare_model(self):
        self.sparse_matrix = self.matrix_builder.build_knn_matrix(
            self.movies_df, self.ratings_df)
        self.movie_map = {
            index: row.title for index, row in self.movies_df.iterrows()}
        self.model = NearestNeighbors(
            n_neighbors=self.n_neighbors, metric='cosine', algorithm='brute')
        self.model.fit(self.sparse_matrix)

    def recommend(self, movie_query, search_threshold):
        matcher = FuzzyWuzzyMatcher(search_threshold)
        search_results = matcher.fuzzy_search(movie_query, self.movie_map)
        print(f'Search Results: {search_results}')
        if not search_results:
            print('No Results matching Search Query.')
            return
        movieIndex = search_results[0][1]
        distances, indices = self.model.kneighbors(
            self.sparse_matrix[movieIndex])
        return [
            [self.movie_map.get(key) for key in indexRow]
            for indexRow in indices]


# read relevant parameters from user input
movies_path = input('Path to Movies CSV: ')
ratings_path = input('Path to User Ratings CSV: ')
search_threshold = input('Fuzzy Search Threshold: ')
neighbors_count = input('Number of Neighbors: ')
movie_ratings_threshold = input('Movie Ratings Threshold: ')

# apply unpacking operator to initialize movie
# recommender with named arguments
recommender = MovieRecommender(**{
    'n_neighbors': int(neighbors_count),
    'movies_path': movies_path,
    'ratings_path': ratings_path,
    'movie_ratings_threshold': int(movie_ratings_threshold)})
recommender.prepare_model()

movie_query = input('Movie Name: ')
recommender.recommend(movie_query, int(search_threshold))

Here’s a link to the file on github: movie_recommender_main.ipynb

Wrapping Up

What I implemented in this note is far from a production-grade recommender system, although it introduces the basic concepts of Collaborative Filtering. Real-world systems are usually hybrid, employing a mix of collaborative filtering, content-based filtering, and even more advanced methods. It would be great to explore these in subsequent notes.

Further Reading
{"url":"https://julianduru.com/collaborative-filtering-techniques-for-recommender-systems-knn-continued/","timestamp":"2024-11-02T18:53:49Z","content_type":"text/html","content_length":"47267","record_id":"<urn:uuid:4394b266-285a-4b70-b79e-94e881308498>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00421.warc.gz"}
DF-DN - Ensuring Deadlock-Freedom in Low-Diameter InfiniBand Networks

DF-DN [1] provides deadlock-freedom for low-diameter InfiniBand networks. It uses the idea of incrementing the VL on every hop. Changing the VL within a switch is a feature of the InfiniBand network that is not exploited by the deadlock-free routing algorithms currently available in InfiniBand. Instead, currently available algorithms rely on "layering": if the channel-dependency graph induced by the routing function contains a cycle, entire paths from source to destination HCA are moved into a new VL (initially all paths are in, e.g., VL 0). This process is repeated for each VL until there is no cycle left in any VL. This approach has some drawbacks:

• Finding the optimal assignment of paths to VLs (such that the number of used VLs is minimized) is NP-complete [2].
• There is no known upper bound on the number of necessary VLs.
• If the network uses more than one VL, injecting hosts have to utilize PathRecord queries in order to find out which SL value they should use for each destination.

Our heuristic instead is based on the idea of incrementing the VL at every hop; thus there is a trivial lower bound on the number of required VLs, which is the diameter of the network (assuming minimal-path routing). Our heuristic also has lower complexity than DFSSSP and LASH, and is much faster in practice. Low-diameter networks are popular since they offer high bandwidth and low latency. For diameter-two networks, e.g., Slim Fly, the DF-D2 heuristic does not require path queries. We evaluated the DF-DN and DF-D2 heuristics for several low-diameter topologies. Our experiments show that our heuristic uses a smaller number of VLs for many networks than the deadlock-free routing algorithms currently available in OpenSM. Our algorithm is up to three times faster than DF-SSSP and orders of magnitude faster than LASH.
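For illustration only (this is a sketch of the hop-increment rule itself, not the actual OpenSM patch), the core idea can be written as a tiny routing post-processing step: assign VL $i$ to the $i$-th channel of each path, so the number of VLs required equals the length of the longest minimal path, i.e. the diameter:

```python
def assign_vls(path):
    """Map each channel (hop) of a path to the virtual lane equal to its hop
    index. Since the VL strictly increases along every path, no cyclic
    channel dependency can form within a single VL.

    path: list of node ids, e.g. ['hca_a', 'sw1', 'sw2', 'hca_b'].
    Returns a list of ((src, dst), vl) pairs.
    """
    channels = zip(path[:-1], path[1:])
    return [(channel, vl) for vl, channel in enumerate(channels)]

def vls_required(paths):
    """VLs needed by this rule = number of channels on the longest path
    (the network diameter, under minimal routing)."""
    return max(len(p) - 1 for p in paths)
```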
The DF-DN toolchain, which includes a patched version of OpenSM as well as a patched version of ib_flit_sim, can be downloaded here.

[1] T. Schneider, O. Bibartiu, T. Hoefler: HOTI'16 Ensuring Deadlock-Freedom in Low-Diameter InfiniBand Networks. In Proceedings of the 24th Annual Symposium on High-Performance Interconnects (HOTI'16), Aug. 2016. Best Student Paper at HOTI'16.

[2] J. Domke, T. Hoefler, W. Nagel: IPDPS'11 Deadlock-Free Oblivious Routing for Arbitrary Topologies. In Proceedings of the 25th IEEE International Parallel & Distributed Processing Symposium (IPDPS), presented in Anchorage, AK, USA, pages 613-624, IEEE Computer Society, ISBN: 0-7695-4385-7, May 2011 (acceptance rate: 19.6%, 112/571).
{"url":"https://spcl.inf.ethz.ch/Research/Scalable_Networking/DFDN/","timestamp":"2024-11-06T09:25:17Z","content_type":"text/html","content_length":"12283","record_id":"<urn:uuid:8c96b4b9-0f0c-4301-a3f8-66db7aad0c0a>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00016.warc.gz"}
Tag: markov

• In this post, I will just solve for the expected value of a probabilistic model with as many methods as I can. You can encounter these types of problems in data science and quant interviews. Problem: Assume there is a 2×2 grid, as shown in the figure below. You can randomly walk to a neighboring block…
{"url":"https://muditb.com/tag/markov/","timestamp":"2024-11-04T11:06:08Z","content_type":"text/html","content_length":"77674","record_id":"<urn:uuid:8cb1ff86-ac27-47e9-8cb1-9fb27cc2153e>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00321.warc.gz"}
Excel Combine Multiple Chart Series

Excel Combine Multiple Chart Series – You can make a multiplication chart in Excel simply by using a template. You can find several examples of templates and learn how to format your multiplication chart with them. Here are some tips and tricks for making a multiplication chart. Once you have a template, all you have to do is copy the formula and paste it into a new cell. You can then use this formula to multiply one series of numbers by another.

Multiplication table design

If you need to create a multiplication table, you may want to learn how to write a simple formula. First, you need to lock row one, the header row, and then multiply the number in column A by the one in row one. Another way to create a multiplication table is to use mixed references. In this case, you would enter $A2 into column A and B$1 into row B. The result is a multiplication table with a formula that works for both rows and columns.

If you are using an Excel program, you can use the multiplication table template to create your table. Just open the spreadsheet with your multiplication table template and change the title to the student's name. You can also adjust the page to suit your individual needs. There is also an option to change the color of the cells to change the appearance of the multiplication table. Then, you can adjust the range of multiples to your needs.

Creating a multiplication chart in Excel

When you're using multiplication table software, you can easily create a simple multiplication table in Excel. Simply create a sheet with rows and columns numbered from one to thirty. Where the rows and columns intersect is the answer. If a row has a digit of three and a column has a digit of five, for example, then the answer is three times five. The same goes for the other way around.

First, you can enter the numbers that you need to multiply. If you need to multiply two digits by three, you can type a formula for each number in cell A1, for example. To extend the numbers, select the cells from A1 to A8, and then click the right arrow to select a range of cells. You can then type the multiplication formula into the cells in the other rows and columns.
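The mixed-reference idea ($A2 multiplied by B$1) simply multiplies each row header by each column header. For comparison, the same table can be generated outside Excel with a few lines of Python (an illustrative sketch, not part of the original article):

```python
def multiplication_table(n):
    """Rows and columns run 1..n; the entry at (row r, column c) is r * c,
    which is exactly what the mixed-reference formula =$A2*B$1 computes
    when filled across an Excel grid."""
    return [[r * c for c in range(1, n + 1)] for r in range(1, n + 1)]

table = multiplication_table(10)
```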
{"url":"https://www.multiplicationchartprintable.com/excel-combine-multiple-chart-series/","timestamp":"2024-11-13T21:32:39Z","content_type":"text/html","content_length":"49808","record_id":"<urn:uuid:4f90cd9d-c194-40cc-81d2-89700f7d9462>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00274.warc.gz"}
A Detailed Proof of the Riemann Mapping Theorem

This post is intended to supply a detailed proof of the Riemann mapping theorem.

Riemann mapping theorem. Every simply connected region $\Omega \subsetneq \mathbb{C}$ is conformally equivalent to the open unit disc $U$.

Fortunately the proof can be found in many textbooks of complex analysis, but the proof is fairly technical, so it can be painful to read. This post can be considered as a painkiller. In this post you will see the proof filled in with many details. However, the writer still encourages the reader to reproduce the proof with their own pen and paper. The writer also hopes that this post can increase the accessibility of this theorem and its proof.

However, there is a bar. We need to assume some background in complex analysis, although the prerequisites are very basic. The minimal prerequisite is being able to answer the following questions.

• Contour integration, Cauchy's formula.
• Almost uniform convergence. Let $\Omega \subset \mathbb{C}$ be open and suppose that $f_j \in H(\Omega)$ for all $j=1,2,\dots$, and $f_j \to f$ uniformly on every compact subset $K \subset \Omega$. Does $f \in H(\Omega)$? What is the uniform limit of $f'_j$? Informally, we call the phenomenon that a sequence of functions converges uniformly on every compact subset almost uniform convergence. This has nothing to do with almost everywhere in integration theory. In fact, this post does not require background in Lebesgue integration theory.
• Open mapping theorem (complex analysis version).
• Maximum modulus principle and some variants.
• Rouché's theorem. Or even more, the calculus of residues.

Despite the prerequisites, we still need some preparation beforehand.

Simply Connected

Definition 1. Let $X$ be a connected topological space. We say $X$ is simply connected if every closed curve is null-homotopic. Let $\gamma:[0,1] \to X$ be a closed curve, i.e., it is a continuous map such that $\gamma(0)=\gamma(1)$.
We say $\gamma$ is null-homotopic if it is homotopic to a constant map $\gamma_0:[0,1] \to \{x\}$ with $x \in X$. Intuitively, if $X$ is simply connected, then $X$ contains no “hole”. For example, the unit disc $U$ is simply connected, but $U \setminus \{0\}$ is not. On the other hand, $U \setminus [0,1)$ is still simply connected. Another satisfying result is that every convex and connected open set is simply connected; this follows from a convex-combination argument. There are a lot of good properties of simply connected regions, which are summarised below. Proposition 1. For a region $\Omega$ (an open and connected subset of $\mathbb{R}^2$), the following conditions are equivalent; each one implies the other eight. 1. $\Omega$ is homeomorphic to the open unit disc $U$. 2. $\Omega$ is simply connected. 3. $\operatorname{Ind}_\gamma(\alpha)=0$ for every closed path $\gamma$ in $\Omega$ and every $\alpha \in S^2 \setminus \Omega$, where $S^2$ is the Riemann sphere. 4. $S^2 \setminus \Omega$ is connected. 5. Every $f \in H(\Omega)$ can be approximated by polynomials, almost uniformly. 6. For every $f \in H(\Omega)$ and every closed path $\gamma$ in $\Omega$, $\int_\gamma f(z)\,dz = 0$. 7. Every $f \in H(\Omega)$ has an antiderivative; that is, there exists an $F \in H(\Omega)$ such that $F'=f$. 8. If $f \in H(\Omega)$ and $1/f \in H(\Omega)$, then there exists a $g \in H(\Omega)$ such that $f=\exp{g}$. 9. For such $f$, there also exists a $\varphi \in H(\Omega)$ such that $f=\varphi^2$. Conditions 5 through 9 say, roughly, that calculus works fine here and we are not worrying about nightmare counterexamples. Most of the implications $n \implies n+1$ are not that difficult, but some deserve a mention. That 4 implies 5 is a consequence of Runge’s theorem. In the implication of 7 to 8, one needs the fact that $\Omega$ is connected. When we have $f=\exp{g}$, we can put $\varphi=\exp\frac{g}{2}$, from which we obtain $f=\varphi^2$. That 9 implies 1 is partly a consequence of the Riemann mapping theorem.
Indeed, if $\Omega$ is the whole plane then the homeomorphism is easy: $z \mapsto \frac{z}{1+|z|}$ is a homeomorphism of $\Omega$ onto $U$. But we need the Riemann mapping theorem for the remaining part, when $\Omega$ is a proper subset. If you know the definition of a sheaf, you will realise that $(\mathbb{C},H(\cdot))$ is indeed a sheaf. For each open subset $\Omega \subset \mathbb{C}$, $H(\Omega)$ is a ring; more precisely, a $\mathbb{C}$-algebra. The exponential map $\exp:g \mapsto e^g$ is a sheaf morphism. However, we now see that it is surjective (onto the nowhere-vanishing functions) if and only if $\Omega$ is simply connected. I hope this can help you figure out an exercise in algebraic geometry. You know, that celebrated book by Robin Hartshorne. Since we haven’t proved the Riemann mapping theorem, we cannot use the equivalence above yet. However, we can use condition 9 right away. This gives rise to Koebe’s square root trick. Equicontinuity & Normal Family Equicontinuity is quite an important concept. You may have seen it in differential equations, harmonic functions, or simply in sequences of functions. We will use it to describe a family of functions for which almost uniform convergence can be well established. Definition 2. Let $\mathscr{F}$ be a family of functions $(X,d) \to \mathbb{C}$, where $(X,d)$ is a metric space. We say that $\mathscr{F}$ is equicontinuous if, to every $\varepsilon>0$, there corresponds a $\delta>0$ such that whenever $d(x,y)<\delta$, we have $|f(x)-f(y)|<\varepsilon$ for all $f \in \mathscr{F}$. In particular, by definition, all functions in $\mathscr{F}$ are uniformly continuous. We say that $\mathscr{F}$ is pointwise bounded if, to every $x \in X$, there corresponds some $0 \le M(x) < \infty$ such that $|f(x)| \le M(x)$ for every $f \in \mathscr{F}$. We say that $\mathscr{F}$ is uniformly bounded on each compact subset if, to each compact $K \subset X$, there corresponds a number $M(K)$ such that $|f(z)| \le M(K)$ for all $f \in \mathscr{F}$ and $z \in K$.
These concepts describe “a family of” continuity and boundedness. In our proof of the Riemann mapping theorem we do not construct the map explicitly; instead, we will use these concepts to obtain one (as a limit) that exists. In this post we simply put $X=\Omega \subset \mathbb{C}$, a simply connected region, and let $d$ be the natural metric. A famous result on equicontinuity is the Arzelà–Ascoli theorem, which says that pointwise boundedness and equicontinuity imply almost uniform convergence along a subsequence. Theorem 1 (Arzelà–Ascoli). Let $\mathscr{F}$ be a family of complex functions on a metric space $X$ which is pointwise bounded and equicontinuous, and assume $X$ is separable, i.e., it contains a countable dense set. Then every sequence $\{f_n\}$ in $\mathscr{F}$ has a subsequence that converges uniformly on every compact subset of $X$. Here is a self-contained proof. Certainly it is OK to let $X$ be a subset of $\mathbb{R}$, $\mathbb{C}$ or their product; we use this in real and complex analysis for this reason. We will need this almost uniform convergence to establish our conformal map. To specify its application in complex analysis, we introduce the concept of a normal family. Definition 3. Suppose $\mathscr{F} \subset H(\Omega)$, for some region $\Omega \subset \mathbb{C}$. We call $\mathscr{F}$ a normal family if every sequence of members of $\mathscr{F}$ contains a subsequence which converges uniformly on every compact subset of $\Omega$. The limit function is not required to be in $\mathscr{F}$. We now apply Arzelà–Ascoli to complex analysis. Theorem 2 (Montel). Suppose $\mathscr{F} \subset H(\Omega)$ is uniformly bounded on each compact subset; then $\mathscr{F}$ is a normal family. Proof. We need to show that $\mathscr{F}$ is “almost” equicontinuous; since uniform boundedness clearly implies pointwise boundedness, we can then apply Arzelà–Ascoli.
Let $\{K_n\}$ be a sequence of compact sets such that (1) $\bigcup_n K_n = \Omega$ and (2) $K_n \subset K^\circ_{n+1} \subset K_{n+1}$, where $K^\circ_{n+1}$ is the interior of $K_{n+1}$. Then there exists a positive number $\delta_n$ such that $D(z,2\delta_n) \subset K^\circ_{n+1}$ for every $z \in K_n$, where $D(a,r)$ is the disc centred at $a$ with radius $r$. If no $\delta$ worked at a point $z \in K_n$, then for every $\delta>0$ the disc $D(z,\delta)$ would meet the complement of $K_{n+1}$, which is to say, $z$ would be a boundary point of $K_{n+1}$; but this is impossible because $z$ lies in the interior of $K_{n+1}$, and the choice of a single $\delta_n$ valid for all $z \in K_n$ follows from the compactness of $K_n$. For such a $\delta_n$, pick $z',z'' \in K_n$ such that $|z'-z''| < \delta_n$. Let $\gamma$ be the positively oriented circle with centre $z'$ and radius $2\delta_n$, i.e. the boundary of $D(z',2\delta_n)$. Recall that the Cauchy formula says $f(z) = \frac{1}{2\pi i}\int_\gamma \frac{f(\zeta)}{\zeta-z}\,d\zeta$ for $z$ inside $\gamma$. We will make use of this. By the formula above, we have $f(z')-f(z'') = \frac{z'-z''}{2\pi i}\int_\gamma \frac{f(\zeta)\,d\zeta}{(\zeta-z')(\zeta-z'')}.$ Now we make use of our choice of $z'$, $z''$ and $\gamma$. By definition, for $\zeta \in \gamma^\ast$ (the range of $\gamma$) we have $|\zeta-z'|=2\delta_n$. Since $|z'-z''|<\delta_n$, we have $2\delta_n = |\zeta-z'| = |\zeta-z''+z''-z'| \le |\zeta-z''|+|z''-z'|$. Therefore $|\zeta-z''| \ge 2\delta_n-|z''-z'|>\delta_n$. Bearing this in mind, we see $|f(z')-f(z'')| \le \frac{M(K_{n+1})}{\delta_n}\,|z'-z''|$ for every $f \in \mathscr{F}$. This may look confusing, so we explain it a little more. Since $D(z',2\delta_n) \subset K^\circ_{n+1}$, we must have $\overline{D}(z',2\delta_n) \subset K_{n+1}$; therefore whenever $\zeta \in \gamma^\ast=\partial D(z',2\delta_n)$, we have $|f(\zeta)| \le M(K_{n+1})$. This is where we use the hypothesis of uniform boundedness. Also, we have $|(\zeta-z')(\zeta-z'')|>2\delta_n\cdot\delta_n=2\delta_n^2$, so the modulus of the integrand $\frac{f(\zeta)}{(\zeta-z')(\zeta-z'')}$ is bounded by $\frac{M(K_{n+1})}{2\delta_n^2}$. The integral over $\gamma$ is therefore bounded by $\frac{M(K_{n+1})}{2\delta_n^2}$ times $4\pi\delta_n$, the length of $\gamma$, and the result follows. What does this inequality imply?
For $\varepsilon>0$, if we pick $\delta=\min\{\delta_n,\frac{\delta_n\varepsilon}{M(K_{n+1})}\}$, then $|f(z')-f(z'')|<\varepsilon$ for every $f \in \mathscr{F}$ whenever $|z'-z''|<\delta$. That is, for each $K_n$, the restrictions of the members of $\mathscr{F}$ to $K_n$ form an equicontinuous family. Now consider a sequence $\{f_j\}$ in $\mathscr{F}$. For each $n$, we apply the Arzelà–Ascoli theorem to the restriction of $\mathscr{F}$ to $K_n$, and it gives us an infinite subset $S_n \subset \mathbb{N}$ such that $f_j$ converges uniformly on $K_n$ as $j \to \infty$ with $j \in S_n$. Note we can arrange $S_n \supset S_{n+1}$ by applying the theorem at stage $n+1$ to the subsequence indexed by $S_n$, since a subsequence of a uniformly convergent sequence converges uniformly as well. Pick a new increasing sequence $\{s_j\}$ with $s_j \in S_j$; then $\{f_{s_j}\}$ converges uniformly on every $K_n$, and therefore on every compact subset $K$ of $\Omega$. The statement is now proved. $\square$ Remarks. We have no idea what the limit is, and the same happens in our proof of the Riemann mapping theorem. The sequence $\{K_n\}$ can be constructed explicitly, however. In fact, for every open set $\Omega$ in the plane there is a sequence $\{K_n\}$ of compact sets such that • $\bigcup_n K_n=\Omega$. • $K_n \subset K_{n+1}^\circ$. • For every compact $K \subset \Omega$, there is some $n$ such that $K \subset K_n$. • Every component of $S^2 \setminus K_n$ contains a component of $S^2 \setminus \Omega$. The set is constructed as follows, and can be verified to satisfy the conditions above. For each $n$, define $V_n = D(\infty,n) \cup \bigcup_{a \in S^2 \setminus \Omega} D(a,\tfrac{1}{n})$, where $D(\infty,n)=\{z : |z|>n\} \cup \{\infty\}$. Then $K_n=S^2 \setminus V_n$ is what we want. The Schwarz Lemma The Schwarz lemma is another important tool for our proof of the Riemann mapping theorem; we need it to establish important inequalities. This lemma, as well as its variants, shows the rigidity of holomorphic maps. We make use of the maximum modulus theorem.
For simplicity, let $H^\infty$ be the Banach space of bounded holomorphic functions on $U$, equipped with the supremum norm $|\cdot|_\infty$. Theorem 3 (Schwarz lemma). Suppose $f \in H^\infty$ satisfies $f(0)=0$ and $|f|_\infty \le 1$. Then $|f(z)| \le |z|$ for all $z \in U$, and $|f'(0)| \le 1$. On the other hand, if $|f(z)|=|z|$ holds for some $z \in U \setminus \{0\}$, or if $|f'(0)|=1$ holds, then $f(z)=\lambda{z}$ for some complex constant $\lambda$ such that $|\lambda|=1$. Proof. Since $f(0)=0$, $f(z)/z$ has a removable singularity at $z=0$. Hence there exists $g \in H(U)$ such that $f(z)=zg(z)$. Fix $0<r<1$. For any $z \in U$ such that $|z|<r$, the maximum modulus principle applied to $g$ on $\overline{D}(0,r)$ gives $|g(z)| \le \max_{|w|=r}\frac{|f(w)|}{|w|} \le \frac{1}{r}$. Therefore, letting $r \to 1$, we see $|g(z)| \le 1$ for all $z \in U$, and $|f(z)| \le |z|$ follows, as does $|f'(0)|=|g(0)| \le 1$. On the other hand, if $|g(z)|=1$ at some interior point, the maximum modulus principle forces $g$ to be a constant, say $\lambda$, from which it follows that $|\lambda|=1$ and $f(z)=\lambda{z}$. $\square$ There are many variants of the Schwarz lemma, and we will be using the Schwarz–Pick lemma. Definition 4. For any $\alpha \in U$, define $\varphi_\alpha(z)=\frac{z-\alpha}{1-\overline{\alpha}z}$. This family is a subfamily of the Möbius transformations, but we are not paying very much attention to that right now. We need the fact that such a $\varphi_\alpha$ is always a one-to-one mapping which carries $S^1$ (the unit circle) onto $S^1$, $U$ onto $U$, and $\alpha$ to $0$. This requires another application of the maximum modulus theorem. A direct computation shows that $\varphi_\alpha'(0)=1-|\alpha|^2$ and $\varphi_\alpha'(\alpha)=\frac{1}{1-|\alpha|^2}$. Theorem 4 (Schwarz–Pick lemma). Suppose $\alpha,\beta \in U$, $f \in H^\infty$, $|f|_\infty \le 1$ and $f(\alpha)=\beta$. Then $|f'(\alpha)| \le \frac{1-|\beta|^2}{1-|\alpha|^2}$. Proof. Consider $g=\varphi_\beta \circ f \circ \varphi_{-\alpha}$. We see $g \in H^\infty$ and $|g|_\infty \le 1$. What is more important, $g(0)=\varphi_\beta(f(\alpha))=\varphi_\beta(\beta)=0$. By the Schwarz lemma, $|g'(0)| \le 1$. On the other hand, the chain rule gives $g'(0)=\varphi_\beta'(\beta)\,f'(\alpha)\,\varphi_{-\alpha}'(0)=\frac{1-|\alpha|^2}{1-|\beta|^2}\,f'(\alpha)$, and therefore the inequality follows. In particular, equality holds if and only if $g(z)=\lambda{z}$ for some constant $\lambda$ with $|\lambda|=1$.
If this is the case, then $f=\varphi_{-\beta}\circ(\lambda\varphi_\alpha)$, i.e. $f(z)=\varphi_{-\beta}(\lambda\varphi_\alpha(z))$. The story can go on, but we halt here and continue our story of the Riemann mapping theorem. The Riemann Mapping Theorem Each $z \ne 0$ determines a direction from the origin, which can be described by the unit vector $A(z)=\frac{z}{|z|}$. Let $f:\Omega \to \mathbb{C}$ be a map. We say $f$ preserves angles at $z_0 \in \Omega$ if $\lim_{r \to 0} e^{-i\theta}A\big(f(z_0+re^{i\theta})-f(z_0)\big)$ exists and is independent of $\theta$. Conformal mappings preserve angles in a reasonable way. A function $f$ is conformal if it is holomorphic and $f'(z) \ne 0$ everywhere. We have a theorem describing this, but it is pretty elementary so we are not including the proof in this post. Theorem 5. Let $f$ map a region $\Omega$ into the plane. If $f'(z_0)$ exists at some $z_0 \in \Omega$ and $f'(z_0) \ne 0$, then $f$ preserves angles at $z_0$. Conversely, if the differential $Df$ exists and is different from $0$ at $z_0$, and if $f$ preserves angles at $z_0$, then $f'(z_0)$ exists and is different from $0$. There is no confusion about $f'(z_0)$. By the differential $Df$ we mean a linear map $L:\mathbb{R}^2 \to \mathbb{R}^2$ such that, writing $z_0=(x_0,y_0)$, we have $f(z_0+(x,y))=f(z_0)+L(x,y)+|(x,y)|\,\eta(x,y)$, where $\eta(x,y) \to 0$ as $x \to 0$ and $y \to 0$. To prove this, one can assume that $z_0=f(z_0)=0$. When the differential exists, one writes the real-linear map as $L(z)=\lambda z+\mu\overline{z}$ for some $\lambda,\mu \in \mathbb{C}$; angle preservation then forces $\mu=0$. We say that two regions $\Omega_1$ and $\Omega_2$ are conformally equivalent if there is a conformal one-to-one mapping of $\Omega_1$ onto $\Omega_2$. The Riemann mapping theorem states: Theorem 6 (Riemann mapping theorem). Every proper simply connected region $\Omega$ in the plane is conformally equivalent to the open unit disc $U$. As a famous example, the upper half-plane $\mathbb{H}$ is conformally equivalent to $U$ via the Cayley transform. As one may expect, this theorem asserts that the study of a simply connected region $\Omega$ can be reduced to that of $U$ to some extent. But a conformal equivalence is not just a homeomorphism.
If $\varphi:\Omega_1 \to \Omega_2$ is a conformal one-to-one mapping, then $\varphi^{-1}:\Omega_2 \to \Omega_1$ is also a conformal mapping. In the language of algebra, such a mapping $\varphi$ induces a ring isomorphism $H(\Omega_2) \to H(\Omega_1)$, $f \mapsto f \circ \varphi$. Therefore, the ring $H(\Omega_2)$ is algebraically the same as $H(\Omega_1)$. The Riemann mapping theorem thus implies that, if $\Omega$ is a proper simply connected region, then $H(\Omega) \cong H(U)$. From this we can exploit much more information on top of the homeomorphism. One can also extend the story to $S^2$, the Riemann sphere, but that’s another story. The Proof by Arguing A Normal Family The proof is fairly technical, but it is a good chance to attest to our skill in complex analysis. The bread and butter of this proof is the set $\Sigma$ of all one-to-one members $\psi$ of $H(\Omega)$ with $\psi(\Omega) \subset U$. Our goal is to prove that there is some $\psi \in \Sigma$ such that $\psi(\Omega)=U$. Note, once non-emptiness is proved, since $|\psi|<1$ uniformly, we see $\Sigma$ is a normal family by Montel's theorem. Step 1 - Prove Non-emptiness Using Koebe’s Square Root Trick Pick $w_0 \in \mathbb{C} \setminus \Omega$. Then $g(z)=z-w_0 \in H(\Omega)$ and, more importantly, $\frac{1}{g} \in H(\Omega)$. By condition 9 of Proposition 1, there exists $\varphi \in H(\Omega)$ such that $\varphi^2(z)=g(z)$, i.e., informally, $\varphi(z)=\sqrt{z-w_0}$ in $\Omega$. If $\varphi(z_1)=\varphi(z_2)$, then $\varphi(z_1)^2=\varphi(z_2)^2$, so $z_1-w_0=z_2-w_0$ and $z_1=z_2$; therefore $\varphi$ is one-to-one. On the other hand, if $\varphi(z_1)=-\varphi(z_2)$, we still have $\varphi^2(z_1)=\varphi^2(z_2)$, and again $z_1=z_2$. This shows that the “square root” is well defined here. This is Koebe’s square root trick. Since $\varphi$ is an open mapping, there is an open disc $D(a,r) \subset \varphi(\Omega)$, where $a \in \varphi(\Omega)$, $a \ne 0$ and $0<r<|a|$. But by the arguments above we have $-a \not\in \varphi(\Omega)$, and in fact $D(-a,r) \cap \varphi(\Omega) = \varnothing$.
For this reason, we can put $\psi(z)=\frac{r}{\varphi(z)+a}$. It follows that $|\varphi(z)+a|>r$ for all $z \in \Omega$, hence $|\psi(z)|<1$, and therefore $\psi(\Omega) \subset U$. Since $\varphi$ is one-to-one, $\psi$ is one-to-one as well, and we deduce that $\psi \in \Sigma$; this set is not empty. Remark. You may have trouble believing that $D(-a,r) \cap \varphi(\Omega)=\varnothing$. But if we pick any $w \in D(-a,r) \cap \varphi(\Omega)$, we have some $z' \in \Omega$ such that $\varphi(z')=w$. We also have $|-a-w|<r$, and this implies $|a-(-w)|=|a+w|=|-a-w|<r$; therefore $-w \in D(a,r) \subset \varphi(\Omega)$, so there exists some $z'' \in \Omega$ such that $\varphi(z'')=-w$. By the injectivity argument above, $z'=z''$, hence $w=-w$, i.e. $w=0$. It follows that $|a|<r$, and this is a contradiction. Since $D(-a,r) \cap \varphi(\Omega)=\varnothing$, we have $|\varphi(z)-(-a)|>r$ for all $z \in \Omega$, and therefore $|\psi(z)|<1$ is not a problem either. Step 2 - Enlarge the Range If $\psi \in \Sigma$, $\psi(\Omega) \subsetneqq U$ and $z_0 \in \Omega$, then there exists a $\psi_1 \in \Sigma$ such that $|\psi_1'(z_0)|>|\psi'(z_0)|$. This step shows that we can “enlarge” the range in some way. For convenience we use the Möbius transformation $\varphi_\alpha(z)=\frac{z-\alpha}{1-\overline{\alpha}z}$ from Definition 4. Pick $\alpha \in U \setminus \psi(\Omega)$. Then $\varphi_\alpha \circ \psi \in \Sigma$ and $\varphi_\alpha \circ \psi$ has no zero in $\Omega$. Hence there is some $g \in H(\Omega)$ such that $g^2=\varphi_\alpha \circ \psi$. Since $\varphi_\alpha \circ \psi$ is one-to-one, another application of Koebe’s square root trick shows that $g$ is one-to-one. Therefore we have $g \in \Sigma$ as well. If $\psi_1=\varphi_\beta \circ g$ where $\beta=g(z_0)$, we have $\psi_1 \in \Sigma$ (one-to-one). In particular, $\psi_1(z_0)=0$. By putting $s(z)=z^2$, we have $\psi=\varphi_{-\alpha} \circ s \circ g=\varphi_{-\alpha} \circ s \circ \varphi_{-\beta} \circ \psi_1$. If we put $F(z)=\varphi_{-\alpha} \circ s \circ \varphi_{-\beta}(z)$, then the chain rule shows that $\psi'(z_0)=F'(\psi_1(z_0))\,\psi_1'(z_0)=F'(0)\,\psi_1'(z_0)$. (Note we used the fact that $\psi_1(z_0)=0$.) If we can prove that $0<|F'(0)|<1$, then this step is complete.
Note $F$ satisfies the conditions of the Schwarz–Pick lemma, and therefore $|F'(0)| \le 1$. Equality does not hold, because $F$ is not of the form $\varphi_{-\sigma}(\lambda\varphi_{\eta}(z))$ with $|\lambda|=1$: such maps are one-to-one, while $s$ is not. On the other hand, we have $F'(0)=\varphi_{-\alpha}'(\beta^2)\cdot 2\beta \cdot \varphi_{-\beta}'(0) \ne 0$, since $\beta=g(z_0) \ne 0$ (recall $g^2=\varphi_\alpha \circ \psi$ has no zero). Therefore $0<|F'(0)|<1$ and this step is complete. Step 3 - Find the Function with the Largest Range, Namely the Disc We take the contrapositive of Step 2: Fix $z_0 \in \Omega$. If $h \in \Sigma$ is an element such that $|h'(z_0)| \ge |\psi'(z_0)|$ for all $\psi \in \Sigma$, then $h(\Omega)=U$. The proof is complete once we have found such a function! To do this, we use the fact that $\Sigma$ is a normal family. Put $\eta=\sup\{|\psi'(z_0)| : \psi \in \Sigma\}$. By the definition of $\eta$, there is a sequence $\{\psi_n\}$ in $\Sigma$ such that $|\psi_n'(z_0)| \to \eta$. By the normality of $\Sigma$, we pick a subsequence $\{\varphi_n\}$ of $\{\psi_n\}$ that converges uniformly on compact subsets of $\Omega$; call the uniform limit $h \in H(\Omega)$. It follows that $|h'(z_0)|=\eta$. Since $\Sigma \ne \varnothing$ and $\eta \ne 0$, $h$ cannot be constant. Since $\varphi_n(\Omega) \subset U$, we must have $h(\Omega) \subset \overline{U}$; but since $h$ is an open mapping, in fact $h(\Omega) \subset U$. It remains to show that $h$ is one-to-one. Fix distinct $z_1, z_2 \in \Omega$. Put $\alpha=h(z_1)$ and $\alpha_n=\varphi_n(z_1)$, so that $\alpha_n \to \alpha$. Let $\overline{D}$ be a closed disc in $\Omega$ centred at $z_2$, with interior denoted by $D$, such that • $z_1 \not\in \overline{D}$. • $h-\alpha$ has no zero on the boundary of $\overline{D}$. We see $\varphi_n-\alpha_n$ converges to $h-\alpha$, uniformly on $\overline{D}$. The functions $\varphi_n-\alpha_n$ have no zero in $D$, since they are one-to-one and already have a zero at $z_1 \notin \overline{D}$. By Rouché’s theorem, $h-\alpha$ has no zero in $D$ either, and in particular $h(z_2)-\alpha=h(z_2)-h(z_1) \ne 0$. This completes the proof. $\square$ Remark. First of all, such a $\overline{D}$ is accessible.
This is because the zeros of $h-\alpha$ have no limit point in $\Omega$, i.e., they are isolated (when defining $\overline{D}$, we do not yet know how many there are). Our choice of $\overline{D}$ enables us to use Rouché’s theorem (chances are you didn’t get it). Since $h-\alpha$ has no zero on the boundary, we have $\zeta=\inf_{z \in \partial D}|h(z)-\alpha|>0$. When $n$ is big enough, we see $\sup_{z \in \partial D}|(\varphi_n(z)-\alpha_n)-(h(z)-\alpha)|<\zeta \le |h(z)-\alpha|$ for all $z \in \partial D$. The second inequality is another application of the maximum modulus theorem. Rouché’s theorem applies here naturally as well. $\square$ This proof is a reproduction of W. Rudin’s Real and Complex Analysis. For comprehensive further reading, I highly recommend Tao’s blog post.
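For reference, here is a compressed recap of the three steps, in the notation used above (this summary is the present rewrite's, not Rudin's; $\Sigma$ denotes the one-to-one members of $H(\Omega)$ mapping $\Omega$ into $U$, and $z_0 \in \Omega$ is fixed):

```latex
% Compressed recap of the proof of the Riemann mapping theorem.
\begin{aligned}
\textbf{Step 1.}\quad & w_0 \in \mathbb{C}\setminus\Omega,\quad \varphi^2(z)=z-w_0,\quad
  \psi=\frac{r}{\varphi+a} \ \Longrightarrow\ \Sigma \ne \varnothing.\\
\textbf{Step 2.}\quad & \psi \in \Sigma,\ \psi(\Omega)\subsetneq U \ \Longrightarrow\
  \exists\,\psi_1 \in \Sigma \text{ with } |\psi_1'(z_0)| > |\psi'(z_0)|.\\
\textbf{Step 3.}\quad & \eta=\sup_{\psi\in\Sigma}|\psi'(z_0)| \text{ is attained by some }
  h\in\Sigma \text{ (normality)},\ \text{and by Step 2, } h(\Omega)=U.
\end{aligned}
```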
Python Import Function From File Last Updated : Mar 11, 2024 IN - Python | Written & Updated By - Pragati In this article we will show you how to import a function from a file in Python. Python is a high-level interpreted language with simple syntax; despite its simplicity, it offers reliable tools and utilities for creating highly scalable and complex programs. Python enables modular programming, where we may create independent logic and import it into different parts of a program. Let's start by defining a function, in order to understand how to import and call a function from another file in Python. Python gives us the option to import a particular function from a module. Even if you just need one function, this may seem superfluous, but it has advantages over importing all of the functions from a module. We will now discuss how to import functions from a file in Python with an example. Step By Step Guide On Python Import Function From File :- Code 1 from scipy.integrate import quad # Defining a simple function def f(x): return x**2 # Performing numerical integration integral, error = quad(f, 0, 1) 1. The code performs numerical integration of the straightforward function f(x) = x**2 using the quad() function from the scipy.integrate package. 2. The quad() function takes three arguments: the function to integrate (f), the lower limit of integration (0), and the upper limit of integration (1). 3. The quad() function returns two values: the estimated value of the integral (integral) and an estimate of the absolute error of the result (error). 4. In this illustration, the code integrates the function f(x) = x**2 over the range 0 to 1. The outcome is stored in the variables integral and error. Code 2 import pandas as pd from numpy import mean # Use pandas to load a CSV file into a DataFrame df = pd.read_csv('data.csv') # Use numpy to calculate the mean of a specific column average = mean(df['column_name']) 1.
The necessary libraries, pandas and numpy, are first imported. 2. Next, we use the pandas package to load a CSV file, 'data.csv', into a DataFrame, which is stored in the variable df. 3. The mean of a particular column in the DataFrame is then computed using numpy's mean function. Change 'column_name' in the code above to the actual name of the column whose mean you want to compute. The mean value is stored in the variable average. Conclusion :- As a result, we have successfully learned how to import functions from a file in Python with an example. We saw in the tutorial above how simple and crucial it is to import functions and modules, whether importing a single function or a complete module with all of its features. I hope this article on python import function from file helps you, and that the steps and method mentioned above are easy to follow and implement.
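The snippets above import from installed packages; importing your own function from your own file works the same way once the file's directory is on the import path. Here is a minimal, self-contained sketch: the module name helpers and the function square are invented for illustration, and the file is written to a temporary directory at runtime purely so the example runs anywhere.

```python
import importlib.util
import pathlib
import sys
import tempfile

# Create a hypothetical module file "helpers.py" on disk.
# In a real project this file would already exist alongside your script.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "helpers.py").write_text("def square(x):\n    return x * x\n")

# Option 1: put the directory on sys.path, then use a normal import,
# pulling in just the one function we need from the file.
sys.path.insert(0, str(tmp))
from helpers import square

print(square(6))  # 36

# Option 2: load the file by its path with importlib, without
# touching sys.path at all.
spec = importlib.util.spec_from_file_location("helpers2", str(tmp / "helpers.py"))
mod = importlib.util.module_from_spec(spec)
spec.loader.exec_module(mod)

print(mod.square(7))  # 49
```

Option 1 is the everyday pattern; Option 2 is useful when the file lives outside any package directory.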
Bài giảng ECE 250 Algorithms and Data Structures - 6.01. Binary search trees Summary In this topic, we covered binary search trees – Described Abstract Sorted Lists – Problems using arrays and linked lists – Definition a binary search tree – Looked at the implementation of: • Empty, size, height, count • Front, back, insert, erase • Previous smaller and next larger objects 82 trang Chia sẻ: vutrong32 | Lượt xem: 1203 | Lượt tải: 2 Bạn đang xem trước 20 trang tài liệu Bài giảng ECE 250 Algorithms and Data Structures - 6.01. Binary search trees, để xem tài liệu hoàn chỉnh bạn click vào nút DOWNLOAD ở trên ECE 250 Algorithms and Data Structures Douglas Wilhelm Harder, M.Math. LEL Department of Electrical and Computer Engineering University of Waterloo Waterloo, Ontario, Canada ece.uwaterloo.ca dwharder@alumni.uwaterloo.ca © 2006-2013 by Douglas Wilhelm Harder. Some rights reserved. Binary search trees 2Binary search trees Outline This topic covers binary search trees: – Abstract Sorted Lists – Background – Definition and examples – Implementation: • Front, back, insert, erase • Previous smaller and next larger objects • Finding the kth object 3Binary search trees Abstract Sorted Lists Previously, we discussed Abstract Lists: the objects are explicitly linearly ordered by the programmer We will now discuss the Abstract Sorted List: – The relation is based on an implicit linear ordering Certain operations no longer make sense: – push_front and push_back are replaced by a generic insert 6.1.1 4Binary search trees Abstract Sorted Lists Queries that may be made about data stored in a Sorted List ADT include: – Finding the smallest and largest entries – Finding the kth largest entry – Find the next larger and previous smaller objects of a given object which may or may not be in the container – Iterate through those objects that fall on an interval [a, b] 6.1.1 5Binary search trees Implementation If we implement an Abstract Sorted List using an array or a linked list, we 
will have operations which are O(n) – As an insertion could occur anywhere in a linked list or array, we must either traverse or copy, on average, O(n) objects 6.1.1 6Binary search trees Background Recall that with a binary tree, we can dictate an order on the two children We will exploit this order: – Require all objects in the left sub-tree to be less than the object stored in the root node, and – Require all objects in the right sub-tree to be greater than the object in the root object 6.1.2 7Binary search trees Binary Search Trees Graphically, we may relationship – Each of the two sub-trees will themselves be binary search trees 6.1.2 8Binary search trees Binary Search Trees Notice that we can already use this structure for searching: examine the root node and if we have not found what we are looking for: – If the object is less than what is stored in the root node, continue searching in the left sub-tree – Otherwise, continue searching the right sub-tree With a linear order, one of the following three must be true: a b 6.1.2 9Binary search trees Definition Thus, we define a non-empty binary search tree as a binary tree with the following properties: – The left sub-tree (if any) is a binary search tree and all elements are less than the root element, and – The right sub-tree (if any) is a binary search tree and all elements are greater than the root element 6.1.2 10 Binary search trees Examples Here are other examples of binary search trees: 6.1.2 11 Binary search trees Examples Unfortunately, it is possible to construct degenerate binary search trees – This is equivalent to a linked list, i.e., O(n) 6.1.2 12 Binary search trees Examples All these binary search trees store the same data 6.1.2 13 Binary search trees Duplicate Elements We will assume that in any binary tree, we are not storing duplicate elements unless otherwise stated – In reality, it is seldom the case where duplicate elements in a container must be stored as separate entities You can always 
consider duplicate elements with modifications to the algorithms we will cover 6.1.3 14 Binary search trees Implementation We will look at an implementation of a binary search tree in the same spirit as we did with our Single_list class – We will have a Binary_search_nodes class – A Binary_search_tree class will store a pointer to the root We will use templates, however, we will require that the class overrides the comparison operators 6.1.4 15 Binary search trees Implementation Any class which uses this binary-search-tree class must therefore implement: bool operator<=( Type const &, Type const & ); bool operator< ( Type const &, Type const & ); bool operator==( Type const &, Type const & ); That is, we are allowed to compare two instances of this class – Examples: int and double 6.1.4 16 Binary search trees Implementation #include "Binary_node.h" template class Binary_search_tree; template class Binary_search_node:public Binary_node { using Binary_node::element; using Binary_node::left_tree; using Binary_node::right_tree; public: Binary_search_node( Type const & ); Binary_search_node *left() const; Binary_search_node *right() const; 6.1.4 17 Binary search trees Implementation Type front() const; Type back() const; bool find( Type const & ) const; void clear(); bool insert( Type const & ); bool erase( Type const &, Binary_search_node *& ); friend class Binary_search_tree; }; 6.1.4 18 Binary search trees Constructor The constructor simply calls the constructor of the base class – Recall that it sets both left_tree and right_tree to nullptr – It assumes that this is a new leaf node template Binary_search_node::Binary_search_node( Type const &obj ): Binary_node( obj ) { // Just calls the constructor of the base class } 6.1.4 19 Binary search trees Standard Accessors Because it is a derived class, it already inherits the function: Type retrieve() const; Because the base class returns a pointer to a Binary_node, we must recast them as Binary_search_node: template 
Binary_search_node *Binary_search_node::left() const { return reinterpret_cast( Binary_node::left() ); } template Binary_search_node *Binary_search_node::right() const { return reinterpret_cast( Binary_node::right() ); } 6.1.4 20 Binary search trees Inherited Member Functions The member functions bool empty() const bool is_leaf() const int size() const int height() const are inherited from the bas class Binary_node 6.1.4 21 Binary search trees Finding the Minimum Object template Type Binary_search_node::front() const { if ( empty() ) { throw underflow(); } return ( left()->empty() ) ? retrieve() : left()->front(); } – The run time O(h) 6.1.4.1 22 Binary search trees Finding the Maximum Object template Type Binary_search_node::back() const { if ( empty() ) { throw underflow(); } return ( right()->empty() ) ? retrieve() : right()->back(); } – The extreme values are not necessarily leaf nodes 6.1.4.2 23 Binary search trees Find To determine membership, traverse the tree based on the linear relationship: – If a node containing the value is found, e.g., 81, return 1 – If an empty node is reached, e.g., 36, the object is not in the tree: 6.1.4.3 24 Binary search trees Find The implementation is similar to front and back: template bool Binary_search_node::find( Type const &obj ) const { if ( empty() ) { return false; } else if ( retrieve() == obj ) { return true; } return ( obj < retrieve() ) ? 
left()->find( obj ) : right()->find( obj ); } – The run time is O(h) 6.1.4.3 25 Binary search trees Insert Recall that a Sorted List is implicitly ordered – It does not make sense to have member functions such as push_front and push_back – Insertion will be performed by a single insert member function which places the object into the correct location 6.1.4.4 26 Binary search trees Insert An insertion will be performed at a leaf node: – Any empty node is a possible location for an insertion The values which may be inserted at any empty node depend on the surrounding nodes 6.1.4.4 27 Binary search trees Insert For example, this node may hold 48, 49, or 50 6.1.4.4 28 Binary search trees Insert An insertion at this location must be 35, 36, 37, or 38 6.1.4.4 29 Binary search trees Insert This empty node may hold values from 71 to 74 6.1.4.4 30 Binary search trees Insert Like find, we will step through the tree – If we find the object already in the tree, we will return • The object is already in the binary search tree (no duplicates) – Otherwise, we will arrive at an empty node – The object will be inserted into that location – The run time is O(h) 6.1.4.4 31 Binary search trees Insert In inserting the value 52, we traverse the tree until we reach an empty node – The left sub-tree of 54 is an empty node 6.1.4.4 32 Binary search trees Insert A new leaf node is created and assigned to the member variable left_tree 6.1.4.4 33 Binary search trees Insert In inserting 40, we determine the right sub-tree of 39 is an empty node 6.1.4.4 34 Binary search trees Insert A new leaf node storing 40 is created and assigned to the member variable right_tree 6.1.4.4 35 Binary search trees Insert template bool Binary_search_node::insert( Type const &obj, Binary_search_node *&ptr_to_this ) { if ( empty() ) { ptr_to_this = new Binary_search_node( obj ); return true; } else if ( obj < retrieve() ) { return left()->insert( obj, left_tree ); } else if ( obj > retrieve() ) { return 
            right()->insert( obj, right_tree );
    } else {
        return false;
    }
}

It is assumed that if neither of the conditions obj < retrieve() nor obj > retrieve() holds, then obj == retrieve(), and therefore we do nothing
– The object is already in the binary search tree

Blackboard example:
– In the given order, insert these objects into an initially empty binary search tree:
    31 45 36 14 52 42 6 21 73 47 26 37 33 8
– What values could be placed:
  • To the left of 21?
  • To the right of 26?
  • To the left of 47?
– How would we determine if 40 is in this binary search tree?
– Which values could be inserted to increase the height of the tree?

6.1.4.5 Erase

A node being erased is not always going to be a leaf node. There are three possible scenarios:
– The node is a leaf node,
– It has exactly one child, or
– It has two children (it is a full node)

A leaf node simply must be removed, and the appropriate member variable of the parent is set to nullptr
– Consider removing 75: the node is deleted and left_tree of 81 is set to nullptr
– Erasing the node containing 40 is similar: the node is deleted and right_tree of 39 is set to nullptr

If a node has only one child, we can simply promote the sub-tree associated with the child
– Consider removing 8, which has one left child: the node 8 is deleted and the left_tree of 11 is updated to point to 3

There is no difference in promoting a single node or a sub-tree
– To remove 39, which has a single child 11: the node containing 39 is deleted and left_tree of 42 is updated to point to 11
– Notice that order is still maintained

Consider erasing the
node containing 99
– The node is deleted and the left sub-tree is promoted:
– The member variable right_tree of 70 is set to point to 92
– Again, the order of the tree is maintained

Finally, we will consider the problem of erasing a full node, e.g., 42. We will perform two operations:
– Replace 42 with the minimum object in the right sub-tree
– Erase that object from the right sub-tree

In this case, we replace 42 with 47
– We temporarily have two copies of 47 in the tree

We now recursively erase 47 from the right sub-tree
– We note that 47 is a leaf node in the right sub-tree
– Leaf nodes are simply removed, and left_tree of 51 is set to nullptr
– Notice that the tree is still sorted: 47 was the least object in the right sub-tree

Suppose we want to erase the root 47 again:
– We must copy the minimum of the right sub-tree
– We could promote the maximum object in the left sub-tree and achieve similar results

We copy 51 from the right sub-tree, and must proceed by deleting 51 from the right sub-tree
– In this case, the node storing 51 has just a single child
– We delete the node containing 51 and assign the member variable left_tree of 70 to point to 59

Note that after seven removals, the remaining tree is still correctly sorted.

In the two examples of removing a full node, we promoted:
– A node with no children
– A node with a right child
Is it possible, in removing a full node, to promote a child with two children?
Recall that we promoted the minimum element in the right sub-tree
– If that node had a left sub-tree, that sub-tree would contain a smaller value

In order to properly remove a node, we will have to change the member variable pointing to the node
– To do this, we will pass that member variable by reference
Additionally, we will return true if the object is removed and false if the object was not found

template <typename Type>
bool Binary_search_node<Type>::erase( Type const &obj, Binary_search_node<Type> *&ptr_to_this ) {
    if ( empty() ) {
        return false;
    } else if ( obj == retrieve() ) {
        if ( is_leaf() ) {
            // leaf node
            ptr_to_this = nullptr;
            delete this;
        } else if ( !left()->empty() && !right()->empty() ) {
            // full node
            element = right()->front();
            right()->erase( retrieve(), right_tree );
        } else {
            // only one child
            ptr_to_this = ( !left()->empty() ) ? left() : right();
            delete this;
        }
        return true;
    } else if ( obj < retrieve() ) {
        return left()->erase( obj, left_tree );
    } else {
        return right()->erase( obj, right_tree );
    }
}

Blackboard example:
– In the binary search tree generated previously:
  • Erase 47
  • Erase 21
  • Erase 45
  • Erase 31
  • Erase 36

6.1.5 Binary Search Tree

We have defined binary search nodes
– Similar to the Single_node in Project 1
We must now introduce a container which stores the root
– A Binary_search_tree class
Most operations will be simply passed to the root node

Implementation

template <typename Type>
class Binary_search_tree {
    private:
        Binary_search_node<Type> *root_node;
        Binary_search_node<Type> *root() const;

    public:
        Binary_search_tree();
        ~Binary_search_tree();

        bool empty() const;
        int size() const;
        int height() const;
        Type front() const;
        Type back() const;
        int count( Type const &obj ) const;

        void clear();
        bool insert( Type const &obj );
        bool erase( Type const &obj );
};
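Before moving on to the container class, the node-level logic above can be sketched in a short, hypothetical Python version (illustrative only — the names and structure are mine, not the course's C++ code). It mirrors find, front, insert with no duplicates, and the three erase cases, with the returned subtree root playing the role of the C++ ptr_to_this reference parameter:

```python
class Node:
    """Minimal binary search tree node: find, insert, erase as in the slides."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None

    def find(self, obj):
        if obj == self.value:
            return True
        child = self.left if obj < self.value else self.right
        return child.find(obj) if child else False

    def front(self):
        # Minimum object: keep stepping left -- O(h)
        return self.left.front() if self.left else self.value

    def insert(self, obj):
        # Returns False for duplicates, mirroring the no-duplicates rule
        if obj == self.value:
            return False
        side = 'left' if obj < self.value else 'right'
        child = getattr(self, side)
        if child is None:
            setattr(self, side, Node(obj))
            return True
        return child.insert(obj)

    def erase(self, obj):
        # Returns the (possibly new) root of this subtree after erasing obj
        if obj < self.value:
            if self.left:
                self.left = self.left.erase(obj)
        elif obj > self.value:
            if self.right:
                self.right = self.right.erase(obj)
        else:
            if self.left and self.right:                   # full node:
                self.value = self.right.front()            # copy min of right subtree,
                self.right = self.right.erase(self.value)  # then erase it there
            else:                                          # leaf or one child: promote
                return self.left or self.right
        return self
```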
Constructor, Destructor, and Clear

template <typename Type>
Binary_search_tree<Type>::Binary_search_tree():
root_node( nullptr ) {
    // does nothing
}

template <typename Type>
Binary_search_tree<Type>::~Binary_search_tree() {
    clear();
}

template <typename Type>
void Binary_search_tree<Type>::clear() {
    root()->clear( root_node );
}

template <typename Type>
Binary_search_node<Type> *Binary_search_tree<Type>::root() const {
    return root_node;
}

Empty, Size, Height and Count

template <typename Type>
bool Binary_search_tree<Type>::empty() const {
    return root()->empty();
}

template <typename Type>
int Binary_search_tree<Type>::size() const {
    return root()->size();
}

template <typename Type>
int Binary_search_tree<Type>::height() const {
    return root()->height();
}

template <typename Type>
bool Binary_search_tree<Type>::find( Type const &obj ) const {
    return root()->find( obj );
}

Front and Back

// If root() is nullptr, 'front' will throw an underflow exception
template <typename Type>
Type Binary_search_tree<Type>::front() const {
    return root()->front();
}

// If root() is nullptr, 'back' will throw an underflow exception
template <typename Type>
Type Binary_search_tree<Type>::back() const {
    return root()->back();
}

Insert and Erase

template <typename Type>
bool Binary_search_tree<Type>::insert( Type const &obj ) {
    return root()->insert( obj, root_node );
}

template <typename Type>
bool Binary_search_tree<Type>::erase( Type const &obj ) {
    return root()->erase( obj, root_node );
}

6.1.6 Other Relation-based Operations

We will quickly consider two other relation-based queries that are very quick to calculate with an array of sorted objects:
– Finding the previous and next entries, and
– Finding the kth entry

6.1.6.1 Previous and Next Objects

All the operations up to now have been operations which work on any container: count, insert, etc.
– If these are the only relevant operations, use a hash table

Operations specific to linearly ordered data include:
– Find the next larger and previous smaller objects of a given object, which may or may not be in the container
– Find the kth entry of the container
– Iterate through those objects that fall in an interval [a, b]

We will focus on finding the next largest object
– The others will follow

To find the next largest object:
– If the node has a right sub-tree, the minimum object in that sub-tree is the next-largest object
– If, however, there is no right sub-tree, it is the next largest object (if any) that exists in the path from the root to the node

More generally: what is the next largest entry of an arbitrary object?
– This can be found with a single search from the root node to one of the leaves—an O(h) operation
– This function returns the object if it did not find something greater than it

template <typename Type>
Type Binary_search_node<Type>::next( Type const &obj ) const {
    if ( empty() ) {
        return obj;
    } else if ( retrieve() == obj ) {
        return ( right()->empty() ) ? obj : right()->front();
    } else if ( retrieve() > obj ) {
        Type tmp = left()->next( obj );
        return ( tmp == obj ) ?
            retrieve() : tmp;
    } else {
        return right()->next( obj );
    }
}

6.1.6.2 Finding the kth Object

Another operation on sorted lists may be finding the kth largest object
– Recall that k goes from 0 to n – 1
– If the left sub-tree has ℓ = k entries, return the current node,
– If the left sub-tree has ℓ > k entries, return the kth entry of the left sub-tree,
– Otherwise, the left sub-tree has ℓ < k entries, so return the (k – ℓ – 1)th entry of the right sub-tree

(figure: an example tree whose 18 entries are indexed 0 through 17)

template <typename Type>
Type Binary_search_tree<Type>::at( int k ) const {
    // Need k to go from 0, ..., n - 1
    return ( k < 0 || k >= size() ) ? Type() : root()->at( k );
}

template <typename Type>
Type Binary_search_node<Type>::at( int k ) const {
    if ( left()->size() == k ) {
        return retrieve();
    } else if ( left()->size() > k ) {
        return left()->at( k );
    } else {
        return right()->at( k - left()->size() - 1 );
    }
}

This requires that size() returns in Θ(1) time
– We must have a member variable
    int tree_size;
  which stores the number of descendants of this node
– This requires Θ(n) additional memory

template <typename Type>
int Binary_search_tree<Type>::size() const {
    return root()->size();
}

– We can implement this in the Binary_node class, if we want
  • The constructor will set the size to 1

6.1.7 We must now update insert() and erase() to update it

template <typename Type>
bool Binary_search_node<Type>::insert( Type const &obj, Binary_search_node<Type> *&ptr_to_this ) {
    if ( empty() ) {
        ptr_to_this = new Binary_search_node<Type>( obj );
        return true;
    } else if ( obj < retrieve() ) {
        return left()->insert( obj, left_tree ) ? ++tree_size : false;
    } else if ( obj > retrieve() ) {
        return right()->insert( obj, right_tree ) ?
            ++tree_size : false;
    } else {
        return false;
    }
}

Clever trick: in C and C++, any non-zero value is interpreted as true

Run Time: O(h)

Almost all of the relevant operations on a binary search tree are O(h)
– If the tree is close to a linked list, the run time is O(n)
  • Insert 1, 2, 3, 4, 5, 6, 7, ..., n into an empty binary search tree
– The best we can do is if the tree is perfect: O(ln(n))
– Our goal will be to find tree structures where we can maintain a height of Θ(ln(n))

We will look at
– AVL trees
– B+ trees
both of which ensure that the height remains Θ(ln(n)). Others exist, too.

Summary

In this topic, we covered binary search trees
– Described Abstract Sorted Lists
– Problems using arrays and linked lists
– The definition of a binary search tree
– Looked at the implementation of:
  • Empty, size, height, count
  • Front, back, insert, erase
  • Previous smaller and next larger objects

Usage Notes
• These slides are made publicly available on the web for anyone to use
• If you choose to use them, or a part thereof, for a course at another institution, I ask only three things:
– that you inform me that you are using the slides,
– that you acknowledge my work, and
– that you alert me of any mistakes which I made or changes which you make, and allow me the option of incorporating such changes (with an acknowledgment) in my set of slides

Sincerely,
Douglas Wilhelm Harder, MMath
dwharder@alumni.uwaterloo.ca
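The two relation-based queries above — next larger object and kth entry — can also be sketched in a hypothetical Python version over plain (value, left, right) tuples; everything here (names, the tuple representation) is an illustrative assumption, not the course code. Note that at() recomputes subtree sizes, so it runs in O(n) here rather than the O(h) achievable with a cached tree_size:

```python
# A tree is None (empty) or a tuple (value, left, right).
def front(t):
    """Minimum entry of a non-empty tree."""
    value, left, _ = t
    return front(left) if left else value

def next_larger(t, obj):
    """Smallest entry > obj, or obj itself if none exists -- O(h)."""
    if t is None:
        return obj
    value, left, right = t
    if value == obj:
        return front(right) if right else obj
    if value > obj:
        tmp = next_larger(left, obj)
        return value if tmp == obj else tmp
    return next_larger(right, obj)

def size(t):
    return 0 if t is None else 1 + size(t[1]) + size(t[2])

def at(t, k):
    """kth smallest entry, k in 0 .. size-1."""
    value, left, right = t
    l = size(left)   # the slides cache this as tree_size for Theta(1) lookups
    if l == k:
        return value
    if l > k:
        return at(left, k)
    return at(right, k - l - 1)
```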
Cite as
Ajay D. Kshemkalyani, Manish Kumar, Anisur Rahaman Molla, and Gokarna Sharma. Brief Announcement: Agent-Based Leader Election, MST, and Beyond. In 38th International Symposium on Distributed Computing (DISC 2024). Leibniz International Proceedings in Informatics (LIPIcs), Volume 319, pp. 50:1-50:7, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)

BibTeX:
@InProceedings{kshemkalyani_et_al:LIPIcs.DISC.2024.50,
  author = {Kshemkalyani, Ajay D. and Kumar, Manish and Molla, Anisur Rahaman and Sharma, Gokarna},
  title = {{Brief Announcement: Agent-Based Leader Election, MST, and Beyond}},
  booktitle = {38th International Symposium on Distributed Computing (DISC 2024)},
  pages = {50:1--50:7},
  series = {Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN = {978-3-95977-352-2},
  ISSN = {1868-8969},
  year = {2024},
  volume = {319},
  editor = {Alistarh, Dan},
  publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address = {Dagstuhl, Germany},
  URL = {https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.DISC.2024.50},
  URN = {urn:nbn:de:0030-drops-212782},
  doi = {10.4230/LIPIcs.DISC.2024.50},
  annote = {Keywords: Distributed algorithms, mobile agents, local communication, leader election, MST, MIS, gathering, minimal dominating sets, time and memory complexity, graph parameters}
}
What is the product of one negative and one positive?
The product of one positive and one negative integer is always a negative integer.

What is the product of two negatives and a positive?
The fact that the product of two negatives is a positive is related to the fact that the inverse of the inverse of a positive number is that positive number back again.

What do you do when you multiply a negative and a positive?
Rule 2: A negative number times a positive number equals a negative number. When you multiply a negative number by a positive number, your answer is a negative number. It doesn't matter which order the positive and negative numbers are in when you multiply them; the answer is always a negative number.

How do two negatives make a positive?
When you have two negative signs, one turns over, and they add together to make a positive. If you have a positive and a negative, there is one dash left over, and the answer is negative.

What is the product of 2 negative numbers?
Answer: There are two simple rules to remember: When you multiply a negative number by a positive number, the product is always negative. When you multiply two negative numbers or two positive numbers, the product is always positive.

When you multiply or divide one positive and one negative integer, what will the answer be?
If you are multiplying/dividing one positive integer with one negative integer, your answer will be negative.

How to prove that product of two negative numbers is positive?
Proof that Product of Two Negative Numbers is Positive. When you multiply a negative number by another negative number, the result is a positive number. This rule is not obvious and proving it is not straightforward. However, here is a clever way to prove the rule by starting with an equation and factoring out terms.

What is the product of 12 times positive 4?
Well, once again, 12 times positive 4 would be 48.
And we're in the circumstance where one of these two numbers right over here is negative, this one right over here. If exactly one of the two numbers is negative, then the product is going to be negative. We are in this circumstance right over here.

Do you have to multiply two numbers to find the product?
If you are asked to work out the product of two or more numbers, then you need to multiply the numbers together. If you are asked to find the sum of two or more numbers, then you need to add the numbers together.

How to find the two positive real numbers?
How do you find the two positive real numbers whose sum is 40 and whose product is a maximum? We would like to find where the product x · y is maximum. Since x + y = 40, we have y = 40 − x, so we can write: x · y = x · (40 − x) = −x² + 40x, a downward-opening parabola whose maximum is at x = 20 (so y = 20 and the maximum product is 400).
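These rules are easy to check exhaustively for small integers, and the sum-40 question can be answered by brute force too; a small Python sketch:

```python
# Exhaustively check the sign rules for products of two integers.
for a in range(-5, 6):
    for b in range(-5, 6):
        if (a > 0 and b < 0) or (a < 0 and b > 0):
            assert a * b < 0          # one negative, one positive -> negative
        elif a != 0 and b != 0:
            assert a * b > 0          # same signs -> positive

# Two positive numbers with sum 40: the product x*(40 - x) is largest at x = 20.
best = max(range(1, 40), key=lambda x: x * (40 - x))
assert best == 20 and best * (40 - best) == 400
```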
III Theoretical Physics of Soft Condensed Matter - Liquid crystals hydrodynamics

8 Liquid crystals hydrodynamics

8.2 Coarsening dynamics for nematics

We will discuss the coarsening dynamics for nematic liquid crystals, and indicate how polar liquid crystals are different when appropriate.

As before, we begin with a completely disordered phase with Q = 0, and then quench the system by dropping the temperature quickly. The liquid crystals then want to arrange themselves. Afterwards, we locally have
    Q = diag(λ, −λ/2, −λ/2),
with λ minimizing the local free energy. If the quench is deep enough, we have a spinodal-like instability, and we quickly get locally ordered. Coordinate-independently, we can write
    Q = (3λ/2)(nn − 𝟙/3).
Since all this ordering is done locally, the principal axis n can vary over space. There is then a slower process that sorts out global ordering, driven by the elastic part of the free energy.

Compare this with Model B/H: at early times, we locally have φ = ±φ_B, but we get domain walls between the ±φ_B domains. The late time dynamics is governed by the slow coarsening of these domain walls. The key property of a domain wall is that it has reduced dimensionality, i.e. the domain wall is 2-dimensional while space is 3-dimensional, and it connects two different ground states (i.e. minima of F). The domain wall can be moved around, but there is no local change that allows us to remove it. They can only be removed by "collision" of domain walls.

Analogous structures are present for nematic liquid crystals. The discussion will largely involve us drawing pictures. For now, we will not do this completely rigorously, and simply rely on our intuitive understanding of when defects can or cannot arise. We later formalize these notions in terms of homotopy groups.

We first do this in two dimensions, where the relevant defects are points. There can be no line defects like a domain wall.
The reason is that if we try to construct a domain wall, it can relax locally: the director can rotate continuously so that the apparent wall is ironed out. On the other hand, we can have point defects, which are 0-dimensional. Two basic ones are (pictures omitted):

    q = −1/2        q = +1/2

The charge can be described as follows — we start at a point near the defect, and go around the defect once. When doing so, the direction of the order parameter turns. After going around the defect once, in the q = −1/2 case, the order parameter made a half turn in the opposite sense to how we moved around the defect. In the q = +1/2 case, it made a half turn in the same sense.

We see that q = ±1/2 are the smallest possible topological charges; 1/2 is the quantum of charge. In general, we can have defects of other charges. For example, there are two distinct-looking q = +1 defects (pictures omitted): both are +1 defects, and they can be continuously deformed into each other, simply by rotating each bar by 90°. For polar liquid crystals, the quantum of charge is 1 instead.

If we have defects of charge greater than 1/2 in magnitude, they tend to dissociate into multiple defects. This is due to energetic reasons. The elastic energy is given by
    F_ell = (κ/2) ∫ |∇Q|² dr ∼ (κ̃/2) ∫ |(∇ · n)n + (n · ∇)n|² dr,
with κ̃ ∝ κλ². If we double the charge, we double the gradient of n. Since this term is quadratic in the gradient, a single defect of twice the charge costs four times the energy, while two well-separated defects cost only twice. In general, topological defects tend to dissociate to smaller q-values.

To recap, after quenching, at early stages we locally have Q → 2λ(nn − 𝟙/2), where n = n(r) is random and tends to vary continuously. However, topological defects are present, which cannot be ironed out locally. All topological defects with |q| > 1/2 dissociate quickly, and we are left with q = ±1/2 defects floating around. We then have a late-stage process where opposite charges attract and then annihilate. So the system becomes more and more ordered as a nematic.

We can estimate the energy of an isolated defect as
    E = (κ̃/2) ∫ |(∇ · n)n + (n · ∇)n|² dr,
where κ̃ ∝ κλ². Dimensionally, we have ∇ ∼ 1/r, so we have an energy
    E ∼ κ̃ ∫_a^L dr/r ≃ κ̃ log(L/a),
where L is the mean defect spacing and a is some kind of core radius of the defect.
The core radius reflects the fact that as we zoom close enough to the core of the defect, λ is no longer constant, and our above energy estimate fails. In fact, λ → 0 at the core.

Recall that the electrostatic energy in two dimensions is given by a similar equation. Thus, this energy is Coulombic, with force f ∼ κ̃/R between defects a distance R apart. Under this force, the defects move with overdamped motion, with the velocity being proportional to the force. So
    Ṙ ∼ −κ̃/R,   hence   R² ∼ t,   and   L(t) ∼ t^{1/2}.
This is the scaling law for nematic defect coarsening in 2 dimensions.
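The t^{1/2} law can be checked numerically: with overdamped dynamics the pair separation obeys dR/dt = −c/R (c is an arbitrary mobility constant chosen for illustration), so a pair of initial separation R₀ annihilates after a time proportional to R₀². A minimal Euler-integration sketch:

```python
def annihilation_time(r0, c=1.0, dt=1e-4):
    """Integrate dR/dt = -c/R until the defect pair meets (R -> 0)."""
    r, t = r0, 0.0
    while r > 0.0:
        r -= c / r * dt   # overdamped motion under the 1/R Coulombic force
        t += dt
    return t

t1 = annihilation_time(1.0)
t2 = annihilation_time(2.0)
# Doubling the separation quadruples the lifetime: t* ~ R0**2, i.e. L(t) ~ t**(1/2)
assert abs(t2 / t1 - 4.0) < 0.05
```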
Comb Sort Algorithm

Comb sort is a comparison sorting algorithm. It is an exchange sort, similar to bubble sort. In comb sort, gaps (the distance between the two items compared) are introduced. The gap in bubble sort is 1. The gap starts out as a large value, and, after each traversal, the gap is lessened, until it becomes 1, where the algorithm basically degrades to a bubble sort. This idea can practically kill turtles (small values near the end of the list) because some of them would "jump" to the beginning of the list early on. The shrink factor determines how much the gap is lessened. This value is crucial: a small value means that it would be slower for the gap to degrade to 1, slowing down the process, while a large value will not effectively kill turtles. An ideal shrink factor is 1.3.

Algorithm

comb_sort(list of t)
    gap = list.count
    temp as t
    swapped = true
    while gap > 1 or swapped
        swapped = false
        if gap > 1 then gap = floor(gap/1.3)
        i = 0
        while i + gap < list.count
            if list(i) > list(i + gap)
                temp = list(i) // swap
                list(i) = list(i + gap)
                list(i + gap) = temp
                swapped = true
            i += 1
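A runnable Python version of the pseudocode above (same 1.3 shrink factor; returning the list is a convenience choice):

```python
def comb_sort(items):
    """Sort items in place using comb sort with shrink factor 1.3."""
    gap = len(items)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))   # shrink toward the bubble-sort gap of 1
        swapped = False
        for i in range(len(items) - gap):
            if items[i] > items[i + gap]:
                items[i], items[i + gap] = items[i + gap], items[i]
                swapped = True
    return items
```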
Almost Finished Another One This is the one I’m thinking of doing for the next Quilt for an Hour. Still thinking . . it’s not hard but I can imagine that there will be some moaning and groaning about it. Quilted, binding is on but not finished. I have to say . . I really like this one! It needs a name. Any ideas? 1. Penny says I really like it. Not sure that I have time for another right now, I’m still trying to finish Freeze Frame so I’m glad you are keeping all the instructions on the site. As for a name for this one – what about “T-Junction”? (it was the first thing that came in to my head!) 2. Mary Carole says Paddle Wheels?? 3. Sarah says I love this one! Particularly the border – I’m hoping you do this one as a QFAH just so I can get my hands on the directions for the border! Don’t worry about those grumblers – some of them are good-natured, and the rest seem to forget sometimes that you are providing something for free which they could just choose not to do! 4. Karen says ‘Bedazzled’ is the first thing that came to mind. Gorgeous quilt. 5. barbara (in Tennessee) says I really like this one, too. It reminds me of fireworks or sparklers. 6. Susie says Oh Oh Oh please please do this one as QFAH. I love it. I started the last one and I WILL finish it but I got a little behind. Life, you know. Thanks Judy, for your blog, and for sharing your beautiful quilts 7. Cindy B says First thing I saw was splats of color, so Splat in the name would be fun. This design will appeal to the younger crowd too. Most of the younger women I know (daughters, friends of daughters) don’t like the classic quilt designs much for their homes. 8. Judi says I also love this quilt. I hope you either give it as a quilt for a hour or publish it. No help with the name sorry. 9. Tracy says Something about this quilt reminds me of Tinker Toys so that is the name I suggest. Very sharp looking quilt, but I simply can’t start another project this year…I just can’t…I just can’t. Well 10. 
KatieQ says When I looked at the picture of your new quilt, it reminded me of throwing a pebble in a pond. The splash moves away from where the pebble went in and causes ripples on the surface. That’s why I think “Neon Splash” is a good name. I have to admit, now that I’ve seen it in print, the name doesn’t have the same appeal. 11. Diane says Oh that is beautiful, I, too, hope you opt for the QFAH. I’m new to this blogger land thing and would really like to try one of yours. A name? maybe Crossing T’s 12. Judy C in NC says PETUNIA – I honestly cannot tell you why, but the first instant that I saw this quilt I thought “Petunia” would be a great name. I really believe that some quilts name themselves. My humble opinion from Judy C in NC 13. swooze says Twisting Tulips, Twirling Tulips, Turning Tulips….. 14. Michelle Cyr says I’m thinking Mardi Gras firecracker! 15. Bev/Mo says Wow, Kati Q…I love her suggestion of Neon Splash! Great name…. I second that suggestion. 16. Denise says Argyle Square — the way the design criss-crosses and the border remind me of argyle sock patterns. 🙂 It’s a great design Judy! □ Lydia says And I immediately thought “Argyle Sox” when I looked at it! So that’s my suggestion. –Lydia 17. Julia says No moaning and groaning from me! I love this one!!! Please share the pattern even if you decide not to do it for a QFH project. This is way better than the last one, IMHO. :~) 18. Jocelyn says Wow great quilt. It looks like Fireworks Pizazz to me 🙂 19. Jen says Wow! I think I may just have to participate in my first Quilt for an Hour project! I LOVE it!!!! I think you should do an EQ lesson on how you do those neat borders. 20. Maya says Laugh if you will, but what jumped to mind was “Rainbow Icepops”. I’d love to do this one – no groans from me either! 🙂 21. Pam Butler says Well – after reading the comments – I would combine 2 names suggested by Cindy B & Katie B and call it…. NEON SPLAT. 22. Marla Southers says What a beauty! 
“Gems & Jewels” is what comes to my mind! Quartz, saphires, emeralds and all! 23. Dorothy says I love it, so bright, while I like the the name Neon Splash, I think Gem and Jewels fits too! Yes, please do the pattern for a QFH! 24. Kathy says I think this quilt is just stunning! I’ve really liked all of your quilts, but there is something about this one that stands out. I’m SO busy right now, but if you do decide to to a Quilt for an Hour, I think I’d have to join in… 25. Mary C says I like “Sparkle Pop” for the name. It looks like big yummy popcicles to me. HAHAHA I just looked again at what I wrote and was reminded of “Snap, Crackle and Pop”. Is that a Copywrited name??? Will wait for the QFAH on this one too… 26. Lisa says I love this quilt. Please do it for a QFAH quilt. I have no idea what I would name it…. 27. Laurel says The colors remind me of Mardi Gras beads, so Mardi Gras. 28. Sandra (Sandy Gail) says How about CROWN JEWELS ? Looks like crowns to me in jewel tones. Beautiful color choices by the way. 29. Vicky says Royal Tee. I’m awful with naming quilts. (Remember “Liquorice”? LOL) It’s a great quilt. No moaning here either, but probably no time! 30. Deb says Love it. You are very generous for doing quilt for an hour, or any of the free patterns. You could do DOZENS of books or patterns, all of which I would BUY. You have such a gift. I think all of your patterns are so unique compared to most of what is out there now. Thanks for sharing. With every one you make, it still amazes me how much I love them ALL. This usually doenst happen with anyone’s favorite artist, music, books, even quilters, but so far you are batting 100% with me! 31. Linda says I like it too!!! Gorgeous. I keep telling myself I’m not starting anything new until I get a little more caught up, but this might be the exception. At the very least I’ll have to print out the instructions to make later. Just love the colors you chose!! 32. Sandi B says Very nice!! 
How about “Chained Spiked Punch” for a name?

33. Kristin Farwig says: Nice quilt, Judy! The first name that popped into my head was “Pastel Punch”.

34. Nancy says: Oh I love it!!! I suggest Mardi Gras!!!! It is so bright and looks like a celebration!

35. Donna says: This quilt is a breath-taker! It’s so pretty! I immediately thought of jewels and crystals. Jewels because of the colors and crystals because of all the ‘facets’ the quilt has. I love your blog and am a huge fan! Please keep doing what you like to do. Donna in Tulsa

36. dawn says: Looks like daisy chain over fiesta to me. The white and yellow portions look like daisies to me.

37. JoanS says: “After the Spring Rain” is what this quilt looks like to me! After the rain stops, the spring colors pop against the blue skies.

38. Cathy in Kansas says: The quilt is beautiful. It reminds me of a “Birthday Party” with lots of presents, party hats, confetti and candles on the cake. I really enjoyed your workshop yesterday. Thank you for sharing with us.

39. Cecilia says:

40. Nancy says: “Fireworks.” That’s for sure!!

41. bettina says: A little off subject: do any of you use anti-fray spray, and does it work? I am currently working on a quilt, but the ends are fraying and I don’t know what to do.

42. Linda (Petey) Fritchen says: That is one great quilt…I would love to do it! Doesn’t matter what you name it, I’m going to call it ‘beautiful’. There are many good suggestions here for a name.

43. Maxi in CA says: I really like this one and would like to make it. First thought for a name is “Morning Starburst.” A lot of pretty name suggestions.

44. Dianah says: Oh PLEASE do this as a QFH project. I love it! I finished Freeze Frame last night and would love to start this one.

45. Sandy says: Great quilt, count me in on your next quilt for an hour. The first thing that came to mind when I looked at your quilt was “Cross Fire”. But, I do like “Tinker Toys”. Great job, great blog and great quilts. Sandy in Tulsa/Skiatook, OK

46. carol c says: It is gorgeous. Paddle wheel of love.

47. Darlene S says: I love this one. I have not done a QFAH yet, but this one is definitely a tempting one to start. It reminds me of a Spring garden full of beautiful colors. Some names that come to mind are: Mardi Gras Garden, Lollipop Fiesta, Springtime in Missouri.

48. okperi says: How about What’s My Name or Name That Flower, lol.

49. Kathy says: My first thought was “Fireworks”

50. Sue Abrey says: Christmas Crackers? or Fireworks?

51. Carol says: Beautiful quilt. What an eyepopper! Love all the names. Mine seem kind of lame but here goes…“Jeweled Reflections” or “Laser Scatter” or “Jeweled Echoes”. Thanks again Judy for sharing with us!

52. Nancy says: I love the quilt! I hope this is the next one!! The blocks look like Christmas packages and they also look like fireworks. Again, love it!

53. Diane says: Reminds me of rainbow sherbet.

54. laceflower says: I really like this quilt. Name suggestion: “Squawk Box”

55. Carol Kimble says: Yup – that one I want to do with you!

56. Barbara walsh says:

57. neen says: How about………..PARCHEESI

58. bert says: It is a lovely quilt. Love the pattern. I am a beginner quilter. Is this going to be difficult, or at least too difficult for a beginner? Also, is it going to be hard to enlarge the pattern to a queen size? I did the last one and really enjoyed the challenge of the daily hour, which for me was much more than an hour.

59. Trish says: I hope you decide to use this quilt for the quilt for an hour project. The colors and the design really draw the eye!

60. Bessie H. says: Love these colors! In the argyle vein I like “PLAID ATTACK”.

61. Rosaline says: If I made this one in Christmas fabrics I would name it “Christmas Crackers”.

62. Kerri says: Wowser! I love this one. It is so striking. I will do my best to keep up with the quilt for an hour!

63. Ann Sandberg says: Gorgeous quilt! Name suggestions: Spring Porcupine Tulips, Best of Show Tulips, Fenced in Tulips, Broken Fence Tulips

64. valerie says: I would love for this to be the next quilt for an hour. It is very striking. ENERGY PLAY, MODERN TIMES, PLACES TO GO. I don’t know why, but it reminds me of futuristic travel. Thanks for all of your imaginative ideas and inspiration.
Leetcode 621 Task Scheduler

Given a characters array tasks, representing the tasks a CPU needs to do, where each letter represents a different task. Tasks could be done in any order. Each task is done in one unit of time. For each unit of time, the CPU could complete either one task or just be idle. However, there is a non-negative integer n that represents the cooldown period between two same tasks (the same letter in the array), that is, there must be at least n units of time between any two same tasks. Return the least number of units of time that the CPU will take to finish all the given tasks.

Input: tasks = ["A","A","A","B","B","B"], n = 2
Output: 8
A -> B -> idle -> A -> B -> idle -> A -> B
There are at least 2 units of time between any two same tasks.

Input: tasks = ["A","A","A","B","B","B"], n = 0
Output: 6
Explanation: In this case any permutation of size 6 would work since n = 0. And so on.

• The total time is the length of the task list plus the idle time. Count the frequency of each letter and take the most frequent letter as the boundary. The number of rooms between two boundaries is (boundary count - 1), and the idle time is at most rooms * n. Split the other letters into the rooms so that no room contains the same letter twice. If all the idle units get used up, we are free to expand each room, so no extra idle time is needed; the result is simply the length of the given list. Otherwise, after filling all tasks into the idle slots, some idle units remain unused, and the result is the length of the given list plus the remaining idle units.

from collections import Counter

def leastInterval(self, tasks: List[str], n: int) -> int:
    count = sorted(Counter(tasks).values())
    room = count.pop() - 1          # gaps between occurrences of the most frequent task
    idle = room * n                 # worst-case number of idle slots
    while count and idle > 0:
        idle -= min(room, count.pop())
    return max(0, idle) + len(tasks)
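Lifted out of the Leetcode class wrapper (the function name least_interval is ours), the same greedy arithmetic reproduces both examples:

```python
from collections import Counter

def least_interval(tasks, n):
    # Sort task frequencies ascending; the most frequent task sets the frame.
    counts = sorted(Counter(tasks).values())
    max_count = counts.pop()
    # Gaps ("rooms") between consecutive runs of the most frequent task.
    room = max_count - 1
    idle = room * n
    # Each remaining task type can absorb at most `room` idle units.
    while counts and idle > 0:
        idle -= min(room, counts.pop())
    return max(0, idle) + len(tasks)

print(least_interval(["A", "A", "A", "B", "B", "B"], 2))  # 8
print(least_interval(["A", "A", "A", "B", "B", "B"], 0))  # 6
```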
Difference between the greatest and least values of the function in the interval is , then is equal to :

Topic: Application of Derivatives
Subject: Mathematics
Class: Class 12
Updated On: Jul 19, 2023
AMC 12 2023 A Problem 16

Consider the set of complex numbers $z$ satisfying $\left|1+z+z^{2}\right|=4$. The maximum value of the imaginary part of $z$ can be written in the form $\dfrac{\sqrt{m}}{n}$, where $m$ and $n$ are relatively prime positive integers. What is $m+n$?

Answer Choices
A. 20
B. 21
C. 22
D. 23
E. 24
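Not a proof, but the constraint is easy to check numerically (the algebraic reduction and the grid below are ours): with $z = x + iy$, writing $A = 1 + x + x^2$ and $B = 1 + 2x$, the condition $|1+z+z^2| = 4$ becomes $(A - y^2)^2 + y^2 B^2 = 16$, a quadratic in $u = y^2$. Scanning $x$ and taking the larger root for each feasible $x$:

```python
import math

# u^2 + (B^2 - 2A) u + (A^2 - 16) = 0, where u = y^2.
best = 0.0
steps = 200000
for i in range(steps + 1):
    x = -3 + 6 * i / steps
    A = 1 + x + x * x
    B = 1 + 2 * x
    b = B * B - 2 * A
    disc = b * b - 4 * (A * A - 16)
    if disc < 0:
        continue
    u = (-b + math.sqrt(disc)) / 2  # larger root; valid when nonnegative
    best = max(best, u)

print(math.sqrt(best))  # numerically close to sqrt(19)/2 ~ 2.1794
```

The scan peaks at $x = -\tfrac{1}{2}$, which numerically matches a maximum imaginary part of $\sqrt{19}/2$.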
Absolute Cell Reference

Absolute cell reference in Excel is denoted by a dollar sign before the column and row coordinates, locking the cell reference in formulas. When you copy the formula to another cell, the absolute reference remains fixed.

In Excel, absolute cell references are crucial for maintaining the integrity of formulas when copying them across multiple cells. By using the dollar sign to fix the reference, you ensure that the formula always points to the specified cell, regardless of its location. This eliminates errors and provides consistency in calculations. Understanding absolute cell references is fundamental for proficient Excel usage and data handling.

What Is An Absolute Cell Reference?

An absolute cell reference in a spreadsheet refers to a cell or range of cells that remains fixed regardless of where the formula is copied or filled. It is a reference that does not change when copied to another cell or range. Understanding absolute cell references is vital for building complex formulas in spreadsheets, especially when dealing with large data sets and calculations. Let’s dive into the definition and purpose of absolute cell references.

An absolute cell reference is a fixed cell or range of cells in a spreadsheet formula, denoted by a dollar sign ($) before the column letter and row number (e.g., $A$1). This notation indicates that the reference should remain constant when the formula is copied to other cells.

The primary purpose of using absolute cell references is to maintain a fixed point of reference within a formula. It ensures that specific cells or ranges are not affected when the formula is applied to different areas of the spreadsheet. This is particularly useful when working with large data sets and performing calculations where certain values need to remain unchanged.
How To Use Absolute Cell References

Absolute cell references are essential for maintaining fixed references when copying formulas in Excel. Let’s explore how to use absolute cell references effectively in your spreadsheets.

Using The Dollar Sign

To create an absolute reference in Excel, place a dollar sign ($) before the column letter and row number to lock them in place. This prevents the cell from adjusting when copied to other cells.

Example 1: If you want to lock cell A1 when copied, use $A$1 as the absolute reference.

Example 2: For locking only the column when copied horizontally, use $A1. Similarly, to lock the row when copied vertically, use A$1.

Advantages Of Absolute Cell References

Absolute cell references offer several advantages in spreadsheet applications. They allow users to easily copy formulas without changing the referenced cell, ensuring accurate data calculation and analysis. Additionally, absolute cell references provide flexibility in adjusting formulas, making data manipulation more efficient.

Maintaining Fixed References

Absolute cell references ensure specific cells remain constant when copied.

Ease Of Copying Formulas

Copying formulas with absolute references maintains accuracy.

Common Mistakes With Absolute Cell References

Forgetting To Use Absolute References

One of the most common mistakes with absolute cell references is forgetting to use them when they are necessary. When a formula needs to be copied to other cells, not using absolute references can result in incorrect calculations. Forgetting to use the dollar sign ($) to lock the cell reference can lead to unexpected errors, especially when dealing with large datasets.

Using Absolute References Unnecessarily

Another mistake is using absolute references unnecessarily. Though absolute references are crucial for maintaining the reference to a specific cell when copying a formula, they may not always be needed.
Using absolute references when they are not necessary can make the spreadsheet harder to understand and maintain. It’s essential to analyze the formula and determine whether an absolute reference is truly required, as unnecessary absolute references can clutter the spreadsheet and confuse other users.

Tips And Tricks For Working With Absolute Cell References

Discover effective strategies for utilizing absolute cell references in your work. Learn essential tips and tricks that can help you make the most of absolute cell referencing, ensuring accuracy and efficiency in your spreadsheet tasks. Mastering this technique can significantly enhance your productivity and streamline your data management processes.

When it comes to working with Microsoft Excel, absolute cell references are a powerful tool that can greatly enhance your ability to create complex formulas and efficiently analyze your data. In this section, we will explore some tips and tricks that can help you effectively utilize absolute cell references in Excel.

Using Absolute References In Complex Formulas

Absolute cell references are especially useful when working with complex formulas that involve multiple worksheets or require copying the formula across different cells. By using an absolute reference, you can fix a specific cell in your formula, ensuring that it always refers to the same cell regardless of where the formula is copied or dragged. This ensures accuracy and consistency in your calculations.

To create an absolute cell reference, you simply add a dollar sign ($) before the column letter and row number of the cell. For example, if you want to refer to cell A1 absolutely, you would write it as $A$1. When you copy the formula containing the absolute reference, the cell reference will not change, making it easy to apply the formula to multiple cells without worrying about incorrect cell references.
Here’s an example: =SUM($A$1:$A$10). In this formula, the dollar signs before the column and row make the reference absolute, allowing you to sum up the values in cells A1 to A10 regardless of where you copy the formula.

Using Mixed References

Mixed references are another technique that can be useful when working with absolute cell references. In certain cases, you may want to fix either the column or row of a cell reference, while allowing the other part to change when the formula is copied. This is where mixed references come in handy.

To create a mixed reference, you only add the dollar sign ($) before either the column letter or the row number. For example, if you want to fix the column but allow the row to change, you would write it as $A1. Conversely, if you want to fix the row but allow the column to change, you would write it as A$1.

Here’s an example: In this formula, the column A is fixed while the row reference B1 can change when you copy the formula to other cells.

Dynamic Use Of Absolute References

One of the major advantages of using absolute cell references is their ability to dynamically adjust to changes in your workbook’s data. By utilizing the INDIRECT function, you can create formulas that reference cells based on criteria or conditions.

For instance, let’s say you have a worksheet with sales data for different regions, and you want to calculate the total sales for a specific region. Rather than manually changing the cell references in your formula each time, you can use the INDIRECT function along with absolute references to dynamically fetch the desired cell.

Here’s an example: =SUM(INDIRECT(B1)). In this formula, the INDIRECT function retrieves the cell reference from cell B1 and performs the sum operation. If you change the value in cell B1 to, let’s say, C5, the formula will automatically adjust and sum the values in cell C5.
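The copy-and-shift behavior described above can be emulated in a few lines. This sketch (our own helper, restricted to single-letter columns for brevity) rewrites A1-style references the way a spreadsheet would when a formula is copied, honoring the $ locks:

```python
import re

def shift_formula(formula, d_rows, d_cols):
    """Rewrite cell references in a formula as if it were copied
    d_rows down and d_cols to the right; '$' locks a coordinate."""
    def shift(match):
        col_lock, col, row_lock, row = match.groups()
        if not col_lock:                      # relative column: shift it
            col = chr(ord(col) + d_cols)
        if not row_lock:                      # relative row: shift it
            row = str(int(row) + d_rows)
        return col_lock + col + row_lock + row
    return re.sub(r"(\$?)([A-Z])(\$?)([0-9]+)", shift, formula)

print(shift_formula("=A1+$A$1", 1, 1))   # =B2+$A$1  ($A$1 stays put)
print(shift_formula("=$A1+A$1", 2, 2))   # =$A3+C$1  (mixed references)
```

Copying =SUM($A$1:$A$10) anywhere leaves it unchanged, exactly as the article describes, while a relative =A1 drifts with the copy destination.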
By mastering these tips and tricks for working with absolute cell references, you can boost your productivity and efficiency in Excel, making complex calculations a breeze. Whether you’re creating financial models, analyzing large datasets, or organizing inventory, the flexibility of absolute references will save you time and effort.

Frequently Asked Questions On Absolute Cell Reference

What Does ‘$’ Mean In Excel Formula?
In an Excel formula, ‘$’ signifies an absolute reference, preventing the cell reference from changing when copied.

What Are The 3 Types Of Cell References In Excel?
The 3 types of cell references in Excel are Relative, Absolute, and Mixed references.

How Do You Absolute Reference A Table In Excel?
To absolute reference a table in Excel, simply use a dollar sign ($) before the column letter and row number. For example, to reference cell A1, use $A$1. This locks the cell reference for formulas and ensures it doesn’t change when copied.

How Do You Do Absolute Value In Excel?
To find the absolute value in Excel, use the ABS function. It returns the positive value of a number, eliminating any negative sign.

In utilizing absolute cell references, you empower your spreadsheet to maintain accuracy and efficiency. By understanding the significance of the dollar sign in Excel formulas, you open possibilities for smarter data management. This blog post has uncovered the basics of absolute cell references, offering key insights for enhancing your Excel proficiency.
Master’s Program in Algebraic Geometry

Master of Science in Mathematics

Lectures and Seminars

The Essen Seminar for Algebraic Geometry and Arithmetic is one of Germany’s top research groups in this area regarding quantity as well as quality. The renowned Institute for Experimental Mathematics completes its thematic range through additional subject areas of algebra and experimental and algorithmic methods. Therefore we are able to provide a Master degree program covering numerous aspects of all these fields. Here is a list of COURSES.

Language of instruction

All courses are held in English unless German is favored by all participants and the lecturer.

International Learning Environment

The Essen seminar has joined the ALGANT consortium. The consortium offers a two-year world-class integrated master course and a joint doctorate program in pure mathematics, with a strong emphasis on Algebra, Geometry and Number Theory. Both programs have received the Erasmus Mundus label.

Massimo Bertolini: Elliptic curves, Modular forms, $L$-functions, Regulators, Algebraic Cycles
Ulrich Görtz: Langlands program, Shimura varieties, Moduli spaces of abelian varieties in positive characteristic
Daniel Greb: Birational geometry, geometric invariant theory, Kähler geometry, moduli spaces
Georg Hein: Moduli Spaces of Stable Vector Bundles on Curves and Surfaces
Jochen Heinloth: Moduli stacks, Geometric Langlands program, Moduli spaces of Bundles
Jan Kohlhaase: Representation theory of p-adic Lie groups, Arithmetic of p-adic Moduli spaces
Marc Levine: Algebraic Cycles, Algebraic K-Theory, Motives
Vytautas Paskunas: Representation Theory, Number Theory, p-adic Langlands program
Johannes Sprang: Number Theory, Transcendence Theory
One post tagged with "standard deviation" | JtlReporter

Performance testing is a crucial element in software development, revolving around evaluating and validating efficiency, speed, scalability, stability, and responsiveness of a software application under a variety of workload conditions. Conducted in a controlled environment, performance testing is designed to simulate real-world load scenarios to anticipate application behavior and responsiveness in terms of cyber traffic or user actions.

What's a standard deviation?

Standard deviation is a commonly used statistical measure that is used to assess the variability or spread of a set of data points. It is a measure of how much the data deviates from the mean or average value. It provides valuable insights into the consistency and reliability of a given metric, which can be useful in spotting potential performance bottlenecks. A low standard deviation indicates that the data is tightly clustered around the mean, while a high standard deviation indicates that the data is spread out over a wider range.

Importance of Standard Deviation in Performance Testing

The role of standard deviation in performance testing is profound. It provides an objective measure of the variations in system performance, thus highlighting the stability of the software application. A higher standard deviation indicates a high variation in the performance results and could be symptomatic of inherent problems within the software, while a lower or consistent standard deviation reflects well on system stability. Thus, the inclusion of standard deviation in performance testing is not just informative but also crucial for a focused and efficient optimization of system performance. It serves as a compass for test engineers, guiding their efforts towards areas that show significant deviations and require improvements.
This makes standard deviation indispensable when conducting performance tests.

Practical Examples of Standard Deviation in Performance Testing

For instance, if the software's response time observations have a lower standard deviation, it conveys consistency in the response times under variable loads. If there is a higher standard deviation, as a tester, you would need to delve further into performance analysis, pinpointing the potential bottlenecks. It essentially acts as a roadmap, directing you towards the performance-related fixes required to achieve an optimally performing website or application.

The standard deviation also reveals the data's distribution pattern. If the standard deviation is greater than half of its mean, it most likely means that the data does not follow a normal distribution pattern. The closer the data is to the normal distribution pattern (bell curve), the higher the chances that the measured data do not include any suspect behavior.

Incorporating Standard Deviation in Performance Testing Reports through JTL Reporter

In this digital era, leveraging the power of analytical tools to assess software performance has become essential. JTL Reporter is such a captivating platform that aids in recording, analyzing, and sharing the results of performance tests. This platform effectively integrates standard deviation measurement in performance testing, offering a holistic overview of system performance and stability and, thereby, proving invaluable in making informed testing decisions.
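The half-the-mean rule of thumb mentioned above is easy to automate. A minimal sketch (the function name and sample latencies are ours, using only Python's standard statistics module):

```python
import statistics

def variability_report(response_times_ms):
    """Summarize a latency sample and flag high variability:
    suspect if the standard deviation exceeds half the mean."""
    mean = statistics.mean(response_times_ms)
    stdev = statistics.pstdev(response_times_ms)  # population std dev
    return mean, stdev, stdev > mean / 2

stable = [98, 102, 101, 99, 100, 100]   # tightly clustered around 100 ms
spiky = [40, 300, 55, 410, 60, 35]      # wild swings under the same load

print(variability_report(stable))  # stdev ~1.3 ms -> not suspect
print(variability_report(spiky))   # stdev ~149 ms -> suspect
```

Both samples have plausible means, but only the standard deviation exposes the second one as unstable, which is exactly why reports that show only averages can hide bottlenecks.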